  1. Jun 07, 2019
  2. Jun 06, 2019
  3. Jun 05, 2019
    • Improve Qinling documentation · 557193a7
      Gaetan Trellu authored
      - Remove trusted_cidrs, which has just been removed from the
      Qinling code.
      - Remove use_api_certificate because it's true by default
      - Improve list syntax
      - Add etcd section
      
      Change-Id: I0426a9d61fbeaa23a1affbc7e981a78283e88263
  4. Jun 04, 2019
  5. May 31, 2019
    • Adds Qinling Ansible role · edb34898
      Gaetan Trellu authored
      Qinling is an OpenStack project that provides "Function as a Service",
      aiming to offer a platform for running serverless functions.
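
      As a minimal sketch, the new role would typically be switched on in
      globals.yml via the usual enable_<service> flag convention (the flag
      name enable_qinling is an assumption, not stated in this message):

         # globals.yml - enable the new role (flag name assumed)
         enable_qinling: "yes"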
      
      Change-Id: I239a0130f8c8b061b531dab530d65172b0914d7c
      Implements: blueprint ansible-qinling-support
      Story: 2005760
      Task: 33468
  6. May 30, 2019
  7. May 21, 2019
    • Fix quickstart for virtual environments · 0b27baf3
      Mark Goddard authored
      The etc_examples and inventory should be copied from the virtual
      environment rather than the system.
      
      Change-Id: I3ac1e057971b7481a0bce2a15351031e51bf97d6
      Closes-Bug: #1829435
  8. May 17, 2019
    • Fix keystone fernet key rotation scheduling · 6c1442c3
      Mark Goddard authored
      Right now every controller rotates fernet keys. This is nice because
      should any controller die, we know the remaining ones will rotate the
      keys. However, we are currently over-rotating the keys.
      
      When we over-rotate keys, we get logs like this:
      
       This is not a recognized Fernet token <token> TokenNotFound
      
      Most clients can recover and get a new token, but some clients (like
      Nova passing tokens to other services) can't do that, because they
      don't have the password to regenerate a new token.
      
      With three controllers, the keystone-fernet crontab shows the once-a-day
      rotation correctly staggered across the three controllers:
      
      ssh ctrl1 sudo cat /etc/kolla/keystone-fernet/crontab
      0 0 * * * /usr/bin/fernet-rotate.sh
      ssh ctrl2 sudo cat /etc/kolla/keystone-fernet/crontab
      0 8 * * * /usr/bin/fernet-rotate.sh
      ssh ctrl3 sudo cat /etc/kolla/keystone-fernet/crontab
      0 16 * * * /usr/bin/fernet-rotate.sh
      
      Currently with three controllers we have this keystone config:
      
      [token]
      expiration = 86400 (although the keystone default is one hour)
      allow_expired_window = 172800 (this is the keystone default)
      
      [fernet_tokens]
      max_active_keys = 4
      
      Currently, kolla-ansible configures key rotation according to the following:
      
         rotation_interval = token_expiration / num_hosts
      
      This means we rotate keys more quickly the more hosts we have, which doesn't
      make much sense.
      
      Keystone docs state:
      
         max_active_keys =
           ((token_expiration + allow_expired_window) / rotation_interval) + 2
      
      For details see:
      https://docs.openstack.org/keystone/stein/admin/fernet-token-faq.html
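
      For example, with the expiration and allow_expired_window values shown
      above, a rotation interval of three days (259200 seconds) gives the
      minimum of three active keys:

         max_active_keys = ((86400 + 172800) / 259200) + 2 = 3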
      
      Rotation is based on pushing out a staging key, so should any server
      start using that key, other servers will consider it valid. Then each
      server in turn starts using the staging key, demoting the existing
      primary key to a secondary key as it does so. Eventually you prune the
      secondary keys when there is no token in the wild that would need to be
      decrypted using that key. So this all makes sense.
      
      This change adds new variables for fernet_token_allow_expired_window and
      fernet_key_rotation_interval, so that we can calculate the correct
      number of active keys. We now set the default rotation interval so as to
      minimise the number of active keys to 3 - one primary, one secondary,
      one buffer.
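
      As a rough globals.yml sketch (the two variable names are taken from
      this message; the values are illustrative and given in seconds):

         # keystone default for the expired-token window
         fernet_token_allow_expired_window: 172800   # 2 days
         # one rotation per (expiration + allow_expired_window) keeps
         # max_active_keys at 3
         fernet_key_rotation_interval: 259200        # 3 days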
      
      This change also fixes the fernet cron job generator, which was broken
      in the following cases:
      
      * requesting an interval of more than 1 day resulted in no jobs
      * requesting an interval of more than 60 minutes, unless an exact
        multiple of 60 minutes, resulted in no jobs
      
      It should now be possible to request any interval up to a week divided
      by the number of hosts.
      
      Change-Id: I10c82dc5f83653beb60ddb86d558c5602153341a
      Closes-Bug: #1809469
    • Make kolla-ansible support extra volumes · 12ff28a6
      binhong.hua authored
      When integrating a third-party component into OpenStack with kolla-ansible,
      it may be necessary to mount some extra volumes into a container.
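
      A hedged sketch of how such a mount might be expressed in globals.yml
      (the variable name default_extra_volumes and the paths below are
      illustrative assumptions, not taken from this message):

         # mount a third-party plugin directory into containers (illustrative)
         default_extra_volumes:
           - "/opt/third-party/plugin:/opt/plugin:ro"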
      
      Change-Id: I69108209320edad4c4ffa37dabadff62d7340939
      Implements: blueprint support-extra-volumes
  9. May 14, 2019
  10. Apr 23, 2019
  11. Apr 09, 2019
    • Update quickstart instructions · b81a4341
      Mark Goddard authored
      * Recommend using a virtual environment
      * Fix reference to multinode inventory
      * Add explicit use of sudo where necessary
      * Change ownership of /etc/kolla to current user
      
      These changes should make it possible to copy/paste from the quickstart
      to get a working deployment.
      
      Change-Id: Ib3990f9e16eaa1e19a4ad5bfea5bdb2e4bc1c333
  12. Apr 08, 2019
  13. Mar 14, 2019
    • Support separate Swift storage networks · a781c643
      Scott Solkhon authored
      Adds support to separate Swift access and replication traffic from other storage traffic.

      In a deployment where both Ceph and Swift have been deployed,
      this change adds functionality to support optional separation
      of storage network traffic. This adds two new network interfaces,
      'swift_storage_interface' and 'swift_replication_interface', which maintain
      backwards compatibility.
      
      The Swift access network interface is configured via 'swift_storage_interface',
      which defaults to 'storage_interface'. The Swift replication network
      interface is configured via 'swift_replication_interface', which
      defaults to 'swift_storage_interface'.
      
      If a separate replication network is used, Kolla Ansible now deploys separate
      replication servers for the accounts, containers and objects, that listen on
      this network. In this case, these services handle only replication traffic, and
      the original account-, container- and object- servers only handle storage
      user requests.
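
      A minimal globals.yml sketch using the two new interfaces (the interface
      names eth1 and eth2 are placeholders):

         # Swift access (storage) traffic
         swift_storage_interface: "eth1"
         # Swift replication traffic on a dedicated network
         swift_replication_interface: "eth2"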
      
      Change-Id: Ib39e081574e030126f2d08f51de89641ddb0d42e
  14. Mar 08, 2019
    • Support customising Fluentd formatting · c8a22f10
      Doug Szumski authored
      In some scenarios it may be useful to perform custom formatting of logs
      before forwarding them. For example, the JSON formatter plugin can be
      used to convert an event to JSON.
      
      Change-Id: I3dd9240c5910a9477456283b392edc9566882dcd
  15. Mar 07, 2019
    • Added ability to skip enabled backends pre-check · 1d9f4f9f
      Arkadiy Shinkarev authored
      When using custom storage backends with a cinder.conf overrides file,
      the precheck stage in kolla-ansible fails. This commit adds the option
      'skip_cinder_backend_check' (default: False) to the cinder role.
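
      A minimal sketch of setting the new option in globals.yml (the option
      name and default come from this message):

         # skip the enabled-backends precheck when cinder.conf is overridden
         skip_cinder_backend_check: True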
      
      Change-Id: Ifee138ad8b281903ea2365441aada044c80c46f0
  16. Feb 28, 2019
    • Update links in docs to latest · fba5e1ce
      Mark Goddard authored
      To avoid links to OpenStack docs getting out of date in our docs, use
      the latest version.
      
      Ideally after cutting each stable branch we should change these links to
      use the current release.
      
      Co-Authored-By: Isaiah Inuwa
      Change-Id: Ia1e3c720f4e688861b8f76874a3943b0f4e50b17
  17. Feb 25, 2019
  18. Feb 14, 2019
    • Automate Monasca documentation for configuring Kafka · ecf00096
      Doug Szumski authored
      Until the Monasca Kafka client fork is removed it is currently required
      to run Kafka in compatibility mode. It is also necessary to disable
      an optimisation in the Kafka brokers to clean up idle connections. This
      is because the optimisation was added after the Monasca Kafka client was
      forked, and the client hasn't been updated since. These settings are now
      applied automatically when Monasca is enabled.
      
      Change-Id: I6935f1fb29f4f731cf3c9a70a0adf4d5812ca55e
    • Fix link to Manila Guide · 6c6759e9
      Pedro Alvarez authored
      Change-Id: I3defe0c38f41d7335e1cbafb75523c3cd44323ee
  19. Feb 07, 2019
  20. Feb 01, 2019
  21. Jan 24, 2019
    • Link kolla_log volume dir to /var/log/kolla · 93e5e8e6
      binhong.hua authored
      The path /var/lib/docker/volumes/kolla_logs/_data/ is too long; a
      shorter log path will make it easier to debug from logs.
      The volume path is compatible with docker-engine and docker-ce.
      
      Change-Id: I9195d5f24d938f5060fe748aac3ae58c79ec5abf
    • add ulimit support for kolla_docker · 3d3f5f16
      binhong.hua authored
      By default, docker containers inherit ulimits from the limits of the
      docker daemon. On CentOS 7, the docker daemon's default NOFILE limit is
      1048576, as can be seen in /usr/lib/systemd/system/docker.service.
      Such a large limit can cause many problems, so we should control it in
      production environments.
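
      A hedged sketch of capping NOFILE for containers, assuming the ulimit
      support is exposed through the container dimensions variables (the
      variable name and schema below are assumptions, not taken from this
      message):

         # globals.yml - cap open files for all containers (illustrative)
         default_container_dimensions:
           ulimits:
             nofile:
               soft: 131072
               hard: 131072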
      
      Change-Id: Iab962446a94ef092977728259d9818b86cfa7f68
  22. Jan 13, 2019
  23. Jan 10, 2019
  24. Jan 08, 2019
  25. Jan 03, 2019
  26. Dec 17, 2018
    • Add support for Quobyte backend to Cinder and Nova · f77cc87e
      Patrick O'Neill authored
      Add an enable_cinder_backend_quobyte option to etc/kolla/globals.yml to
      enable use of the Quobyte Cinder backend.
      Change the bind mounts for /var/lib/nova/mnt to include shared
      propagation if Quobyte is enabled.
      Update the documentation to include a section on configuring the Cinder
      backend.
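
      A minimal globals.yml sketch enabling the new backend (the option name
      comes from this message):

         enable_cinder_backend_quobyte: "yes"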
      
      Implements: blueprint cinder-quobyte-backend
      
      Change-Id: I364939407ad244fe81cea40f880effdbcaa8a20d
  27. Nov 30, 2018
    • Option neutron_plugin_agent: "opendaylight" added · f8f97481
      João Feteira authored
      Added the missing option neutron_plugin_agent: "opendaylight" to
      the opendaylight documentation page. Without it, the deployment would
      not use opendaylight as the neutron_plugin_agent, but the default:
      openvswitch.
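
      As a globals.yml snippet, the documented setting is:

         # use OpenDaylight instead of the default openvswitch agent
         neutron_plugin_agent: "opendaylight"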
      
      Change-Id: I56a377e1faab9a50f36383ea59b45bf5a9155bcf
    • Add note to external ceph docs for pools/keyrings · a47f7010
      Paul Bourke authored
      When using external Ceph, the operator must create pools for each service
      and configure keyrings with appropriate permissions. The official Ceph
      docs describe this in detail, so let operators know where to find them.
      
      Change-Id: Ic3e52e1fbbf09ec09ac21b5b3067092b195812f1
  28. Nov 28, 2018
  29. Nov 27, 2018
  30. Nov 23, 2018
  31. Nov 22, 2018
    • Add new option to perform an on-demand backup of MariaDB · f704a780
      Nick Jones authored
      blueprint database-backup-recovery
      
      Introduce a new option, mariadb_backup, which takes a backup of all
      databases hosted in MariaDB.
      
      Backups are performed using XtraBackup, the output of which is saved to
      a dedicated Docker volume on the target host (which defaults to the
      first node in the MariaDB cluster).
      
      It supports either full (the default) or incremental backups.
      
      Change-Id: Ied224c0d19b8734aa72092aaddd530155999dbc3
  32. Nov 21, 2018
  33. Nov 19, 2018
    • Add missing steps to Vagrant instructions · 205df694
      Doug Szumski authored
      Add a couple of missing steps to complete a Vagrant deployment. In
      the case of the multi-node deployment we could go one step further
      and ensure that the supplied inventory matches the default set of
      nodes created by Vagrant.
      
      Change-Id: Iee878e26989e92e4de06c071704a6794011b6e58