  1. Sep 24, 2019
  2. Sep 18, 2019
  3. Sep 17, 2019
  4. Sep 12, 2019
    • Enable Swift Recon · d463d3f7
      Scott Solkhon authored
      
      This commit adds the necessary configuration to the Swift account,
      container and object configuration files to enable the Swift recon
      CLI.
      
      In order to give the object server on each Swift host access to the
      recon files, a Docker volume is mounted into each container that
      generates them. The volume is then mounted read-only into the object
      server container. Note that multiple containers append to the same
      file. This should not be a problem, since Swift takes a lock when
      appending.
      
      Change-Id: I343d8f45a78ebc3c11ed0c68fe8bec24f9ea7929
      Co-authored-by: Doug Szumski <doug@stackhpc.com>
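      A minimal sketch of what enabling recon looks like in the object server
      configuration (illustrative only, not the exact template shipped by
      kolla-ansible; the cache path is an assumption):

        # object-server.conf (illustrative)
        [pipeline:main]
        pipeline = healthcheck recon object-server

        [filter:recon]
        use = egg:swift#recon
        # Directory where recon statistics are written; this is the path the
        # shared Docker volume described above would expose to swift-recon.
        recon_cache_path = /var/cache/swift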
  5. Sep 11, 2019
  6. Sep 10, 2019
    • Configure Zun for Placement (Train+) · 0f5e0658
      Hongbin Lu authored
      After the integration with placement [1], we need to configure how
      zun-compute is going to work with nova-compute.
      
      * If zun-compute and nova-compute run on the same compute node,
        we need to set 'host_shared_with_nova' to true so that Zun
        will use the resource provider (compute node) created by nova.
        In this mode, containers and VMs can claim allocations against
        the same resource provider.
      * If zun-compute runs on a node without nova-compute, no extra
        configuration is needed. By default, each zun-compute will create
        a resource provider in placement to represent the compute node
        it manages.
      
      [1] https://blueprints.launchpad.net/zun/+spec/use-placement-resource-management
      
      Change-Id: I2d85911c4504e541d2994ce3d48e2fbb1090b813
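      A minimal sketch of the shared-host case in zun.conf (the section name
      is an assumption; check the Zun configuration reference for the exact
      option group):

        # zun.conf on a host that also runs nova-compute (illustrative)
        [compute]
        # Reuse the resource provider created by nova-compute instead of
        # creating a separate one for this node.
        host_shared_with_nova = true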
  7. Sep 05, 2019
  8. Aug 23, 2019
  9. Aug 22, 2019
    • Implement TLS encryption for internal endpoints · b0ecd8b6
      Krzysztof Klimonda authored
      This review is the first in a series of patches. It introduces
      optional encryption for internal OpenStack endpoints, implementing part
      of the add-ssl-internal-network spec.
      
      Change-Id: I6589751626486279bf24725f22e71da8cd7f0a43
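      In globals.yml the feature is toggled roughly as follows (a sketch; the
      variables for supplying the internal certificate are omitted and vary
      by release):

        # globals.yml (illustrative)
        kolla_enable_tls_internal: "yes"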
  10. Aug 16, 2019
  11. Aug 15, 2019
  12. Aug 14, 2019
    • Add support for Swift S3 API · d72b27f2
      Scott Solkhon authored
      This feature is disabled by default, and can be enabled by setting
      'enable_swift_s3api' to 'true' in globals.yml.
      
      Two middlewares are required for Swift S3 - s3api and s3token. Additionally, we
      need to configure the authtoken middleware to delay auth decisions to give
      s3token a chance to authorise requests using EC2 credentials.
      
      Change-Id: Ib8e8e3a1c2ab383100f3c60ec58066e588d3b4db
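      The resulting proxy pipeline looks roughly like this (an abridged
      sketch; the real template contains additional middleware and the
      s3token auth URI depends on the deployment):

        # proxy-server.conf (illustrative, abridged)
        [pipeline:main]
        pipeline = ... s3api s3token authtoken keystoneauth ... proxy-server

        [filter:s3api]
        use = egg:swift#s3api

        [filter:s3token]
        use = egg:swift#s3token
        auth_uri = http://<internal-vip>:5000/v3

        [filter:authtoken]
        # Defer the decision so s3token can authorise EC2-credential requests.
        delay_auth_decision = True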
  13. Aug 06, 2019
  14. Aug 05, 2019
  15. Jul 18, 2019
    • Updated multi-region docs to include keepalived · 99463849
      Raimund Hook authored
      The keepalived_virtual_router_id should be changed from the default in
      the case of a multi-region deployment where the VIPs of the different
      regions reside on the same subnet.
      
      This is not immediately clear - this change should make it more obvious.
      
      Change-Id: Ia4899ba407937d9f27832c9d123701729e89987a
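      For example, setting a distinct value in each region's globals.yml
      avoids a VRRP clash (the values below are arbitrary; they only need to
      differ and be valid VRRP router IDs, i.e. 1-255):

        # Region one globals.yml (illustrative)
        keepalived_virtual_router_id: "51"

        # Region two globals.yml (illustrative)
        keepalived_virtual_router_id: "52"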
  16. Jul 16, 2019
    • ceph-nfs: Add rpcbind to Ubuntu host bootstrap · efcaf400
      Michal Nasiadka authored
      * Ubuntu ships with nfs-ganesha 2.6.0, which performs an rpcbind
      UDP test on startup (fixed in later releases)
      * Add the rpcbind package to be installed by kolla-ansible bootstrap
      when ceph_nfs is enabled
      * Update the Ceph deployment docs with a note
      
      Change-Id: Ic19264191a0ed418fa959fdc122cef543446fbe5
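      A minimal sketch of the bootstrap task this implies (names are
      assumptions; the real bootstrap-servers role manages its package lists
      differently):

        - name: Install rpcbind for ceph-nfs on Ubuntu hosts (illustrative)
          package:
            name: rpcbind
            state: present
          become: true
          when:
            - ansible_distribution == "Ubuntu"
            - enable_ceph_nfs | bool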
  17. Jul 15, 2019
  18. Jul 12, 2019
  19. Jul 10, 2019
  20. Jul 04, 2019
    • Deprecate Ceph deployment · e6d0e610
      Mark Goddard authored
      There are now several good tools for deploying Ceph, including Ceph
      Ansible and ceph-deploy. Maintaining our own Ceph deployment is a
      significant maintenance burden, and we should focus on our core mission
      to deploy OpenStack. Given that this is currently a significant part of
      Kolla Ansible, we will need a long deprecation period and a migration
      path to another tool.
      
      Change-Id: Ic603c85c04d8794580a19f9efaa7a8589565f4f6
      Partially-Implements: blueprint remove-ceph
  21. Jul 01, 2019
    • Bump minimum Ansible version to 2.5 · 0a769dc3
      Mark Goddard authored
      This is necessary for some Ansible tests that were renamed in 2.5,
      including 'version' and 'successful'.
      
      Change-Id: Iacf88ef5589c7571fcf56ba8b99d3dbe76975195
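      To illustrate the renamed test syntax (a sketch; 'some_result' is a
      hypothetical registered variable):

        - name: Use tests renamed in Ansible 2.5 (illustrative)
          debug:
            msg: "Previous task succeeded on a new enough Ansible"
          when:
            - ansible_version.full is version('2.5', '>=')
            - some_result is successful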
  22. Jun 24, 2019
  23. Jun 20, 2019
    • Add some notes for users Migrating to Kolla Monasca · c4f488ad
      Doug Szumski authored
      This commit should help guide people migrating to Kolla Monasca
      through the murky depths of the migration process. Since Kolla
      did not support Monasca in Queens, some steps that could otherwise
      be automated are not.
      
      Change-Id: I79051cca27178c3cf1671f5c603e38baf929c55c
  24. Jun 17, 2019
  25. Jun 07, 2019
  26. Jun 06, 2019
  27. Jun 05, 2019
    • Improve Qinling documentation · 557193a7
      Gaetan Trellu authored
      - Remove trusted_cidrs, which has just been removed from the
      Qinling code.
      - Remove use_api_certificate because it is true by default
      - Improve list syntax
      - Add etcd section
      
      Change-Id: I0426a9d61fbeaa23a1affbc7e981a78283e88263
  28. Jun 04, 2019
  29. May 31, 2019
    • Adds Qinling Ansible role · edb34898
      Gaetan Trellu authored
      Qinling is an OpenStack project that provides "Function as a Service",
      aiming to offer a platform for serverless functions.
      
      Change-Id: I239a0130f8c8b061b531dab530d65172b0914d7c
      Implements: blueprint ansible-qinling-support
      Story: 2005760
      Task: 33468
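      With the role in place, enabling the service should be the usual switch
      in globals.yml (sketch):

        # globals.yml (illustrative)
        enable_qinling: "yes"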
  30. May 30, 2019
  31. May 21, 2019
    • Fix quickstart for virtual environments · 0b27baf3
      Mark Goddard authored
      The etc_examples and inventory should be copied from the virtual
      environment rather than the system.
      
      Change-Id: I3ac1e057971b7481a0bce2a15351031e51bf97d6
      Closes-Bug: #1829435
  32. May 17, 2019
    • Fix keystone fernet key rotation scheduling · 6c1442c3
      Mark Goddard authored
      Right now every controller rotates fernet keys. This is nice because
      should any controller die, we know the remaining ones will rotate the
      keys. However, we are currently over-rotating the keys.
      
      When we over-rotate keys, we get logs like this:

       This is not a recognized Fernet token <token> TokenNotFound

      Most clients can recover and get a new token, but some clients (like
      Nova passing tokens to other services) can't, because they don't have
      the user's password to request a new token.
      
      With three controllers, the crontab in keystone-fernet shows the
      once-a-day rotation correctly staggered across the three controllers:
      
      ssh ctrl1 sudo cat /etc/kolla/keystone-fernet/crontab
      0 0 * * * /usr/bin/fernet-rotate.sh
      ssh ctrl2 sudo cat /etc/kolla/keystone-fernet/crontab
      0 8 * * * /usr/bin/fernet-rotate.sh
      ssh ctrl3 sudo cat /etc/kolla/keystone-fernet/crontab
      0 16 * * * /usr/bin/fernet-rotate.sh
      
      Currently with three controllers we have this keystone config:
      
      [token]
      expiration = 86400 (although the keystone default is one hour)
      allow_expired_window = 172800 (this is the keystone default)
      
      [fernet_tokens]
      max_active_keys = 4
      
      Currently, kolla-ansible configures key rotation according to the following:
      
         rotation_interval = token_expiration / num_hosts
      
      This means we rotate keys more quickly the more hosts we have, which doesn't
      make much sense.
      
      Keystone docs state:
      
         max_active_keys =
           ((token_expiration + allow_expired_window) / rotation_interval) + 2
      
      For details see:
      https://docs.openstack.org/keystone/stein/admin/fernet-token-faq.html
      
      Rotation is based on pushing out a staging key, so should any server
      start using that key, other servers will consider it valid. Then each
      server in turn starts using the staging key, demoting the existing
      primary key to a secondary key as it does so. Eventually you prune the
      secondary keys when there is no token in the wild that would need to be
      decrypted using that key. So this all makes sense.
      
      This change adds new variables for fernet_token_allow_expired_window and
      fernet_key_rotation_interval, so that we can calculate the correct
      number of active keys. We now set the default rotation interval so as
      to minimise the number of active keys to 3 - one primary, one
      secondary, one buffer.
      
      This change also fixes the fernet cron job generator, which was broken
      in the following cases:
      
      * requesting an interval of more than 1 day resulted in no jobs
      * requesting an interval of more than 60 minutes, unless an exact
        multiple of 60 minutes, resulted in no jobs
      
      It should now be possible to request any interval up to a week divided
      by the number of hosts.
      
      Change-Id: I10c82dc5f83653beb60ddb86d558c5602153341a
      Closes-Bug: #1809469
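      Plugging the defaults above into the keystone formula makes the target
      of three active keys concrete (variable names other than the two
      introduced by this change are assumptions):

        # globals.yml (illustrative defaults)
        fernet_token_expiry: 86400                  # 1 day
        fernet_token_allow_expired_window: 172800   # 2 days (keystone default)
        # Rotating once every (86400 + 172800) = 259200 seconds (3 days) gives
        #   max_active_keys = ((86400 + 172800) / 259200) + 2 = 3
        fernet_key_rotation_interval: 259200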
    • Make kolla-ansible support extra volumes · 12ff28a6
      binhong.hua authored
      When integrating a third-party component into OpenStack with
      kolla-ansible, it may be necessary to mount extra volumes into a
      container.
      
      Change-Id: I69108209320edad4c4ffa37dabadff62d7340939
      Implements: blueprint support-extra-volumes
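      A rough sketch of how such a mount might be expressed in globals.yml
      (the variable names follow the extra-volumes pattern but are
      assumptions here; see the blueprint for the exact ones):

        # globals.yml (illustrative)
        default_extra_volumes:
          - "/etc/custom-plugin:/etc/custom-plugin:ro"
        # or per service, e.g.
        nova_compute_extra_volumes:
          - "custom_plugin_data:/var/lib/custom-plugin"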
  33. May 14, 2019
  34. Apr 23, 2019
  35. Apr 09, 2019
    • Update quickstart instructions · b81a4341
      Mark Goddard authored
      * Recommend using a virtual environment
      * Fix reference to multinode inventory
      * Add explicit use of sudo where necessary
      * Change ownership of /etc/kolla to current user
      
      These changes should make it possible to copy/paste from the quickstart
      to get a working deployment.
      
      Change-Id: Ib3990f9e16eaa1e19a4ad5bfea5bdb2e4bc1c333
  36. Apr 08, 2019