  1. Oct 01, 2021
    • monasca: change default of monasca_ntp_server · 1d0171fc
      Mark Goddard authored
      Updates the default value of 'monasca_ntp_server' from
      'external_ntp_servers[0]' to '0.pool.ntp.org'. This is necessary
      because the 'external_ntp_servers' variable was removed as part
      of dropping Chrony deployment.
      
      Change-Id: I2e7538a2e95c7b8e9280eb051ee634b4313db129
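      
      As an illustrative sketch (the variable name comes from this
      change; the server value below is an example), operators who
      need a different server can override the new default in
      /etc/kolla/globals.yml:
      
      monasca_ntp_server: "ntp.example.org"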
  2. Sep 28, 2021
    • Transition Keystone admin user to system scope · 2e933dce
      Niklas Hagman authored
      A system-scoped token implies that the user is authorized to act
      on the deployment system. These tokens are useful for interacting
      with resources that affect the deployment as a whole, or that
      expose resources which may otherwise violate project or domain
      isolation.
      
      Since Queens, the keystone-manage bootstrap command assigns the admin
      role to the admin user with system scope, as well as in the admin
      project. This patch transitions the Keystone admin user from
      authenticating using project scoped tokens to system scoped tokens.
      This is a necessary step towards being able to enable the updated oslo
      policies in services that allow finer grained access to system-level
      resources and APIs.
      
      An etherpad with discussion about the transition to the new oslo
      service policies is:
      
      https://etherpad.opendev.org/p/enabling-system-scope-in-kolla-ansible
      
      Change-Id: Ib631e2211682862296cce9ea179f2661c90fa585
      Signed-off-by: Niklas Hagman <ubuntu@post.blinkiz.com>
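      
      As an illustration of the difference, a clouds.yaml entry can
      request a system-scoped token by replacing the project scope
      with system_scope (all names and values below are examples):
      
      clouds:
        kolla-admin-system:
          auth_type: password
          auth:
            auth_url: https://keystone.example.org:5000
            username: admin
            password: example-password
            user_domain_name: Default
            system_scope: all   # instead of project_name/project_domain_name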
  3. Sep 26, 2021
    • Add way to change weight of haproxy backend per service · 7c2b4bea
      Michal Arbet authored
      This patch adds an option to control the weight of HAProxy
      backends per service via host variables.
      
      Example:
      
      [control]
      server1 haproxy_nova_api_weight=10
      server2 haproxy_nova_api_weight=2 haproxy_keystone_internal_weight=10
      server3 haproxy_keystone_admin_weight=50
      
      If a weight is not defined, everything works as before.
      
      Change-Id: Ie8cc228198651c57f8ffe3eb060875e45d1f0700
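      
      The same variables can also be kept in host_vars files instead of
      inline inventory entries; a minimal sketch (the path and values
      are illustrative):
      
      # host_vars/server2.yml
      haproxy_nova_api_weight: 2
      haproxy_keystone_internal_weight: 10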
  4. Sep 07, 2021
    • toolbox: Allow different users logging to ansible.log · 24e6a6ce
      Michał Nasiadka authored
      Currently, only operations performed with the default
      kolla_toolbox user are logged to /var/log/kolla/ansible.log.
      
      To fix logging, the permissions on ansible.log must allow writing
      by other users in the kolla group; a separate patch will follow
      to make a custom ansible.cfg file usable by other toolbox users.
      
      Partial-Bug: #1942846
      Change-Id: I1be60ac7647b1a838e97f05f15ba5f0e39e8ae3c
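      
      A minimal Ansible sketch of the permission change described above
      (the path is from the commit; the actual task in the patch may
      differ):
      
      - name: Allow kolla group members to write to ansible.log
        become: true
        file:
          path: /var/log/kolla/ansible.log
          group: kolla
          mode: "0660"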
  5. Sep 03, 2021
    • Bump libvirtd memlock ulimit · 11d7233c
      Radosław Piliszek authored
      This is required for libvirtd with cgroups v2 (Debian Bullseye
      and soon others); otherwise, device attachments simply fail.
      The warning message suggests that filtering will be disabled,
      but the action actually fails entirely.
      
      Change-Id: Id1fbd49a31a6e6e51b667f646278b93897c05b21
      Closes-Bug: #1941940
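      
      For illustration, a memlock ulimit in a Compose-style container
      definition looks like the following (the byte values are
      examples, not necessarily the ones chosen by this patch):
      
      ulimits:
        memlock:
          soft: 67108864   # 64 MiB, example value
          hard: 67108864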
  6. Aug 30, 2021
    • Restore libvirtd cgroupfs mount · 34c49b9d
      Radosław Piliszek authored
      It was removed in [1] as part of the cgroups v2 cleanup.
      However, testing did not catch the fact that the legacy cgroups
      behaviour was still broken, despite the latest Docker and the
      setting to use the host's cgroup namespace.
      
      [1] 286a03ba
      
      Closes-Bug: #1941706
      Change-Id: I629bb9e70a3fd6bd1e26b2ca22ffcff5e9e8c731
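      
      In Compose-style terms, the restored mount corresponds to a bind
      mount like this (an illustrative sketch, not the exact service
      definition from the patch):
      
      volumes:
        - /sys/fs/cgroup:/sys/fs/cgroup   # expose host cgroupfs to libvirtd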
  7. Aug 19, 2021
    • Rename role haproxy to loadbalancer · ffd53512
      Michal Arbet authored
      Currently the haproxy role maintains both haproxy and keepalived,
      and follow-up changes will also add proxysql.
      
      This patch *only* renames/moves things to the more prominent
      loadbalancer role, and also moves service-specific templates to
      a subdirectory.
      
      This was done solely to make the diffs in the follow-up changes
      easier to review.
      
      Change-Id: I1d39d5bcaefc4016983bf267a2736b742cc3a555
    • Add ability to retry image pulling · cbb567cb
      Radosław Piliszek authored
      Sometimes, the registries may intermittently fail to deliver the
      images. This is often seen in the CI, though it also happens with
      production deployments, even those with internal registries and/or
      registry mirrors - due to sheer load when trying to pull the
      images from many hosts.
      
      This patch adds two new variables to control retry behaviour.
      The defaults have been chosen to make users happier out of the
      box. :-)
      
      Change-Id: I81ad7d8642654f8474f11084c6934aab40243d35
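      
      The message does not name the two variables, so this
      /etc/kolla/globals.yml sketch uses hypothetical names purely for
      illustration:
      
      # Hypothetical variable names, for illustration only:
      docker_image_pull_retries: 3       # attempts before giving up
      docker_image_pull_retry_delay: 5   # seconds between attempts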
    • Remove an unused file · 16a4a9e5
      Radosław Piliszek authored
      It seems to have been mistakenly introduced by
      de00bf49
      "Simplify handler conditionals"
      
      Change-Id: I65b6e322fa11a870f32099bbfd62150cbea4feb5
  8. Aug 17, 2021
    • Use Docker healthchecks for keystone-fernet container · 90fd9152
      Michal Arbet authored
      This change enables the use of Docker healthchecks for the
      keystone-fernet container. The check verifies that "key 0" has
      the right permissions, and that rsync is able to distribute keys
      to the other Keystone hosts.
      
      Implements: blueprint container-health-check
      Change-Id: I17bea723d4109e869cd05d211f6f8e4653f46e17
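      
      In Compose-style terms, a container healthcheck of this kind
      looks roughly as follows (the check script path is hypothetical,
      for illustration only):
      
      healthcheck:
        test: ["CMD-SHELL", "/usr/local/bin/fernet_healthcheck"]
        interval: 30s
        timeout: 30s
        retries: 3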
  9. Aug 10, 2021
    • Refactor and optimise image pulling · 9ff2ecb0
      Radosław Piliszek authored
      We get a nice optimisation by using a filtered loop instead of
      skipping tasks per service with 'when'.
      
      Partially-Implements: blueprint performance-improvements
      Change-Id: I8f68100870ab90cb2d6b68a66a4c97df9ea4ff52
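      
      A minimal sketch of the pattern (the 'services' map is
      illustrative and 'debug' stands in for the actual pull task):
      instead of one 'when'-skipped task per service, a single task
      loops over only the enabled entries:
      
      - name: Pull images for enabled services
        vars:
          enabled_services: >-
            {{ services | dict2items
                        | selectattr('value.enabled', 'equalto', true)
                        | list }}
        debug:
          msg: "Would pull {{ item.value.image }}"
        loop: "{{ enabled_services }}"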
    • ironic: Follow up for ironic_enable_keystone_integration · 46df30d8
      Mark Goddard authored
      Follow up for I0c7e9a28876a1d4278fb2ed8555c2b08472864b9, which
      added an ironic_enable_keystone_integration variable to support
      Ironic in multi-region environments. This change skips Keystone
      service registration based on ironic_enable_keystone_integration
      rather than enable_keystone. It also updates the
      ironic-inspector.conf template to use the new variable.
      
      Change-Id: I2ecba4999e194766258ac5beed62877d43829313
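      
      As a rough sketch (not the literal diff), the effect is a guard
      like the following on the registration tasks; the role name is
      assumed from kolla-ansible's service-ks-register pattern:
      
      - name: Register Ironic services in Keystone
        import_role:
          name: service-ks-register
        when: ironic_enable_keystone_integration | bool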
  10. Aug 06, 2021
    • Remove unused imports in merge_yaml · d15d9430
      Victor Morales authored
      The Dumper and Loader classes appear to be imported but not used
      in the merge_yaml file. This change removes them to reduce the
      number of lines.
      
      Change-Id: I87ef305903ab02226fcaa725ece622647d17811c
    • Extra var ironic_enable_keystone_integration added. · da4fd2d6
      Ilya Popov authored
      Basically, there are three main installation scenarios:
      
      Scenario 1:
      Ironic is installed together with other OpenStack services,
      including Keystone. In this case the enable_keystone variable
      is set to true and the Keystone service is installed together
      with Ironic. This scenario is already possible; no fix is
      needed.
      
      Scenario 2:
      Ironic is installed and connects to an already installed
      Keystone. In this scenario we have to set enable_keystone to
      "no" to prevent a new Keystone service from being installed
      during the Ironic installation. On the other hand, we need the
      correct sections in ironic.conf to provide all the information
      needed to connect to the existing Keystone. But the Keystone
      sections are added to ironic.conf only if enable_keystone is
      set to "yes", so this scenario is currently impossible. The
      proposed fix adds support for it, allowing multiple regions to
      share the same Keystone service.
      
      Scenario 3:
      No Keystone integration; Ironic does not connect to Keystone.
      This scenario is already possible; no fix is needed.
      
      The proposed solution also keeps the default behaviour: if
      ironic_enable_keystone_integration is not defined manually, it
      takes the value of the enable_keystone variable and behaviour
      is unchanged. But if we do not want to install Keystone and do
      want to connect to an existing one at the same time, it is now
      possible to set enable_keystone to "no" (preventing Keystone
      installation) and at the same time set
      ironic_enable_keystone_integration to "yes" so that the needed
      sections appear in ironic.conf through templating.
      
      Change-Id: I0c7e9a28876a1d4278fb2ed8555c2b08472864b9
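      
      A minimal /etc/kolla/globals.yml sketch for Scenario 2 (both
      variable names come from this change):
      
      # Connect Ironic to an existing Keystone without deploying one:
      enable_keystone: "no"
      ironic_enable_keystone_integration: "yes"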