  1. Oct 27, 2021
  2. Oct 01, 2021
  3. Sep 30, 2021
  4. Sep 28, 2021
    • Transition Keystone admin user to system scope · 2e933dce
      Niklas Hagman authored
      A system-scoped token implies that the user has authorization to act
      on the deployment system. These tokens are useful for interacting with
      resources that affect the deployment as a whole, or for exposing
      resources that may otherwise violate project or domain isolation.
      
      Since Queens, the keystone-manage bootstrap command assigns the admin
      role to the admin user with system scope, as well as in the admin
      project. This patch transitions the Keystone admin user from
      authenticating with project-scoped tokens to system-scoped tokens.
      This is a necessary step towards enabling the updated oslo policies in
      services, which allow finer-grained access to system-level resources
      and APIs.
      
      An etherpad with discussion about the transition to the new oslo
      service policies is available at:
      
      https://etherpad.opendev.org/p/enabling-system-scope-in-kolla-ansible
      
      Change-Id: Ib631e2211682862296cce9ea179f2661c90fa585
      Signed-off-by: Niklas Hagman <ubuntu@post.blinkiz.com>
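      For illustration only: with keystoneauth, system scope is requested by
      replacing the project in the auth section with system_scope. A minimal
      clouds.yaml sketch (URL and credentials are placeholders, not part of
      this patch):

      clouds:
        kolla-admin-system:
          auth_type: password
          auth:
            auth_url: https://keystone.example.com:5000
            username: admin
            password: placeholder
            user_domain_name: Default
            system_scope: all
          identity_api_version: 3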
  5. Sep 26, 2021
    • Add way to change weight of haproxy backend per service · 7c2b4bea
      Michal Arbet authored
      This patch adds an option to control the weight of haproxy backends
      per service via host variables.
      
      Example:
      
      [control]
      server1 haproxy_nova_api_weight=10
      server2 haproxy_nova_api_weight=2 haproxy_keystone_internal_weight=10
      server3 haproxy_keystone_admin_weight=50
      
      If no weight is defined, everything works as before.
      
      Change-Id: Ie8cc228198651c57f8ffe3eb060875e45d1f0700
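      For illustration, haproxy itself expresses this as a per-server weight
      parameter, so (assuming the templates pass the variable straight
      through) the rendered backend could look roughly like the sketch
      below; the backend name and addresses are illustrative, not taken from
      the actual templates:

      backend nova_api_back
          server server1 192.0.2.11:8774 check weight 10
          server server2 192.0.2.12:8774 check weight 2
          server server3 192.0.2.13:8774 check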
  6. Sep 23, 2021
  7. Sep 21, 2021
  8. Sep 16, 2021
  9. Sep 13, 2021
  10. Sep 10, 2021
  11. Sep 07, 2021
    • toolbox: Allow different users logging to ansible.log · 24e6a6ce
      Michał Nasiadka authored
      Currently, only operations done with the default kolla_toolbox user
      are logged to /var/log/kolla/ansible.log.

      To fix logging, permissions on ansible.log must allow writing by other
      users in the kolla group; a separate patch will follow to make the
      custom ansible.cfg file usable by other toolbox users.
      
      Partial-Bug: #1942846
      Change-Id: I1be60ac7647b1a838e97f05f15ba5f0e39e8ae3c
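      Conceptually, the permission part of the fix is small; a hypothetical
      Ansible sketch of the idea (the group name comes from this message,
      the mode is illustrative and not quoted from the patch):

      - name: Allow kolla group members to write to ansible.log
        become: true
        file:
          path: /var/log/kolla/ansible.log
          group: kolla
          mode: "0664"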
  12. Sep 03, 2021
    • Bump libvirtd memlock ulimit · 11d7233c
      Radosław Piliszek authored
      This is required for libvirtd with cgroupsv2 (Debian Bullseye and
      soon others).
      Otherwise, device attachments simply fail.
      The warning message suggests that filtering will merely be disabled,
      but the action actually fails entirely.
      
      Change-Id: Id1fbd49a31a6e6e51b667f646278b93897c05b21
      Closes-Bug: #1941940
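      Operators can apply the same idea through kolla-ansible's service
      dimensions mechanism; a sketch with illustrative numbers (the default
      value chosen by this patch is not quoted here):

      nova_libvirt_dimensions:
        ulimits:
          memlock:
            soft: 67108864
            hard: 67108864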
  13. Aug 30, 2021
    • Restore libvirtd cgroupfs mount · 34c49b9d
      Radosław Piliszek authored
      It was removed in [1] as part of the cgroupsv2 cleanup.
      However, testing did not catch the fact that the legacy cgroups
      behaviour was still broken, despite the latest Docker and the setting
      to use the host's cgroups namespace.
      
      [1] 286a03ba
      
      Closes-Bug: #1941706
      Change-Id: I629bb9e70a3fd6bd1e26b2ca22ffcff5e9e8c731
  14. Aug 20, 2021
  15. Aug 19, 2021
    • Rename role haproxy to loadbalancer · ffd53512
      Michal Arbet authored
      For now, the haproxy role maintains both haproxy and keepalived; in
      follow-up changes, proxysql is also added.

      This patch *only* renames/moves things to the more prominent role
      loadbalancer, also moving service-specific templates to a
      subdirectory.

      This was done only to make the diffs of the follow-up changes easier
      to read.
      
      Change-Id: I1d39d5bcaefc4016983bf267a2736b742cc3a555
    • Add ability to retry image pulling · cbb567cb
      Radosław Piliszek authored
      Sometimes, the registries may intermittently fail to deliver the
      images. This is often seen in the CI, though it also happens with
      production deployments, even those with internal registries and/or
      registry mirrors, due to the sheer load when many hosts try to pull
      the images at once.

      This patch adds two new variables to control retry behaviour.
      The defaults have been set to make users happier out of the box. :-)
      
      Change-Id: I81ad7d8642654f8474f11084c6934aab40243d35
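      The two variable names are not quoted in this message; a purely
      hypothetical globals.yml sketch of the idea (names invented for
      illustration - check the merged change for the real ones):

      image_pull_retries: 3          # hypothetical name
      image_pull_retry_interval: 5   # hypothetical name, in seconds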
    • Remove an unused file · 16a4a9e5
      Radosław Piliszek authored
      It seems to have been mistakenly introduced by
      de00bf49
      "Simplify handler conditionals"
      
      Change-Id: I65b6e322fa11a870f32099bbfd62150cbea4feb5
  16. Aug 18, 2021
  17. Aug 17, 2021
    • Use Docker healthchecks for keystone-fernet container · 90fd9152
      Michal Arbet authored
      This change enables the use of Docker healthchecks for the
      keystone-fernet container. The healthcheck verifies that "key 0" has
      the right permissions and that rsync is able to distribute keys to the
      other Keystone hosts.
      
      Implements: blueprint container-health-check
      Change-Id: I17bea723d4109e869cd05d211f6f8e4653f46e17
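      Kolla-ansible healthchecks generally follow a common per-service
      variable pattern; a sketch of the shape such a definition takes (the
      test command name is assumed, not the patch's exact command):

      keystone_fernet_healthcheck_interval: 30
      keystone_fernet_healthcheck_retries: 3
      keystone_fernet_healthcheck_test: ["CMD-SHELL", "healthcheck_fernet"]  # command assumed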
  18. Aug 16, 2021
  19. Aug 13, 2021
  20. Aug 12, 2021
  21. Aug 10, 2021
    • Refactor and optimise image pulling · 9ff2ecb0
      Radosław Piliszek authored
      We get a nice optimisation by using a filtered loop instead
      of task skipping per service with 'when'.
      
      Partially-Implements: blueprint performance-improvements
      Change-Id: I8f68100870ab90cb2d6b68a66a4c97df9ea4ff52
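      Schematically, the pattern looks like the sketch below (not the
      literal task from this patch):

      # Before: one pull task per service, each guarded by 'when'
      # After: a single task looping over the enabled services only
      - name: Pull service images
        kolla_docker:
          action: pull_image
          image: "{{ item.value.image }}"
        loop: "{{ services | dict2items | selectattr('value.enabled') | list }}"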
    • ironic: Follow up for ironic_enable_keystone_integration · 46df30d8
      Mark Goddard authored
      Follow up for I0c7e9a28876a1d4278fb2ed8555c2b08472864b9, which added
      an ironic_enable_keystone_integration variable to support Ironic in
      multi-region environments. This change skips Keystone service
      registration based on ironic_enable_keystone_integration rather than
      enable_keystone. It also updates the ironic-inspector.conf template to
      use the new variable.
      
      Change-Id: I2ecba4999e194766258ac5beed62877d43829313
  22. Aug 09, 2021
  23. Aug 06, 2021
    • Extra var ironic_enable_keystone_integration added. · da4fd2d6
      Ilya Popov authored
      Basically, there are three main installation scenarios:

      Scenario 1:
      Ironic is installed together with other OpenStack services, including
      Keystone. In this case the variable enable_keystone is set to true
      and the Keystone service is installed together with Ironic. This
      scenario works as-is; no fix is needed.

      Scenario 2:
      Ironic is installed against an already existing Keystone. In this
      scenario we have to set enable_keystone to "no" to prevent a new
      Keystone service from being installed during the Ironic installation.
      On the other hand, ironic.conf still needs the correct sections with
      all the information required to connect to the existing Keystone. But
      those sections are only added to ironic.conf if enable_keystone is
      set to "yes", so this scenario is currently impossible. The proposed
      fix provides support for it, e.g. where multiple regions share the
      same Keystone service.

      Scenario 3:
      No Keystone integration; Ironic does not connect to Keystone. This
      scenario works as-is; no fix is needed.

      The proposed solution also keeps the default behaviour: if
      ironic_enable_keystone_integration is not manually defined, it takes
      the value of the enable_keystone variable and behaviour is unchanged.
      But if we do not want to install Keystone and want to connect to an
      existing one at the same time, it is now possible to set
      enable_keystone to "no" (preventing Keystone installation) and at the
      same time set ironic_enable_keystone_integration to "yes" so that the
      needed sections appear in ironic.conf through templating, as shown
      below.
      
      Change-Id: I0c7e9a28876a1d4278fb2ed8555c2b08472864b9
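      For Scenario 2 (e.g. multiple regions sharing one Keystone), the
      resulting globals.yml overrides are:

      enable_keystone: "no"
      ironic_enable_keystone_integration: "yes"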
    • Remove deprecated Designate option · 30e0eae8
      Piotr Parczewski authored
      Change-Id: Ib9ea83dd0019a4c4703e673a783c45ab07afe4e7
    • Elevated privileges required to set owner/group/mode by ansible · 7f98238b
      Alexander Evseev authored
      Elevated (root) privileges are required to set owner/group/mode when
      the target owner does not match the user running Ansible. Without
      them, the playbook fails with a 'Permission denied' error.
      
      Change-Id: Ie7455a5f1ed709dfb9c9d7c653c6f808c00af4c2
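      Schematically, the fix is privilege escalation on the tasks that set
      ownership; a sketch with illustrative paths and names:

      - name: Copy service configuration
        become: true  # root needed when the target owner differs from the Ansible user
        template:
          src: service.conf.j2
          dest: /etc/kolla/service/service.conf
          owner: service_user
          group: kolla
          mode: "0660"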
  24. Aug 05, 2021
  25. Aug 02, 2021
    • Trivial fix horizon's healthcheck when SSL turned on · 6ac4638c
      Michal Arbet authored
      This patch fixes the Docker healthcheck for Horizon by changing the
      value of horizon_listen_port, so that both Apache's virtual host and
      the healthcheck always use the same, correct port. It also removes a
      useless Apache redirect, as all redirects are done on the haproxy
      side.
      
      Closes-Bug: #1933846
      Change-Id: Ibb5ad1a5d1bbc74bcb62610d77852d8124c4a323
    • Do not run timesync checks on deployment host · 281c9935
      Michal Arbet authored
      Kolla-ansible installs the Python docker library in the baremetal
      role, for hosts in the baremetal group; because of this, getting
      container facts for the timesync checks fails on the deployment host.

      This patch adds a 'when' conditional so that the deployment host is
      skipped, as there is no need to run timesync checks there.
      
      Closes-Bug: #1933347
      Change-Id: Ifefb9c74ee6a80cdbc458992d0196850ddfe7ffa
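      Schematically, the added guard has the following shape (the container
      name and the exact condition in the patch may differ):

      - name: Get container facts for timesync checks
        kolla_container_facts:
          name: ["chrony"]  # container name illustrative
        when: inventory_hostname in groups['baremetal']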
    • Fix freezed spice console in horizon · c281a018
      Michal Arbet authored
      This trivial patch sets "timeout tunnel" in haproxy's configuration
      for spicehtml5proxy. This option extends the time before SPICE's
      websocket connection is closed, so the SPICE console no longer
      freezes. The default value is set to 1h, matching noVNC.
      
      Closes-Bug: #1938549
      Change-Id: I3a5cd98ecf4916ebd0748e7c08111ad0e4dca0b2
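      In haproxy terms, the change boils down to one option in the
      spicehtml5proxy section; a sketch (section name and address
      illustrative, not the rendered template):

      listen nova_spicehtml5proxy
          mode http
          timeout tunnel 1h
          server control1 192.0.2.10:6082 check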
    • watcher: add missing become for copying configs · 948e9ae7
      Seena Fallah authored
      
      Signed-off-by: Seena Fallah <seenafallah@gmail.com>
      Change-Id: Iac1e82710df3ea82c17a6dcbf5d1821362aaa4a5
  26. Jul 28, 2021
    • Use more RMQ flags for less busy wait · d7cdad53
      Radosław Piliszek authored
      As mentioned in the Iced014acee7e590c10848e73feca166f48b622dc
      commit message, in Ussuri+ we can use ``+sbwtdcpu none
      +sbwtdio none`` as well. This is due to relying on RMQ-provided
      erlang in version 23.x.
      
      This change adds the extra arguments by default.
      It should be backported down to Ussuri before we do a release with
      Iced014acee7e590c10848e73feca166f48b622dc.
      
      Change-Id: I32e247a6cb34d7f6763b544f247fd408dce2b3a2
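      These are Erlang VM scheduler flags; RabbitMQ passes them via the
      RABBITMQ_SERVER_ADDITIONAL_ERL_ARGS environment variable, typically
      in rabbitmq-env.conf. A sketch combining the earlier busy-wait flag
      with the two added here (kolla's actual template may differ):

      RABBITMQ_SERVER_ADDITIONAL_ERL_ARGS="+sbwt none +sbwtdcpu none +sbwtdio none"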
    • Delete haproxy_single_service_listen.cfg.j2 template · fca9be38
      LinPeiWen authored
      Delete the "haproxy_single_service_listen.cfg.j2" template, which has
      been replaced by "haproxy_single_service_split.cfg.j2" and was
      deprecated in the Victoria release.
      
      Change-Id: I3599f85afe9d3045820ea1ea70481ea2500e49ac