  1. Dec 31, 2021
  2. Sep 16, 2021
  3. Sep 07, 2021
      toolbox: Allow different users logging to ansible.log · 24e6a6ce
      Michał Nasiadka authored
      Currently, only operations performed as the default kolla_toolbox user
      are logged to /var/log/kolla/ansible.log.
      
      To fix logging, the permissions on ansible.log must allow writing by
      other users in the kolla group; a separate patch will follow to make the
      custom ansible.cfg file usable by other toolbox users.
      
      Partial-Bug: #1942846
      Change-Id: I1be60ac7647b1a838e97f05f15ba5f0e39e8ae3c
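      A minimal sketch of the idea using Ansible's file module; the exact task,
      its place in the kolla-toolbox role, and the assumption that the log file
      already exists are illustrative, not the actual change:
      
          - name: Ensure ansible.log is writable by the kolla group
            file:
              path: /var/log/kolla/ansible.log
              group: kolla    # other toolbox users are members of this group
              mode: "0660"    # group write allows non-default users to log here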
  4. Sep 03, 2021
      Bump libvirtd memlock ulimit · 11d7233c
      Radosław Piliszek authored
      This is required for libvirtd with cgroupsv2 (Debian Bullseye and
      soon others).
      Otherwise, device attachments simply fail.
      The warning message suggests that filtering will be disabled, but in
      fact the action fails entirely.
      
      Change-Id: Id1fbd49a31a6e6e51b667f646278b93897c05b21
      Closes-Bug: #1941940
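      A hedged sketch of the idea, expressed with Ansible's
      community.docker.docker_container module for illustration; kolla-ansible
      deploys containers through its own module and its variable names may differ:
      
          - name: Run libvirtd with an unlimited memlock ulimit
            community.docker.docker_container:
              name: nova_libvirt
              image: "{{ nova_libvirt_image_full }}"   # assumed image variable
              privileged: true
              ulimits:
                - "memlock:-1:-1"   # unlimited soft/hard memlock for cgroups v2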
  5. Sep 02, 2021
  6. Aug 30, 2021
      Restore libvirtd cgroupfs mount · 34c49b9d
      Radosław Piliszek authored
      It was removed in [1] as part of the cgroups v2 cleanup.
      However, testing did not catch that the legacy cgroups behaviour
      was still broken, despite running the latest Docker and setting
      the containers to use the host's cgroups namespace.
      
      [1] 286a03ba
      
      Closes-Bug: #1941706
      Change-Id: I629bb9e70a3fd6bd1e26b2ca22ffcff5e9e8c731
  7. Aug 25, 2021
      Add kolla-ansible gather-facts command · d9a37589
      Mark Goddard authored
      In some situations it may be helpful to populate the fact cache on
      demand. The 'kolla-ansible gather-facts' command may be used to do this.
      
      One specific case where this may be helpful is when running kolla-ansible
      with a --limit argument, since in that case hosts that match the limit
      will gather facts for hosts that fall outside the limit. In the extreme
      case of a limit that matches only one host, it will serially gather
      facts for all other hosts. To avoid this issue, run 'kolla-ansible
      gather-facts' without a limit to populate the fact cache in parallel
      before running the required command with a limit.
      
      Change-Id: I79db9bca23aa1bd45bafa7e7500a90de5a684593
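      For example (the inventory path and limit pattern are illustrative):
      
          kolla-ansible gather-facts -i ./multinode
          kolla-ansible deploy -i ./multinode --limit compute01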
  8. Aug 20, 2021
  9. Aug 19, 2021
      Add ability to retry image pulling · cbb567cb
      Radosław Piliszek authored
      Sometimes, the registries may intermittently fail to deliver the
      images. This is often seen in the CI, though it also happens with
      production deployments, even those with internal registries and/or
      registry mirrors - due to sheer load when trying to pull the
      images from many hosts.
      
      This patch adds two new variables to control the retry behaviour.
      The defaults have been chosen to make users happier out of the box. :-)
      
      Change-Id: I81ad7d8642654f8474f11084c6934aab40243d35
  10. Aug 18, 2021
  11. Aug 17, 2021
      Use Docker healthchecks for keystone-fernet container · 90fd9152
      Michal Arbet authored
      This change enables the use of Docker healthchecks for the
      keystone-fernet container. The healthcheck verifies that "key 0"
      has the right permissions and that rsync is able to distribute
      keys to the other Keystone hosts.
      
      Implements: blueprint container-health-check
      Change-Id: I17bea723d4109e869cd05d211f6f8e4653f46e17
  12. Aug 16, 2021
  13. Aug 13, 2021
  14. Aug 12, 2021
      Trivial fix nova's healthchecks · 85879afc
      Michal Arbet authored
      The kolla-ansible upgrade tasks call different handlers than the
      deploy tasks, and those handlers are missing the healthcheck key.
      This patch fixes that.
      
      Closes-Bug: #1939679
      Change-Id: Id83d20bfd89c27ccf70a3a79938f428cdb5d40fc
  15. Aug 10, 2021
      Refactor and optimise image pulling · 9ff2ecb0
      Radosław Piliszek authored
      We get a nice optimisation by using a filtered loop instead of
      skipping a task per service with 'when'.
      
      Partially-Implements: blueprint performance-improvements
      Change-Id: I8f68100870ab90cb2d6b68a66a4c97df9ea4ff52
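      A minimal sketch of the pattern, not the actual kolla-ansible task (which
      uses its own module and service definitions); 'services' is a hypothetical
      dict of service definitions:
      
          - name: Pull images for enabled services only
            community.docker.docker_image:
              name: "{{ item.value.image }}"
              source: pull
            # only services whose 'enabled' flag is truthy are iterated,
            # so no per-service 'when' skipping is evaluated
            loop: "{{ services | dict2items | selectattr('value.enabled') | list }}"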
  16. Aug 09, 2021
  17. Aug 06, 2021
      Extra var ironic_enable_keystone_integration added. · da4fd2d6
      Ilya Popov authored
      Basically, there are three main installation scenarios:
      
      Scenario 1:
      Ironic is installed together with other OpenStack services,
      including Keystone. In this case the variable enable_keystone
      is set to true and the Keystone service is installed as part
      of the Ironic installation. This scenario works as-is; no fix
      is needed.
      
      Scenario 2:
      Ironic is installed against an already existing Keystone. In
      this scenario enable_keystone has to be set to "no" to prevent
      a new Keystone service from being installed during the Ironic
      installation. On the other hand, ironic.conf still needs the
      sections with all the information required to connect to the
      existing Keystone. However, those sections are only added to
      ironic.conf when enable_keystone is set to "yes", so this
      scenario is currently not possible. The proposed fix adds
      support for it, for example where multiple regions share the
      same Keystone service.
      
      Scenario 3:
      No Keystone integration. Ironic does not connect to Keystone.
      This scenario works as-is; no fix is needed.
      
      The proposed solution also keeps the default behaviour: if
      ironic_enable_keystone_integration is not defined manually, it
      takes the value of the enable_keystone variable and behaviour
      is unchanged. But if we do not want to install Keystone and
      want to connect to an existing one at the same time, it is now
      possible to set enable_keystone to "no" (preventing the
      Keystone installation) and at the same time set
      ironic_enable_keystone_integration to "yes" so that the needed
      sections appear in ironic.conf through templating.
      
      Change-Id: I0c7e9a28876a1d4278fb2ed8555c2b08472864b9
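      A sketch of the Scenario 2 configuration in globals.yml, using the two
      variables named above:
      
          # Reuse an existing Keystone instead of deploying a new one ...
          enable_keystone: "no"
          # ... but still template the Keystone sections into ironic.conf.
          ironic_enable_keystone_integration: "yes"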
      Remove deprecated Designate option · 30e0eae8
      Piotr Parczewski authored
      Change-Id: Ib9ea83dd0019a4c4703e673a783c45ab07afe4e7
  18. Aug 05, 2021
  19. Aug 02, 2021
      Trivial fix horizon's healthcheck when SSL turned on · 6ac4638c
      Michal Arbet authored
      This patch fixes the Docker healthcheck for Horizon by changing
      the value of horizon_listen_port, so that Apache's virtual host
      and the healthcheck always use the same, correct port. It also
      removes a useless Apache redirect, as all redirects are handled
      on the HAProxy side.
      
      Closes-Bug: #1933846
      Change-Id: Ibb5ad1a5d1bbc74bcb62610d77852d8124c4a323
      Do not run timesync checks on deployment host · 281c9935
      Michal Arbet authored
      Kolla-ansible installs the Python docker library in the baremetal
      role, only for hosts in the baremetal group; because of this,
      getting container facts for the timesync checks fails on the
      deployment host.
      
      This patch adds a 'when' conditional so that the deployment host
      is skipped, as there is no need to run the timesync checks there.
      
      Closes-Bug: #1933347
      Change-Id: Ifefb9c74ee6a80cdbc458992d0196850ddfe7ffa
      Fix frozen spice console in horizon · c281a018
      Michal Arbet authored
      This trivial patch sets "timeout tunnel" in HAProxy's
      configuration for spicehtml5proxy. This option extends the time
      before SPICE's websocket connection is closed, so the SPICE
      console does not freeze. The default value is set to 1h, as it
      is for noVNC.
      
      Closes-Bug: #1938549
      Change-Id: I3a5cd98ecf4916ebd0748e7c08111ad0e4dca0b2
  20. Jul 29, 2021
      Support multiple inventories · 6c72fa81
      Will Szumski authored
      Multiple inventories can now be passed to `kolla-ansible`.  This can be
      useful to construct a common inventory that is shared between multiple
      environments.
      
      Change-Id: I2ac5d7851b310bea2ba362b353f18c592a0a6a2e
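      For example, assuming the existing -i/--inventory option is simply
      repeated (file names are illustrative):
      
          kolla-ansible deploy -i shared-inventory -i site-a-inventory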
  21. Jul 28, 2021
      Use more RMQ flags for less busy wait · d7cdad53
      Radosław Piliszek authored
      As mentioned in the Iced014acee7e590c10848e73feca166f48b622dc
      commit message, in Ussuri+ we can use ``+sbwtdcpu none
      +sbwtdio none`` as well. This is due to relying on RMQ-provided
      erlang in version 23.x.
      
      This change adds the extra arguments by default.
      It should be backported down to Ussuri before we do a release with
      Iced014acee7e590c10848e73feca166f48b622dc.
      
      Change-Id: I32e247a6cb34d7f6763b544f247fd408dce2b3a2
      Delete haproxy_single_service_listen.cfg.j2 template · fca9be38
      LinPeiWen authored
      Delete the "haproxy_single_service_listen.cfg.j2" template,
      which has been replaced by "haproxy_single_service_split.cfg.j2"
      and was deprecated in the Victoria release.
      
      Change-Id: I3599f85afe9d3045820ea1ea70481ea2500e49ac
      nova: Use cinder user for Ceph · c3f9ba83
      Mark Goddard authored
      In Ussuri, nova stopped using separate Ceph keys for the volumes and vms
      pools by default. Instead, we set ceph_nova_keyring to the value of
      ceph_cinder_keyring by default, which is ceph.client.cinder.keyring.
      This is in line with the Ceph OpenStack integration guide [1]. However,
      the user used by nova to access the vms pool (ceph_nova_user) defaults
      to nova, meaning that nova will still try to use a
      ceph.client.nova.keyring, which probably does not exist. We did not see
      this issue in CI, because we set ceph_nova_user to cinder.
      
      This change fixes the issue by setting ceph_nova_user to the value of
      ceph_cinder_user by default, which is cinder.
      
      Closes-Bug: #1934145
      Related-Bug: #1928690
      
      [1] https://docs.ceph.com/en/latest/rbd/rbd-openstack/
      
      Change-Id: I6aa8db2214e07906f1f3e035411fc80ba911a274
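      For reference, the default relationships described above are equivalent to
      the following globals.yml values (shown for illustration only; they are
      defaults, not required overrides):
      
          ceph_cinder_user: "cinder"
          ceph_cinder_keyring: "ceph.client.cinder.keyring"
          ceph_nova_user: "{{ ceph_cinder_user }}"        # new default from this change
          ceph_nova_keyring: "{{ ceph_cinder_keyring }}"  # default since Ussuri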
  22. Jul 27, 2021
  23. Jul 22, 2021
      ironic: always enable conductor HTTP server · 411668ea
      Mark Goddard authored
      In the Xena release, Ironic removed the iSCSI driver [1]. The
      recommended driver is direct, which uses HTTP to transfer the disk
      image. This requires an HTTP server, and the simplest option is to use
      the one currently deployed when enable_ironic_ipxe is set to true. For
      this reason, this patch always enables the HTTP server running on the
      conductor.
      
      iPXE is still enabled separately, since it cannot currently be used at
      the same time as PXE.
      
      [1] https://review.opendev.org/c/openstack/ironic/+/789382
      
      Change-Id: I30c2ad2bf2957ac544942aefae8898cdc8a61ec6
  24. Jul 21, 2021
      Fix ironic_ipxe healthcheck on Debian/Ubuntu · aa28675c
      Mark Goddard authored
      The healthcheck checks for a process called httpd, but these distros
      call it apache2.  This results in the ironic_ipxe container being marked
      as unhealthy.
      
      This change fixes the issue by making the process name distro dependent.
      
      Change-Id: I0b0126e3071146e7f8593ba970ecbed65b36fcfa
      Closes-Bug: #1937037
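      A minimal sketch of a distro-dependent process name; the variable name is
      hypothetical and the real change may be structured differently:
      
          # Apache's process is 'apache2' on Debian-family distros, 'httpd' elsewhere.
          ironic_ipxe_process_name: "{{ 'apache2' if ansible_facts.os_family == 'Debian' else 'httpd' }}"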
  25. Jul 20, 2021
      manila: add glance section in manila-share.conf · 2e4f51f6
      Kyle Dean authored
      Since the Victoria release, manila-share.conf requires a glance section
      for some drivers. This change adds the missing section.
      
      It also uses the correct cinder_keystone_user variable to reference the
      cinder user.
      
      Closes-Bug: #1921935
      
      Change-Id: Ib7ce4ed79c28456281087eb4156577f910c072e7
  26. Jul 19, 2021
  27. Jul 08, 2021
      Reduce container metrics cardinality · c2ae21fd
      Piotr Parczewski authored
      Adds support for passing extra runtime options to cAdvisor.
      By default, the new options disable the export of rarely useful
      metrics and labels by cAdvisor. This helps reduce the load on
      Prometheus and on cAdvisor itself.
      
      Change-Id: I81f3845d6cd03a70a0c8569f8d0ea421027df083
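      A hedged sketch of what such extra options could look like; the variable
      name is hypothetical and the flags are standard cAdvisor options, not
      necessarily the exact defaults chosen by this change:
      
          prometheus_cadvisor_cmdline_extras: >-
            --docker_only
            --disable_metrics=percpu,referenced_memory,cpu_topology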
  28. Jul 07, 2021
      baremetal: use docker_yum_gpgkey to fetch docker GPG key · 54737cd1
      Mark Goddard authored
      Currently, if you override docker_yum_url, the repo must contain a GPG
      key at {{ docker_yum_url }}/gpg, despite the fact that the GPG key URL
      can be overridden separately via docker_yum_gpgkey. This change uses
      docker_yum_gpgkey consistently, avoiding the need to keep the key in the
      repo.
      
      Closes-Bug: #1934913
      Change-Id: If8e6a02ce0760123f7b076c711727ef575965192
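      Example overrides for an internal mirror using the two variables named
      above (URLs are illustrative):
      
          docker_yum_url: "https://mirror.example.com/docker-ce/linux/centos"
          docker_yum_gpgkey: "https://mirror.example.com/docker-ce/gpg"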
      Remove tempest role · 52619984
      wu.chunyang authored
      Remove tempest role as planned
      
      Change-Id: If3cf073e88c83f670c867a49afe48845f9e81008
  29. Jul 02, 2021
      Make setup module arguments configurable · 15f2fdcd
      Rafael Weingärtner authored
      
      Ansible facts can have a large impact on the performance of the Ansible
      control host. This patch introduces some control over which facts are
      gathered (kolla_ansible_setup_gather_subset) and which facts are stored
      (kolla_ansible_setup_filter). By default we do not change the default
      values of these arguments to the setup module. The flexibility of these
      arguments is limited, but they do provide enough for a large performance
      improvement in a typical moderate to large OpenStack cloud.
      
      In particular, the large complex dict fact for each interface has a
      large effect, and on an OpenStack controller or hypervisor there may be
      many virtual interfaces. We can use the kolla_ansible_setup_filter
      variable to help:
      
          kolla_ansible_setup_filter: 'ansible_[!qt]*'
      
      This causes Ansible to collect but not store facts matching that
      pattern, which includes the virtual interface facts. Currently we are
      not referencing other facts matching the pattern within Kolla Ansible.
      Note that including the 'ansible_' prefix causes meta facts module_setup
      and gather_subset to be filtered, but this seems to be the only way to
      get a good match on the interface facts. To work around this, we use
      ansible_facts rather than module_setup to detect whether facts exist in
      the cache.
      
      The exact improvement will vary, but has been reported to be as large as
      18x on systems with many virtual interfaces.
      
      For reference, here are some other tunings tried:
      
      * Increased the number of forks (great speedup depending on the size of
        the deployment)
      * Use `strategy = mitogen_linear` (cut processing time in half)
      * Ansible caching (little speedup)
      * SSH tuning (little speedup)
      
      Co-Authored-By: Mark Goddard <mark@stackhpc.com>
      Closes-Bug: #1921538
      Change-Id: Iae8ca4aae945892f1dc65e1b10381d2e26e88805
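      A sketch combining the two new variables in globals.yml; the filter pattern
      is the one shown above, while the gather_subset value is only an illustrative
      example of the setup module's subset syntax:
      
          kolla_ansible_setup_gather_subset: "all,!facter,!ohai"
          # do not store ansible_q*/ansible_t* (virtual interface) facts
          kolla_ansible_setup_filter: "ansible_[!qt]*"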