  1. Oct 04, 2021
      Add missing CloudKitty documentation. · d5aa73c4
      Gaël THEROND (Fl1nt) authored
      * Fix various typos and formatting.
      * Add documentation about custom collector backend.
      * Add documentation about custom storage backend.
      
      Change-Id: If937afc5ce2a2747f464fbaf38a5dcf2e57ba04f
      Closes-bug: #1940842
  2. Sep 28, 2021
      Transition Keystone admin user to system scope · 2e933dce
      Niklas Hagman authored
      A system-scoped token implies the user has authorization to act on
      the deployment system. These tokens are useful for interacting with
      resources that affect the deployment as a whole, or that expose
      resources which may otherwise violate project or domain isolation.
      
      Since Queens, the keystone-manage bootstrap command assigns the admin
      role to the admin user with system scope, as well as in the admin
      project. This patch transitions the Keystone admin user from
      authenticating using project scoped tokens to system scoped tokens.
      This is a necessary step towards being able to enable the updated oslo
      policies in services that allow finer grained access to system-level
      resources and APIs.
      
      An etherpad with discussion about the transition to the new oslo
      service policies is:
      
      https://etherpad.opendev.org/p/enabling-system-scope-in-kolla-ansible
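
      With system scope, the admin user authenticates with a system
      scope instead of a project scope, e.g. (keystoneauth environment
      variables, shown for illustration):

          export OS_SYSTEM_SCOPE=all
          # replaces OS_PROJECT_NAME / OS_PROJECT_DOMAIN_NAME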
      
      Change-Id: Ib631e2211682862296cce9ea179f2661c90fa585
      Signed-off-by: Niklas Hagman <ubuntu@post.blinkiz.com>
  3. Sep 26, 2021
      Add way to change weight of haproxy backend per service · 7c2b4bea
      Michal Arbet authored
      This patch adds an option to control the weight of HAProxy
      backends per service via a host variable.
      
      Example:
      
      [control]
      server1 haproxy_nova_api_weight=10
      server2 haproxy_nova_api_weight=2 haproxy_keystone_internal_weight=10
      server3 haproxy_keystone_admin_weight=50
      
      If a weight is not defined, everything works as before.
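
      With these weights, the rendered HAProxy backend would look
      roughly like this (a sketch; addresses, ports and options are
      illustrative, the real values are generated by Kolla Ansible's
      templates):

          backend nova_api_back
              mode http
              server server1 192.0.2.1:8774 check weight 10
              server server2 192.0.2.2:8774 check weight 2
              server server3 192.0.2.3:8774 check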
      
      Change-Id: Ie8cc228198651c57f8ffe3eb060875e45d1f0700
  4. Aug 25, 2021
      docs: Add placeholder page for CI & testing information · d8641e90
      Mark Goddard authored
      Change-Id: Iebcac0827c6f715c6b804223cdcf2cc2e425120b
      Add kolla-ansible gather-facts command · d9a37589
      Mark Goddard authored
      In some situations it may be helpful to populate the fact cache on
      demand. The 'kolla-ansible gather-facts' command may be used to do this.
      
      One specific case where this may be helpful is when running kolla-ansible
      with a --limit argument, since in that case hosts that match the limit
      will gather facts for hosts that fall outside the limit. In the extreme
      case of a limit that matches only one host, it will serially gather
      facts for all other hosts. To avoid this issue, run 'kolla-ansible
      gather-facts' without a limit to populate the fact cache in parallel
      before running the required command with a limit.
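
      For example (inventory name and host pattern are illustrative):

          # Populate the fact cache for all hosts, in parallel.
          kolla-ansible gather-facts -i multinode
          # Then run the targeted operation using the cached facts.
          kolla-ansible deploy -i multinode --limit controller0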
      
      Change-Id: I79db9bca23aa1bd45bafa7e7500a90de5a684593
  5. Aug 17, 2021
      Update Manila deploy steps for Wallaby · 8d5dde37
      Skylar Kelty authored
      Manila has changed from using subfolders to subvolumes. We need a
      bit of a tidy-up to prevent deploy errors. This change also adds
      the ability to specify the CephFS filesystem Manila uses, instead
      of relying on the default "first found".
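
      For example, in globals.yml (a sketch; the variable name is
      assumed from this change, and the filesystem name is
      illustrative):

          manila_cephfs_filesystem_name: "manila_cephfs"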
      
      Closes-Bug: #1938285
      Closes-Bug: #1935784
      Change-Id: I1d0d34919fbbe74a4022cd496bf84b8b764b5e0f
  6. Aug 06, 2021
      Extra var ironic_enable_keystone_integration added. · da4fd2d6
      Ilya Popov authored
      Basically, there are three main installation scenarios:

      Scenario 1:
      Ironic installation together with other OpenStack services,
      including Keystone. In this case the variable enable_keystone
      is set to true, and the Keystone service is installed together
      with Ironic. This scenario is already possible; no fix is
      needed.

      Scenario 2:
      Ironic installation connecting to an already installed
      Keystone. In this scenario we have to set enable_keystone to
      "no" to prevent a new Keystone service from being installed
      during the Ironic installation. On the other hand, we need the
      correct sections in ironic.conf to provide all the information
      needed to connect to the existing Keystone. But the Keystone
      sections are only added to ironic.conf if the enable_keystone
      variable is set to "yes", so this scenario is not possible. The
      proposed fix adds support for it, allowing multiple regions to
      share the same Keystone service.

      Scenario 3:
      No Keystone integration; Ironic does not connect to Keystone.
      This scenario is already possible; no fix is needed.

      The proposed solution also keeps the default behaviour: if
      ironic_enable_keystone_integration is not defined manually, it
      takes the value of the enable_keystone variable and behaviour
      is unchanged. But if we do not want to install Keystone and
      want to connect to an existing one at the same time, we can set
      enable_keystone to "no" (preventing Keystone installation) and
      at the same time set ironic_enable_keystone_integration to
      "yes" so that the needed sections appear in ironic.conf through
      templating, as shown below.
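
      For Scenario 2, the combination in globals.yml would look like
      this (a sketch):

          enable_keystone: "no"
          ironic_enable_keystone_integration: "yes"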
      
      Change-Id: I0c7e9a28876a1d4278fb2ed8555c2b08472864b9
  7. Jul 29, 2021
      Support multiple inventories · 6c72fa81
      Will Szumski authored
      Multiple inventories can now be passed to `kolla-ansible`.  This can be
      useful to construct a common inventory that is shared between multiple
      environments.
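
      For example (inventory file names are illustrative):

          kolla-ansible deploy -i common-inventory -i site-inventory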
      
      Change-Id: I2ac5d7851b310bea2ba362b353f18c592a0a6a2e
  8. Jul 28, 2021
      Use more RMQ flags for less busy wait · d7cdad53
      Radosław Piliszek authored
      As mentioned in the Iced014acee7e590c10848e73feca166f48b622dc
      commit message, in Ussuri+ we can use ``+sbwtdcpu none
      +sbwtdio none`` as well. This is because we now rely on the
      RabbitMQ-provided Erlang, version 23.x.
      
      This change adds the extra arguments by default.
      It should be backported down to Ussuri before we do a release with
      Iced014acee7e590c10848e73feca166f48b622dc.
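
      Combined with the existing flags, the scheduler-related Erlang VM
      arguments then become (a sketch):

          +S 2:2 +sbwt none +sbwtdcpu none +sbwtdio none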
      
      Change-Id: I32e247a6cb34d7f6763b544f247fd408dce2b3a2
      nova: Use cinder user for Ceph · c3f9ba83
      Mark Goddard authored
      In Ussuri, nova stopped using separate Ceph keys for the volumes and vms
      pools by default. Instead, we set ceph_nova_keyring to the value of
      ceph_cinder_keyring by default, which is ceph.client.cinder.keyring.
      This is in line with the Ceph OpenStack integration guide [1]. However,
      the user used by nova to access the vms pool (ceph_nova_user) defaults
      to nova, meaning that nova will still try to use a
      ceph.client.nova.keyring, which probably does not exist. We did not see
      this issue in CI, because we set ceph_nova_user to cinder.
      
      This change fixes the issue by setting ceph_nova_user to the value of
      ceph_cinder_user by default, which is cinder.
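
      Deployments that do use a dedicated Ceph user and keyring for
      nova can keep the old behaviour by overriding the defaults, e.g.
      in globals.yml:

          ceph_nova_user: "nova"
          ceph_nova_keyring: "ceph.client.nova.keyring"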
      
      Closes-Bug: #1934145
      Related-Bug: #1928690
      
      [1] https://docs.ceph.com/en/latest/rbd/rbd-openstack/
      
      Change-Id: I6aa8db2214e07906f1f3e035411fc80ba911a274
  9. Jul 22, 2021
      ironic: always enable conductor HTTP server · 411668ea
      Mark Goddard authored
      In the Xena release, Ironic removed the iSCSI driver [1]. The
      recommended driver is direct, which uses HTTP to transfer the disk
      image. This requires an HTTP server, and the simplest option is to use
      the one currently deployed when enable_ironic_ipxe is set to true. For
      this reason, this patch always enables the HTTP server running on the
      conductor.
      
      iPXE is still enabled separately, since it cannot currently be used at
      the same time as PXE.
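
      For reference, the direct deploy interface is selected in
      ironic.conf along these lines (standard Ironic configuration,
      shown for context rather than taken from this patch):

          [DEFAULT]
          enabled_deploy_interfaces = direct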
      
      [1] https://review.opendev.org/c/openstack/ironic/+/789382
      
      Change-Id: I30c2ad2bf2957ac544942aefae8898cdc8a61ec6
  10. Jul 21, 2021
      Fix variable names in Octavia documentation · 5e85fe2a
      Pierre Riteau authored
      The variable octavia_amphora_flavor should be octavia_amp_flavor.

      The variables for customising the network and subnet were only
      mentioned in the example.
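
      For example, in globals.yml (property values are illustrative;
      the dict structure is assumed from the Kolla Ansible defaults):

          octavia_amp_flavor:
            name: "amphora"
            is_public: no
            vcpus: 1
            ram: 1024
            disk: 5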
      
      Change-Id: I3ba5a7ccc2c810fea12bc48584c064738e5aa35e
  11. Jul 02, 2021
      Make setup module arguments configurable · 15f2fdcd
      Rafael Weingärtner authored
      
      Ansible facts can have a large impact on the performance of the Ansible
      control host. This patch introduces some control over which facts are
      gathered (kolla_ansible_setup_gather_subset) and which facts are stored
      (kolla_ansible_setup_filter). By default we do not change the default
      values of these arguments to the setup module. The flexibility of these
      arguments is limited, but they do provide enough for a large performance
      improvement in a typical moderate to large OpenStack cloud.
      
      In particular, the large complex dict fact for each interface has a
      large effect, and on an OpenStack controller or hypervisor there may be
      many virtual interfaces. We can use the kolla_ansible_setup_filter
      variable to help:
      
          kolla_ansible_setup_filter: 'ansible_[!qt]*'
      
      This causes Ansible to collect but not store facts matching that
      pattern, which includes the virtual interface facts. Currently we are
      not referencing other facts matching the pattern within Kolla Ansible.
      Note that including the 'ansible_' prefix causes meta facts module_setup
      and gather_subset to be filtered, but this seems to be the only way to
      get a good match on the interface facts. To work around this, we use
      ansible_facts rather than module_setup to detect whether facts exist in
      the cache.
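
      Fact gathering can be restricted in a similar way (subset values
      are illustrative; see the Ansible setup module documentation for
      valid subsets):

          kolla_ansible_setup_gather_subset: '!facter,!ohai'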
      
      The exact improvement will vary, but has been reported to be as large as
      18x on systems with many virtual interfaces.
      
      For reference, here are some other tunings tried:

      * Increasing the number of forks (a great speedup, depending on
        the size of the deployment)
      * Using `strategy = mitogen_linear` (cut processing time in half)
      * Ansible caching (a little speedup)
      * SSH tuning (a little speedup)
      
      Co-Authored-By: Mark Goddard <mark@stackhpc.com>
      Closes-Bug: #1921538
      Change-Id: Iae8ca4aae945892f1dc65e1b10381d2e26e88805
      Add disable_firewall variable · 9fffc7bc
      Mark Goddard authored
      Adds a new variable, 'disable_firewall', which defaults to true. If set
      to false, then the host firewall will not be disabled during
      kolla-ansible bootstrap-servers.
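
      For example, to keep the host firewall enabled during
      bootstrap-servers:

          disable_firewall: false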
      
      Change-Id: Ie5131013012f89c8c3b91ca359ad17d9cb77efc8
  12. Jun 07, 2021
      Reduce RabbitMQ busy waiting, lowering CPU load · 70f6f8e4
      John Garbutt authored
      On machines with many cores, we were seeing excessive CPU load on systems
      that were not very busy. With the following Erlang VM argument we saw
      RabbitMQ CPU usage drop from about 150% to around 20%, on a system with
      40 hyperthreads.
      
          +S 2:2
      
      By default RabbitMQ starts N schedulers where N is the number of CPU
      cores, including hyper-threaded cores. This is fine when you assume all
      your CPUs are dedicated to RabbitMQ. It's not a good idea in a typical
      Kolla Ansible setup. Here we go for two scheduler threads.
      More details can be found here:
      https://www.rabbitmq.com/runtime.html#scheduling
      and here:
      https://erlang.org/doc/man/erl.html#emulator-flags
      
          +sbwt none
      
      This stops busy waiting of the scheduler, for more details see:
      https://www.rabbitmq.com/runtime.html#busy-waiting
      Newer versions of RabbitMQ may need additional flags:
      "+sbwt none +sbwtdcpu none +sbwtdio none"
      But this patch should be backportable to the older versions of
      RabbitMQ used in Train and Stein.
      
      Note that information on this tuning was found by looking at data
      from:

          rabbitmq-diagnostics runtime_thread_stats

      More details on that can be found here:
      https://www.rabbitmq.com/runtime.html#thread-stats
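
      In Kolla Ansible, such arguments are passed to the Erlang VM via
      the RabbitMQ environment configuration, roughly like (a sketch;
      the exact mechanism lives in the rabbitmq role templates):

          RABBITMQ_SERVER_ADDITIONAL_ERL_ARGS="+S 2:2 +sbwt none"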
      
      Related-Bug: #1846467
      
      Change-Id: Iced014acee7e590c10848e73feca166f48b622dc