  1. Feb 02, 2022
      Deploy Zun with Cinder Ceph support · eb7e0f6f
      Buddhika Sanjeewa authored
      Enables zun to access cinder volumes when cinder is configured to use
      external ceph.
      Copies the ceph config file and the ceph cinder keyring to
      /etc/ceph in the zun_compute container.
      
      Closes-Bug: 1848934
      Change-Id: Ie56868d5e9ed37a9274b8cbe65895f3634b895c8
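      A minimal globals.yml sketch of the deployment this targets
      (these are standard kolla-ansible variables; the combination
      shown is an assumption for illustration):

          enable_zun: "yes"
          enable_cinder: "yes"
          # Cinder backed by an external Ceph cluster; with this
          # change the ceph.conf and cinder keyring are also copied
          # into /etc/ceph of the zun_compute container.
          cinder_backend_ceph: "yes"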
  2. Jan 31, 2022
  3. Jan 20, 2022
  4. Jan 09, 2022
      OpenID Connect certificate file is optional · 78f29fdc
      Stig Telfer authored
      Some ID provider configurations do not require a certificate file.
      Change the logic to allow this, and update documentation accordingly.
      
      Change-Id: I2c34a6b5894402bbebeb3fb96768789bc3c7fe84
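      A hedged globals.yml sketch of a provider entry without a
      certificate file (the key names are assumptions based on the
      kolla-ansible federation docs; all values are hypothetical):

          keystone_identity_providers:
            - name: "myidp"
              protocol: "openid"
              identifier: "https://idp.example.com/realms/example"
              public_name: "Example IdP"
              attribute_mapping: "mapping_myidp_openid"
              metadata_folder: "/etc/kolla/config/keystone/federation/oidc/metadata"
              # certificate_file omitted: this provider does not need one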
  5. Dec 23, 2021
  6. Dec 20, 2021
      [docs] Mark init-runonce properly · 1c93c8ea
      Radosław Piliszek authored
      This is a docs amendment to let users know that calling
      init-runonce is not a required deployment step, and that it may
      not work for them if they have modified the defaults.
      
      Change-Id: Ia3922b53d91a1a820447fec6a8074b941edc2ee9
  7. Nov 25, 2021
  8. Oct 22, 2021
  9. Oct 20, 2021
  10. Oct 12, 2021
  11. Oct 06, 2021
  12. Oct 04, 2021
      Add missing CloudKitty documentation. · d5aa73c4
      Gaël THEROND (Fl1nt) authored
      * Fix various typos and formatting.
      * Add documentation about custom collector backend.
      * Add documentation about custom storage backend.
      
      Change-Id: If937afc5ce2a2747f464fbaf38a5dcf2e57ba04f
      Closes-Bug: #1940842
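      For context, a globals.yml sketch of the non-default backends
      the new documentation covers (cloudkitty_collector_backend and
      cloudkitty_storage_backend are existing kolla-ansible
      variables; the values are illustrative):

          # Collect metrics from Prometheus instead of the default
          cloudkitty_collector_backend: "prometheus"
          # Store rated data in Elasticsearch v2 storage
          cloudkitty_storage_backend: "elasticsearch"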
  13. Sep 30, 2021
  14. Sep 26, 2021
      Add way to change weight of haproxy backend per service · 7c2b4bea
      Michal Arbet authored
      This patch adds an option to control the weight of haproxy
      backends per service via host variables.
      
      Example:
      
      [control]
      server1 haproxy_nova_api_weight=10
      server2 haproxy_nova_api_weight=2 haproxy_keystone_internal_weight=10
      server3 haproxy_keystone_admin_weight=50
      
      If no weight is defined, everything works as before.
      
      Change-Id: Ie8cc228198651c57f8ffe3eb060875e45d1f0700
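      The same weights can also live in host_vars; a minimal sketch
      assuming the host names from the inventory example above:

          # host_vars/server1.yml (hypothetical path)
          haproxy_nova_api_weight: 10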
  15. Sep 16, 2021
  16. Aug 20, 2021
  17. Aug 17, 2021
      Update Manila deploy steps for Wallaby · 8d5dde37
      Skylar Kelty authored
      Manila has changed from using subfolders to subvolumes.
      This needs a bit of tidying up to prevent deploy errors.
      This change also adds the ability to specify the CephFS
      filesystem Manila uses, instead of relying on the default of
      the first one found.
      
      Closes-Bug: #1938285
      Closes-Bug: #1935784
      Change-Id: I1d0d34919fbbe74a4022cd496bf84b8b764b5e0f
  18. Aug 06, 2021
      Extra var ironic_enable_keystone_integration added. · da4fd2d6
      Ilya Popov authored
      Basically, there are three main installation scenarios:

      Scenario 1:
      Ironic is installed together with other OpenStack services,
      including Keystone. In this case the variable enable_keystone
      is set to true and the Keystone service is installed alongside
      Ironic. This scenario already works; no fix is needed.

      Scenario 2:
      Ironic is installed against an already existing Keystone. In
      this scenario we have to set enable_keystone to "no" to prevent
      a new Keystone service from being installed during the Ironic
      installation. On the other hand, ironic.conf still needs the
      sections that provide all the information required to connect
      to the existing Keystone, but those sections are only added to
      ironic.conf when enable_keystone is set to "yes". This scenario
      was therefore impossible to realise. The proposed fix adds
      support for it, for example where multiple regions share the
      same Keystone service.

      Scenario 3:
      No Keystone integration; Ironic does not connect to Keystone.
      This scenario already works; no fix is needed.

      The proposed solution also keeps the default behaviour: if
      ironic_enable_keystone_integration is not defined manually, it
      takes the value of the enable_keystone variable and everything
      behaves as before. But if we do not want to install Keystone
      and want to connect to an existing one at the same time, we can
      now set enable_keystone to "no" (preventing Keystone
      installation) and at the same time set
      ironic_enable_keystone_integration to "yes" so that the needed
      sections appear in ironic.conf through templating, as in the
      sketch below.
      
      Change-Id: I0c7e9a28876a1d4278fb2ed8555c2b08472864b9
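      A globals.yml sketch for Scenario 2 (enable_keystone is an
      existing variable; ironic_enable_keystone_integration is the
      variable this change introduces):

          # Do not deploy a new Keystone...
          enable_keystone: "no"
          # ...but still template the Keystone sections into
          # ironic.conf, pointing at the existing shared Keystone.
          ironic_enable_keystone_integration: "yes"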
  19. Aug 05, 2021
  20. Jul 28, 2021
      Use more RMQ flags for less busy wait · d7cdad53
      Radosław Piliszek authored
      As mentioned in the Iced014acee7e590c10848e73feca166f48b622dc
      commit message, in Ussuri+ we can use ``+sbwtdcpu none
      +sbwtdio none`` as well. This is because we now rely on the
      RMQ-provided Erlang, version 23.x.
      
      This change adds the extra arguments by default.
      It should be backported down to Ussuri before we do a release with
      Iced014acee7e590c10848e73feca166f48b622dc.
      
      Change-Id: I32e247a6cb34d7f6763b544f247fd408dce2b3a2
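      A sketch of the resulting setting, assuming it is exposed via
      the rabbitmq_server_additional_erl_args variable in
      globals.yml:

          rabbitmq_server_additional_erl_args: "+S 2:2 +sbwt none +sbwtdcpu none +sbwtdio none"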
      nova: Use cinder user for Ceph · c3f9ba83
      Mark Goddard authored
      In Ussuri, nova stopped using separate Ceph keys for the volumes and vms
      pools by default. Instead, we set ceph_nova_keyring to the value of
      ceph_cinder_keyring by default, which is ceph.client.cinder.keyring.
      This is in line with the Ceph OpenStack integration guide [1]. However,
      the user used by nova to access the vms pool (ceph_nova_user) defaults
      to nova, meaning that nova will still try to use a
      ceph.client.nova.keyring, which probably does not exist. We did not see
      this issue in CI, because we set ceph_nova_user to cinder.
      
      This change fixes the issue by setting ceph_nova_user to the value of
      ceph_cinder_user by default, which is cinder.
      
      Closes-Bug: #1934145
      Related-Bug: #1928690
      
      [1] https://docs.ceph.com/en/latest/rbd/rbd-openstack/
      
      Change-Id: I6aa8db2214e07906f1f3e035411fc80ba911a274
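      The fix amounts to the following default (a sketch;
      ceph_nova_user and ceph_cinder_user are existing variables for
      external Ceph integration):

          # nova now accesses the vms pool as the cinder user,
          # matching ceph.client.cinder.keyring.
          ceph_nova_user: "{{ ceph_cinder_user }}"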
  21. Jul 27, 2021
  22. Jul 22, 2021
      ironic: always enable conductor HTTP server · 411668ea
      Mark Goddard authored
      In the Xena release, Ironic removed the iSCSI driver [1]. The
      recommended driver is direct, which uses HTTP to transfer the disk
      image. This requires an HTTP server, and the simplest option is to use
      the one currently deployed when enable_ironic_ipxe is set to true. For
      this reason, this patch always enables the HTTP server running on the
      conductor.
      
      iPXE is still enabled separately, since it cannot currently be used at
      the same time as PXE.
      
      [1] https://review.opendev.org/c/openstack/ironic/+/789382
      
      Change-Id: I30c2ad2bf2957ac544942aefae8898cdc8a61ec6
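      A globals.yml sketch: iPXE stays a separate toggle via the
      enable_ironic_ipxe variable named above, while the conductor's
      HTTP server is now deployed whenever Ironic is enabled:

          enable_ironic: "yes"
          # Optional; no longer required just to get the deploy
          # HTTP server:
          enable_ironic_ipxe: "no"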
  23. Jul 21, 2021
      Fix variable names in Octavia documentation · 5e85fe2a
      Pierre Riteau authored
      The variable octavia_amphora_flavor should be octavia_amp_flavor.

      The variable for customising the network and subnet was
      previously only mentioned in the example.
      
      Change-Id: I3ba5a7ccc2c810fea12bc48584c064738e5aa35e
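      A globals.yml sketch using the corrected variable name (the
      flavor keys and values shown are assumptions for illustration):

          octavia_amp_flavor:
            name: "amphora"
            is_public: no
            vcpus: 1
            ram: 1024
            disk: 5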
  24. Jul 02, 2021
      Add disable_firewall variable · 9fffc7bc
      Mark Goddard authored
      Adds a new variable, 'disable_firewall', which defaults to true. If set
      to false, then the host firewall will not be disabled during
      kolla-ansible bootstrap-servers.
      
      Change-Id: Ie5131013012f89c8c3b91ca359ad17d9cb77efc8
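      A usage sketch in globals.yml (disable_firewall is the
      variable this change adds):

          # Keep the host firewall untouched during
          # kolla-ansible bootstrap-servers:
          disable_firewall: "no"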
  25. Jun 23, 2021
  26. Jun 07, 2021
      Reduce RabbitMQ busy waiting, lowering CPU load · 70f6f8e4
      John Garbutt authored
      On machines with many cores, we were seeing excessive CPU load on systems
      that were not very busy. With the following Erlang VM argument we saw
      RabbitMQ CPU usage drop from about 150% to around 20%, on a system with
      40 hyperthreads.
      
          +S 2:2
      
      By default RabbitMQ starts N schedulers, where N is the number
      of CPU cores, including hyper-threaded cores. This is fine when
      you can assume all your CPUs are dedicated to RabbitMQ, but it
      is not a good idea in a typical Kolla Ansible setup. Here we go
      for two scheduler threads.
      More details can be found here:
      https://www.rabbitmq.com/runtime.html#scheduling
      and here:
      https://erlang.org/doc/man/erl.html#emulator-flags
      
          +sbwt none
      
      This stops busy waiting in the scheduler; for more details, see:
      https://www.rabbitmq.com/runtime.html#busy-waiting
      Newer versions of RabbitMQ may need additional flags:
      "+sbwt none +sbwtdcpu none +sbwtdio none"
      This patch should nevertheless be backportable to the older
      versions of RabbitMQ used in Train and Stein.
      
      Note that information on this tuning was found by looking at data from:
      rabbitmq-diagnostics runtime_thread_stats
      More details on that can be found here:
      https://www.rabbitmq.com/runtime.html#thread-stats
      
      Related-Bug: #1846467
      
      Change-Id: Iced014acee7e590c10848e73feca166f48b622dc
  27. May 17, 2021
  28. May 11, 2021
  29. Apr 27, 2021
  30. Apr 26, 2021
      [doc] fix a typo · fc406d03
      wuchunyang authored
      Trivial Fix
      
      Change-Id: Ie08877e339455bed45ee467a87de9648678e88c5
  31. Apr 19, 2021
      [doc] introduce octavia tenant management network · 3ba06b87
      wuchunyang authored
      Change-Id: I713f6fafe328e060a71dbb584e61603e547deaf6
      Extend support for custom Grafana dashboards · d01192c1
      Doug Szumski authored
      The current behaviour is to support supplying a single
      folder of Grafana dashboards, which is then populated
      into a single folder in Grafana. Some users may wish
      to have sub-folders of dashboards and load these into
      separate dashboard folders in Grafana via a custom
      provisioning file. For example, a user may have a
      sub-folder of Ceph dashboards that they wish to keep
      separate from OpenStack dashboards. This patch supports
      sub-folders whilst not affecting the original mechanism.
      
      Trivial-Fix
      
      Change-Id: I9cd289a1ea79f00cee4d2ef30cbb508ac73f9767
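      A hypothetical layout sketch (the path assumes the usual
      node_custom_config convention), where each sub-folder maps to
      its own Grafana folder via a custom provisioning file:

          {{ node_custom_config }}/grafana/dashboards/
              openstack/nova-dashboard.json
              ceph/ceph-cluster-dashboard.json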
  32. Apr 07, 2021
  33. Apr 06, 2021
  34. Mar 26, 2021