  1. Nov 14, 2023
    • Introduce oneshot docker_restart_policy · cea076f3
      Michal Nasiadka authored
      ``docker_restart_policy: no`` causes systemd units not to be
      created, and we use it in CI to disable restarts on services.
      
      Introduce a ``oneshot`` policy that does not create systemd units
      for oneshot containers (those running bootstrap tasks, such as db
      bootstrap, which do not need a systemd unit), but still creates
      systemd units for long-lived containers, with Restart=No.
      
      Change-Id: I9e0d656f19143ec2fcad7d6d345b2c9387551604
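      A minimal sketch of how this might look in ``/etc/kolla/globals.yml``
      (the variable name is from the message above; treating ``oneshot`` as
      an accepted value there is an assumption)::

          # CI: no restarts for long-lived services (units get Restart=No),
          # and no systemd units at all for one-shot bootstrap containers.
          docker_restart_policy: "oneshot"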
  2. Apr 19, 2023
    • Trivial fix - add int filter for rabbitmq definitions · d1b24a41
      Michal Arbet authored
      Change-Id: I1d8021a1bc780449e3ca96183c6f4abaed17b382
    • Add precheck to fail if RabbitMQ HA needs configuring · a5331d32
      Matt Crees authored
      Currently, the process of enabling RabbitMQ HA with the variable
      ``om_enable_rabbitmq_high_availability`` requires some manual steps to
      migrate from transient to mirrored queues. In preparation for setting
      this variable to ``True`` by default, this adds a precheck that will
      fail if a system is currently running non-mirrored queues and
      ``om_enable_rabbitmq_high_availability`` is set to ``True``.
      
      Includes a helpful message informing the operator of their choice:
      either follow the manual procedure to migrate the queues described in
      the docs, or set ``om_enable_rabbitmq_high_availability`` to ``False``.
      
      The RabbitMQ HA section of the reference docs is updated to include
      these instructions.
      
      Change-Id: Ic5e64998bd01923162204f7bb289cc110187feec
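      A rough sketch of such a precheck (task layout, container name and
      the policy check are illustrative assumptions, not the patch's code)::

          - name: List RabbitMQ policies
            ansible.builtin.command: >
              docker exec rabbitmq rabbitmqctl list_policies --formatter json
            register: rabbitmq_policies
            changed_when: false

          - name: Fail if RabbitMQ HA needs configuring
            ansible.builtin.fail:
              msg: >-
                Mirrored queues are not configured. Either follow the
                migration procedure in the docs, or set
                om_enable_rabbitmq_high_availability to False.
            when:
              - om_enable_rabbitmq_high_availability | bool
              - "'ha-all' not in rabbitmq_policies.stdout"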
  3. Apr 13, 2023
    • Remove RabbitMQ ha-all policy when not required · c85b64d1
      Matt Crees authored
      With the addition of the variable
      `om_enable_rabbitmq_high_availability`, this feature is brought back
      in the upgrade task, and is now also used in the deploy task. The
      `ha-all` policy is cleared only when
      `om_enable_rabbitmq_high_availability` is set to `false`.
      
      Change-Id: Ia056aa40e996b1f0fed43c0f672466c7e4a2f547
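      The shape of the change, roughly (a hedged sketch, not the actual
      task)::

          - name: Remove ha-all policy when HA is disabled
            ansible.builtin.command: >
              docker exec rabbitmq rabbitmqctl clear_policy ha-all
            when: not om_enable_rabbitmq_high_availability | bool
            failed_when: false  # policy may already be absent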
  4. Mar 06, 2023
    • rabbitmq: add rabbitmq_datadir_volume parameter · a7812741
      Christian Berendt authored
      With the parameter rabbitmq_datadir_volume it is possible
      to use a directory as the volume for the rabbitmq service. By
      default, a Docker volume named rabbitmq is used (as before).
      
      Change-Id: I99d6bd71ca79cba81062dedfb767c5ed341bb182
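      For example, in ``/etc/kolla/globals.yml`` (the host path is
      illustrative)::

          # Default: a named Docker volume called "rabbitmq".
          # To use a bind-mounted host directory instead:
          rabbitmq_datadir_volume: "/var/lib/rabbitmq-data"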
  5. Feb 14, 2023
    • Improve RabbitMQ performance by reducing ha replicas · 6cf22b0c
      John Garbutt authored
      Currently we do not follow the RabbitMQ advice on replicas here:
      https://www.rabbitmq.com/ha.html#replication-factor
      
      Here we reduce the number of replicas to n // 2 + 1, as advised
      above. The hope is that this helps speed up recovery from rabbit
      issues.
      
      Related-Bug: #1954925
      Change-Id: Ib6bcb26c499c9884faa4a0cd51abaec00cacb096
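      For example, with 3 nodes this gives 3 // 2 + 1 = 2 replicas, and
      with 5 nodes, 3 replicas. As a Jinja expression over the inventory
      (group name assumed for illustration)::

          rabbitmq_ha_replica_count: "{{ (groups['rabbitmq'] | length) // 2 + 1 }}"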
    • Add flag to change RabbitMQ ha-mode definition · e13072a9
      Matt Crees authored
      Adds the flag `rabbitmq_ha_replica_count` to change how many different
      nodes a queue should be mirrored across. If the value is not set, then
      it defaults to "ha-mode":"all". This value is unset by default to avoid
      any unexpected changes to the RabbitMQ definitions.json file, as that
      would trigger an unexpected restart of RabbitMQ during the next deploy.
      
      Change-Id: Iee98cd937197a73a3b04aa8501fa325e8ecfff24
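      Setting the flag in ``/etc/kolla/globals.yml`` might look like this;
      in RabbitMQ policy terms this corresponds to ``"ha-mode": "exactly"``
      with ``"ha-params"`` set to the count (a sketch)::

          rabbitmq_ha_replica_count: 2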
  6. Feb 09, 2023
    • RabbitMQ: Support setting ha-promote-on-shutdown · 94f3ce0c
      John Garbutt authored
      By default, ha-promote-on-shutdown=when-synced. However, we are seeing
      issues where RabbitMQ fails to recover automatically when nodes are
      restarted:
      https://www.rabbitmq.com/ha.html#cluster-shutdown
      
      Rather than waiting for operator intervention, it is better to allow
      recovery to happen, even if that means we may lose some messages.
      A few failed and timed-out operations are better than a totally broken
      cloud. This is achieved using ha-promote-on-shutdown=always.
      
      Note, when a node failure is detected, this is already the default
      behaviour from 3.7.5 onwards:
      https://www.rabbitmq.com/ha.html#promoting-unsynchronised-mirrors
      
      This patch adds the option to change the ha-promote-on-shutdown
      definition, using the flag `rabbitmq_ha_promote_on_shutdown`. This
      value is unset by default to avoid any unexpected changes to the
      RabbitMQ definitions.json file, as that would trigger an unexpected
      restart of RabbitMQ during the next deploy.
      
      Related-Bug: #1954925
      
      Change-Id: I2146bda2c72ddac2c9923c6941b0596395fd9ab5
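      A sketch of opting in via ``/etc/kolla/globals.yml``::

          # Allow promotion of unsynchronised mirrors on controlled
          # shutdown, trading possible message loss for auto-recovery.
          rabbitmq_ha_promote_on_shutdown: "always"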
  7. Jan 17, 2023
    • Add ability to configure rabbitmq · 701dc20f
      Michal Arbet authored
      As rabbitmq's configuration file is neither an ini nor a yaml file,
      there is no way to extend the configuration with new config
      options via merge_configs or merge_yaml.
      
      This patch moves the config options into a dictionary
      so they can be overridden in /etc/kolla/globals.yml.
      
      Change-Id: I5cd772f4fb80a0e200fb24d67be735ca81e3fdeb
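      A hedged sketch of such an override in ``/etc/kolla/globals.yml``,
      assuming the dictionary is exposed under a name like
      ``rabbitmq_extra_config`` (the patch's actual variable name may
      differ)::

          rabbitmq_extra_config:
            # keys map to rabbitmq.conf options
            cluster_partition_handling: "pause_minority"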
  8. Jan 13, 2023
    • Add a flag to handle RabbitMQ high availability · 09df6fc1
      Matt Crees authored
      A combination of durable queues and classic queue mirroring can be used
      to provide high availability of RabbitMQ. However, these options should
      only be used together, otherwise the system will become unstable. Using
      the flag ``om_enable_rabbitmq_high_availability`` will either enable
      both options at once, or neither of them.
      
      There are some queues that should not be mirrored:
      * ``reply`` queues (these have a single consumer and TTL policy)
      * ``fanout`` queues (these have a TTL policy)
      * ``amq`` queues (these are auto-delete queues, with a single consumer)
      
      An exclusionary pattern is used in the classic mirroring policy. This
      pattern is ``^(?!(amq\\.)|(.*_fanout_)|(reply_)).*``
      
      Change-Id: I51c8023b260eb40b2eaa91bd276b46890c215c25
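      Enabling it is then a single switch in ``/etc/kolla/globals.yml``
      (a sketch)::

          # Turns on both durable queues and classic queue mirroring;
          # the mirroring policy excludes reply_, _fanout_ and amq.
          # queues via the pattern quoted above.
          om_enable_rabbitmq_high_availability: true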
  9. Jan 12, 2023
    • Fix prechecks in check mode · 46aeb984
      Mark Goddard authored
      When running in check mode, some prechecks previously failed because
      they use the command module, which is silently skipped in check mode.
      Other prechecks were not running correctly in check mode due to e.g.
      looking for a string in empty command output or not querying which
      containers are running.
      
      This change fixes these issues.
      
      Closes-Bug: #2002657
      Change-Id: I5219cb42c48d5444943a2d48106dc338aa08fa7c
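      The usual Ansible idiom for this kind of fix (a generic sketch, not
      the patch itself) is to force read-only commands to run even under
      ``--check`` and to guard against empty output::

          - name: Check whether the rabbitmq container is running
            ansible.builtin.command: docker ps -q --filter name=rabbitmq
            register: container_check
            changed_when: false
            check_mode: false  # run even in check mode, else stdout is empty

          - name: Fail if the container is missing
            ansible.builtin.fail:
              msg: rabbitmq container is not running
            when: container_check.stdout == ""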
  10. Dec 21, 2022
    • Integrate oslo-config-validator · 6c2aace8
      Matt Crees authored
      Regularly, we experience issues in Kolla Ansible deployments because we
      use wrong options in OpenStack configuration files. This is because
      OpenStack services ignore unknown options. We also need to keep on top
      of deprecated options that may be removed in the future. Integrating
      oslo-config-validator into Kolla Ansible will greatly help.
      
      Adds a shared role to run oslo-config-validator on each service. Takes
      into account that services have multiple containers, and these may also
      use multiple config files. Service roles are extended to use this shared
      role. Executed with the new command ``kolla-ansible validate-config``.
      
      Change-Id: Ic10b410fc115646d96d2ce39d9618e7c46cb3fbc
  11. Jul 25, 2022
    • Fix var-spacing · dcf5a8b6
      Michal Nasiadka authored
      ansible-lint introduced var-spacing - let's fix our code.
      
      Change-Id: I0d8aaf3c522a5a6a5495032f6dbed8a2be0251f0
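      The rule requires spaces just inside Jinja braces, for example
      (variable names illustrative)::

          # flagged by ansible-lint var-spacing
          enable_feature: "{{base_flag|bool}}"
          # fixed
          enable_feature: "{{ base_flag | bool }}"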
  12. Feb 21, 2022
    • Remove classic queue mirroring for internal RabbitMQ · 6bfe1927
      Doug Szumski authored
      When OpenStack is deployed with Kolla-Ansible, by default there
      are no durable queues or exchanges created by the OpenStack
      services in RabbitMQ. In Rabbit terminology, not being durable
      is referred to as `transient`, and this means that the queue
      is generally held in memory.
      
      Whether OpenStack services create durable or transient queues is
      traditionally controlled by the Oslo Notification config option:
      `amqp_durable_queues`. In Kolla-Ansible, this remains set to
      the default of `False` in all services. The only `durable`
      objects are the `amq*` exchanges which are internal to RabbitMQ.
      
      More recently, Oslo Notification has introduced support for
      Quorum queues [7]. These are a successor to durable classic
      queues; however, it isn't yet clear if they are a good fit for
      OpenStack in general [8].
      
      For clustered RabbitMQ deployments, Kolla-Ansible configures all
      queues as `replicated` [1]. Replication occurs over all nodes
      in the cluster. RabbitMQ refers to this as 'mirroring of classic
      queues'.
      
      In summary, this means that a multi-node Kolla-Ansible deployment
      will end up with a large number of transient, mirrored queues
      and exchanges. However, the RabbitMQ documentation warns against
      this, stating that 'For replicated queues, the only reasonable
      option is to use durable queues' [2]. This is discussed
      further in the following bug report: [3].
      
      Whilst we could try enabling the `amqp_durable_queues` option
      for each service (this is suggested in [4]), there are
      a number of complexities with this approach, not limited to:
      
      1) RabbitMQ is planning to remove classic queue mirroring in
         favor of 'Quorum queues' in a forthcoming release [5].
      2) Durable queues will be written to disk, which may cause
         performance problems at scale. Note that this includes
         Quorum queues which are always durable.
      3) Potential for race conditions and other complexity
         discussed recently on the mailing list under:
         `[ops] [kolla] RabbitMQ High Availability`
      
      The remaining option, proposed here, is to use classic
      non-mirrored queues everywhere, and rely on services to recover
      if the node hosting a queue or exchange they are using fails.
      There is some discussion of this approach in [6]. The downside
      of potential message loss needs to be weighed against the real
      upsides of increasing the performance of RabbitMQ, and moving
      to a configuration which is officially supported and hopefully
      more stable. In the future, we can then consider promoting
      specific queues to quorum queues, in cases where message loss
      can result in failure states which are hard to recover from.
      
      [1] https://www.rabbitmq.com/ha.html
      [2] https://www.rabbitmq.com/queues.html
      [3] https://github.com/rabbitmq/rabbitmq-server/issues/2045
      [4] https://wiki.openstack.org/wiki/Large_Scale_Configuration_Rabbit
      [5] https://blog.rabbitmq.com/posts/2021/08/4.0-deprecation-announcements/
      [6] https://fuel-ccp.readthedocs.io/en/latest/design/ref_arch_1000_nodes.html#replication
      [7] https://bugs.launchpad.net/oslo.messaging/+bug/1942933
      [8] https://www.rabbitmq.com/quorum-queues.html#use-cases
      
      Partial-Bug: #1954925
      Change-Id: I91d0e23b22319cf3fdb7603f5401d24e3b76a56e
  13. Jan 09, 2022
    • Support enable/disable rabbitmq prometheus plugins · 1f3dcce5
      LinPeiWen authored
      Starting from 3.8.0, rabbitmq has built-in Prometheus support and
      the prometheus plugins are enabled by default. When the environment
      sets ``enable_prometheus`` to ``no``, the rabbitmq role will disable
      the prometheus plugins.
      
      Closes-Bug: #1885106
      
      Change-Id: I4d694d6224c813285d228d6bc7eece5731db1078
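      In ``/etc/kolla/globals.yml`` this is driven by the existing switch
      (a sketch)::

          # With this set, the rabbitmq role disables the built-in
          # Prometheus plugin (present since RabbitMQ 3.8.0).
          enable_prometheus: "no"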
  14. Dec 31, 2021
    • Move project_name and kolla_role_name to role vars · 56fc74f2
      Pierre Riteau authored
      Role vars have a higher precedence than role defaults. This allows
      importing default vars from another role via vars_files without
      overriding project_name (see related bug for details).
      
      Change-Id: I3d919736e53d6f3e1a70d1267cf42c8d2c0ad221
      Related-Bug: #1951785
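      A minimal illustration of the precedence point (paths generic)::

          # roles/foo/defaults/main.yml -- role defaults: low precedence,
          # clobbered when another role's defaults load via vars_files.
          # roles/foo/vars/main.yml -- role vars: higher precedence, so
          # project_name survives such imports:
          project_name: "foo"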
  15. Aug 10, 2021
    • Refactor and optimise image pulling · 9ff2ecb0
      Radosław Piliszek authored
      We get a nice optimisation by using a filtered loop instead
      of task skipping per service with 'when'.
      
      Partially-Implements: blueprint performance-improvements
      Change-Id: I8f68100870ab90cb2d6b68a66a4c97df9ea4ff52
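      The pattern change, roughly (module and attribute names are
      illustrative)::

          # before: a skipped task is still evaluated per disabled service
          - name: Pull image
            ansible.builtin.debug:
              msg: "pull {{ item.name }}"
            when: item.enabled | bool
            loop: "{{ services }}"

          # after: disabled services never generate a loop iteration
          - name: Pull images
            ansible.builtin.debug:
              msg: "pull {{ item.name }}"
            loop: "{{ services | selectattr('enabled', 'equalto', true) | list }}"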
  16. Jul 28, 2021
    • Use more RMQ flags for less busy wait · d7cdad53
      Radosław Piliszek authored
      As mentioned in the Iced014acee7e590c10848e73feca166f48b622dc
      commit message, in Ussuri+ we can use ``+sbwtdcpu none
      +sbwtdio none`` as well. This is due to relying on RMQ-provided
      erlang in version 23.x.
      
      This change adds the extra arguments by default.
      It should be backported down to Ussuri before we do a release with
      Iced014acee7e590c10848e73feca166f48b622dc.
      
      Change-Id: I32e247a6cb34d7f6763b544f247fd408dce2b3a2
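      The standard RabbitMQ knob for passing such flags is the
      ``RABBITMQ_SERVER_ADDITIONAL_ERL_ARGS`` environment variable (how the
      role wires it in is not shown here). Combined with the earlier
      ``+sbwt none`` flag, the value would look roughly like::

          RABBITMQ_SERVER_ADDITIONAL_ERL_ARGS: "+sbwt none +sbwtdcpu none +sbwtdio none"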