- Dec 13, 2023
Matt Crees authored
Adds a precheck to fail if non-quorum queues are found in RabbitMQ. Currently excludes fanout and reply queues, pending support in oslo.messaging [1].

[1]: https://review.opendev.org/c/openstack/oslo.messaging/+/888479

Closes-Bug: #2045887
Change-Id: Ibafdcd58618d97251a3405ef9332022d4d930e2b
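A minimal sketch of how such a precheck might look (the task names, container name, and exclusion pattern are illustrative, not the actual kolla-ansible implementation):

```yaml
# Hypothetical precheck sketch: list queue names and types, then fail
# if any non-quorum queue remains after excluding fanout/reply queues.
- name: List RabbitMQ queue types
  command: >
    docker exec rabbitmq rabbitmqctl list_queues name type
    --silent --formatter json
  register: queue_types
  changed_when: false

- name: Fail if non-quorum queues are found
  fail:
    msg: Non-quorum queues found; migrate them before proceeding.
  when: >-
    queue_types.stdout | from_json
    | rejectattr('name', 'search', 'fanout|reply')
    | rejectattr('type', 'equalto', 'quorum')
    | list | length > 0
```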
- Nov 15, 2023
Martin Hiner authored
Changes the name of the ansible module kolla_docker to kolla_container.

Change-Id: I13c676ed0378aa721a21a1300f6054658ad12bc7
Signed-off-by: Martin Hiner <m.hiner@partner.samsung.com>
- Nov 14, 2023
Michal Nasiadka authored
`docker_restart_policy: no` causes systemd units to not get created, and we use it in CI to disable restarts on services. Introduce a `oneshot` policy that does not create a systemd unit for one-shot containers (those running bootstrap tasks, such as db bootstrap, which do not need a systemd unit), while still creating systemd units for long-lived containers, but with Restart=no.

Change-Id: I9e0d656f19143ec2fcad7d6d345b2c9387551604
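The resulting behaviour could be sketched like this (an illustrative task, not the actual kolla-ansible code):

```yaml
# Sketch under assumptions: 'oneshot' bootstrap containers get no
# systemd unit at all, while 'no' keeps the unit with Restart=no so
# restarts stay disabled but the unit still exists.
- name: Create systemd unit for long-lived container
  template:
    src: container.service.j2   # hypothetical template name
    dest: /etc/systemd/system/kolla-{{ container_name }}.service
  when: restart_policy != 'oneshot'
```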
- Aug 25, 2023
Matt Crees authored
This command can be invoked with ``kolla-ansible rabbitmq-reset-state``. This is primarily designed to be used when enabling HA queues [1]. As such, this also updates the RabbitMQ documentation to use this command.

[1] https://docs.openstack.org/kolla-ansible/latest/reference/message-queues/rabbitmq.html#high-availability

Change-Id: I6ad95a3618fc1a34af56657ef99ef14dc979f17a
- Jun 19, 2023
Ivan Halomi authored
The hardcoded docker value in commands is no longer supported; kolla_container_engine is used instead.

Change-Id: I25d9563c82842ac51d41467ff7b4144b306fdb12
Signed-off-by: Ivan Halomi <i.halomi@partner.samsung.com>
- Jun 17, 2023
Mark Goddard authored
Ansible 2.14.3 introduced a change that broke the method used for restarting MariaDB and RabbitMQ serially [1][2]. In I57425680a4cdbf0daeb9b2cc35920f1b933aa4a8 we limited to 2.14.2 to work around this. Ansible upstream claims this behaviour was unintentional and will not fix it.

This change moves to a different approach, where we use separate plays with a 'serial' keyword to execute the restart. It also removes the restriction on the maximum supported version of ansible-core; any 2.14 release is now supported.

[1] https://github.com/ansible/ansible/commit/65366f663de7d044f42ae6dd53368fd4c1f88b35
[2] https://github.com/ansible/ansible/issues/80848

Depends-On: https://review.opendev.org/c/openstack/kolla/+/884208
Change-Id: I5a12670d07077d24047aaff57ce8d33ccf7156ff
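A simplified sketch of the serial-play approach (play layout and task file names are illustrative):

```yaml
# One dedicated play per restart phase; 'serial: 1' restarts a single
# node at a time, replacing the Ansible-2.14.2-only host-pinning trick.
- name: Restart rabbitmq services serially
  hosts: rabbitmq
  serial: 1
  tasks:
    - name: Restart rabbitmq
      include_role:
        name: rabbitmq
        tasks_from: restart_services   # task file name is illustrative
```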
- Apr 20, 2023
Magnus Lööf authored
When using externally managed certificates, according to [1], one should set `kolla_externally_managed_cert: yes` and ensure that the certificates are in the correct place. However, the RabbitMQ precheck still expects the certificates to be available on the controller node. This is incorrect. Fix by not running the tasks in question when `kolla_externally_managed_cert: yes`.

[1] https://docs.openstack.org/kolla-ansible/latest/admin/tls.html

Closes-Bug: 1999081
Related-Bug: 1940286
Change-Id: I9f845a7bdf5055165e199ab1887ed3ccbfb9d808
Signed-off-by: Magnus Lööf <magnus.loof@basalt.se>
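An illustrative form of the fix (file paths and task names are hypothetical, not the exact kolla-ansible tasks):

```yaml
# Skip certificate-copying tasks when certificates are managed
# externally; the operator is then responsible for placing them.
- name: Copy RabbitMQ TLS certificate to the node
  copy:
    src: "{{ kolla_certificates_dir }}/rabbitmq-cert.pem"  # hypothetical path
    dest: "{{ node_config_directory }}/rabbitmq/"
  when: not kolla_externally_managed_cert | bool
```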
- Apr 19, 2023
Matt Crees authored
Currently, the process of enabling RabbitMQ HA with the variable ``om_enable_rabbitmq_high_availability`` requires some manual steps to migrate from transient to mirrored queues. In preparation for setting this variable to ``True`` by default, this adds a precheck that will fail if a system is currently running non-mirrored queues and ``om_enable_rabbitmq_high_availability`` is set to ``True``.

Includes a helpful message informing the operator of their choice: either follow the manual procedure to migrate the queues described in the docs, or set ``om_enable_rabbitmq_high_availability`` to ``False``. The RabbitMQ HA section of the reference docs is updated to include these instructions.

Change-Id: Ic5e64998bd01923162204f7bb289cc110187feec
- Apr 13, 2023
Matt Crees authored
With the addition of the variable `om_enable_rabbitmq_high_availability`, this feature in the upgrade task should be brought back. It is also now used in the deploy task.

The `ha-all` policy is cleared only when `om_enable_rabbitmq_high_availability` is set to `false`.

Change-Id: Ia056aa40e996b1f0fed43c0f672466c7e4a2f547
- Apr 12, 2023
Matt Crees authored
Puts the RabbitMQ node into maintenance mode before restarting the container. This will make the node shutdown less disruptive. For details on what maintenance mode does, see: https://www.rabbitmq.com/upgrade.html#maintenance-mode

Change-Id: Ia61573f3fb95fe8fcde6b789ca77ef5b45fe0a65
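Maintenance mode is entered by draining the node with the ``rabbitmq-upgrade drain`` CLI command; a hedged sketch of the sequence (container and unit names are illustrative):

```yaml
# Drain the node so client connections and queue leaders move to other
# cluster members, then restart. Maintenance mode is not persisted, so
# the node returns to normal operation when it starts back up.
- name: Put rabbitmq node into maintenance mode
  command: docker exec rabbitmq rabbitmq-upgrade drain
  become: true

- name: Restart rabbitmq container
  systemd:
    name: kolla-rabbitmq-container.service   # illustrative unit name
    state: restarted
  become: true
```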
Michal Nasiadka authored
Since RMQ 3.8 we can use rolling upgrade [1].

[1]: https://www.rabbitmq.com/upgrade.html#rolling-upgrades

Depends-On: https://review.opendev.org/c/openstack/kolla/+/872393
Change-Id: If6a7c6c12d9226a2406728108b3c87b3485ac55f
- Jan 12, 2023
Mark Goddard authored
When running in check mode, some prechecks previously failed because they use the command module, which is silently not run in check mode. Other prechecks were not running correctly in check mode due to e.g. looking for a string in empty command output or not querying which containers are running. This change fixes these issues.

Closes-Bug: #2002657
Change-Id: I5219cb42c48d5444943a2d48106dc338aa08fa7c
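The typical fix pattern looks like this (a sketch, not a verbatim diff from the change):

```yaml
# Without 'check_mode: false' the command is silently skipped under
# --check and 'result.stdout' stays empty, breaking later assertions.
- name: Query running containers
  command: docker ps
  register: result
  check_mode: false
  changed_when: false
```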
- Jan 09, 2023
Erik Berg authored
An assert task also fails when the conditions are not met, makes clear what we are actually testing, and is not listed as a skipped task when the condition is ok.

Change-Id: I4c919b523dde2602c81179ab3d28b913650b4c9f
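For example (a generic before/after sketch, not lines from the change itself):

```yaml
# Before: shows up as 'skipped' whenever the deployment is healthy.
- name: Fail if the interface is missing
  fail:
    msg: Interface {{ api_interface }} not found
  when: api_interface not in ansible_facts.interfaces

# After: states the expectation directly and reports 'ok' when met.
- name: Assert that the interface exists
  assert:
    that: api_interface in ansible_facts.interfaces
    fail_msg: Interface {{ api_interface }} not found
```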
- Dec 21, 2022
Matt Crees authored
Regularly, we experience issues in Kolla Ansible deployments because we use wrong options in OpenStack configuration files. This is because OpenStack services ignore unknown options. We also need to keep on top of deprecated options that may be removed in the future. Integrating oslo-config-validator into Kolla Ansible will greatly help.

Adds a shared role to run oslo-config-validator on each service. Takes into account that services have multiple containers, and these may also use multiple config files. Service roles are extended to use this shared role. Executed with the new command ``kolla-ansible validate-config``.

Change-Id: Ic10b410fc115646d96d2ce39d9618e7c46cb3fbc
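A hypothetical sketch of how a service role might hook into such a shared role (the role and variable names here are assumptions, not the actual implementation):

```yaml
# Each service role delegates to the shared role, passing its
# container-to-config-file mapping.
- name: Validate nova configuration
  include_role:
    name: service-config-validate   # hypothetical role name
  vars:
    service_config_validate_services: "{{ nova_services }}"
```

Operators invoke the validation with the new ``kolla-ansible validate-config`` command.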
- Nov 02, 2022
Ivan Halomi authored
Second part of the patch set https://review.opendev.org/c/openstack/kolla-ansible/+/799229/ in which it was suggested to split the patch into smaller ones.

This change adds a container_engine variable to the kolla_container_facts module, preparing the module to be used with both docker and podman without further changes in roles.

Change-Id: I9e8fa30646844ab4a288555f3aafdda345b3a118
Signed-off-by: Ivan Halomi <i.halomi@partner.samsung.com>
Co-authored-by: Martin Hiner <m.hiner@partner.samsung.com>
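Illustrative usage after this change (close to, but not necessarily verbatim, the precheck tasks):

```yaml
# The new container_engine parameter lets the module query either
# docker or podman for container state.
- name: Get container facts
  become: true
  kolla_container_facts:
    container_engine: "{{ kolla_container_engine }}"
    name:
      - rabbitmq
  register: container_facts
```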
- Oct 28, 2022
Ivan Halomi authored
First part of the patch set https://review.opendev.org/c/openstack/kolla-ansible/+/799229/ in which it was suggested to split the patch into smaller ones.

This implements the kolla_container_engine variable in docker command calls, so that it can later also be used for podman without further change.

Change-Id: Ic30b67daa2e215524096ad1f4385c569e3d41b95
Signed-off-by: Ivan Halomi <i.halomi@partner.samsung.com>
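The substitution is mechanical; a before/after sketch:

```yaml
# Before: the container engine is hardcoded.
- name: Check rabbitmq status
  command: docker exec rabbitmq rabbitmqctl status

# After: the engine comes from a variable, so podman can be swapped in.
- name: Check rabbitmq status
  command: "{{ kolla_container_engine }} exec rabbitmq rabbitmqctl status"
```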
- Aug 09, 2022
Michal Arbet authored
This patch adds a loadbalancer-config role, which is a "wrapper" around the haproxy-config role and the proxysql-config role that will be added in follow-up patches.

Change-Id: I64d41507317081e1860a94b9481a85c8d400797d
- Jul 27, 2022
Radosław Piliszek authored
It is no longer needed per the removed comment.

Change-Id: I8d88c21c7e115b842a56f0ba5c780c3bde593964
- Jul 25, 2022
Michal Nasiadka authored
ansible-lint introduced var-spacing: let's fix our code.

Change-Id: I0d8aaf3c522a5a6a5495032f6dbed8a2be0251f0
- Mar 24, 2022
Sven Kieske authored
This adds back the ability to configure the RabbitMQ/Erlang kernel network interface, which was removed in https://review.opendev.org/#/c/584427/ seemingly by accident.

Closes-Bug: 1900160
Change-Id: I6f00396495853e117429c17fadfafe809e322a31
- Mar 18, 2022
Mark Goddard authored
Follow-up to I91d0e23b22319cf3fdb7603f5401d24e3b76a56e, fixing a conditional corner case when removing the ha-all policy.

Change-Id: Iea75551bc6d0da7dd10515dd8bd28c014eed7a5e
- Feb 21, 2022
Doug Szumski authored
When OpenStack is deployed with Kolla-Ansible, by default there are no durable queues or exchanges created by the OpenStack services in RabbitMQ. In Rabbit terminology, not being durable is referred to as `transient`, and this means that the queue is generally held in memory.

Whether OpenStack services create durable or transient queues is traditionally controlled by the Oslo Notification config option: `amqp_durable_queues`. In Kolla-Ansible, this remains set to the default of `False` in all services. The only `durable` objects are the `amq*` exchanges which are internal to RabbitMQ.

More recently, Oslo Notification has introduced support for Quorum queues [7]. These are a successor to durable classic queues, however it isn't yet clear if they are a good fit for OpenStack in general [8].

For clustered RabbitMQ deployments, Kolla-Ansible configures all queues as `replicated` [1]. Replication occurs over all nodes in the cluster. RabbitMQ refers to this as 'mirroring of classic queues'.

In summary, this means that a multi-node Kolla-Ansible deployment will end up with a large number of transient, mirrored queues and exchanges. However, the RabbitMQ documentation warns against this, stating that 'For replicated queues, the only reasonable option is to use durable queues' [2]. This is discussed further in the following bug report: [3].

Whilst we could try enabling the `amqp_durable_queues` option for each service (this is suggested in [4]), there are a number of complexities with this approach, not limited to:

1) RabbitMQ is planning to remove classic queue mirroring in favor of 'Quorum queues' in a forthcoming release [5].

2) Durable queues will be written to disk, which may cause performance problems at scale. Note that this includes Quorum queues, which are always durable.

3) Potential for race conditions and other complexity discussed recently on the mailing list under: `[ops] [kolla] RabbitMQ High Availability`.

The remaining option, proposed here, is to use classic non-mirrored queues everywhere, and rely on services to recover if the node hosting a queue or exchange they are using fails. There is some discussion of this approach in [6]. The downside of potential message loss needs to be weighed against the real upsides of increasing the performance of RabbitMQ, and moving to a configuration which is officially supported and hopefully more stable. In the future, we can then consider promoting specific queues to quorum queues, in cases where message loss can result in failure states which are hard to recover from.

[1] https://www.rabbitmq.com/ha.html
[2] https://www.rabbitmq.com/queues.html
[3] https://github.com/rabbitmq/rabbitmq-server/issues/2045
[4] https://wiki.openstack.org/wiki/Large_Scale_Configuration_Rabbit
[5] https://blog.rabbitmq.com/posts/2021/08/4.0-deprecation-announcements/
[6] https://fuel-ccp.readthedocs.io/en/latest/design/ref_arch_1000_nodes.html#replication
[7] https://bugs.launchpad.net/oslo.messaging/+bug/1942933
[8] https://www.rabbitmq.com/quorum-queues.html#use-cases

Partial-Bug: #1954925
Change-Id: I91d0e23b22319cf3fdb7603f5401d24e3b76a56e
- Jan 09, 2022
LinPeiWen authored
Starting from version 3.8.0, RabbitMQ has built-in Prometheus support and the Prometheus plugins are enabled by default. When `enable_prometheus` is "no", the rabbitmq role will disable the Prometheus plugins.

Closes-Bug: #1885106
Change-Id: I4d694d6224c813285d228d6bc7eece5731db1078
- Aug 10, 2021
Radosław Piliszek authored
We get a nice optimisation by using a filtered loop instead of task skipping per service with 'when'.

Partially-Implements: blueprint performance-improvements
Change-Id: I8f68100870ab90cb2d6b68a66a4c97df9ea4ff52
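A generic before/after sketch of the optimisation (service and variable names are illustrative):

```yaml
# Before: every disabled service still generates a skipped task.
- name: Do something for each service
  debug:
    msg: "{{ item.key }}"
  with_dict: "{{ rabbitmq_services }}"
  when: item.value.enabled | bool

# After: disabled services are filtered out of the loop entirely.
# Assumes 'enabled' renders to a boolean (kolla casts with '| bool').
- name: Do something for each enabled service
  debug:
    msg: "{{ item.key }}"
  loop: "{{ rabbitmq_services | dict2items | selectattr('value.enabled') | list }}"
```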
- Jun 23, 2021
Mark Goddard authored
By default, Ansible injects a variable for every fact, prefixed with ansible_. This can result in a large number of variables for each host, which at scale can incur a performance penalty. Ansible provides a configuration option [0] that can be set to False to prevent this injection of facts. In this case, facts should be referenced via ansible_facts.<fact>.

This change updates all references to Ansible facts within Kolla Ansible from using individual fact variables to using the items in the ansible_facts dictionary. This allows users to disable fact variable injection in their Ansible configuration, which may provide some performance improvement.

This change disables fact variable injection in the ansible configuration used in CI, to catch any attempts to use the injected variables.

[0] https://docs.ansible.com/ansible/latest/reference_appendices/config.html#inject-facts-as-vars

Change-Id: I7e9d5c9b8b9164d4aee3abb4e37c8f28d98ff5d1
Partially-Implements: blueprint performance-improvements
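For example:

```yaml
# Before: relies on per-fact variables injected by Ansible.
- name: Show the distribution
  debug:
    msg: "{{ ansible_distribution }}"

# After: works even with inject_facts_as_vars = False in ansible.cfg.
- name: Show the distribution
  debug:
    msg: "{{ ansible_facts.distribution }}"
```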
- Apr 14, 2021
LinPeiWen authored
This change enables the use of Docker healthchecks for rabbitmq services.

Implements: blueprint container-health-check
Depends-On: https://review.opendev.org/c/openstack/kolla/+/784562
Change-Id: I23a2c2efab858b9ed39c6ce0ec4a82df10e7f93d
- Dec 14, 2020
Mark Goddard authored
This reverts commit 9cae59be.

Reason for revert: This patch was found to introduce issues with fluentd customisation. The underlying issue is not currently fully understood, but could be a sign of other obscure issues.

Change-Id: Ia4859c23d85699621a3b734d6cedb70225576dfc
Closes-Bug: #1906288
- Nov 19, 2020
Victor Chembaev authored
Change-Id: I1ff4cbdf3f60cb7fd5fe5d3c5d498e05fe2df79a
Closes-Bug: #1904702
- Oct 27, 2020
Radosław Piliszek authored
Main plays are action-redirect-stubs, ideal for import_tasks. This avoids the 'include' penalty and makes logs/ara look nicer.

Fixes haproxy and rabbitmq not to check the host group as well.

Change-Id: I46136fc40b815e341befff80b54a91ef431eabc0
Partially-Implements: blueprint performance-improvements
- Oct 12, 2020
Radosław Piliszek authored
Config plays do not need to check containers. This avoids skipping tasks during the genconfig action.

Ironic and Glance rolling upgrades are handled specially. Swift and Bifrost do not use the handlers at all.

Partially-Implements: blueprint performance-improvements
Change-Id: I140bf71d62e8f0932c96270d1f08940a5ba4542a
- Sep 17, 2020
Mark Goddard authored
This change adds support for encryption of communication between OpenStack services and RabbitMQ. Server certificates are supported, but currently client certificates are not.

The kolla-ansible certificates command has been updated to support generating certificates for RabbitMQ for development and testing.

RabbitMQ TLS is enabled in the all-in-one source CI jobs, or when the Zuul 'tls_enabled' variable is true.

Change-Id: I4f1d04150fb2b5af085b762890092f87ae6076b5
Implements: blueprint message-queue-ssl-support
- Aug 28, 2020
Mark Goddard authored
Including tasks has a performance penalty when compared with importing tasks. If the include has a condition associated with it, then the overhead of the include may be lower than the overhead of skipping all imported tasks. For unconditionally included tasks, switching to import_tasks provides a clear benefit.

Benchmarking of include vs. import is available at [1]. This change switches from include_tasks to import_tasks where there is no condition applied to the include.

[1] https://github.com/stackhpc/ansible-scaling/blob/master/doc/include-and-import.md#task-include-and-import

Partially-Implements: blueprint performance-improvements
Change-Id: Ia45af4a198e422773d9f009c7f7b2e32ce9e3b97
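The change itself is mechanical; a sketch:

```yaml
# Before: a dynamic include, paying a per-host include overhead at
# runtime.
- include_tasks: config.yml

# After: a static import, resolved once at parse time.
- import_tasks: config.yml
```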
- Jul 28, 2020
Mark Goddard authored
Including tasks has a performance penalty when compared with importing tasks. If the include has a condition associated with it, then the overhead of the include may be lower than the overhead of skipping all imported tasks.

In the case of the check-containers.yml include, the included file only has a single task, so the overhead of skipping this task will not be greater than the overhead of the task import. It therefore makes sense to switch to use import_tasks there.

Partially-Implements: blueprint performance-improvements
Change-Id: I65d911670649960708b9f6a4c110d1a7df1ad8f7
- Mar 02, 2020
Radosław Piliszek authored
Both include_role and import_role expect the role's name to be given via the "name" param instead of "role". This worked but caused errors with ansible-lint.

See: https://review.opendev.org/694779

Change-Id: I388d4ae27111e430d38df1abcb6c6127d90a06e0
- Feb 28, 2020
Mark Goddard authored
We assume that all groups are present in the inventory, and quite obtuse errors can result if any are not. This change adds a precheck that checks for the presence of all expected groups in the inventory for each service. It also introduces a common service-precheck role that we can use for other common prechecks.

Change-Id: Ia0af1e7df4fff7f07cd6530e5b017db8fba530b3
Partially-Implements: blueprint improve-prechecks
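An illustrative form of such a precheck (the real role's variable and group names may differ):

```yaml
# Assert that every group the service expects exists in the inventory;
# 'groups' is Ansible's built-in inventory group mapping.
- name: Validate that required groups exist
  assert:
    that: item in groups
    fail_msg: "Group '{{ item }}' not found in the inventory"
  loop:
    - rabbitmq
    - haproxy
```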
- Feb 16, 2020
Radosław Piliszek authored
Make it require uniqueness of resolution as well, to avoid later issues with RabbitMQ going crazy.

Change-Id: I000ba6c62ab44eac0abdf8d5d1f069adfbc6552f
Closes-bug: #1863363
- Jan 13, 2020
Mark Goddard authored
Change-Id: Iecbc2fe5fa3391dca5a3cc7e575314b95942114b
Co-Authored-By: Marcin Juszkiewicz <marcin.juszkiewicz@linaro.org>
- Oct 17, 2019
Radosław Piliszek authored
IPv6 control plane implementation [1] follow-up.

[1] Ia34e6916ea4f99e9522cd2ddde03a0a4776f7e2c

Change-Id: I4c2bd81e77fc09a04838a62f008e5d6c5dc1483d
- Oct 16, 2019
Radosław Piliszek authored
Introduce kolla_address filter.
Introduce put_address_in_context filter.
Add AF config to vars.

Address contexts:
- raw (default): <ADDR>
- memcache: inet6:[<ADDR>]
- url: [<ADDR>]

Other changes:
- globals.yml - mention just IP in comment
- prechecks/port_checks (api_intf) - kolla_address handles validation
- 3x interface conditional (swift configs: replication/storage)
- 2x interface variable definition with hostname (haproxy listens; api intf)
- 1x interface variable definition with hostname with bifrost exclusion (baremetal pre-install /etc/hosts; api intf)
- neutron's ml2 'overlay_ip_version' set to 6 for IPv6 on tunnel network
- basic multinode source CI job for IPv6
- prechecks for rabbitmq and qdrouterd use proper NSS database now
- MariaDB Galera Cluster WSREP SST mariabackup workaround (socat and IPv6)
- Ceph naming workaround in CI (TODO: probably needs documenting)
- RabbitMQ IPv6-only proto_dist
- Ceph ms switch to IPv6 mode
- Remove neutron-server ml2_type_vxlan/vxlan_group setting as it is not used (let's avoid any confusion) and could break setups without proper multicast routing if it started working (also IPv4-only)
- haproxy upgrade checks for slaves based on ipv6 addresses

TODO:
- ovs-dpdk grabs ipv4 network address (w/ prefix len / submask): not supported, invalid by default because neutron_external has no address. No idea whether ovs-dpdk works at all atm.
- ml2 for xenapi: Xen is not supported too well. This would require working with XenAPI facts.
- rp_filter setting: this would require meddling with ip6tables (there is no sysctl param). By default nothing is dropped. Unlikely we really need it.
- ironic dnsmasq is configured IPv4-only: dnsmasq needs DHCPv6 options and testing in vivo.

KNOWN ISSUES (beyond us):
- One cannot use an IPv6 address to reference the image for docker like we currently do, see: https://github.com/moby/moby/issues/39033 (docker_registry; docker API 400 - invalid reference format). Workaround: use hostname/FQDN.
- RabbitMQ may fail to bind to IPv6 if the hostname resolves also to IPv4. This is due to old RabbitMQ versions available in images. IPv4 is preferred by default and may fail in the IPv6-only scenario. This should be no problem in real life as IPv6-only is indeed IPv6-only. Also, when new RabbitMQ (3.7.16/3.8+) makes it into images, this will no longer be relevant as we supply all the necessary config. See: https://github.com/rabbitmq/rabbitmq-server/pull/1982
- For reliable runs, at least Ansible 2.8 is required (2.8.5 confirmed to work well). Older Ansible versions are known to miss IPv6 addresses in interface facts. This may affect redeploys, reconfigures and upgrades which run after the VIP address is assigned. See: https://github.com/ansible/ansible/issues/63227
- Bifrost Train does not support IPv6 deployments. See: https://storyboard.openstack.org/#!/story/2006689

Change-Id: Ia34e6916ea4f99e9522cd2ddde03a0a4776f7e2c
Implements: blueprint ipv6-control-plane
Signed-off-by: Radosław Piliszek <radoslaw.piliszek@gmail.com>
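Illustrative usage of the two new filters (the exact call signatures and the management port are assumptions, not verified against the final implementation):

```yaml
# kolla_address picks the address for a named network, honouring the
# configured address family; put_address_in_context wraps IPv6
# addresses as required by the given context (here: URL brackets).
api_interface_address: "{{ 'api' | kolla_address }}"
rabbitmq_management_url: "http://{{ api_interface_address | put_address_in_context('url') }}:15672"
```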
- Sep 26, 2019
Kris Lindgren authored
Sometimes as cloud admins, we want to only update code that is running in a cloud, but we don't need to do anything else. Make an action in kolla-ansible that allows us to do that.

Change-Id: I904f595c69f7276e71692696471e32fd1f88e6e8
Implements: blueprint deploy-containers-action