- Mar 18, 2022
Mark Goddard authored
Follow up to I91d0e23b22319cf3fdb7603f5401d24e3b76a56e, which fixes a conditional corner case when removing the ha-all policy. Change-Id: Iea75551bc6d0da7dd10515dd8bd28c014eed7a5e
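For context, removing the mirroring policy amounts to clearing it on the cluster; a minimal sketch using the command module (the role's actual task and its conditional logic differ):

```yaml
# Minimal sketch, assuming the policy is named "ha-all" on the default vhost.
- name: Remove the ha-all mirroring policy
  command: rabbitmqctl clear_policy ha-all
```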
- Feb 21, 2022
Doug Szumski authored
When OpenStack is deployed with Kolla-Ansible, by default there are no durable queues or exchanges created by the OpenStack services in RabbitMQ. In Rabbit terminology, not being durable is referred to as `transient`, and this means that the queue is generally held in memory. Whether OpenStack services create durable or transient queues is traditionally controlled by the Oslo Notification config option: `amqp_durable_queues`. In Kolla-Ansible, this remains set to the default of `False` in all services. The only `durable` objects are the `amq*` exchanges which are internal to RabbitMQ.

More recently, Oslo Notification has introduced support for Quorum queues [7]. These are a successor to durable classic queues, however it isn't yet clear if they are a good fit for OpenStack in general [8].

For clustered RabbitMQ deployments, Kolla-Ansible configures all queues as `replicated` [1]. Replication occurs over all nodes in the cluster. RabbitMQ refers to this as 'mirroring of classic queues'.

In summary, this means that a multi-node Kolla-Ansible deployment will end up with a large number of transient, mirrored queues and exchanges. However, the RabbitMQ documentation warns against this, stating that 'For replicated queues, the only reasonable option is to use durable queues' [2]. This is discussed further in the following bug report: [3].

Whilst we could try enabling the `amqp_durable_queues` option for each service (this is suggested in [4]), there are a number of complexities with this approach, not limited to:

1) RabbitMQ is planning to remove classic queue mirroring in favor of 'Quorum queues' in a forthcoming release [5].
2) Durable queues will be written to disk, which may cause performance problems at scale. Note that this includes Quorum queues, which are always durable.
3) Potential for race conditions and other complexity discussed recently on the mailing list under: `[ops] [kolla] RabbitMQ High Availability`.

The remaining option, proposed here, is to use classic non-mirrored queues everywhere, and rely on services to recover if the node hosting a queue or exchange they are using fails. There is some discussion of this approach in [6]. The downside of potential message loss needs to be weighed against the real upsides of increasing the performance of RabbitMQ, and moving to a configuration which is officially supported and hopefully more stable. In the future, we can then consider promoting specific queues to quorum queues, in cases where message loss can result in failure states which are hard to recover from.

[1] https://www.rabbitmq.com/ha.html
[2] https://www.rabbitmq.com/queues.html
[3] https://github.com/rabbitmq/rabbitmq-server/issues/2045
[4] https://wiki.openstack.org/wiki/Large_Scale_Configuration_Rabbit
[5] https://blog.rabbitmq.com/posts/2021/08/4.0-deprecation-announcements/
[6] https://fuel-ccp.readthedocs.io/en/latest/design/ref_arch_1000_nodes.html#replication
[7] https://bugs.launchpad.net/oslo.messaging/+bug/1942933
[8] https://www.rabbitmq.com/quorum-queues.html#use-cases

Partial-Bug: #1954925
Change-Id: I91d0e23b22319cf3fdb7603f5401d24e3b76a56e
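As an illustration of the Oslo option discussed above, an operator could still flip it through a service configuration override; a hedged sketch using the community.general ini_file module (the override path and section are assumptions, and Kolla-Ansible itself keeps the default of False):

```yaml
# Hypothetical operator-side sketch; illustration only.
- name: Enable durable queues via a global service config override
  community.general.ini_file:
    path: /etc/kolla/config/global.conf   # assumed override location
    section: oslo_messaging_rabbit
    option: amqp_durable_queues
    value: "true"
```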
- Jan 12, 2022
Michal Nasiadka authored
Change-Id: I547ab4b05aa14ed3bbee8be2dc77a6840d4816f6
- Jan 11, 2022
Mark Goddard authored
Move new variables added in I4d694d6224c813285d228d6bc7eece5731db1078 to role defaults. Change-Id: Ie09a2dbae2701cb18fd1eb5bfab76e82f9920fb3
- Jan 09, 2022
LinPeiWen authored
Starting from RabbitMQ 3.8.0, Prometheus support is built in and the Prometheus plugins are enabled by default. When the environment sets `enable_prometheus` to "no", the rabbitmq role will disable the Prometheus plugins.

Closes-Bug: #1885106
Change-Id: I4d694d6224c813285d228d6bc7eece5731db1078
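A rough sketch of the resulting behaviour, assuming the plugin is toggled with the rabbitmq-plugins CLI (the role's actual implementation differs):

```yaml
# Illustrative only; not the role's actual task. rabbitmq_prometheus is the
# upstream name of the built-in Prometheus plugin.
- name: Disable the RabbitMQ Prometheus plugin when Prometheus is not deployed
  command: rabbitmq-plugins disable rabbitmq_prometheus
  when: not enable_prometheus | bool
```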
- Dec 31, 2021
Pierre Riteau authored
Role vars have a higher precedence than role defaults. This makes it possible to import default vars from another role via vars_files without overriding project_name (see the related bug for details).

Change-Id: I3d919736e53d6f3e1a70d1267cf42c8d2c0ad221
Related-Bug: #1951785
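A minimal sketch of the precedence point: with project_name defined in the role's vars/ rather than defaults/, a defaults file imported from another role via vars_files cannot override it.

```yaml
# roles/rabbitmq/vars/main.yml (sketch) -- role vars take precedence over
# role defaults, including defaults pulled in from other roles via vars_files.
project_name: "rabbitmq"
```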
- Aug 10, 2021
Radosław Piliszek authored
We get a nice optimisation by using a filtered loop instead of task skipping per service with 'when'. Partially-Implements: blueprint performance-improvements Change-Id: I8f68100870ab90cb2d6b68a66a4c97df9ea4ff52
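An illustrative sketch of the pattern, with a made-up service dict (the real tasks and variable shapes differ):

```yaml
# Instead of one skipped task per disabled service ('when: item.value.enabled'),
# loop only over the entries that are enabled.
- name: Act on enabled services only
  vars:
    example_services:            # assumed shape, for illustration only
      rabbitmq: {enabled: true}
      other_service: {enabled: false}
  debug:
    msg: "Would handle {{ item.key }}"
  loop: "{{ example_services | dict2items | selectattr('value.enabled') | list }}"
```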
- Jul 28, 2021
Radosław Piliszek authored
As mentioned in the Iced014acee7e590c10848e73feca166f48b622dc commit message, in Ussuri+ we can use ``+sbwtdcpu none +sbwtdio none`` as well. This is due to relying on RMQ-provided Erlang in version 23.x.

This change adds the extra arguments by default. It should be backported down to Ussuri before we do a release with Iced014acee7e590c10848e73feca166f48b622dc.

Change-Id: I32e247a6cb34d7f6763b544f247fd408dce2b3a2
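For reference, the combined default then looks roughly like the following role variable (hedged; the exact string lives in the role defaults):

```yaml
# Sketch of the Ussuri+ default; check defaults/main.yml for the exact value.
rabbitmq_server_additional_erl_args: "+S 2:2 +sbwt none +sbwtdcpu none +sbwtdio none"
```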
- Jun 23, 2021
Mark Goddard authored
By default, Ansible injects a variable for every fact, prefixed with ansible_. This can result in a large number of variables for each host, which at scale can incur a performance penalty. Ansible provides a configuration option [0] that can be set to False to prevent this injection of facts. In this case, facts should be referenced via ansible_facts.<fact>.

This change updates all references to Ansible facts within Kolla Ansible from using individual fact variables to using the items in the ansible_facts dictionary. This allows users to disable fact variable injection in their Ansible configuration, which may provide some performance improvement.

This change disables fact variable injection in the Ansible configuration used in CI, to catch any attempts to use the injected variables.

[0] https://docs.ansible.com/ansible/latest/reference_appendices/config.html#inject-facts-as-vars

Change-Id: I7e9d5c9b8b9164d4aee3abb4e37c8f28d98ff5d1
Partially-Implements: blueprint performance-improvements
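With `inject_facts_as_vars = False` set in ansible.cfg, facts have to be read from the ansible_facts dictionary; a minimal illustration (not taken from the role):

```yaml
# With fact injection disabled, ansible_os_family is undefined, but the
# dictionary lookup below still works.
- name: Show the host OS family
  debug:
    msg: "{{ ansible_facts.os_family }}"
```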
- Jun 08, 2021
Mark Goddard authored
The host list order seen during Ansible handlers may differ from the usual play host list order, due to race conditions in notifying handlers. This means that restart_services.yml for RabbitMQ may be included in a different order than the rabbitmq group, resulting in a node other than the 'first' being restarted first. This can cause some nodes to fail to join the cluster.

The include_tasks loop was introduced in [1]. This change fixes the issue by splitting the handler into two tasks, and restarting the first node before all others.

[1] https://review.opendev.org/c/openstack/kolla-ansible/+/763137

Change-Id: I1823301d5889589bfd48326ed7de03c6061ea5ba
Closes-Bug: #1930293
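A hedged sketch of the two-step restart described above; the group reference and task layout are simplified from the real handlers:

```yaml
# Simplified sketch only; the actual handlers carry additional checks.
- name: Restart the first rabbitmq node
  include_tasks: restart_services.yml
  when: inventory_hostname == groups['rabbitmq'] | first

- name: Restart the remaining rabbitmq nodes
  include_tasks: restart_services.yml
  when: inventory_hostname != groups['rabbitmq'] | first
```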
- Jun 07, 2021
John Garbutt authored
On machines with many cores, we were seeing excessive CPU load on systems that were not very busy. With the following Erlang VM arguments we saw RabbitMQ CPU usage drop from about 150% to around 20%, on a system with 40 hyperthreads.

+S 2:2

By default RabbitMQ starts N schedulers where N is the number of CPU cores, including hyper-threaded cores. This is fine when you assume all your CPUs are dedicated to RabbitMQ. It's not a good idea in a typical Kolla Ansible setup. Here we go for two scheduler threads. More details can be found here: https://www.rabbitmq.com/runtime.html#scheduling and here: https://erlang.org/doc/man/erl.html#emulator-flags

+sbwt none

This stops busy waiting of the scheduler; for more details see: https://www.rabbitmq.com/runtime.html#busy-waiting

Newer versions of RabbitMQ may need additional flags: "+sbwt none +sbwtdcpu none +sbwtdio none". But this patch should be backportable to older versions of RabbitMQ used in Train and Stein.

Note that information on this tuning was found by looking at data from: rabbitmq-diagnostics runtime_thread_stats. More details on that can be found here: https://www.rabbitmq.com/runtime.html#thread-stats

Related-Bug: #1846467
Change-Id: Iced014acee7e590c10848e73feca166f48b622dc
- May 21, 2021
Michal Arbet authored
Change-Id: If2fdab2ae0f981d9fcbb0fea7a92fcde325804f8
- Apr 14, 2021
LinPeiWen authored
This change enables the use of Docker healthchecks for rabbitmq services. Implements: blueprint container-health-check Depends-On: https://review.opendev.org/c/openstack/kolla/+/784562 Change-Id: I23a2c2efab858b9ed39c6ce0ec4a82df10e7f93d
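Operators can toggle this globally; a hedged globals.yml sketch, assuming enable_container_healthchecks is the global switch:

```yaml
# globals.yml sketch; enable_container_healthchecks is assumed to be the
# global toggle for Docker healthchecks.
enable_container_healthchecks: "yes"
```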
- Dec 14, 2020
Mark Goddard authored
This reverts commit 9cae59be. Reason for revert: This patch was found to introduce issues with fluentd customisation. The underlying issue is not currently fully understood, but could be a sign of other obscure issues. Change-Id: Ia4859c23d85699621a3b734d6cedb70225576dfc Closes-Bug: #1906288
- Nov 19, 2020
Victor Chembaev authored
Change-Id: I1ff4cbdf3f60cb7fd5fe5d3c5d498e05fe2df79a Closes-Bug: #1904702
- Oct 27, 2020
Radosław Piliszek authored
Main plays are action-redirect stubs, ideal for import_tasks. This avoids the 'include' penalty and makes logs/ara output look nicer. Also fixes haproxy and rabbitmq so that they no longer check the host group as well.

Change-Id: I46136fc40b815e341befff80b54a91ef431eabc0
Partially-Implements: blueprint performance-improvements
- Oct 12, 2020
Radosław Piliszek authored
Config plays do not need to check containers. This avoids skipping tasks during the genconfig action. Ironic and Glance rolling upgrades are handled specially. Swift and Bifrost do not use the handlers at all. Partially-Implements: blueprint performance-improvements Change-Id: I140bf71d62e8f0932c96270d1f08940a5ba4542a
- Sep 17, 2020
Mark Goddard authored
This change adds support for encryption of communication between OpenStack services and RabbitMQ. Server certificates are supported, but currently client certificates are not.

The kolla-ansible certificates command has been updated to support generating certificates for RabbitMQ for development and testing.

RabbitMQ TLS is enabled in the all-in-one source CI jobs, or when the Zuul 'tls_enabled' variable is true.

Change-Id: I4f1d04150fb2b5af085b762890092f87ae6076b5
Implements: blueprint message-queue-ssl-support
- Aug 28, 2020
Mark Goddard authored
Including tasks has a performance penalty when compared with importing tasks. If the include has a condition associated with it, then the overhead of the include may be lower than the overhead of skipping all imported tasks. For unconditionally included tasks, switching to import_tasks provides a clear benefit.

Benchmarking of include vs. import is available at [1]. This change switches from include_tasks to import_tasks where there is no condition applied to the include.

[1] https://github.com/stackhpc/ansible-scaling/blob/master/doc/include-and-import.md#task-include-and-import

Partially-Implements: blueprint performance-improvements
Change-Id: Ia45af4a198e422773d9f009c7f7b2e32ce9e3b97
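The change itself is mechanical; an illustrative before/after:

```yaml
# Before: evaluated at runtime, paying the per-host include overhead.
# - include_tasks: config.yml
# After: statically imported when the play is parsed.
- import_tasks: config.yml
```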
- Aug 10, 2020
Mark Goddard authored
Previously we mounted /etc/timezone if the kolla_base_distro is debian or ubuntu. This would fail prechecks if debian or ubuntu images were deployed on CentOS. While this is not a supported combination, for correctness we should fix the condition to reference the host OS rather than the container OS, since that is where the /etc/timezone file is located. Change-Id: Ifc252ae793e6974356fcdca810b373f362d24ba5 Closes-Bug: #1882553
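An illustrative contrast of the corrected condition (not the exact precheck task):

```yaml
# Old condition keyed on the container image distro:
#   when: kolla_base_distro in ['debian', 'ubuntu']
# Fixed condition keyed on the host OS, where /etc/timezone actually lives:
- name: Check that /etc/timezone exists
  stat:
    path: /etc/timezone
  register: etc_timezone
  when: ansible_facts.os_family == 'Debian'
```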
- Jul 28, 2020
Mark Goddard authored
Including tasks has a performance penalty when compared with importing tasks. If the include has a condition associated with it, then the overhead of the include may be lower than the overhead of skipping all imported tasks. In the case of the check-containers.yml include, the included file only has a single task, so the overhead of skipping this task will not be greater than the overhead of the task import. It therefore makes sense to switch to use import_tasks there. Partially-Implements: blueprint performance-improvements Change-Id: I65d911670649960708b9f6a4c110d1a7df1ad8f7
- Jul 07, 2020
Mark Goddard authored
The common role was previously added as a dependency to all other roles. It would set a fact after running on a host to avoid running twice. This had the nice effect that deploying any service would automatically pull in the common services for that host. When using tags, any services with matching tags would also run the common role. This could be both surprising and sometimes useful.

When using Ansible at large scale, there is a penalty associated with executing a task against a large number of hosts, even if it is skipped. The common role introduces some overhead, just in determining that it has already run.

This change extracts the common role into a separate play, and removes the dependency on it from all other roles. New groups have been added for cron, fluentd, and kolla-toolbox, similar to other services. This changes the behaviour in the following ways:

* The common role is now run for all hosts at the beginning, rather than prior to their first enabled service
* Hosts must be in the necessary group for each of the common services in order to have that service deployed. This is mostly to avoid deploying on localhost or the deployment host
* If tags are specified for another service e.g. nova, the common role will *not* automatically run for matching hosts. The common tag must be specified explicitly

The last of these is probably the largest behaviour change. While it would be possible to determine which hosts should automatically run the common role, it would be quite complex, and would introduce some overhead that would probably negate the benefit of splitting out the common role.

Partially-Implements: blueprint performance-improvements
Change-Id: I6a4676bf6efeebc61383ec7a406db07c7a868b2a
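A rough sketch of the separated play, using the new groups mentioned above (not the actual playbook contents):

```yaml
# Sketch only; the real play in kolla-ansible differs in detail.
- name: Apply role common
  hosts:
    - cron
    - fluentd
    - kolla-toolbox
  tags: common
  roles:
    - role: common
```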
- Apr 27, 2020
Christian Berendt authored
Erlang 22.x dropped support for HiPE so use of "rabbitmq_hipe_compile" is deprecated. Change-Id: I8e0173c7aa6204e5b4c60dafbb8b464482cae90b
- Apr 09, 2020
Dincer Celik authored
Some services look for /etc/timezone on Debian/Ubuntu, so we should introduce it to the containers. In addition, added prechecks for /etc/localtime and /etc/timezone. Closes-Bug: #1821592 Change-Id: I9fef14643d1bcc7eee9547eb87fa1fb436d8a6b3
- Mar 02, 2020
Radosław Piliszek authored
Both include_role and import_role expect the role's name to be given via the "name" param instead of "role". This worked but caused errors with ansible-lint. See: https://review.opendev.org/694779

Change-Id: I388d4ae27111e430d38df1abcb6c6127d90a06e0
- Feb 28, 2020
Mark Goddard authored
We assume that all groups are present in the inventory, and quite obtuse errors can result if any are not. This change adds a precheck that checks for the presence of all expected groups in the inventory for each service. It also introduces a common service-precheck role that we can use for other common prechecks. Change-Id: Ia0af1e7df4fff7f07cd6530e5b017db8fba530b3 Partially-Implements: blueprint improve-prechecks
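A minimal sketch of such a precheck, with the list of expected groups hard-coded for illustration:

```yaml
# Sketch; the real service-precheck role derives the group list per service.
- name: Fail if an expected group is missing from the inventory
  assert:
    that: item in groups
    fail_msg: "Group '{{ item }}' is not defined in the inventory"
  loop:
    - rabbitmq
```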
- Feb 16, 2020
Radosław Piliszek authored
Make it require uniqueness of resolution as well to avoid later issues with RabbitMQ going crazy. Change-Id: I000ba6c62ab44eac0abdf8d5d1f069adfbc6552f Closes-bug: #1863363
- Jan 13, 2020
Mark Goddard authored
Change-Id: Ibf40216b847f103e383f19fe1ef608a75fcfd452
Co-Authored-By: Marcin Juszkiewicz <marcin.juszkiewicz@linaro.org>
-
Mark Goddard authored
Change-Id: Iecbc2fe5fa3391dca5a3cc7e575314b95942114b
Co-Authored-By: Marcin Juszkiewicz <marcin.juszkiewicz@linaro.org>
- Jan 10, 2020
Mark Goddard authored
For the CentOS 7 to 8 transition, we will have a period where both CentOS 7 and 8 images are available. We differentiate these images via a tag - the CentOS 8 images will have a tag of train-centos8 (or master-centos8 temporarily).

To achieve this, and maintain backwards compatibility for the openstack_release variable, we introduce a new 'openstack_tag' variable. This variable is based on openstack_release, but has a suffix of 'openstack_tag_suffix', which is empty except on CentOS 8 where it has a value of '-centos8'.

Change-Id: I12ce4661afb3c255136cdc1aabe7cbd25560d625
Partially-Implements: blueprint centos-rhel-8
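In variable terms, the relationship described above is roughly the following (hedged; the real definitions live in group_vars/all.yml):

```yaml
# Sketch of the derivation only.
openstack_tag_suffix: ""            # "-centos8" on CentOS 8 hosts
openstack_tag: "{{ openstack_release }}{{ openstack_tag_suffix }}"
```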
- Dec 18, 2019
yj.bai authored
Deploying a RabbitMQ cluster with Train over IPv6 reports: unable to connect to epmd (port 4369) on control-1: address (cannot connect to host/port).

Closes-Bug: #1856725
Change-Id: I36ebb4e196ece8a304269e8c85e39dda72faae50
Signed-off-by: yj.bai <bai.yongjun@99cloud.net>
- Oct 25, 2019
Jan Vondra authored
Adds a rabbitmq_server_additional_erl_args variable which is appended to the RABBITMQ_SERVER_ADDITIONAL_ERL_ARGS environment variable in the RabbitMQ server startup script. This can be used to configure the schedulers. Docs attached.

Change-Id: Id683c8cc6dac61354ffd94f3b460335b42136ba2
Co-authored-by: Radosław Piliszek <radoslaw.piliszek@gmail.com>
Related-bug: #1846467
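A hedged globals.yml example of using the new variable, here to pin the scheduler count:

```yaml
# globals.yml sketch; the value is appended to
# RABBITMQ_SERVER_ADDITIONAL_ERL_ARGS at server startup.
rabbitmq_server_additional_erl_args: "+S 2:2"
```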
- Oct 17, 2019
Radosław Piliszek authored
IPv6 control plane implementation [1] follow-up. [1] Ia34e6916ea4f99e9522cd2ddde03a0a4776f7e2c Change-Id: I4c2bd81e77fc09a04838a62f008e5d6c5dc1483d
- Oct 16, 2019
Radosław Piliszek authored
Introduce kolla_address filter. Introduce put_address_in_context filter. Add AF config to vars.

Address contexts:
- raw (default): <ADDR>
- memcache: inet6:[<ADDR>]
- url: [<ADDR>]

Other changes:
- globals.yml - mention just IP in comment
- prechecks/port_checks (api_intf) - kolla_address handles validation
- 3x interface conditional (swift configs: replication/storage)
- 2x interface variable definition with hostname (haproxy listens; api intf)
- 1x interface variable definition with hostname with bifrost exclusion (baremetal pre-install /etc/hosts; api intf)
- neutron's ml2 'overlay_ip_version' set to 6 for IPv6 on tunnel network
- basic multinode source CI job for IPv6
- prechecks for rabbitmq and qdrouterd use proper NSS database now
- MariaDB Galera Cluster WSREP SST mariabackup workaround (socat and IPv6)
- Ceph naming workaround in CI
  TODO: probably needs documenting
- RabbitMQ IPv6-only proto_dist
- Ceph ms switch to IPv6 mode
- Remove neutron-server ml2_type_vxlan/vxlan_group setting as it is not used (let's avoid any confusion) and could break setups without proper multicast routing if it started working (also IPv4-only)
- haproxy upgrade checks for slaves based on ipv6 addresses

TODO:
- ovs-dpdk grabs ipv4 network address (w/ prefix len / submask)
  not supported, invalid by default because neutron_external has no address
  No idea whether ovs-dpdk works at all atm.
- ml2 for xenapi
  Xen is not supported too well. This would require working with XenAPI facts.
- rp_filter setting
  This would require meddling with ip6tables (there is no sysctl param). By default nothing is dropped. Unlikely we really need it.
- ironic dnsmasq is configured IPv4-only
  dnsmasq needs DHCPv6 options and testing in vivo.

KNOWN ISSUES (beyond us):
- One cannot use IPv6 address to reference the image for docker like we currently do, see: https://github.com/moby/moby/issues/39033 (docker_registry; docker API 400 - invalid reference format)
  workaround: use hostname/FQDN
- RabbitMQ may fail to bind to IPv6 if hostname resolves also to IPv4. This is due to old RabbitMQ versions available in images. IPv4 is preferred by default and may fail in the IPv6-only scenario. This should be no problem in real life as IPv6-only is indeed IPv6-only. Also, when new RabbitMQ (3.7.16/3.8+) makes it into images, this will no longer be relevant as we supply all the necessary config. See: https://github.com/rabbitmq/rabbitmq-server/pull/1982
- For reliable runs, at least Ansible 2.8 is required (2.8.5 confirmed to work well). Older Ansible versions are known to miss IPv6 addresses in interface facts. This may affect redeploys, reconfigures and upgrades which run after VIP address is assigned. See: https://github.com/ansible/ansible/issues/63227
- Bifrost Train does not support IPv6 deployments. See: https://storyboard.openstack.org/#!/story/2006689

Change-Id: Ia34e6916ea4f99e9522cd2ddde03a0a4776f7e2c
Implements: blueprint ipv6-control-plane
Signed-off-by: Radosław Piliszek <radoslaw.piliszek@gmail.com>
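A hedged illustration of the two new filters, with the 'url' context bracketing IPv6 addresses as listed above (the exact invocation style is an assumption):

```yaml
# Illustrative only; argument handling of the filters may differ.
api_interface_address: "{{ 'api' | kolla_address }}"
rabbitmq_connection_host: "{{ api_interface_address | put_address_in_context('url') }}"
# For an IPv6 address the 'url' context yields [<ADDR>]; 'raw' returns <ADDR>.
```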
- Oct 14, 2019
Gaëtan Trellu authored
This is to avoid split-brain. This change also adds relevant docs that sort out the HA/quorum questions.

Change-Id: I9a8c2ec4dbbd0318beb488548b2cde8f4e487dc1
Closes-Bug: #1837761
Co-authored-by: Radosław Piliszek <radoslaw.piliszek@gmail.com>
- Sep 26, 2019
Kris Lindgren authored
Sometimes as cloud admins, we want to only update code that is running in a cloud, but we don't need to do anything else. Make an action in kolla-ansible that allows us to do that.

Change-Id: I904f595c69f7276e71692696471e32fd1f88e6e8
Implements: blueprint deploy-containers-action
- Sep 23, 2019
Mark Goddard authored
This allows the install type for the project to be different from kolla_install_type. This can be used to avoid hitting bug 1786238, since kuryr only supports the source type.

Change-Id: I2b6fc85bac092b1614bccfd22bee48442c55dda4
Closes-Bug: #1786238
- Aug 08, 2019
Doug Szumski authored
The RabbitMQ role supports namespacing the service via the project_name. For example, if you change the project_name, the container name and config directory will be renamed accordingly. However the log folder is currently fixed, even though the service tries to write to one named after the project_name. This change fixes that.

Whilst you might generally use vhosts, running multiple RabbitMQ services on a single node is useful at the very least for testing, or for running 'outward RabbitMQ' on the same node.

This change is part of the work to support Cells v2.

Partially Implements: blueprint support-nova-cells
Change-Id: Ied2c24c01571327ea532ba0aaf2fc5e89de8e1fb
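A hedged sketch of reusing the role under a different project_name, which is what makes the per-project log directory matter:

```yaml
# Sketch only; the 'outward_rabbitmq' name is used purely for illustration.
- include_role:
    name: rabbitmq
  vars:
    project_name: outward_rabbitmq   # container, config and log dirs follow this name
```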
- Jul 18, 2019
Radosław Piliszek authored
Docker has no restart policy named 'never'. It has 'no'. This has bitten us already (see [1]) and might bite us again whenever we want to change the restart policy to 'no'.

This patch makes our docker integration honor all valid restart policies and only valid restart policies. All relevant docker restart policy usages are patched as well.

I added some FIXMEs in the places relevant to kolla-ansible docker integration. They are not fixed here to avoid altering behavior.

[1] https://review.opendev.org/667363

Change-Id: I1c9764fb9bbda08a71186091aced67433ad4e3d6
Signed-off-by: Radosław Piliszek <radoslaw.piliszek@gmail.com>
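For reference, the valid values are the policies Docker itself accepts; a hedged globals.yml example (note the quoting, since a bare no would be parsed as a YAML boolean):

```yaml
# globals.yml sketch; must be a policy Docker accepts:
# "no", "on-failure", "always" or "unless-stopped".
docker_restart_policy: "no"
```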
- Jun 27, 2019
ZijianGuo authored
We didn't add extra volumes support for all services in patch [1]. In order to unify the management of volumes, we need to add extra volumes support for these services.

[1] https://opendev.org/openstack/kolla-ansible/commit/12ff28a69351cf8ab4ef3390739e04862ba76983

Change-Id: Ie148accdd8e6c60df6b521d55bda12b850c0d255
Partially-Implements: blueprint support-extra-volumes
Signed-off-by: ZijianGuo <guozijn@gmail.com>
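A hedged globals.yml example of attaching an extra volume to the RabbitMQ containers; the per-service variable name follows the usual pattern and is an assumption here:

```yaml
# Sketch; <service>_extra_volumes is the per-service pattern,
# with default_extra_volumes as the global fallback.
rabbitmq_extra_volumes:
  - "/etc/foo:/etc/foo:ro"
```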