- May 20, 2020
Michal Nasiadka authored
Depends-On: https://review.opendev.org/710217/
Change-Id: I85652f23e487c40192106d23f2cdd45a3077deca
- Apr 09, 2020
Dincer Celik authored
Some services look for /etc/timezone on Debian/Ubuntu, so we should introduce it to the containers. In addition, this adds prechecks for /etc/localtime and /etc/timezone.
Closes-Bug: #1821592
Change-Id: I9fef14643d1bcc7eee9547eb87fa1fb436d8a6b3
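A minimal sketch of what such a precheck could look like, assuming a stat-based check on Debian-family hosts (task names and wording are illustrative, not the exact tasks from the patch):

```yaml
- name: Check if /etc/timezone exists
  stat:
    path: /etc/timezone
  register: etc_timezone
  when: ansible_os_family == "Debian"

- name: Fail if /etc/timezone is missing on Debian/Ubuntu
  fail:
    msg: "/etc/timezone is required so that it can be mounted into the containers"
  when: ansible_os_family == "Debian" and not etc_timezone.stat.exists
```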
- Mar 25, 2020
LinPeiWen authored
The MariaDB container name is hard-coded as 'mariadb' in some places, even though the role defaults define a configurable container_name variable. If that variable is changed during deployment, those places still use the fixed 'mariadb' name instead of the configured one.
Change-Id: Ie8efa509953d5efa5c3073c9b550be051a7f4f9b
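For illustration, the fix amounts to referencing the configured name everywhere instead of the literal string; the exact variable layout below is assumed from the role defaults and may differ:

```yaml
- name: Restart mariadb container
  become: true
  kolla_docker:
    action: "recreate_or_restart_container"
    name: "{{ mariadb_services.mariadb.container_name }}"  # not the literal 'mariadb'
```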
- Mar 02, 2020
Radosław Piliszek authored
Both include_role and import_role expect the role's name to be given via the "name" param instead of "role". The latter worked but caused errors with ansible-lint.
See: https://review.opendev.org/694779
Change-Id: I388d4ae27111e430d38df1abcb6c6127d90a06e0
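An illustrative example of the renamed parameter (the role name here is just a placeholder):

```yaml
# Before - worked, but flagged by ansible-lint:
- include_role:
    role: mariadb

# After - the lint-clean parameter name:
- include_role:
    name: mariadb
```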
- Feb 28, 2020
Mark Goddard authored
We assume that all groups are present in the inventory, and quite obtuse errors can result if any are not. This change adds a precheck that checks for the presence of all expected groups in the inventory for each service. It also introduces a common service-precheck role that we can use for other common prechecks.
Change-Id: Ia0af1e7df4fff7f07cd6530e5b017db8fba530b3
Partially-Implements: blueprint improve-prechecks
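A simplified sketch of the kind of check this adds, assuming a fail-fast task per expected group (the real service-precheck role is more generic, and the group names below are examples):

```yaml
- name: Fail if an expected inventory group is missing
  fail:
    msg: "Group '{{ item }}' was not found in the inventory"
  when: item not in groups
  with_items:
    - mariadb
    - haproxy
```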
- Feb 19, 2020
Michal Nasiadka authored
Change-Id: I26206bece95d31c0182e75f2a585c50d6f0fad6f
- Feb 02, 2020
Radosław Piliszek authored
This fixes issues reported by Mark:
- possible failure with a 4-node cluster (however unlikely)
- failure to stop all nodes from progressing when conditions are not valid (due to "any_errors_fatal: False")
Change-Id: Ib6995bf4c99202c9813859b3d9e2f420448f0445
- Jan 15, 2020
Radosław Piliszek authored
These issues affected both deploy (and reconfigure) and upgrade, resulting in WSREP issues, failed deploys or the need to recover the cluster. This patch makes sure k-a does not abruptly terminate nodes and break the cluster. This is achieved by a cleaner separation between stages (bootstrap, restart current, deploy new) and 3 phases for restarts (to keep the quorum). Upgrade actions, which operate on a healthy cluster, were moved to their own section. Service restart was refactored. We no longer rely on the master/slave distinction as all nodes are masters in Galera.
Closes-bug: #1857908
Closes-bug: #1859145
Change-Id: I83600c69141714fc412df0976f49019a857655f5
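Conceptually, the phased restart boils down to never bouncing more than a minority of nodes at once. A rough sketch of that idea (the actual role logic is considerably more involved, and the batch size here is an assumption):

```yaml
- name: Restart MariaDB nodes in batches that preserve quorum
  hosts: mariadb
  serial: "33%"
  tasks:
    - name: Restart mariadb container
      become: true
      kolla_docker:
        action: restart_container
        name: mariadb
```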
- Jan 13, 2020
Mark Goddard authored
Change-Id: Ibf40216b847f103e383f19fe1ef608a75fcfd452
Co-Authored-By: Marcin Juszkiewicz <marcin.juszkiewicz@linaro.org>
Mark Goddard authored
Change-Id: Iecbc2fe5fa3391dca5a3cc7e575314b95942114b
Co-Authored-By: Marcin Juszkiewicz <marcin.juszkiewicz@linaro.org>
- Jan 10, 2020
Mark Goddard authored
For the CentOS 7 to 8 transition, we will have a period where both CentOS 7 and 8 images are available. We differentiate these images via a tag - the CentOS 8 images will have a tag of train-centos8 (or master-centos8 temporarily). To achieve this, and maintain backwards compatibility for the openstack_release variable, we introduce a new 'openstack_tag' variable. This variable is based on openstack_release, but has a suffix of 'openstack_tag_suffix', which is empty except on CentOS 8 where it has a value of '-centos8'.
Change-Id: I12ce4661afb3c255136cdc1aabe7cbd25560d625
Partially-Implements: blueprint centos-rhel-8
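In variable terms, the commit message describes roughly this relationship (a sketch, not the literal defaults):

```yaml
openstack_release: "train"
openstack_tag_suffix: ""  # "-centos8" on CentOS 8 hosts
openstack_tag: "{{ openstack_release }}{{ openstack_tag_suffix }}"
```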
- Jan 02, 2020
yj.bai authored
CentOS 7 uses an old Galera version which has multiple issues handling IPv6 addressing. This patch applies two workarounds for CentOS 7.
Co-Authored-By: Jeffrey Zhang <jeffrey.zhang@99cloud.net>
Co-Authored-By: Radosław Piliszek <radoslaw.piliszek@gmail.com>
Change-Id: I7c178aba60c389e65075e0e6cbe4dfa5b8ce06ec
Closes-Bug: #1856532
Signed-off-by: yj.bai <bai.yongjun@99cloud.net>
- Nov 22, 2019
Michal Nasiadka authored
As part of the effort to implement Ansible code linting in CI (using ansible-lint), we need to implement recommendations from the ansible-lint output [1]. One of them is to stop using local_action in favor of delegate_to - to increase readability and match the style of typical Ansible tasks.
[1]: https://review.opendev.org/694779/
Partially implements: blueprint ansible-lint
Change-Id: I46c259ddad5a6aaf9c7301e6c44cd8a1d5c457d3
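An illustrative before/after in the spirit of this change (the task itself is made up):

```yaml
# Before - flagged by ansible-lint:
- name: Gather the first MariaDB host
  local_action: command echo "{{ groups['mariadb'] | first }}"

# After - equivalent, more readable form:
- name: Gather the first MariaDB host
  command: echo "{{ groups['mariadb'] | first }}"
  delegate_to: localhost
```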
- Nov 07, 2019
Mark Goddard authored
After performing a recovery of MariaDB, the mariadb containers are left without a restart policy. This leaves them unable to recover from the crash of a single galera node. There is another issue, in that the 'master' node is left in a bootstrap configuration, with the --wsrep-new-cluster argument configured as BOOTSTRAP_ARGS. This change fixes these issues by removing the restart policy of 'no' from the 'slave' containers, and recreating the master container without the restart policy or bootstrap arguments.
Change-Id: I36c875611931163ca2c29ae93b71d3af64cb197c
Closes-Bug: #1851594
- Nov 04, 2019
lklimin authored
Change-Id: I12fa6ae8dcec79485c30c4fea2977875aa8f4fae
Closes-Bug: #1850792
- Nov 01, 2019
Mark Goddard authored
Currently, Xtrabackup is used for database backups. However, Xtrabackup is not compatible with MariaDB 10.3. This change switches to use mariabackup [1], which is available in the mariadb image. The documented full and incremental restore procedures have been modified to use mariabackup, following [2] and [3].
[1] https://mariadb.com/kb/en/library/mariabackup-overview/
[2] https://mariadb.com/kb/en/library/full-backup-and-restore-with-mariabackup/
[3] https://mariadb.com/kb/en/library/incremental-backup-and-restore-with-mariabackup/
Change-Id: Id52b9b1f7b013277e401b1f6b8aed34473d2b2c4
Closes-Bug: #1843043
Depends-On: https://review.opendev.org/691290
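For reference, a full backup with mariabackup looks roughly like this; the container name, paths and credential variable are placeholders rather than the role's actual values:

```yaml
- name: Take a full backup with mariabackup
  become: true
  command: >
    docker exec mariadb mariabackup
    --backup
    --target-dir=/backup/full
    --user=backup
    --password={{ mariadb_backup_database_password }}
  no_log: true
```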
Mark Goddard authored
We use the wsrep_notify.sh script to notify haproxy of changes in Galera cluster membership. When xtrabackup was used for the state transfer, nodes in the Donor state would be included in the backend pool. However, since the switch to mariabackup in the Stein cycle, we now remove nodes in the Donor state from the backend pool. This change ensures that nodes in the Donor state are included in the backend pool when the SST method is either xtrabackup or mariabackup.
https://galeracluster.com/library/documentation/mysql-wsrep-options.html#wsrep-notify-cmd
Change-Id: Ide4301779a0d221ae5d4dbdd4873fb8a40eb7297
Co-authored-by: Radosław Piliszek <radoslaw.piliszek@gmail.com>
Closes-Bug: #1850945
- Oct 25, 2019
Mark Goddard authored
The MariaDB handlers require master_host to be set.
TrivialFix
Change-Id: I162efbd9e615b86dcdc6e8a4af081cda2f8b0b2b
- Oct 16, 2019
Radosław Piliszek authored
Introduce kolla_address filter.
Introduce put_address_in_context filter.
Add AF config to vars.
Address contexts:
- raw (default): <ADDR>
- memcache: inet6:[<ADDR>]
- url: [<ADDR>]
Other changes:
- globals.yml - mention just IP in comment
- prechecks/port_checks (api_intf) - kolla_address handles validation
- 3x interface conditional (swift configs: replication/storage)
- 2x interface variable definition with hostname (haproxy listens; api intf)
- 1x interface variable definition with hostname with bifrost exclusion (baremetal pre-install /etc/hosts; api intf)
- neutron's ml2 'overlay_ip_version' set to 6 for IPv6 on tunnel network
- basic multinode source CI job for IPv6
- prechecks for rabbitmq and qdrouterd use proper NSS database now
- MariaDB Galera Cluster WSREP SST mariabackup workaround (socat and IPv6)
- Ceph naming workaround in CI (TODO: probably needs documenting)
- RabbitMQ IPv6-only proto_dist
- Ceph ms switch to IPv6 mode
- Remove neutron-server ml2_type_vxlan/vxlan_group setting as it is not used (let's avoid any confusion) and could break setups without proper multicast routing if it started working (also IPv4-only)
- haproxy upgrade checks for slaves based on ipv6 addresses
TODO:
- ovs-dpdk grabs ipv4 network address (w/ prefix len / submask) - not supported, invalid by default because neutron_external has no address. No idea whether ovs-dpdk works at all atm.
- ml2 for xenapi - Xen is not supported too well; this would require working with XenAPI facts.
- rp_filter setting - this would require meddling with ip6tables (there is no sysctl param). By default nothing is dropped. Unlikely we really need it.
- ironic dnsmasq is configured IPv4-only - dnsmasq needs DHCPv6 options and testing in vivo.
KNOWN ISSUES (beyond us):
- One cannot use an IPv6 address to reference the image for docker like we currently do, see: https://github.com/moby/moby/issues/39033 (docker_registry; docker API 400 - invalid reference format). Workaround: use hostname/FQDN.
- RabbitMQ may fail to bind to IPv6 if the hostname also resolves to IPv4. This is due to old RabbitMQ versions available in images. IPv4 is preferred by default and may fail in the IPv6-only scenario. This should be no problem in real life as IPv6-only is indeed IPv6-only. Also, when new RabbitMQ (3.7.16/3.8+) makes it into images, this will no longer be relevant as we supply all the necessary config. See: https://github.com/rabbitmq/rabbitmq-server/pull/1982
- For reliable runs, at least Ansible 2.8 is required (2.8.5 confirmed to work well). Older Ansible versions are known to miss IPv6 addresses in interface facts. This may affect redeploys, reconfigures and upgrades which run after the VIP address is assigned. See: https://github.com/ansible/ansible/issues/63227
- Bifrost Train does not support IPv6 deployments. See: https://storyboard.openstack.org/#!/story/2006689
Change-Id: Ia34e6916ea4f99e9522cd2ddde03a0a4776f7e2c
Implements: blueprint ipv6-control-plane
Signed-off-by: Radosław Piliszek <radoslaw.piliszek@gmail.com>
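A hedged illustration of the address contexts listed above; the variable names and filter signatures are assumptions, not copied from the implementation:

```yaml
# Contexts shown with an example IPv6 address fd00::1:
#   raw (default): fd00::1
#   url:           [fd00::1]
#   memcache:      inet6:[fd00::1]
database_connection_host: "{{ api_interface_address | put_address_in_context('url') }}"
memcached_servers: "{{ api_interface_address | put_address_in_context('memcache') }}:{{ memcached_port }}"
```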
- Sep 26, 2019
Kris Lindgren authored
Sometimes, as cloud admins, we want to update only the code that is running in a cloud, without doing anything else. This adds an action to kolla-ansible that allows us to do exactly that.
Change-Id: I904f595c69f7276e71692696471e32fd1f88e6e8
Implements: blueprint deploy-containers-action
- Sep 23, 2019
Mark Goddard authored
This allows the install type for the project to be different from kolla_install_type. This can be used to avoid hitting bug 1786238, since kuryr only supports the source type.
Change-Id: I2b6fc85bac092b1614bccfd22bee48442c55dda4
Closes-Bug: #1786238
- Aug 20, 2019
Doug Szumski authored
The MariaDB role's HAProxy config section exposes MariaDB on mariadb_port, which may not always be the same as database_port. The HAProxy role checks that database_port is free, not mariadb_port. This could mean that the check passes while the actual port which HAProxy will attempt to use is taken. This change configures HAProxy to talk to the MariaDB instances on mariadb_port, and maps them to database_port, which is used by most services as part of the DB connection string. There is a small risk that this may break someone's override config.
Change-Id: I9507ee709cb21eb743112107770ed3170c61ef74
- Aug 15, 2019
Scott Solkhon authored
Explicitly wait for the database to be accessible via the load balancer. Sometimes it can reject connections even when all database services are up, possibly due to the health check polling in HAProxy.
Closes-Bug: #1840145
Change-Id: I7601bb710097a78f6b29bc4018c71f2c6283eef2
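A minimal sketch of the idea, assuming a plain TCP check against the VIP; the actual task may instead perform a MySQL-level check using the role's own variables:

```yaml
- name: Wait for MariaDB to be reachable through the load balancer
  wait_for:
    host: "{{ database_address }}"
    port: "{{ database_port }}"
    connect_timeout: 1
    timeout: 60
  run_once: true
```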
- Jul 18, 2019
Radosław Piliszek authored
Docker has no restart policy named 'never'. It has 'no'. This has bitten us already (see [1]) and might bite us again whenever we want to change the restart policy to 'no'. This patch makes our Docker integration honor all valid restart policies, and only valid restart policies. All relevant Docker restart policy usages are patched as well. I added some FIXMEs around places relevant to the kolla-ansible Docker integration. They are not fixed here so as not to alter behavior.
[1] https://review.opendev.org/667363
Change-Id: I1c9764fb9bbda08a71186091aced67433ad4e3d6
Signed-off-by: Radosław Piliszek <radoslaw.piliszek@gmail.com>
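To make the point concrete: Docker's valid restart policies are 'no', 'on-failure', 'always' and 'unless-stopped'. A hedged example of passing the tricky one through kolla_docker (the image variable and default are assumptions):

```yaml
- name: Start mariadb container without automatic restarts
  become: true
  kolla_docker:
    action: start_container
    name: mariadb
    image: "{{ mariadb_image_full | default('kolla/centos-binary-mariadb:train') }}"
    restart_policy: "no"  # quote it, or YAML parses 'no' as the boolean False
```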
- Jul 05, 2019
Mark Goddard authored
* Fix wsrep sequence number detection. The log message format is 'WSREP: Recovered position: <UUID>:<seqno>' but we were picking out the UUID rather than the sequence number. This is as good as random.
* Add become: true to log file reading and removal, since I4a5ebcedaccb9261dbc958ec67e8077d7980e496 added become: true to the 'docker cp' command which creates it.
* Don't run handlers during recovery. If the config files change, we would end up restarting the cluster twice.
* Wait for wsrep recovery container completion (don't detach). This avoids a potential race between wsrep recovery and the subsequent 'stop_container'.
* Finally, we now wait for the bootstrap host to report that it is in an OPERATIONAL state. Without this we can see errors where the MariaDB cluster is not ready when used by other services.
Change-Id: Iaf7862be1affab390f811fc485fd0eb6879fd583
Closes-Bug: #1834467
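For instance, the sequence number (not the UUID) can be pulled out of the recovery log along these lines; the regex, register name and fact name are illustrative, not the role's actual task:

```yaml
- name: Extract the recovered wsrep sequence number (not the UUID)
  set_fact:
    wsrep_recovered_seqno: >-
      {{ wsrep_recovery_log.stdout
         | regex_search('WSREP: Recovered position: [0-9a-f-]+:(-?\d+)', '\1')
         | first }}
```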
- Jun 27, 2019
ZijianGuo authored
We don't add extra volumes support for all services in patch [1]. In order to unify the management of volumes, we need to add extra volumes support for these services too.
[1] https://opendev.org/openstack/kolla-ansible/commit/12ff28a69351cf8ab4ef3390739e04862ba76983
Change-Id: Ie148accdd8e6c60df6b521d55bda12b850c0d255
Partially-Implements: blueprint support-extra-volumes
Signed-off-by: ZijianGuo <guozijn@gmail.com>
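As a usage illustration, an operator-side override might look like this; the variable name follows the usual kolla-ansible pattern but is an assumption here, and the mount is an example:

```yaml
mariadb_extra_volumes:
  - "/etc/pki/mariadb-certs:/etc/mysql/certs:ro"
```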
- Jun 06, 2019
Mark Goddard authored
Many tasks that use Docker have 'become' specified already, but not all. This change ensures all tasks that use the following modules have become:
* kolla_docker
* kolla_ceph_keyring
* kolla_toolbox
* kolla_container_facts
It also adds become for 'command' tasks that use the docker CLI.
Change-Id: I4a5ebcedaccb9261dbc958ec67e8077d7980e496
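The pattern applied is simply the following (the task and register name are illustrative):

```yaml
- name: Get container facts
  become: true
  kolla_container_facts:
    name:
      - mariadb
  register: container_facts
```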
- May 02, 2019
Raimund Hook authored
Since Ansible 2.5, the use of jinja tests as filters has been deprecated. I've run the script provided by the ansible team to 'fix' the jinja filters to conform to the newer syntax. This fixes the deprecation warnings.
Change-Id: I844ecb7bec94e561afb09580f58b1bf83a6d00bd
Closes-bug: #1827370
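The syntax change amounts to replacing filter-style tests with test-style ones, for example (the task itself is illustrative):

```yaml
# Before (deprecated since Ansible 2.5):
#   when: check_result | succeeded
# After:
- name: Continue only if the previous check succeeded
  debug:
    msg: "MariaDB check passed"
  when: check_result is succeeded
```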
- Apr 08, 2019
Mark Goddard authored
Since we are now in the Train cycle, we can be sure that any running MariaDB containers can be safely stopped, and we do not need to perform an explicit shutdown prior to restarting them.
Change-Id: I5450690f1cbe0c995e8e4b01a76e90dac2574d61
Related-Bug: #1820325
- Apr 03, 2019
Jim Rollenhagen authored
This is how services reach mariadb; verify it that way.
Closes-Bug: #1823005
Change-Id: I9924ad050118b8a853e2309654a089f65178cd77
- Apr 02, 2019
Mark Goddard authored
Several config file permissions are incorrect on the host. In general, files should be 0660, and directories and executables 0770.
Change-Id: Id276ac1864f280554e98b937f2845bb424d521de
Closes-Bug: #1821579
- Mar 23, 2019
Mark Goddard authored
Upgrading MariaDB from Rocky to Stein currently fails, with the new container left continually restarting. The problem is that the Rocky container does not shut down cleanly, leaving behind state that the new container cannot recover. The container does not shut down cleanly because we run dumb-init with a --single-child argument, causing it to forward signals only to the process executed by dumb-init. In our case this is mysqld_safe, which ignores various signals, including SIGTERM. After a (default 10 second) timeout, Docker then kills the container.
A Kolla change [1] removes the --single-child argument from dumb-init for the MariaDB container, however we still need to support upgrading from Rocky images that don't have this change. To do that, we add new handlers that execute 'mysqladmin shutdown' to cleanly shut down the service.
A second issue with the current upgrade approach is that we don't execute mysql_upgrade after starting the new service. This can leave the database state using the format of the previous release. This patch also adds handlers to execute mysql_upgrade.
[1] https://review.openstack.org/644244
Depends-On: https://review.openstack.org/644244
Depends-On: https://review.openstack.org/645990
Change-Id: I08a655a359ff9cfa79043f2166dca59199c7d67f
Closes-Bug: #1820325
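In rough terms, the new handlers run the two commands named above. The tasks below are a hedged sketch, not the actual handler definitions; the container name and password variable are assumptions:

```yaml
- name: Shut down MariaDB cleanly (for Rocky images that ignore SIGTERM)
  become: true
  command: >
    docker exec mariadb
    mysqladmin shutdown --user=root --password={{ database_password }}
  no_log: true

- name: Run mysql_upgrade after the new container has started
  become: true
  command: >
    docker exec mariadb
    mysql_upgrade --user=root --password={{ database_password }}
  no_log: true
```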
- Feb 15, 2019
Michal Nasiadka authored
These issues intermittently show up in various branches; in all cases the cause is a wrong path used for the resolveip binary. Similar to the recent kolla-ansible-ubuntu-source job failures.
Change-Id: I8cce42b60897e4ceb8d3b0bd5181fda88b10c2b8
- Feb 14, 2019
Michal Nasiadka authored
- py35/py36 jobs are failing: the Python 3.6 pycache also includes links, so those also need to be removed by the tox testenv.
- kolla-ansible-ubuntu-source job is failing: without basedir set in galera.cnf, mysql_install_db looks for resolveip in /usr/sbin instead of /usr/bin, and thus complains that it can resolve neither $HOSTNAME nor localhost.
Change-Id: I40514c0a7c43ae01c7680aac81123942be1cdef9
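The second fix boils down to setting basedir in the generated galera.cnf; a hypothetical sketch of that idea (the path and the ini_file approach are assumptions for illustration, not the role's template change):

```yaml
- name: Ensure basedir is set in galera.cnf
  become: true
  ini_file:
    path: /etc/kolla/mariadb/galera.cnf
    section: mysqld
    option: basedir
    value: /usr
```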
- Dec 11, 2018
Eduardo Gonzalez authored
xtrabackup doesn't work with MariaDB 10.3 and needs to be changed to the mariadb-backup tool. For now, only Galera is migrated, not the kolla backup tool, in order to fix the CI.
https://jira.mariadb.org/browse/MDEV-15774
Change-Id: Ie77ae41e419873feed4b036a307887b22455183b
Depends-On: Icefe3a77fb12d57c869521000d458e3f58435374
- Nov 26, 2018
Eduardo Gonzalez authored
With this change, an operator is able to stop a service's containers without stopping all services on a host. This change is the starting point for fast-forward upgrades support. In later changes, new flags will be introduced to disable stopping data plane services during upgrades.
Change-Id: Ifde7a39d7d8596ef0d7405ecf1ac1d49a459d9ef
Implements: blueprint support-stop-containers
- Nov 22, 2018
Nick Jones authored
blueprint database-backup-recovery
Introduce a new option, mariadb_backup, which takes a backup of all databases hosted in MariaDB. Backups are performed using XtraBackup, the output of which is saved to a dedicated Docker volume on the target host (which defaults to the first node in the MariaDB cluster). It supports either full (the default) or incremental backups.
Change-Id: Ied224c0d19b8734aa72092aaddd530155999dbc3
- Sep 26, 2018
Adam Harwell authored
Having all services in one giant haproxy file makes altering configuration for a service both painful and dangerous. Each service should be configured with a simple set of variables and rendered with a single unified template.
Two new templates are available:
* haproxy_single_service_listen.cfg.j2: close to the original style, but only one service per file
* haproxy_single_service_split.cfg.j2: using the newer haproxy syntax for separated frontend and backend
For now the default will be the single listen block, for ease of transition.
Change-Id: I6e237438fbc0aa3c89a3c8bd706a53b74e71904b
- Aug 13, 2018
caoyuan authored
With more recent versions of Ansible, we should use "is" instead of "|" for Jinja tests. This change updates the role accordingly.
Change-Id: I6fba56fca182349972e8b0ee5452b37aa4090e0c
- Jul 26, 2018
Lakshmi Prasanna Goutham Pratapa authored
This commit applies resource constraints to a few more OpenStack services. A commit applying constraints to the last set of services will follow.
Depends-on: Icafa54baca24d2de64238222a5677b9d8b90e2aa
Change-Id: I39004f54281f97d53dfa4b1dbcf248650ad6f186