- Jul 10, 2019
-
-
Zuul authored
-
Michal Nasiadka authored
* Sometimes getting/creating the Ceph MDS keyring fails, similar to https://tracker.ceph.com/issues/16255
Change-Id: I47587cbeb8be0e782c13ba7f40367409e2daa8a8
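Transient failures like this are typically tolerated in Ansible by retrying the task. A minimal sketch of that pattern (the task body and names are illustrative, not the actual role code):

```yaml
- name: Get or create the Ceph MDS keyring
  command: >
    docker exec ceph_mon ceph auth get-or-create mds.{{ inventory_hostname }}
    osd 'allow rwx' mds 'allow' mon 'allow profile mds'
  register: ceph_mds_keyring_result
  # Retry to ride out the intermittent monitor failures described above.
  until: ceph_mds_keyring_result is succeeded
  retries: 3
  delay: 5
```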
-
Raimund Hook authored
Updated the docs to refer to the openstack client, rather than the (old) neutron client. TrivialFix
Change-Id: I82011175f7206f52570a0f7d1c6863ad8fa08fd0
-
- Jul 09, 2019
-
-
Radosław Piliszek authored
Missed by me in a recent merge. TrivialFix
Change-Id: I83b1e84a43f014ce20be8677868be3f66017e3c2
Signed-off-by: Radosław Piliszek <radoslaw.piliszek@gmail.com>
-
Zuul authored
-
Zuul authored
-
- Jul 08, 2019
-
-
Zuul authored
-
Mark Goddard authored
Due to a bug in Ansible, kolla-ansible deploy currently fails in nova with the following error when used with Ansible earlier than 2.8:

TASK [nova : Waiting for nova-compute services to register themselves] *********
task path: /home/zuul/src/opendev.org/openstack/kolla-ansible/ansible/roles/nova/tasks/discover_computes.yml:30
fatal: [primary]: FAILED! => {
    "failed": true,
    "msg": "The field 'vars' has an invalid value, which includes an undefined variable. The error was: 'nova_compute_services' is undefined\n\nThe error appears to have been in '/home/zuul/src/opendev.org/openstack/kolla-ansible/ansible/roles/nova/tasks/discover_computes.yml': line 30, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n\n- name: Waiting for nova-compute services to register themselves\n ^ here\n"
}

Example: http://logs.openstack.org/00/669700/1/check/kolla-ansible-centos-source/81b65b9/primary/logs/ansible/deploy

This was caused by https://review.opendev.org/#/q/I2915e2610e5c0b8d67412e7ec77f7575b8fe9921, which hits upon an Ansible bug described here: https://github.com/markgoddard/ansible-experiments/tree/master/05-referencing-registered-var-do-until. We can work around this by not using an intermediary variable.

Change-Id: I58f8fd0a6e82cb614e02fef6e5b271af1d1ce9af
Closes-Bug: #1835817
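The workaround amounts to referencing the registered result directly in the `until` condition, instead of aliasing it through `vars` first. A sketch of the two shapes (task body and variable names are illustrative, not the actual role code):

```yaml
# Fails on Ansible < 2.8: the intermediary variable defined under 'vars'
# refers to the registered result, which is undefined before the first
# loop iteration runs.
- name: Waiting for nova-compute services to register themselves
  command: openstack compute service list --service nova-compute -f json
  register: nova_compute_services_result
  until: nova_compute_services | length > 0
  retries: 20
  vars:
    nova_compute_services: "{{ nova_compute_services_result.stdout | from_json }}"

# Workaround: reference the registered variable directly in 'until'.
- name: Waiting for nova-compute services to register themselves
  command: openstack compute service list --service nova-compute -f json
  register: nova_compute_services_result
  until: nova_compute_services_result.stdout | from_json | length > 0
  retries: 20
```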
-
Zuul authored
-
Zuul authored
-
Zuul authored
-
Zuul authored
-
Zuul authored
-
Zuul authored
-
- Jul 07, 2019
-
-
Zuul authored
-
- Jul 05, 2019
-
-
Corey Bryant authored
This is a mechanically generated patch to ensure unit testing is in place for all of the Tested Runtimes for Train. See the Train python3-updates goal document for details: https://governance.openstack.org/tc/goals/train/python3-updates.html
Change-Id: Ic5f9c5c666e08bc34127d97f9540033536c5b08f
Story: #2005924
Task: #34216
-
Zuul authored
-
Zuul authored
-
Mark Goddard authored
* Fix wsrep sequence number detection. The log message format is 'WSREP: Recovered position: <UUID>:<seqno>', but we were picking out the UUID rather than the sequence number. This is as good as random.
* Add become: true to log file reading and removal, since I4a5ebcedaccb9261dbc958ec67e8077d7980e496 added become: true to the 'docker cp' command which creates it.
* Don't run handlers during recovery. If the config files change we would end up restarting the cluster twice.
* Wait for wsrep recovery container completion (don't detach). This avoids a potential race between wsrep recovery and the subsequent 'stop_container'.
* Finally, wait for the bootstrap host to report that it is in an OPERATIONAL state. Without this we can see errors where the MariaDB cluster is not ready when used by other services.
Change-Id: Iaf7862be1affab390f811fc485fd0eb6879fd583
Closes-Bug: #1834467
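The sequence-number fix in the first bullet can be illustrated with a minimal Jinja2 expression; the log line below is a made-up example in the format quoted above, and the task is a sketch rather than the actual role code:

```yaml
- name: Extract wsrep recovered sequence number (illustrative sketch)
  vars:
    # Example line; the real value comes from the mariadb recovery log.
    wsrep_log_line: "WSREP: Recovered position: 6b2b3f7a-1f3e-11e9-ab2e-0800274a5c83:1234"
  set_fact:
    # The UUID itself contains no ':', so the seqno is reliably
    # everything after the last ':' -- here, "1234". Splitting on an
    # earlier ':' lands inside the UUID instead.
    wsrep_recovered_seqno: "{{ wsrep_log_line.rsplit(':', 1) | last }}"
```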
-
Zuul authored
-
Zuul authored
-
- Jul 04, 2019
-
-
Zuul authored
-
Mark Goddard authored
This is the documented procedure. Change-Id: I09ca99e92b112621d66b564a88b13658632242f5
-
Mark Goddard authored
There are now several good tools for deploying Ceph, including Ceph Ansible and ceph-deploy. Maintaining our own Ceph deployment is a significant maintenance burden, and we should focus on our core mission of deploying OpenStack. Given that Ceph deployment is currently a significant part of Kolla Ansible, we will need a long deprecation period and a migration path to another tool.
Change-Id: Ic603c85c04d8794580a19f9efaa7a8589565f4f6
Partially-Implements: blueprint remove-ceph
-
Christian Berendt authored
Change-Id: Ib5490d504a5b7c9a37dda7babf1257aa661c11de
-
Mark Goddard authored
There is a race condition during nova deploy, since we wait for at least one compute service to register itself before performing cells v2 host discovery. It's quite possible that other compute nodes will not yet have registered, and will therefore not be discovered. This leaves them not mapped into a cell, and results in the following error if the scheduler picks one when booting an instance:

Host 'xyz' is not mapped to any cell

The problem has been exacerbated by merging a fix [1][2] for a nova race condition, which disabled the dynamic periodic discovery mechanism in the nova scheduler.

This change fixes the issue by waiting for all expected compute services to register themselves before performing host discovery. This includes both virtualised compute services and bare metal compute services.

[1] https://bugs.launchpad.net/kolla-ansible/+bug/1832987
[2] https://review.opendev.org/665554

Change-Id: I2915e2610e5c0b8d67412e7ec77f7575b8fe9921
Closes-Bug: #1835002
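The fix can be sketched as a task that retries until the number of registered nova-compute services covers every expected host, rather than stopping as soon as the first one appears. All names below are illustrative assumptions, not the actual role code:

```yaml
- name: Waiting for nova-compute services to register themselves
  command: >
    docker exec kolla_toolbox openstack
    compute service list --service nova-compute -f json
  register: nova_compute_services_result
  # Succeed only once every expected compute host has registered,
  # so that cells v2 host discovery maps all of them into a cell.
  until: >-
    nova_compute_services_result.stdout | from_json
    | map(attribute='Host') | list | length >= groups['compute'] | length
  retries: 20
  delay: 10
  run_once: true
```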
-
Radosław Piliszek authored
Change-Id: I9773a7c4f7a5d31a83c10562057ce772439b9693
Signed-off-by: Radosław Piliszek <radoslaw.piliszek@gmail.com>
-
Zuul authored
-
Zuul authored
-
Zuul authored
-
- Jul 03, 2019
-
-
Zuul authored
-
Zuul authored
-
Zuul authored
-
Radosław Piliszek authored
This is to ensure that a Depends-On does not cause Zuul to skip picking up the change for gating due to there being no notifications between queues. Previously, W+1-ing a change which depended on a non-merged change from the other project caused it to remain in the same state.
Change-Id: Ib2d88471ac5730c00b5a9721066d1fb3f2998c9c
Signed-off-by: Radosław Piliszek <radoslaw.piliszek@gmail.com>
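For context, cross-project gating in Zuul relies on the projects sharing a change queue in the gate pipeline. A sketch of what that looks like in a project's .zuul.yaml (queue and job names here are illustrative, not the actual configuration):

```yaml
# Projects that declare the same 'queue' in their gate pipeline share
# one change queue, so Zuul can track Depends-On between them and
# enqueue a dependent change when its dependency is approved.
- project:
    gate:
      queue: kolla
      jobs:
        - kolla-ansible-centos-source
```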
-
gujin authored
1. Update the UPPER_CONSTRAINTS_FILE to releases.openstack.org [1]
2. Blacklist sphinx 2.1.0 [2]
[1]: http://lists.openstack.org/pipermail/openstack-discuss/2019-May/006478.html
[2]: https://github.com/sphinx-doc/sphinx/issues/6440
Change-Id: Ie5f9ae1bd5c45617c6b7fde0e490d471e172c24e
-
Radosław Piliszek authored
Change-Id: I9e3650e83c72081ef2679fe01842bb9be6a4eb7c
Signed-off-by: Radosław Piliszek <radoslaw.piliszek@gmail.com>
-
- Jul 02, 2019
-
-
Radosław Piliszek authored
Otherwise ARA had only the stderr part and the logs only the stdout part, which made ordered analysis harder. Additionally, add -vvv for the bootstrap-servers run.
Change-Id: Ia42ac9b90a17245e9df277c40bda24308ebcd11d
Signed-off-by: Radosław Piliszek <radoslaw.piliszek@gmail.com>
-
Rafael Weingärtner authored
This proposal adds support to Kolla-Ansible for deploying CloudKitty with the InfluxDB storage backend. InfluxDB support as a storage backend for CloudKitty was introduced with the following commit: https://github.com/openstack/cloudkitty/commit/c4758e78b49386145309a44623502f8095a2c7ee

Problem Description
===================

With the addition of support for InfluxDB in CloudKitty, which is achieving general availability via the Stein release, we need a method to easily configure/support this storage backend via Kolla-Ansible. Kolla-Ansible is already able to deploy and configure an InfluxDB system. Therefore, this proposal will use the InfluxDB deployment configured via Kolla-Ansible and connect it to CloudKitty as its storage backend.

If we do not provide a method for users (operators) to manage the CloudKitty storage backend via Kolla-Ansible, they have to apply these configurations manually (or via some other set of automated scripts), which creates a distributed set of configuration files and "configuration" scripts with different versioning schemas and life cycles.

Proposed Change
===============

Architecture
------------

We propose a flag that users can set to make Kolla-Ansible configure CloudKitty to use InfluxDB as the storage backend. When this flag is enabled, Kolla-Ansible will also enable deployment of InfluxDB automatically. CloudKitty will be configured according to [1] and [2]. We will also externalize the "retention_policy", "use_ssl", and "insecure" options, to allow fine-grained configuration by operators. All of these options are only applied when explicitly configured; when they are not set, the default value/behavior defined in CloudKitty is used. Moreover, when "use_ssl" is set to "true", the user will be able to set "cafile" to a custom trusted CA file. Again, if these variables are not set, the CloudKitty defaults are used.

Implementation
--------------

We need to introduce a new variable called `cloudkitty_storage_backend`. Valid options are `sqlalchemy` or `influxdb`. The default value in Kolla-Ansible is `sqlalchemy` for backward compatibility. Then, the first step is to change the definition of the following variable: `/ansible/group_vars/all.yml:enable_influxdb: "{{ enable_monasca | bool }}"`. We also need to enable InfluxDB when CloudKitty is configured to use it as the storage backend. Afterwards, we need to add tasks to the CloudKitty role to create the InfluxDB schema and populate the configuration files accordingly.

Alternatives
------------

The alternative would be to apply the configuration manually, or to handle it via a separate set of scripts and configuration files, which can become cumbersome over time.

Security Impact
---------------

None identified by the author of this spec.

Notifications Impact
--------------------

Operators that are already deploying CloudKitty with InfluxDB as the storage backend would need to convert their configurations to Kolla-Ansible (if they wish to adopt Kolla-Ansible for these tasks). Also, deployments (OpenStack environments) that were created with CloudKitty using storage v1 will need to migrate all of their data to v2 before enabling InfluxDB as the storage system.

Other End User Impact
---------------------

None.

Performance Impact
------------------

None.

Other Deployer Impact
---------------------

New configuration options will be available for CloudKitty:

* cloudkitty_storage_backend
* cloudkitty_influxdb_retention_policy
* cloudkitty_influxdb_use_ssl
* cloudkitty_influxdb_cafile
* cloudkitty_influxdb_insecure_connections
* cloudkitty_influxdb_name

Developer Impact
----------------

None.

Implementation
==============

Assignee
--------

* `Rafael Weingärtner <rafaelweingartne>`

Work Items
----------

* Extend the InfluxDB "enable/disable" variable
* Add new tasks to configure CloudKitty according to the new variables presented above
* Write documentation and release notes

Dependencies
============

None.

Documentation Impact
====================

New documentation for the feature.

References
==========

[1] `https://docs.openstack.org/cloudkitty/latest/admin/configuration/storage.html#influxdb-v2`
[2] `https://docs.openstack.org/cloudkitty/latest/admin/configuration/collector.html#metric-collection`

Change-Id: I65670cb827f8ca5f8529e1786ece635fe44475b0
Signed-off-by: Rafael Weingärtner <rafael@apache.org>
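The deployer-facing options listed under "Other Deployer Impact" could look like this in globals.yml; the values shown are illustrative examples, not defaults taken from the spec:

```yaml
# Select the CloudKitty storage backend: 'sqlalchemy' (the backward-
# compatible default) or 'influxdb'. Choosing 'influxdb' also enables
# deployment of InfluxDB itself.
cloudkitty_storage_backend: "influxdb"

# Optional InfluxDB settings; when left unset, CloudKitty's own
# defaults apply.
cloudkitty_influxdb_name: "cloudkitty"
cloudkitty_influxdb_retention_policy: "autogen"
cloudkitty_influxdb_use_ssl: "true"
cloudkitty_influxdb_cafile: "/etc/ssl/certs/ca-bundle.crt"
cloudkitty_influxdb_insecure_connections: "false"
```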
-
Mark Goddard authored
This performs the same steps as deploy-bifrost, but first stops the bifrost services and container if they are running. This can help where 'docker stop' may lead to an ungraceful shutdown, possibly due to running multiple services in one container.
Change-Id: I131ab3c0e850a1d7f5c814ab65385e3a03dfcc74
Implements: blueprint bifrost-upgrade
Closes-Bug: #1834332
-
Zuul authored
-