- Jul 05, 2019
-
Zuul authored
-
- Jul 04, 2019
-
Zuul authored
-
Christian Berendt authored
Change-Id: Ib5490d504a5b7c9a37dda7babf1257aa661c11de
-
Zuul authored
-
Zuul authored
-
Zuul authored
-
- Jul 03, 2019
-
Zuul authored
-
Zuul authored
-
Zuul authored
-
Radosław Piliszek authored
This is to ensure that a Depends-On does not prevent Zuul from picking up the change for gating due to the lack of notifications between queues. Previously, W+1-ing a change which depended on a non-merged change from the other project caused it to remain in the same state.

Change-Id: Ib2d88471ac5730c00b5a9721066d1fb3f2998c9c
Signed-off-by: Radosław Piliszek <radoslaw.piliszek@gmail.com>
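A minimal sketch of the kind of Zuul project configuration this refers to, assuming the fix places both projects in a shared gate queue so dependent changes are enqueued together; the queue and job names below are illustrative, not taken from the actual change:

    # Hypothetical .zuul.yaml snippet: both kolla and kolla-ansible would
    # declare the same gate queue so a change with a Depends-On on the
    # other project is still enqueued for gating.
    - project:
        gate:
          queue: kolla
          jobs:
            - kolla-ansible-centos-source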
-
gujin authored
1. Update the UPPER_CONSTRAINTS_FILE to releases.openstack.org [1]
2. Blacklist sphinx 2.1.0 [2]

[1] http://lists.openstack.org/pipermail/openstack-discuss/2019-May/006478.html
[2] https://github.com/sphinx-doc/sphinx/issues/6440

Change-Id: Ie5f9ae1bd5c45617c6b7fde0e490d471e172c24e
-
Radosław Piliszek authored
Change-Id: I9e3650e83c72081ef2679fe01842bb9be6a4eb7c
Signed-off-by: Radosław Piliszek <radoslaw.piliszek@gmail.com>
-
- Jul 02, 2019
-
Radosław Piliszek authored
Otherwise ara had only the stderr part and the logs only the stdout part, which made ordered analysis harder. Additionally, add -vvv for the bootstrap-servers run.

Change-Id: Ia42ac9b90a17245e9df277c40bda24308ebcd11d
Signed-off-by: Radosław Piliszek <radoslaw.piliszek@gmail.com>
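A minimal sketch of the pattern described above, assuming the CI playbook runs kolla-ansible through a shell task; task name, inventory variable and log path are illustrative:

    - name: Run kolla-ansible bootstrap-servers with merged output
      # 2>&1 merges stderr into stdout so ara (which records the task
      # result) and the written log file both capture one ordered stream;
      # -vvv adds the extra verbosity mentioned above.
      shell: >
        kolla-ansible -i {{ kolla_inventory_path }} bootstrap-servers -vvv
        2>&1 | tee /tmp/logs/ansible/bootstrap-servers
      args:
        executable: /bin/bash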
-
Rafael Weingärtner authored
This proposal adds support to Kolla-Ansible for deploying Cloudkitty with the InfluxDB storage system. Support for InfluxDB as the storage backend for Cloudkitty was introduced with the following commit:
https://github.com/openstack/cloudkitty/commit/c4758e78b49386145309a44623502f8095a2c7ee

Problem Description
===================
With the addition of support for InfluxDB in Cloudkitty, which is achieving general availability via the Stein release, we need a method to easily configure/support this storage backend system via Kolla-Ansible.

Kolla-Ansible is already able to deploy and configure an InfluxDB system. Therefore, this proposal will use the InfluxDB deployment configured via Kolla-Ansible and connect it to CloudKitty as its storage backend.

If we do not provide a method for users (operators) to manage the Cloudkitty storage backend via Kolla-Ansible, the user has to apply these changes/configurations manually (or via some other set of automated scripts), which creates a distributed set of configuration files and "configuration" scripts with different versioning schemes and life cycles.

Proposed Change
===============

Architecture
------------
We propose a flag that users can use to make Kolla-Ansible configure CloudKitty to use InfluxDB as the storage backend system. When this flag is enabled, Kolla-Ansible will also enable the deployment of InfluxDB automatically. CloudKitty will be configured according to [1] and [2]. We will also externalize the "retention_policy", "use_ssl", and "insecure" options to allow fine-grained configuration by operators. All of these configurations will only be used when set; when they are not set, the default value/behavior defined in Cloudkitty will be used. Moreover, when "use_ssl" is set to "true", the user will be able to set "cafile" to a custom trusted CA file. Again, if these variables are not set, the defaults in Cloudkitty will be used.

Implementation
--------------
We need to introduce a new variable called `cloudkitty_storage_backend`. Valid options are `sqlalchemy` or `influxdb`. The default value in Kolla-Ansible is `sqlalchemy` for backward compatibility. The first step is then to change the definition of the following variable:

`/ansible/group_vars/all.yml:enable_influxdb: "{{ enable_monasca | bool }}"`

We also need to enable InfluxDB when CloudKitty is configured to use it as the storage backend. Afterwards, we need to create tasks in the CloudKitty role to create the InfluxDB schema and adjust the configuration files accordingly.

Alternatives
------------
The alternative would be to apply the configuration manually or handle it via a different set of scripts and configuration files, which can become cumbersome over time.

Security Impact
---------------
None identified by the author of this spec.

Notifications Impact
--------------------
Operators that are already deploying CloudKitty with InfluxDB as the storage backend would need to convert their configurations to Kolla-Ansible (if they wish to adopt Kolla-Ansible to execute these tasks). Also, deployments (OpenStack environments) that were created with Cloudkitty using storage v1 will need to migrate all of their data to v2 before enabling InfluxDB as the storage system.

Other End User Impact
---------------------
None.

Performance Impact
------------------
None.

Other Deployer Impact
---------------------
New configuration options will be available for CloudKitty:

* cloudkitty_storage_backend
* cloudkitty_influxdb_retention_policy
* cloudkitty_influxdb_use_ssl
* cloudkitty_influxdb_cafile
* cloudkitty_influxdb_insecure_connections
* cloudkitty_influxdb_name

Developer Impact
----------------
None.

Implementation
==============

Assignee
--------
* `Rafael Weingärtner <rafaelweingartne>`

Work Items
----------
* Extend the InfluxDB "enable/disable" variable
* Add new tasks to configure Cloudkitty according to the new variables presented above
* Write documentation and release notes

Dependencies
============
None.

Documentation Impact
====================
New documentation for the feature.

References
==========
[1] `https://docs.openstack.org/cloudkitty/latest/admin/configuration/storage.html#influxdb-v2`
[2] `https://docs.openstack.org/cloudkitty/latest/admin/configuration/collector.html#metric-collection`

Change-Id: I65670cb827f8ca5f8529e1786ece635fe44475b0
Signed-off-by: Rafael Weingärtner <rafael@apache.org>
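A hedged sketch of what the new group_vars defaults could look like, using the variable names listed in the spec above; the default values and the exact enable_influxdb expression are illustrative, not the merged implementation:

    # ansible/group_vars/all.yml (illustrative defaults)
    cloudkitty_storage_backend: "sqlalchemy"        # or "influxdb"
    cloudkitty_influxdb_name: "cloudkitty"
    cloudkitty_influxdb_retention_policy: "autogen"
    cloudkitty_influxdb_use_ssl: "no"
    cloudkitty_influxdb_cafile: ""
    cloudkitty_influxdb_insecure_connections: "no"

    # InfluxDB is enabled automatically when CloudKitty selects it as its
    # backend (previously enable_influxdb depended only on enable_monasca):
    enable_influxdb: "{{ enable_monasca | bool or (enable_cloudkitty | bool and cloudkitty_storage_backend == 'influxdb') }}"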
-
Zuul authored
-
- Jul 01, 2019
-
Radosław Piliszek authored
Some kolla-ansible jobs failed due to using external mirrors instead of local ones. This was due to not using the template override provided by kolla. This patch fixes that.

Depends-On: https://review.opendev.org/668226
Change-Id: I27f714fdf05e521aa8ce25c5683a452ceb35eeb8
Signed-off-by: Radosław Piliszek <radoslaw.piliszek@gmail.com>
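A minimal sketch of the idea, assuming the CI job builds images with kolla-build and points it at the template override that kolla ships for CI mirrors; the path and invocation below are illustrative assumptions:

    - name: Build images using kolla's CI template override (local mirrors)
      # The override template rewrites package/repo URLs to the local CI
      # mirrors instead of the default external ones.
      command: >
        kolla-build --template-override /etc/kolla/template_overrides.j2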
-
Radosław Piliszek authored
Change-Id: Ifc898015b9b523ef4c50fc969e464f05762f2151
Signed-off-by: Radosław Piliszek <radoslaw.piliszek@gmail.com>
-
Zuul authored
-
Mark Goddard authored
This reverts commit 8ce5ffd0.

Change-Id: I81ce7c007ff267ebbbb721bcdb7eebc0dd575bf8
-
- Jun 28, 2019
-
Will Szumski authored
Otherwise, when the monitoring nodes aren't on the public API network, I'm seeing:

TASK [monasca : Creating the monasca agent user] ****************************
fatal: [monitor1]: FAILED! => {"changed": false, "module_stderr": "Shared connection to 172.16.3.24 closed.", "module_stdout": "...", "msg": "MODULE FAILURE", "rc": 1}

with the module stdout containing:

Traceback (most recent call last):
  File "/tmp/ansible_I0RmxQ/ansible_module_kolla_toolbox.py", line 163, in <module>
    main()
  File "/tmp/ansible_I0RmxQ/ansible_module_kolla_toolbox.py", line 141, in main
    output = client.exec_start(job)
  File "/opt/kayobe/venvs/kolla-ansible/lib/python2.7/site-packages/docker/utils/decorators.py", line 19, in wrapped
    return f(self, resource_id, *args, **kwargs)
  File "/opt/kayobe/venvs/kolla-ansible/lib/python2.7/site-packages/docker/api/exec_api.py", line 165, in exec_start
    return self._read_from_socket(res, stream, tty)
  File "/opt/kayobe/venvs/kolla-ansible/lib/python2.7/site-packages/docker/api/client.py", line 377, in _read_from_socket
    return six.binary_type().join(gen)
  File "/opt/kayobe/venvs/kolla-ansible/lib/python2.7/site-packages/docker/utils/socket.py", line 75, in frames_iter
    n = next_frame_size(socket)
  File "/opt/kayobe/venvs/kolla-ansible/lib/python2.7/site-packages/docker/utils/socket.py", line 62, in next_frame_size
    data = read_exactly(socket, 8)
  File "/opt/kayobe/venvs/kolla-ansible/lib/python2.7/site-packages/docker/utils/socket.py", line 47, in read_exactly
    next_data = read(socket, n - len(data))
  File "/opt/kayobe/venvs/kolla-ansible/lib/python2.7/site-packages/docker/utils/socket.py", line 31, in read
    return socket.recv(n)
socket.timeout: timed out

Change-Id: I7a93f69da0e02c9264da0b081d2e60626f899e3a
-
- Jun 27, 2019
-
Mark Goddard authored
Currently, we have a lot of logic for checking if a handler should run, depending on whether config files have changed and whether the container configuration has changed. As rm_work pointed out during the recent haproxy refactor, these conditionals are typically unnecessary - we can rely on Ansible's handler notification system to only trigger handlers when they need to run. This removes a lot of error-prone code.

This patch removes the conditional handler logic for all services. It is important to ensure that we no longer notify handlers unnecessarily, because without these checks in place any notification will trigger a restart of the containers.

Implements: blueprint simplify-handlers
Change-Id: I4f1aa03e9a9faaf8aecd556dfeafdb834042e4cd
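A minimal sketch of the pattern the roles now rely on, with illustrative service, file and variable names: the config task notifies a handler, and Ansible only fires the handler when the task actually reports a change, so no extra "did the config change" conditionals are needed on the handler itself.

    # tasks/config.yml (illustrative)
    - name: Copying over example-api config
      template:
        src: example.conf.j2
        dest: "{{ node_config_directory }}/example-api/example.conf"
      notify:
        - Restart example-api container

    # handlers/main.yml (illustrative)
    - name: Restart example-api container
      kolla_docker:
        action: "recreate_or_restart_container"
        common_options: "{{ docker_common_options }}"
        name: "example_api"
        image: "{{ example_api_image_full }}"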
-
Zuul authored
-
Zuul authored
-
Zuul authored
-
Zuul authored
-
Zuul authored
-
Zuul authored
-
Zuul authored
-
Zuul authored
-
Christian Berendt authored
Change-Id: Ia7041be384ac07d0a790c2c5c68b1b31ff0e567a
-
Mark Goddard authored
During an upgrade, nova pins the version of RPC calls to the minimum seen across all services. This ensures that old services do not receive data they cannot handle. After the upgrade is complete, all nova services are supposed to be reloaded via SIGHUP to cause them to re-check the RPC versions of services and use the new latest version, which should now be supported by all running services.

Due to a bug [1] in oslo.service, sending services SIGHUP is currently broken. We replaced the HUP with a restart for the nova_compute container for bug 1821362, but not for other nova services. It seems we need to restart all nova services to allow the RPC version pin to be removed.

Testing in a Queens to Rocky upgrade, we find the following in the logs:

  Automatically selected compute RPC version 5.0 from minimum service version 30

However, the service version in Rocky is 35.

There is a second issue in that it takes some time for the upgraded services to update the nova services database table with their new version. We need to wait until all nova-compute services have done this before the restart is performed, otherwise the RPC version cap will remain in place. There is currently no interface in nova available for checking these versions [2], so as a workaround we use a configurable delay with a default duration of 30 seconds. Testing showed it takes about 10 seconds for the version to be updated, so this gives us some headroom.

This change restarts all nova services after an upgrade, after a 30 second delay.

[1] https://bugs.launchpad.net/oslo.service/+bug/1715374
[2] https://bugs.launchpad.net/nova/+bug/1833542

Change-Id: Ia6fc9011ee6f5461f40a1307b72709d769814a79
Closes-Bug: #1833069
Related-Bug: #1833542
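A hedged sketch of the post-upgrade flow described above; the task layout, delay variable name and container list are illustrative assumptions rather than the exact implementation:

    - name: Wait for nova-compute services to report their new service version
      # Workaround for the lack of a nova API to check service versions:
      # a configurable delay, defaulting to 30 seconds.
      pause:
        seconds: "{{ nova_services_post_upgrade_delay | default(30) }}"

    - name: Restart nova services to drop the RPC version pin
      become: true
      kolla_docker:
        action: "restart_container"
        common_options: "{{ docker_common_options }}"
        name: "{{ item }}"
      loop:
        - nova_api
        - nova_scheduler
        - nova_conductor
        - nova_compute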
-
Mark Goddard authored
When running deploy or reconfigure for Keystone, ansible/roles/keystone/tasks/deploy.yml calls init_fernet.yml, which runs /usr/bin/fernet-rotate.sh, which calls keystone-manage fernet_rotate. This means that a token can become invalid if the operator runs deploy or reconfigure too often.

This change splits out fernet-push.sh from the fernet-rotate.sh script, then calls fernet-push.sh after the fernet bootstrap performed in deploy.

Change-Id: I824857ddfb1dd026f93994a4ac8db8f80e64072e
Closes-Bug: #1833729
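A minimal sketch of the deploy-time step this enables, assuming the split scripts keep the names used above; the exact task wording and options are illustrative:

    # Run after the "keystone-manage fernet_setup" bootstrap: distribute the
    # freshly bootstrapped keys to all keystone hosts without rotating them.
    - name: Pushing fernet keys to keystone hosts
      become: true
      command: docker exec -t keystone_fernet /usr/bin/fernet-push.sh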
-
- Jun 26, 2019
-
Radosław Piliszek authored
They are used only to obtain keys for the next task.

Change-Id: I2fac22af4710b70e4df8e3a272bcfb6cc8b8532e
Signed-off-by: Radosław Piliszek <radoslaw.piliszek@gmail.com>
-
Zuul authored
-
- Jun 25, 2019
-
Zuul authored
-
- Jun 24, 2019