- Mar 22, 2019
Mark Goddard authored
After upgrading from Rocky to Stein, nova-compute services fail to start new instances with the following error message:

    Failed to allocate the network(s), not rescheduling.

Looking in the nova-compute logs, we also see this:

    Neutron Reported failure on event network-vif-plugged-60c05a0d-8758-44c9-81e4-754551567be5 for instance 32c493c4-d88c-4f14-98db-c7af64bf3324: NovaException: In shutdown, no new events can be scheduled

During the upgrade process, we send nova containers a SIGHUP to cause them to reload their object version state. Speaking to the nova team in IRC, there is a known issue with this, caused by oslo.service performing a full shutdown in response to a SIGHUP, which breaks nova-compute. There is a patch [1] in review to address this.

The workaround employed here is to restart the nova-compute service.

[1] https://review.openstack.org/#/c/641907

Change-Id: Ia4fcc558a3f62ced2d629d7a22d0bc1eb6b879f1
Closes-Bug: #1821362
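The problem described above stems from a SIGHUP handler that tears the service down instead of following the usual daemon convention of reloading in place. A minimal POSIX-only Python sketch of that convention (toy names, not oslo.service code):

```python
import os
import signal

class Service:
    """Toy long-running service illustrating safe SIGHUP handling.

    Hypothetical class, not the oslo.service implementation: the
    handler only records the request and lets the main loop reload,
    rather than performing a full shutdown.
    """

    def __init__(self):
        self.reload_requested = False
        signal.signal(signal.SIGHUP, self._on_sighup)

    def _on_sighup(self, signum, frame):
        # Record the request; do NOT tear the service down here.
        self.reload_requested = True

    def tick(self):
        # Main loop step: honour a pending reload, then keep serving.
        if self.reload_requested:
            self.reload_requested = False
            return "reloaded"
        return "idle"

svc = Service()
os.kill(os.getpid(), signal.SIGHUP)  # simulate the upgrade's SIGHUP
print(svc.tick())  # → reloaded
```

The key point is that the process keeps running across the signal, which is what the nova-compute upgrade flow expects.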
-
Mark Goddard authored
This is used for version pinning during rolling upgrades.

Change-Id: I6e878a8f7c9e0747d8d60cb4527c5f8f039ec15a
-
- Mar 21, 2019
Mark Goddard authored
Services were being passed as a JSON list, then iterated over in the neutron-server container's extend_start.sh script like this:

    ['neutron-server' 'neutron-fwaas' 'neutron-vpnaas']

I'm not actually sure why we have to specify services explicitly; it seems liable to break if we have other plugins that need migrating.

Change-Id: Ic8ce595793cbe0772e44c041246d5af3a9471d44
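The commit message does not show the fix itself, but the underlying pitfall can be sketched: rendering a list as JSON leaks list syntax into a shell `for` loop, whereas a space-separated string of quoted words is what the shell expects (hedged illustration; variable names are not from the patch):

```python
import json
import shlex

services = ["neutron-server", "neutron-fwaas", "neutron-vpnaas"]

# Rendering the list as JSON leaks brackets and quotes into the shell
# script, so its `for` loop iterates over mangled tokens.
as_json = json.dumps(services)

# A shell-friendly alternative: one space-separated string, with each
# item quoted defensively.
as_shell = " ".join(shlex.quote(s) for s in services)

print(as_json)   # ["neutron-server", "neutron-fwaas", "neutron-vpnaas"]
print(as_shell)  # neutron-server neutron-fwaas neutron-vpnaas
```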
-
Zuul authored
-
Zuul authored
-
- Mar 19, 2019
Zuul authored
-
- Mar 18, 2019
Zuul authored
-
Mark Goddard authored
Migrate to the latest Ubuntu LTS release, 18.04 aka Bionic. See [0] for the big picture. Also test running tox jobs on Bionic.

[0] https://etherpad.openstack.org/p/devstack-bionic

Change-Id: I96e7b8d17bc1e92716c04fdcf362c2adb08a2212
-
Doug Szumski authored
All Prometheus services should use the Prometheus install type, which defaults to the Kolla install type, rather than using the Kolla install type directly.

Change-Id: Ieaa924986dff33d4cf4a90991a8f34534cfc3468
-
Zuul authored
-
Zuul authored
-
- Mar 16, 2019
Zuul authored
-
- Mar 15, 2019
Mark Goddard authored
Change-Id: I0c31ad353e1fb764bc8e826cda5c3d092623f44b
-
Eduardo Gonzalez authored
Depends-On: https://review.openstack.org/#/c/642958
Depends-On: https://review.openstack.org/642984
Change-Id: If795a9eb3ec92f75867ce3f755d6b832eba31af9
-
- Mar 14, 2019
Victor Coutellier authored
Fix the filemode in the merge_configs and merge_yaml action plugins to be compatible with python3.

Change-Id: Ief64c5bdcd717141281e23c255a49ec02a96aef2
Closes-Bug: #1820134
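The commit message does not include the patch, but a common python2-to-python3 file-mode pitfall in plugins like these is writing str content to a file opened in binary mode. A minimal sketch of that failure mode and its fix (hypothetical illustration, not the actual change):

```python
import os
import tempfile

# A merged config is a str under python3.
content = "[DEFAULT]\ndebug = True\n"

fd, path = tempfile.mkstemp()
os.close(fd)

# A file opened with mode "wb" only accepts bytes on python3, so the
# string must be encoded explicitly (or the file opened in text mode).
with open(path, "wb") as f:
    f.write(content.encode("utf-8"))

with open(path) as f:
    round_tripped = f.read()

os.unlink(path)
print(round_tripped == content)  # True
```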
-
Zuul authored
-
Mark Goddard authored
Change-Id: I1f17c504e265e127409b108d2cc53ef6e8c6b887
-
Zuul authored
-
Scott Solkhon authored
Adds support to separate Swift access and replication traffic from other storage traffic.

In a deployment where both Ceph and Swift have been deployed, this change adds functionality to support optional separation of storage network traffic. This adds two new network interfaces, 'swift_storage_interface' and 'swift_replication_interface', which maintain backwards compatibility.

The Swift access network interface is configured via 'swift_storage_interface', which defaults to 'storage_interface'. The Swift replication network interface is configured via 'swift_replication_interface', which defaults to 'swift_storage_interface'.

If a separate replication network is used, Kolla Ansible now deploys separate replication servers for the accounts, containers and objects, which listen on this network. In this case, these services handle only replication traffic, and the original account-, container- and object- servers handle only storage user requests.

Change-Id: Ib39e081574e030126f2d08f51de89641ddb0d42e
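The fallback chain described above can be sketched as plain dictionary lookups (hypothetical helper; in Kolla Ansible the real logic lives in Ansible variable defaults):

```python
def resolve_swift_interfaces(config, default_interface="eth0"):
    # Mirrors the documented defaults chain:
    # swift_replication_interface -> swift_storage_interface -> storage_interface
    storage = config.get("storage_interface", default_interface)
    swift_storage = config.get("swift_storage_interface", storage)
    swift_replication = config.get("swift_replication_interface", swift_storage)
    return {"storage": swift_storage, "replication": swift_replication}

# With nothing overridden, Swift traffic shares the storage interface;
# overriding only the replication interface splits replication out.
print(resolve_swift_interfaces({"storage_interface": "eth1"}))
print(resolve_swift_interfaces({"storage_interface": "eth1",
                                "swift_replication_interface": "eth2"}))
```

This layering is what keeps the change backwards compatible: existing deployments that set only 'storage_interface' see no behaviour change.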
-
confi-surya authored
As py35 has been dropped and py36 and py37 jobs are running, update setup.cfg accordingly.

Change-Id: I09eae818f3d4188444aba8f1ece9d3d11eda95c2
-
Zuul authored
-
Zuul authored
-
- Mar 13, 2019
Zuul authored
-
chenxing authored
Update the wsgi configuration after migrating services to python3.

Change-Id: I25d8db36dabd5f148b2ec96a30381c6a86fa710e
Depends-On: https://review.openstack.org/#/c/625298/
Partially Implements: blueprint python3-support
-
- Mar 11, 2019
Pierre Riteau authored
Commit 2f6b1c68 changed the way the cephfs source path was generated and dropped the source path component, keeping only the list of IPs and ports. This results in failures to mount cephfs with the following messages:

    source mount path was not specified
    failed to resolve source

Change-Id: I94d18ec064971870264ae8d0b279564f2172e548
Closes-Bug: #1819502
-
Zuul authored
-
Erol Guzoglu authored
This patch implements support for the elasticsearch-exporter in kolla-ansible. The configuration and prechecks are reused from the other exporters.

Depends-On: Id138f12e10102a6dd2cd8d84f2cc47aa29af3972
Change-Id: Iae0eac0179089f159804490bf71f1cf2c38dde54
-
Zuul authored
-
Zuul authored
-
Zuul authored
-
Zuul authored
-
Zuul authored
-
Zuul authored
-
Zuul authored
-
Zuul authored
-
Gary Perkins authored
With newer Docker versions, `systemctl show docker` returns:

    MountFlags=shared

Instead of:

    MountFlags=1048576

This fix accepts either value as valid, to ensure the check does not erroneously fail.

Closes-Bug: #1791365
Change-Id: I2bd626466d6a0e189e0d85877b2be8f2b4bb37f4
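The accept-either-value check can be sketched as follows (a sketch assuming the precheck parses `systemctl show docker` output; 1048576 is the raw value of the MS_SHARED mount flag that older systemd prints numerically):

```python
def docker_mountflags_ok(systemctl_show_output):
    """Return True if Docker's MountFlags indicate shared propagation.

    Newer systemd prints the symbolic name "shared"; older versions
    print the numeric value 1048576 (MS_SHARED). Accept either.
    Hypothetical helper, not the actual precheck code.
    """
    for line in systemctl_show_output.splitlines():
        if line.startswith("MountFlags="):
            value = line.split("=", 1)[1].strip()
            return value in ("shared", "1048576")
    return False

print(docker_mountflags_ok("MountFlags=shared"))   # True
print(docker_mountflags_ok("MountFlags=1048576"))  # True
print(docker_mountflags_ok("MountFlags=0"))        # False
```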
-
- Mar 10, 2019
Maciej Kucia authored
When the methods for password generation and merging are extracted, external apps and scripts can use them without resorting to subprocess execution or injecting sys.argv.

Change-Id: I99aff7852180534129fa36859075306eea776ba9
Signed-off-by: Maciej Kucia <maciej@kucia.net>
-
Victor Coutellier authored
It is possible to reference an undefined variable in the kolla-docker module if DockerWorker object initialization fails, so the current behaviour crashes the playbook with the unwanted error message:

    UnboundLocalError: local variable 'dw' referenced before assignment

Change-Id: Ic8d26b11f93255220888b5406f8ab4a6f81736c2
Closes-Bug: #1819361
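A minimal sketch of the guard (illustrative names, not the actual kolla-docker code): bind the variable before the try block, so that exception handling never touches an unbound name and the real failure is reported instead.

```python
class DockerWorker:
    """Stand-in for the real worker; fails during construction."""
    def __init__(self):
        raise RuntimeError("docker daemon unreachable")

def run_module():
    dw = None  # bound up front, so the except block can always reference it
    try:
        dw = DockerWorker()  # may raise inside __init__
        return {"failed": False, "changed": dw.changed}
    except Exception as exc:
        # Without `dw = None` above, reading dw here would raise
        # UnboundLocalError and mask the real error message.
        changed = getattr(dw, "changed", False) if dw is not None else False
        return {"failed": True, "changed": changed, "msg": str(exc)}

result = run_module()
print(result["failed"], result["msg"])  # True docker daemon unreachable
```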
-
- Mar 09, 2019
Duong Mai authored
Kibana deployment failed because the kibana_confs variable does not have the attribute 'key', so the handler failed to check the conditional 'kibana_conf.changed | bool' because kibana_confs.results | selectattr(item.key) does not exist. Change the variable name kibana_confs to kibana_conf.

Change-Id: If5e0a25b270a6f05c435a6dc32e2ac49406389c5
Closes-Bug: #1819246
-
- Mar 08, 2019
Mark Goddard authored
Recently, as part of adding support for Docker CE, we added the following task to the baremetal role:

    - name: Update yum cache
      yum:
        update_cache: yes
      become: True
      when: ansible_os_family == 'RedHat'

This works fine on Ansible 2.5, but no longer works on Ansible 2.6, which complains that either the 'name' or 'list' argument is mandatory for the yum module. This change updates the cache later on, when installing packages.

Change-Id: I1a158bda52c4e362cb12d361d7f961cfc699b385
Closes-Bug: #1819173
-