- Aug 16, 2019
-
Radosław Piliszek authored
Change-Id: I7d0ed4ad94e3d07220de131b2a0fcd399d942782 Signed-off-by:
Radosław Piliszek <radoslaw.piliszek@gmail.com>
-
- Aug 15, 2019
-
Zuul authored
-
Zuul authored
-
Zuul authored
-
Rafael Weingärtner authored
After all of the discussions we had on https://review.opendev.org/#/c/670626/2, I studied all projects that have an "oslo_messaging" section. Afterwards, I applied the same method that is already used in the "oslo_messaging" section in Nova, Cinder, and others. This guarantees that we have a consistent method to enable/disable notifications across projects, based on components (e.g. Ceilometer) being enabled or disabled (a sketch of the resulting pattern follows this entry). Here follows the list of components and the respective changes:
* Aodh: the section is declared but not used, so it will be removed in an upcoming PR.
* Congress: the section is declared but not used, so it will be removed in an upcoming PR.
* Cinder: it was already properly configured.
* Octavia: the section is declared but not used, so it will be removed in an upcoming PR.
* Heat: it was already using a similar scheme; I just modified it slightly to match all other components.
* Ceilometer: Ceilometer publishes some messages to RabbitMQ. However, the default driver is "messagingv2", not '' (empty) as defined in Oslo; these configurations are defined in ceilometer/publisher/messaging.py. Therefore, we do not need to do anything for the "oslo_messaging_notifications" section in Ceilometer.
* Tacker: it was already using a similar scheme; I just modified it slightly to match all other components.
* Neutron: it was already properly configured.
* Nova: it was already properly configured. However, we found another issue with its configuration: Kolla-ansible does not configure Nova notifications as it should. If 'searchlight' is not installed (enabled), the 'notification_format' should be 'unversioned'. The default is 'both', so Nova will send notifications to the versioned_notifications queue, but that queue has no consumer when 'searchlight' is disabled. In our case, the queue got 511k messages, and the huge amount of "stuck" messages made the RabbitMQ cluster unstable. https://bugzilla.redhat.com/show_bug.cgi?id=1478274 https://bugs.launchpad.net/ceilometer/+bug/1665449
* Nova_hyperv: I added the same configuration as in the Nova project.
* Vitrage: it was already using a similar scheme; I just modified it slightly to match all other components.
* Searchlight: I created a mechanism similar to what we have in Aodh, Cinder, Nova, and others.
* Ironic: I created a mechanism similar to what we have in Aodh, Cinder, Nova, and others.
* Glance: it was already properly configured.
* Trove: it was already using a similar scheme; I just modified it slightly to match all other components.
* Blazar: it was already using a similar scheme; I just modified it slightly to match all other components.
* Sahara: it was already using a similar scheme; I just modified it slightly to match all other components.
* Watcher: I created a mechanism similar to what we have in Aodh, Cinder, Nova, and others.
* Barbican: I created a mechanism similar to what we have in Cinder, Nova, and others. I also added configuration to the 'keystone_notifications' section: Barbican needs its own queue to capture events from Keystone; otherwise it has an impact on Ceilometer and other systems that are connected to the default "notifications" queue.
* Keystone: Keystone is the system that triggered this work, with the discussions that followed on https://review.opendev.org/#/c/670626/2. After a long discussion, we agreed to apply in Keystone the same approach that we have in Nova, Cinder and other systems, and that is what we did. Moreover, we introduced a new topic, "barbican_notifications", used when Barbican is enabled. We also removed the variable enable_cadf_notifications, as it is obsolete, and the default in Keystone is CADF.
* Mistral: the driver was hardcoded to "noop". That does not seem like good practice, so instead I applied the same standard of using the driver and pushing to the "notifications" queue if Ceilometer is enabled.
* Cyborg: I created a mechanism similar to what we have in Aodh, Cinder, Nova, and others.
* Murano: it was already using a similar scheme; I just modified it slightly to match all other components.
* Senlin: it was already using a similar scheme; I just modified it slightly to match all other components.
* Manila: it was already using a similar scheme; I just modified it slightly to match all other components.
* Zun: the section is declared but not used, so it will be removed in an upcoming PR.
* Designate: it was already using a similar scheme; I just modified it slightly to match all other components.
* Magnum: it was already using a similar scheme; I just modified it slightly to match all other components.
Closes-Bug: #1838985 Change-Id: I88bdb004814f37c81c9a9c4e5e491fac69f6f202 Signed-off-by:
Rafael Weingärtner <rafael@apache.org>
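The shared pattern referenced throughout the list above is roughly the following. This is only a sketch in kolla-ansible's Jinja2-templated INI style; the variable names (notify_transport_url, enable_ceilometer, enable_searchlight) are assumptions based on common kolla-ansible conventions, not taken from this change itself:

    {# Common oslo.messaging notifications wiring: only emit when a consumer (e.g. Ceilometer) exists. #}
    [oslo_messaging_notifications]
    transport_url = {{ notify_transport_url }}
    {% if enable_ceilometer | bool %}
    driver = messagingv2
    topics = notifications
    {% else %}
    driver = noop
    {% endif %}

    {# Nova-specific sketch: do not fill the versioned_notifications queue when Searchlight is disabled. #}
    [notifications]
    {% if not enable_searchlight | bool %}
    notification_format = unversioned
    {% endif %}

The Barbican/Keystone part of the change follows the same idea, but routes Keystone events needed by Barbican to a separate "barbican_notifications" topic so they do not pile up on the default "notifications" queue.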
-
Kien Nguyen authored
Masakari provides an Instances High Availability Service for OpenStack clouds by automatically recovering failed instances. Depends-On: https://review.openstack.org/#/c/615469/ Change-Id: I0b3457232ee86576022cff64eb2e227ff9bbf0aa Implements: blueprint ansible-masakari Co-Authored-By:
Gaëtan Trellu <gaetan.trellu@incloudus.com>
-
Zuul authored
-
Zuul authored
-
- Aug 14, 2019
-
Zuul authored
-
Zuul authored
-
Scott Solkhon authored
Change-Id: If5bba855a6e34c971fdb1ceb6f10dba62e54b811
-
Kien Nguyen authored
Add Masakari testing into the Gate. Change-Id: I52df33f963e7d2ae4059887df3d24d9e6642134e Depends-On: https://review.opendev.org/#/c/615469/ Depends-On: https://review.opendev.org/#/c/615715 Implements: blueprint ansible-masakari Co-Authored-By:
Gaëtan Trellu <gaetan.trellu@incloudus.com>
-
Scott Solkhon authored
Fix the fluentd config so that it no longer overwrites a custom config file with the same filename. Closes-Bug: #1840166 Change-Id: I42c5446381033015f590901b2120950d602f847f
-
Zuul authored
-
Zuul authored
-
Zuul authored
-
Scott Solkhon authored
This commit adds the missing policy file for Octavia in Horizon, thus enabling the panel where appropriate. Change-Id: I60f1a52de71519f2d8bd84baa8aba5700fa75b1c
-
Scott Solkhon authored
This feature is disabled by default, and can be enabled by setting 'enable_swift_s3api' to 'true' in globals.yml. Two middlewares are required for Swift S3 - s3api and s3token. Additionally, we need to configure the authtoken middleware to delay auth decisions to give s3token a chance to authorise requests using EC2 credentials. Change-Id: Ib8e8e3a1c2ab383100f3c60ec58066e588d3b4db
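As a rough illustration of the resulting Swift proxy-server.conf wiring, here is a sketch only: the s3api and s3token filter names come from Swift's bundled middleware, the pipeline is abbreviated, and the keystone_internal_url variable is an assumption in kolla-ansible style:

    # Sketch of the S3-enabled proxy pipeline; the real pipeline contains more middleware.
    [pipeline:main]
    pipeline = catch_errors cache s3api s3token authtoken keystoneauth proxy-logging proxy-server

    [filter:s3api]
    use = egg:swift#s3api

    [filter:s3token]
    use = egg:swift#s3token
    auth_uri = {{ keystone_internal_url }}

    [filter:authtoken]
    # Delay the auth decision so s3token can authorise S3 requests using EC2 credentials.
    delay_auth_decision = True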
-
- Aug 13, 2019
-
Zuul authored
-
Scott Solkhon authored
Change-Id: I7f980640e75a9328a14a3e14e9c55358955f3182
-
Keith Plant authored
Added configuration to ansible/roles/telegraf/templates/telegraf.conf.j2 to allow Telegraf to collect telemetry data directly from Docker. Added an option to etc/kolla/globals.yml to switch the ingestion of data from the Docker daemon into Telegraf on or off (see the sketch below). Change-Id: Icbebc415d643a237fa128840d5f5a9c91d22c12d Signed-off-by:
Keith Plant <kplantjr@gmail.com>
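A minimal sketch of what the templated Telegraf input could look like; the flag name telegraf_enable_docker_input is a placeholder for whatever option the change actually adds to globals.yml:

    {% if telegraf_enable_docker_input | bool %}
    # Collect container metrics directly from the Docker daemon socket.
    [[inputs.docker]]
      endpoint = "unix:///var/run/docker.sock"
    {% endif %}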
-
Zuul authored
-
- Aug 12, 2019
-
Zuul authored
-
Marcin Juszkiewicz authored
We use that variable in Kolla in many places. There are places in 'kolla-ansible' where we also need it. Change-Id: Iea78c4a7cb0fd1405ea7299cdcf0841f63820c8c
-
- Aug 10, 2019
-
Zuul authored
-
- Aug 09, 2019
- Aug 08, 2019
-
Radosław Piliszek authored
Because we merged both [1] and [2] in master, we got broken FWaaS. This patch unbreaks it and needs to be backported to Stein, because the backport of [2] is still waiting to merge while [1] is already backported. [1] https://review.opendev.org/661704 [2] https://review.opendev.org/668406 Change-Id: I74427ce9b937c42393d86574614603bd788606af Signed-off-by:
Radosław Piliszek <radoslaw.piliszek@gmail.com>
-
Doug Szumski authored
The RabbitMQ role supports namespacing the service via the project_name. For example, if you change the project_name, the container name and config directory are renamed accordingly. However, the log folder is currently fixed, even though the service tries to write to one named after the project_name. This change fixes that. Whilst you might generally use vhosts, running multiple RabbitMQ services on a single node is useful at the very least for testing, or for running 'outward RabbitMQ' on the same node. This change is part of the work to support Cells v2. Partially Implements: blueprint support-nova-cells Change-Id: Ied2c24c01571327ea532ba0aaf2fc5e89de8e1fb
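Concretely, this means deriving the log location from project_name instead of hardcoding 'rabbitmq'. A sketch of the relevant rabbitmq-env.conf template line, assuming kolla's usual /var/log/kolla layout (the exact template may differ):

    # Log under a directory named after the (possibly overridden) project_name.
    RABBITMQ_LOG_BASE=/var/log/kolla/{{ project_name }}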
-
Zuul authored
-
- Aug 07, 2019
-
Michal Nasiadka authored
- add support for sha256 in bslurp module - change sha1 to sha256 in ceph-mon ansible role Depends-On: https://review.opendev.org/655623 Change-Id: I25e28d150f2a8d4a7f87bb119d9fb1c46cfe926f Closes-Bug: #1826327
-
Marcin Juszkiewicz authored
According to the Docker upstream release notes [1], MountFlags should be empty. 1. https://docs.docker.com/engine/release-notes/#18091 "Important notes about this release: In Docker versions prior to 18.09, containerd was managed by the Docker engine daemon. In Docker Engine 18.09, containerd is managed by systemd. Since containerd is managed by systemd, any custom configuration to the docker.service systemd configuration which changes mount settings (for example, MountFlags=slave) breaks interactions between the Docker Engine daemon and containerd, and you will not be able to start containers. Run the following command to get the current value of the MountFlags property for the docker.service: 'sudo systemctl show --property=MountFlags docker.service'. Update your configuration if this command prints a non-empty value for MountFlags (anything other than 'MountFlags='), and restart the docker service." Closes-bug: #1833835 Change-Id: I4f4cbb09df752d00073a606463c62f0a6ca6c067
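In practice this means not overriding MountFlags in the docker unit at all. For example, a systemd drop-in like the following (the path is illustrative, not necessarily what kolla-ansible writes) leaves the property empty, which 'systemctl show --property=MountFlags docker.service' then reports as 'MountFlags=':

    # Illustrative drop-in, e.g. /etc/systemd/system/docker.service.d/kolla.conf
    [Service]
    # An empty value restores the default mount propagation expected by containerd.
    MountFlags=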
-
Mark Goddard authored
Without this we may see the following error in cinder-backup when using the LVM backend: "Could not login to any iSCSI portal". Enabling the iscsid container on hosts in the cinder-backup group fixes this (see the inventory sketch below). Closes-Bug: #1838624 Change-Id: If373c002b0744ce9dbdffed50a02bab55dd0acb9 Co-Authored-By:
dmitry-a-grachev <dmitry.a.grachev@gmail.com>
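Roughly, the fix amounts to mapping the iscsid service onto cinder-backup hosts as well. An inventory-style sketch, with group membership abbreviated to the relevant entries (the actual group list in kolla-ansible's inventories is longer):

    # Ensure hosts running cinder-backup also get the iscsid container.
    [iscsid:children]
    compute
    cinder-volume
    cinder-backup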
-
- Aug 06, 2019
-
Mark Goddard authored
During the MariaDB testing we saw a number of cases where this IP address was not assigned to one or more hosts, which caused various issues later on. Change-Id: I61b54483e4553b926e9ddc0a8848b2daa6bc49f1
-
Mark Goddard authored
Docker is now always installed using the community edition (CE) packages. Change-Id: I8c3fe44fd9d2da99b5bb1c0ec3472d7e1b5fb295
-
Zuul authored
-
Zuul authored
-
Zuul authored
-
- Aug 05, 2019