- Dec 21, 2018
-
Zuul authored
-
dommgifer authored
This is required to support execution as a non-root user. Change-Id: I60d224407c2828d6b9f1701f7637385a25fbcced Closes-Bug: #1809233
-
confi-surya authored
Small cleanups: * Use openstack-lower-constraints-jobs template, remove individual jobs. * Sort list of templates Change-Id: I67199fabe6a9f7b1fd38dac77a6157bf4fb465b9 Needed-By: https://review.openstack.org/623229
-
- Dec 20, 2018
- Dec 19, 2018
-
Eduardo Gonzalez authored
Change-Id: If5b4ba975a65e07d2704eb6bdb9d841d6a9c3d42
-
Duc Nguyen Cong authored
In a multi-controller deployment, kolla will generate the "controller_ip_port_list" option in the [health_manager] section with only the IP of that node instead of a list of controller IPs. Therefore, the "amphora-agent.conf" file of an amphora instance will contain the IP of only one controller node. If that node fails, the amphora agent won't send heartbeat messages to another health manager node, and the load balancer will go to ERROR state. Change-Id: I102ed6ba3fff2c12cc6d37f81ad59508eacc859c Co-Authored-By: Hieu LE <hieulq2@viettel.com.vn>
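The intended result can be sketched as a hypothetical octavia.conf fragment (the IP addresses are placeholders, not from the commit):

```ini
[health_manager]
# Broken: only the local node's endpoint was rendered
# controller_ip_port_list = 192.0.2.10:5555
# Fixed: every controller's health manager endpoint, so amphorae
# can keep sending heartbeats if one node fails
controller_ip_port_list = 192.0.2.10:5555,192.0.2.11:5555,192.0.2.12:5555
```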
-
Zuul authored
-
- Dec 18, 2018
-
Zuul authored
-
Zuul authored
-
Zuul authored
-
Zuul authored
-
Mark Goddard authored
This means we can pull in the job from other repositories without explicitly adding the dependency on kolla-ansible in that project. Change-Id: Ia7e4294508e6d445638c176359a939af32fdfb12
-
- Dec 17, 2018
-
Nick Jones authored
Update the template so that if 'dns_interface' is set, named listens on this interface as well as the 'api_interface'. Change-Id: I986ca46e5599e4767800fcc7f34a1c6e682efb55 Closes-Bug: 1808829
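A minimal sketch of what the named.conf template change might look like, assuming kolla-ansible's hostvars-based address lookup; the actual template and variable handling may differ:

```jinja
options {
    listen-on port 53 {
        {{ hostvars[inventory_hostname]['ansible_' + api_interface]['ipv4']['address'] }};
{% if dns_interface != api_interface %}
        {# Also listen on the dedicated DNS interface when one is set #}
        {{ hostvars[inventory_hostname]['ansible_' + dns_interface]['ipv4']['address'] }};
{% endif %}
    };
};
```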
-
Zuul authored
-
Pavel Sinkevych authored
Add missing `prometheus_memcached_exporter` container_fact. Fix the conditional container_fact for haproxy_exporter. Change-Id: Id0f3b94af956f51e3c782c0244c6ce7a340119bd Closes-Bug: #1808820
-
Zuul authored
-
Patrick O'Neill authored
Add an enable_cinder_backend_quobyte option to etc/kolla/globals.yml to enable use of the Quobyte Cinder backend. Change the bind mounts for /var/lib/nova/mnt to include shared propagation if Quobyte is enabled. Update the documentation to include a section on configuring the Cinder Quobyte backend. Implements: blueprint cinder-quobyte-backend Change-Id: I364939407ad244fe81cea40f880effdbcaa8a20d
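The bind-mount change can be sketched as a hypothetical volume list fragment (the variable name is illustrative); Docker's `:shared` propagation flag makes mounts created inside the container visible on the host and vice versa, which network filesystems like Quobyte need:

```yaml
# Sketch: nova-compute volume list with shared mount propagation
# enabled for the Quobyte backend.
nova_compute_volumes:
  - "/var/lib/nova/mnt:/var/lib/nova/mnt:shared"
```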
-
Kien Nguyen authored
According to [1], the Vitrage notification driver has to be configured in the Nova, Neutron, Cinder & Aodh config files. [1] https://review.openstack.org/#/c/302802/ Change-Id: Iaf8cd7d40e6eb988adf4d208e6ad784f1004caa5
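A sketch of the kind of fragment this adds, shown for nova.conf (the same section is added to the Neutron, Cinder and Aodh config files; the exact topic list is an assumption based on Vitrage's conventions):

```ini
[oslo_messaging_notifications]
driver = messagingv2
# Vitrage listens on its own notification topic in addition to the default
topics = notifications,vitrage_notifications
```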
-
- Dec 16, 2018
-
Bartosz Zurkowski authored
The find module searches paths on the managed server. Since the role path and custom Kolla config are located on the deployment node, and the deployment node is not considered a managed server, Monasca plugin files cannot be found. After the deployment, the container running the Monasca agent collector gets stuck in a restart loop due to the missing plugin files. The problem does not occur if the deployment was started from a managed server (e.g. OSC). The problem occurs if the deployment was started from a separate deployment server - a common case. This change enforces running the find module locally on the deployment node. Change-Id: Ia25daafe2f82f5744646fd2eda2d255ccead814e Signed-off-by: Bartosz Zurkowski <b.zurkowski@samsung.com>
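The fix pattern can be sketched as an Ansible task (paths and names are illustrative, not the actual role's):

```yaml
# Sketch: run find on the deployment node, where the plugin files
# actually live, instead of on the managed server.
- name: Find Monasca plugin files
  find:
    paths: "{{ role_path }}/templates/plugins"
    patterns: "*.yaml"
  delegate_to: localhost
  register: monasca_plugin_files
```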
-
Bartosz Zurkowski authored
In multinode deployments, creating the default Grafana organization failed because Ansible attempted to call the Grafana API in the context of each host in the inventory. After creating the organization via the first host, subsequent attempts via the remaining hosts failed due to the already existing organization. This change enforces creating the default organization only once. Other tasks using the Grafana API have been changed to run only once as well. Change-Id: I3a93a719b3c9b4e55ab226d3b22d571d9a0f489d Signed-off-by: Bartosz Zurkowski <b.zurkowski@samsung.com>
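The `run_once` approach can be sketched as follows (URL, credentials and organization name are placeholders):

```yaml
# Sketch: the task executes on only one host from the play's batch,
# so the organization is created exactly once per deployment.
- name: Create default Grafana organization
  uri:
    url: "http://{{ grafana_host }}:3000/api/orgs"
    method: POST
    body: '{"name": "openstack"}'
    body_format: json
    status_code: 200
  run_once: true
```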
-
- Dec 14, 2018
-
Mark Goddard authored
Nova services may reasonably expect cell databases to exist when they start. The current cell setup tasks in kolla run after the nova containers have started, meaning that cells may or may not exist in the database when they start, depending on timing. In particular, we are seeing issues in kolla CI currently with jobs timing out waiting for nova compute services to start. The following error is seen in the nova logs of these jobs, which may or may not be relevant: No cells are configured, unable to continue This change creates the cell0 and cell1 databases prior to starting nova services. In order to do this, we must create new containers in which to run the nova-manage commands, because the nova-api container may not yet exist. This required adding support to the kolla_docker module for specifying a command for the container to run that overrides the image's command. We also add the standard output and error to the module's result when a non-detached container is run. A secondary benefit of this is that the output of bootstrap containers is now displayed in the Ansible output if the bootstrapping command fails, which will help with debugging. Change-Id: I2c1e991064f9f588f398ccbabda94f69dc285e61 Closes-Bug: #1808575
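The new kolla_docker capability described above can be sketched as a bootstrap task; the parameter names follow the commit description, but the exact module syntax is an assumption:

```yaml
# Sketch: run a one-off, non-detached container whose command
# overrides the image's default, so cells can be created before
# the nova-api container exists.
- name: Create cell0 database mapping
  kolla_docker:
    action: start_container
    name: nova_cell0_bootstrap
    image: "{{ nova_api_image_full }}"
    command: nova-manage cell_v2 map_cell0
    detach: false
```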
-
- Dec 13, 2018
- Dec 12, 2018
-
Zuul authored
-
wu.chunyang authored
Trivial modification. Change-Id: I27d5b85d2c745fee5ff0643e7771b46faebd23a6
-
Zuul authored
-
- Dec 11, 2018
-
Eduardo Gonzalez authored
xtrabackup doesn't work with MariaDB 10.3; it needs to be changed to the mariadb-backup tool. For now, only migrate Galera, not the kolla-backup tool, to fix the CI. https://jira.mariadb.org/browse/MDEV-15774 Change-Id: Ie77ae41e419873feed4b036a307887b22455183b Depends-On: Icefe3a77fb12d57c869521000d458e3f58435374
-
Jeffrey Zhang authored
When using ceilometer+gnocchi, for every notification sample ceilometer will update the resource even if it has not changed. We should add a [cache] section to make ceilometer cache the resource and stop sending useless update requests. Closes-Bug: #1807841 Change-Id: Ic33b4cd5ba8165c20878cab068f38a3948c9d31d
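A sketch of the kind of ceilometer.conf fragment this implies, using standard oslo.cache options (backend and server address are placeholders):

```ini
[cache]
enabled = true
# oslo.cache memcached backend; Gnocchi resource state is cached so
# unchanged resources are not re-posted on every sample.
backend = oslo_cache.memcache_pool
memcache_servers = 192.0.2.10:11211
```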
-
Kien Nguyen authored
Vitrage already supports Prometheus as a datasource. Kolla can configure it automatically; only small changes are needed, for example in the WSGI config file [1]. [1] https://review.openstack.org/#/c/584649/8/devstack/apache-vitrage.template Co-Authored-By: Hieu LE <hieulq2@viettel.com.vn> Change-Id: I64028a0dfd9887813b980a31c30c2c1b1046da61
-
Zuul authored
-
- Dec 07, 2018
-
Mark Goddard authored
Prior to this change, when the --limit argument is used, each host in the limit gathers facts for every other host. This is clearly unnecessary, and can result in up to (N-1)^2 fact gathers. This change gathers facts for each host only once. Hosts that are not in the limit are divided between those that are in the limit, and facts are gathered via delegation. This change also factors out the fact gathering logic into a separate playbook that is imported where necessary. Change-Id: I923df5af41a7f1b7b0142d0da185a9a0979be543
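The delegated fact gathering can be sketched as an Ansible task (the variable holding the out-of-limit hosts is illustrative):

```yaml
# Sketch: a host inside the limit gathers facts on behalf of hosts
# outside it, so each host's facts are collected exactly once.
- name: Gather facts for hosts outside the limit
  setup:
  delegate_to: "{{ item }}"
  delegate_facts: true
  with_items: "{{ delegate_hosts }}"
```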
-
Mark Goddard authored
Currently, every service has a play in site.yml that is executed, and the role is skipped if the service is disabled. This can be slow, particularly with many hosts, since each play takes time to set up and evaluate. This change creates various Ansible groups for hosts with services enabled at the beginning of the playbook. If a service is disabled, this new group will have no hosts, and the play for that service will be a noop. I have tested this on a laptop using an inventory with 12 hosts (each pointing to my laptop via SSH), and a config file that disables every service. Time taken to run 'kolla-ansible deploy': Before change: 2m30s After change: 0m14s During development I also tried an approach using an 'include_role' task for each service. This was not as good, taking 1m00s. The downsides to this patch are that there is a large number of tasks at the beginning of the playbook to perform the grouping, and every play for a disabled service now outputs this warning message: [WARNING]: Could not match supplied host pattern, ignoring: enable_foo_True This is because if the service is disabled, there are no hosts in the group. This seems like a reasonable tradeoff. Change-Id: Ie56c270b26926f1f53a9582d451f4bb2457fbb67
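The grouping approach can be sketched with Ansible's group_by module, shown here for one hypothetical service; the group name matches the `enable_foo_True` pattern from the warning message:

```yaml
# Sketch: each host joins a group whose name encodes whether the
# service is enabled; a disabled service's play then matches no hosts.
- name: Group hosts by enabled services
  group_by:
    key: "enable_haproxy_{{ enable_haproxy | bool }}"
```

A later play can then target `hosts: haproxy:&enable_haproxy_True` style patterns instead of skipping the role per host.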
-
Zuul authored
-
Zuul authored
-
- Dec 06, 2018
-
XiaojueGuan authored
Refer to: https://docs.ansible.com/ansible/2.5/modules/package_module.html Change-Id: I68a0eb64a61bc6c0f77cbae7e8b4f4c7143202c5
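The referenced Ansible documentation describes the distro-agnostic package module, which picks the right backend (apt, yum, ...) per host; a minimal usage sketch, with an illustrative variable name:

```yaml
# Sketch: one task works across distributions instead of separate
# apt/yum tasks.
- name: Install required packages
  package:
    name: "{{ item }}"
    state: present
  with_items: "{{ required_packages }}"
```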
-
Zuul authored
-
- Dec 05, 2018
-
Eduardo Gonzalez authored
This change adds support for configuring a TTY; it was enabled by default, but a recent patch removed it. Some services, such as Karaf in OpenDaylight, require a TTY during startup. Closes-Bug: #1806662 Change-Id: Ia4335523b727d0e45505cbb1efb40ccf04c27db7
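The result can be sketched as a kolla_docker task fragment; the `tty` parameter name is an assumption based on the commit description, mirroring Docker's own `--tty` flag:

```yaml
# Sketch: allocate a pseudo-TTY for containers that need one at
# startup, e.g. opendaylight's Karaf.
- name: Start opendaylight container
  kolla_docker:
    action: start_container
    name: opendaylight
    image: "{{ opendaylight_image_full }}"
    tty: true
```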
-
Zuul authored
-