- Oct 04, 2023
Michal Nasiadka authored
hostnqn is now generated using the to_uuid filter. Usually the "nvme gen-hostnqn" command is used to generate a hostnqn, which has the format: nqn.2014-08.org.nvmexpress:uuid:67dc8c8e-0262-4d81-ac51-ace7c25e4daa. The "nqn.2014-08.org.nvmexpress:uuid:" prefix is always static. Closes-Bug: #2035975 Change-Id: I6ece4fe8c18c0167a2707c24693fbe39ed15cdba
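A minimal sketch of the idea, assuming a seed value and destination path that are not taken from the patch: Ansible's built-in to_uuid filter derives a stable UUID from a string, so prefixing it with the static "nqn.2014-08.org.nvmexpress:uuid:" part yields a deterministic hostnqn per host.

```yaml
# Sketch only: the seed (the host FQDN) and the destination file are assumptions.
- name: Generate a deterministic hostnqn for this host
  vars:
    hostnqn: "nqn.2014-08.org.nvmexpress:uuid:{{ ansible_facts.fqdn | to_uuid }}"
  ansible.builtin.copy:
    content: "{{ hostnqn }}\n"
    dest: /etc/nvme/hostnqn
  become: true
```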
- Sep 08, 2023
John Garbutt authored
For example, an operator may wish to customise the nova-compute-ironic service configuration without affecting other Nova services. Closes-Bug: #2034949 Change-Id: If8648d8e85ab3dbcbb4ecba674b2e34b06898327
- Jun 28, 2023
Michal Nasiadka authored
Use case: exposing a single external HTTPS frontend and load balancing services using FQDNs. Supports different ports for internal and external endpoints. Introduces a kolla_url filter to normalize URLs such as: - https://magnum.external:443/v1 - http://magnum.external:80/v1 Change-Id: I9fb03fe1cebce5c7198d523e015280c69f139cd0 Co-Authored-By:
Jakub Darmach <jakub@stackhpc.com>
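The exact signature of the kolla_url filter is not shown in the commit message, so as a neutral illustration of the normalization problem it addresses (default ports such as 443 and 80 folded into the scheme), the sketch below uses Ansible's built-in urlsplit filter rather than kolla_url itself.

```yaml
# Illustration only; this decomposes a URL with Ansible's urlsplit filter,
# it is not the kolla_url implementation.
- name: Break an endpoint URL into the parts a normalizing filter must reconcile
  vars:
    url: "https://magnum.external:443/v1"
  ansible.builtin.debug:
    msg: >-
      scheme={{ url | urlsplit('scheme') }}
      host={{ url | urlsplit('hostname') }}
      port={{ url | urlsplit('port') }}
      path={{ url | urlsplit('path') }}
```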
- Jun 14, 2023
Michal Arbet authored
This patch adds an option to copy different Ceph configuration files and corresponding keyrings for the cinder, glance, manila, gnocchi and nova services. This is especially useful when the deployment uses availability zones, as in the example below. - Individual computes can read/write to an individual Ceph cluster in the same AZ. - Cinder can write to several Ceph clusters in several AZs. - Glance can use multistore and upload images to several Ceph clusters in several AZs at once. Change-Id: Ie4d8ab5a3df748137835cae1c943b9180cd10eb1
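A hedged globals.yml sketch of the kind of per-AZ backend layout this enables; the variable name and keys below are illustrative assumptions and may not match the options actually introduced by the patch.

```yaml
# Hypothetical names: cinder_ceph_backends and its keys are assumptions here.
cinder_ceph_backends:
  - name: "rbd-az1"
    cluster: "ceph-az1"
    availability_zone: "az1"
    enabled: true
  - name: "rbd-az2"
    cluster: "ceph-az2"
    availability_zone: "az2"
    enabled: true
```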
- Feb 14, 2023
Mark Goddard authored
Previously, when running one of the following commands: ``kolla-ansible deploy --check`` or ``kolla-ansible genconfig --check``, deployment or configuration generation failed for various reasons: MariaDB failed to look up the existing cluster, Keystone failed to generate cron config, and nova-cell failed to get the cell settings. Closes-Bug: #2002661 Change-Id: I5e765f498ae86d213d0a4379ca5d473db1499962
- Jan 26, 2023
Ghanshyam Mann authored
As per the new RBAC direction in the Zed cycle, we have dropped the system scope from API policies and all the policies are hardcoded to project scope, so that any user accessing the APIs with a system-scoped token will get a 403 error. It is dropped from all the OpenStack services except for the Ironic service, which will keep system scope; to support Ironic-only deployments, we are keeping system as well as project scope in Keystone. The complete discussion and direction can be found in the below gerrit change and TC goal document: - https://review.opendev.org/c/openstack/governance/+/847418 - https://governance.openstack.org/tc/goals/selected/consistent-and-secure-rbac.html#the-issues-we-are-facing-with-scope-concept As phase 2 of the RBAC goal, services will start enabling the new defaults and project scope by default. For example, Nova did so in https://review.opendev.org/c/openstack/nova/+/866218. Kolla Ansible started accessing the services using system-scoped tokens in https://review.opendev.org/c/openstack/kolla-ansible/+/692179. This commit partially reverts that change, keeping system scope usage only for Keystone and Ironic. All other services are changed to use project-scoped tokens. It also enables scope and new defaults for Nova, which were disabled by https://review.opendev.org/c/openstack/kolla-ansible/+/870804 Change-Id: I0adbe0a6c39e11d7c9542569085fc5d580f26c9d
- Jan 12, 2023
Mark Goddard authored
When running in check mode, some prechecks previously failed because they use the command module which is silently not run in check mode. Other prechecks were not running correctly in check mode due to e.g. looking for a string in empty command output or not querying which containers are running. This change fixes these issues. Closes-Bug: #2002657 Change-Id: I5219cb42c48d5444943a2d48106dc338aa08fa7c
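A sketch of the usual remedy for this class of problem, assuming (not taken from the patch) a container-listing precheck: mark read-only command tasks so they still execute and report in check mode.

```yaml
# Sketch, not the literal tasks changed by this patch.
- name: List running containers matching a service
  ansible.builtin.command: docker ps -q --filter name=nova_api
  register: nova_api_containers
  changed_when: false
  check_mode: false  # run even with --check; the command is read-only

- name: Fail if the expected container is not running
  ansible.builtin.fail:
    msg: The nova_api container is not running
  when: nova_api_containers.stdout | length == 0
```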
- Dec 21, 2022
Matt Crees authored
Regularly, we experience issues in Kolla Ansible deployments because we use wrong options in OpenStack configuration files. This is because OpenStack services ignore unknown options. We also need to keep on top of deprecated options that may be removed in the future. Integrating oslo-config-validator into Kolla Ansible will greatly help. Adds a shared role to run oslo-config-validator on each service. Takes into account that services have multiple containers, and these may also use multiple config files. Service roles are extended to use this shared role. Executed with the new command ``kolla-ansible validate-config``. Change-Id: Ic10b410fc115646d96d2ce39d9618e7c46cb3fbc
- Nov 04, 2022
Ivan Halomi authored
Second part of patchset: https://review.opendev.org/c/openstack/kolla-ansible/+/799229/ in which it was suggested to split the patch into smaller ones. This change adds container_engine to the module parameters so that when we introduce Podman, kolla_toolbox can be used with both engines. Signed-off-by:
Ivan Halomi <i.halomi@partner.samsung.com> Co-authored-by:
Martin Hiner <m.hiner@partner.samsung.com> Change-Id: Ic2093aa9341a0cb36df8f340cf290d62437504ad
- Nov 02, 2022
Ivan Halomi authored
Second part of patchset: https://review.opendev.org/c/openstack/kolla-ansible/+/799229/ in which it was suggested to split the patch into smaller ones. This change adds a container_engine variable to the kolla_container_facts module, preparing the module to be used with both Docker and Podman without further changes in roles. Signed-off-by:
Ivan Halomi <i.halomi@partner.samsung.com> Co-authored-by:
Martin Hiner <m.hiner@partner.samsung.com> Change-Id: I9e8fa30646844ab4a288555f3aafdda345b3a118
- Oct 28, 2022
Ivan Halomi authored
First part of patchset: https://review.opendev.org/c/openstack/kolla-ansible/+/799229/ in which it was suggested to split the patch into smaller ones. This implements the kolla_container_engine variable in Docker command calls, so that it can later also be used for Podman without further changes. Signed-off-by:
Ivan Halomi <i.halomi@partner.samsung.com> Change-Id: Ic30b67daa2e215524096ad1f4385c569e3d41b95
- Oct 07, 2022
Doug Szumski authored
In the Victoria cycle, Nova merged improved support for managing resource providers: https://review.opendev.org/q/topic:bp%252Fprovider-config-file See the blueprint for more details: https://docs.openstack.org/nova/latest/admin/managing-resource-providers.html This change allows us to copy the necessary configuration. Change-Id: I0a3caaad73bc6fe27380e7f6bf6b792aca51c84c
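Per the linked Nova documentation, a provider config file is YAML of roughly the following shape (the resource class and trait names here are placeholders); this change lets such a file be copied into the nova-compute configuration.

```yaml
# Rough shape of a Nova provider config file; field names per the Nova docs,
# values are placeholders.
meta:
  schema_version: '1.0'
providers:
  - identification:
      uuid: '$COMPUTE_NODE'
    inventories:
      additional:
        - CUSTOM_EXAMPLE_RESOURCE_CLASS:
            total: 100
            reserved: 0
    traits:
      additional:
        - 'CUSTOM_EXAMPLE_TRAIT'
```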
- Sep 26, 2022
Radosław Piliszek authored
Kolla Ansible stopped setting them as they turned out to be unnecessary for its operations, yet may have conflicted with security policies of the hosts. [1] [2] [1] https://launchpad.net/bugs/1837551 [2] https://launchpad.net/bugs/1945453 Change-Id: Ie8ccd3ab6f22a6f548b1da8d3acd334068dc48f5
- Sep 21, 2022
Michal Nasiadka authored
Mainly jinja spacing and jinja[invalid] related fixes. Change-Id: I6f52f2b0c1ef76de626657d79486d31e0f47f384
- Aug 09, 2022
Michal Arbet authored
This patch adds a loadbalancer-config role which is a "wrapper" around the haproxy-config role and the proxysql-config role, the latter to be added in follow-up patches. Change-Id: I64d41507317081e1860a94b9481a85c8d400797d
- Jul 25, 2022
Michal Nasiadka authored
ansible-lint introduced var-spacing - let's fix our code. Change-Id: I0d8aaf3c522a5a6a5495032f6dbed8a2be0251f0
- Apr 22, 2022
Mark Goddard authored
We run some nova tasks once per cell, using a condition to match a single host in the cell. In other similar tasks, we use run_once, which will fail all hosts if the task fails. Typically these tasks are critical, and that is desirable. However, with the approach used in nova-cell to support multiple cells, if a once-per-cell task fails, then other hosts will continue to execute, which could lead to unexpected results. This change adds any_errors_fatal to the plays or blocks that run these tasks. Closes-Bug: #1948694 Change-Id: I2a5871ccd4e8198171ef3239ce95f475f3e4b051
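A generic sketch of the pattern described (the task, command and group name are illustrative, not the actual play): the once-per-cell task is guarded by a host condition, and any_errors_fatal makes its failure abort every host in the play rather than only the host that executed it.

```yaml
# Illustrative only: command path and group name are placeholders.
- name: Once-per-cell critical tasks
  any_errors_fatal: true
  block:
    - name: Bootstrap the cell database
      ansible.builtin.command: /usr/local/bin/bootstrap-cell
      when: inventory_hostname == groups['nova-conductor'] | first
```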
- Apr 05, 2022
Mark Goddard authored
This change addresses an issue in the nova-libvirt-cleanup command, added in I46854ed7eaf1d5b5e3ccd8531c963427848bdc99. Check for rc=1 from the pgrep command, since a lack of matches is a pass. Also, use bash for set -o pipefail. Change-Id: Iffda0dfffce8768324ffec55e629134c70e2e996
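A sketch of the two fixes in Ansible terms, assuming a qemu process check (the actual command differs): pgrep exits 1 when nothing matches, which should count as success here, and set -o pipefail requires bash rather than the default /bin/sh.

```yaml
# Sketch; the pattern matched by pgrep is an assumption.
- name: Count remaining qemu processes
  ansible.builtin.shell:
    cmd: set -o pipefail && pgrep -f qemu | wc -l
    executable: /bin/bash
  register: qemu_processes
  changed_when: false
  failed_when: qemu_processes.rc not in [0, 1]  # rc=1 just means no matches
```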
- Mar 29, 2022
Mark Goddard authored
If any nova-compute service fails to register itself, Kolla Ansible will fail the host that queries the Nova API. This is the first compute host in the inventory, and it fails in the task: Waiting for nova-compute services to register themselves. Other hosts continue, often leading to further errors later on. Clearly this is not ideal. This change modifies the behaviour to query the compute service list until all expected hosts are present, but does not fail the querying host if they are not. A new task is added that executes for all hosts, and fails only those hosts that have not registered successfully. Alternatively, to fail all hosts in a cell when any compute service fails to register, set nova_compute_registration_fatal to true. Change-Id: I12c1928cf1f1fb9e28f1741e7fe4968004ea1816 Closes-Bug: #1940119
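A hedged sketch of the flow described above; the lookup command, group name and retry counts are illustrative rather than the actual tasks, and mapping inventory hostnames to registered service hosts is glossed over.

```yaml
# Illustrative only.
- name: Waiting for nova-compute services to register themselves
  ansible.builtin.command: >-
    openstack compute service list --service nova-compute -f value -c Host
  register: compute_services
  until: groups['compute'] | difference(compute_services.stdout_lines) | length == 0
  retries: 20
  delay: 10
  changed_when: false
  ignore_errors: true  # do not fail the querying host if registration is incomplete
  run_once: true

# With nova_compute_registration_fatal set, this could instead sit in a block
# with any_errors_fatal so one unregistered host fails the whole cell.
- name: Fail hosts whose nova-compute service did not register
  ansible.builtin.fail:
    msg: nova-compute did not register itself within the timeout
  when: inventory_hostname not in compute_services.stdout_lines
```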
- Mar 21, 2022
Mark Goddard authored
Change Ia1239069ccee39416b20959cbabad962c56693cf added support for running a libvirt daemon on the host, rather than using the nova_libvirt container. It did not cover migration of existing hosts from using a container to using a host daemon. This change adds a kolla-ansible nova-libvirt-cleanup command which may be used to clean up the nova_libvirt container, volumes and related items on hosts, once it has been disabled. The playbook assumes that compute hosts have been emptied of VMs before it runs. A future extension could support migration of existing VMs, but this is currently out of scope. Change-Id: I46854ed7eaf1d5b5e3ccd8531c963427848bdc99
Mark Goddard authored
In some cases it may be desirable to run the libvirt daemon on the host. For example, when mixing host and container OS distributions or versions. This change makes it possible to disable the nova_libvirt container, by setting enable_nova_libvirt_container to false. The default values of some Docker mounts and other paths have been updated to point to default host directories rather than Docker volumes when using a host libvirt daemon. This change does not handle migration of existing systems from using a nova_libvirt container to libvirt on the host. Depends-On: https://review.opendev.org/c/openstack/ansible-collection-kolla/+/830504 Change-Id: Ia1239069ccee39416b20959cbabad962c56693cf
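In globals.yml terms, opting into a host-installed libvirt daemon looks roughly like this (the variable name is the one given above):

```yaml
# globals.yml: disable the containerised libvirt in favour of a host daemon.
enable_nova_libvirt_container: "no"
```

Once compute hosts have been drained, ``kolla-ansible nova-libvirt-cleanup`` (described in the companion commit above) removes the leftover nova_libvirt container, volumes and related items.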
- Mar 18, 2022
Imran Hussain authored
Consistently use template instead of copy. This has the added advantage of allowing variables inside ceph conf files and keyrings. Closes-Bug: 1959565 Signed-off-by:
Imran Hussain <ih@imranh.co.uk> Change-Id: Ibd0ff2641a54267ff06d3c89a26915a455dff1c1
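A sketch of what the switch means in practice; the source and destination paths are illustrative, though node_custom_config and node_config_directory are standard Kolla Ansible variables. Because the file is now rendered with template, a ceph.conf dropped into the custom config directory may itself contain Jinja expressions.

```yaml
# Sketch: render the operator-provided file instead of copying it verbatim.
- name: Copy over ceph.conf for cinder-volume
  ansible.builtin.template:
    src: "{{ node_custom_config }}/cinder/ceph.conf"
    dest: "{{ node_config_directory }}/cinder-volume/ceph.conf"
    mode: "0660"
  become: true
```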
- Mar 10, 2022
Mark Goddard authored
In Kolla Ansible OpenStack deployments, by default, libvirt is configured to allow read-write access via an unauthenticated, unencrypted TCP connection, using the internal API network. This is to facilitate migration between hosts. By default, Kolla Ansible does not use encryption for services on the internal network (and did not support it until Ussuri). However, most other services on the internal network are at least authenticated (usually via passwords), ensuring that they cannot be used by anyone with access to the network, unless they have credentials. The main issue here is the lack of authentication. Any client with access to the internal network is able to connect to the libvirt TCP port and make arbitrary changes to the hypervisor. This could include starting a VM, modifying an existing VM, etc. Given the flexibility of the domain options, it could be seen as equivalent to having root access to the hypervisor. Kolla Ansible has supported libvirt TLS [1] since the Train release, using client and server certificates for mutual authentication and encryption. However, this feature is not enabled by default, and requires certificates to be generated for each compute host. This change adds support for libvirt SASL authentication, and enables it by default. This provides a base level of security. Deployments requiring further security should use libvirt TLS. [1] https://docs.openstack.org/kolla-ansible/latest/reference/compute/libvirt-guide.html#libvirt-tls Depends-On: https://review.opendev.org/c/openstack/kolla/+/833021 Closes-Bug: #1964013 Change-Id: Ia91ceeb609e4cdb144433122b443028c0278b71e
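A hedged globals.yml sketch; libvirt_enable_sasl is assumed to be the toggle this change introduces (enabled by default per the message), and libvirt_tls the pre-existing TLS option referenced above.

```yaml
# Assumed variable names; verify against the release notes before relying on them.
libvirt_enable_sasl: true
libvirt_tls: false  # enable for deployments that also need transport security
```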
- Jan 10, 2022
Radosław Piliszek authored
This is required as nova_compute tries to reach my_ip of the other node when resizing an instance and my_ip is set to api_interface_address. This potential issue was introduced with [1]. [1] https://review.opendev.org/c/openstack/kolla-ansible/+/569131 Closes-Bug: #1956976 Change-Id: Id57a672c69a2d5aa74e55f252d05bb756bbc945a
- Oct 27, 2021
Mark Goddard authored
This reverts commit 15259002. Reason for revert: The iptables_firewall produces warnings without it. Change-Id: Id046a3048436c4c18dd1fd9700ac9971d8c42c57
- Oct 01, 2021
Radosław Piliszek authored
Nor set related sysctls. More details in the reno. Change-Id: I898548ecc6df3caa094c3222159b7ba1e16dc211 Closes-Bug: #1945789
- Sep 28, 2021
Niklas Hagman authored
A system-scoped token implies the user has authorization to act on the deployment system. These tokens are useful for interacting with resources that affect the deployment as a whole, or expose resources that may otherwise violate project or domain isolation. Since Queens, the keystone-manage bootstrap command assigns the admin role to the admin user with system scope, as well as in the admin project. This patch transitions the Keystone admin user from authenticating using project-scoped tokens to system-scoped tokens. This is a necessary step towards being able to enable the updated oslo policies in services that allow finer-grained access to system-level resources and APIs. An etherpad with discussion about the transition to the new oslo service policies is: https://etherpad.opendev.org/p/enabling-system-scope-in-kolla-ansible Change-Id: Ib631e2211682862296cce9ea179f2661c90fa585 Signed-off-by:
Niklas Hagman <ubuntu@post.blinkiz.com>
- Aug 12, 2021
Michal Arbet authored
The Kolla Ansible upgrade task calls different handlers than the deploy task, and these handlers were missing the healthcheck key. This patch fixes this. Closes-Bug: #1939679 Change-Id: Id83d20bfd89c27ccf70a3a79938f428cdb5d40fc
- Aug 10, 2021
Radosław Piliszek authored
We get a nice optimisation by using a filtered loop instead of task skipping per service with 'when'. Partially-Implements: blueprint performance-improvements Change-Id: I8f68100870ab90cb2d6b68a66a4c97df9ea4ff52
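A generic illustration of the optimisation, not the actual kolla tasks: filter the service dictionary once, instead of looping over every service and skipping disabled ones item by item with 'when'.

```yaml
# Illustrative service map; real roles use their own service dictionaries.
- name: Act only on enabled services
  ansible.builtin.debug:
    msg: "would configure {{ item.key }}"
  loop: "{{ services | dict2items | selectattr('value.enabled') | list }}"
  loop_control:
    label: "{{ item.key }}"
  vars:
    services:
      nova-api: {enabled: true}
      nova-novncproxy: {enabled: false}
```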
- Aug 02, 2021
Michal Arbet authored
This trivial patch sets "timeout tunnel" in haproxy's configuration for spicehtml5proxy. This option extends the time before SPICE's websocket connection is closed, so SPICE sessions will not freeze. The default value is set to 1h, as it is for noVNC. Closes-Bug: #1938549 Change-Id: I3a5cd98ecf4916ebd0748e7c08111ad0e4dca0b2
- Jul 27, 2021
wu.chunyang authored
Nova always tries to create the RabbitMQ user regardless of whether RabbitMQ is enabled or not. This patch set also adds an external RabbitMQ doc. Change-Id: Iec517226e4c82ea351889b55689a3efceaadcc76
- Jun 23, 2021
Mark Goddard authored
By default, Ansible injects a variable for every fact, prefixed with ansible_. This can result in a large number of variables for each host, which at scale can incur a performance penalty. Ansible provides a configuration option [0] that can be set to False to prevent this injection of facts. In this case, facts should be referenced via ansible_facts.<fact>. This change updates all references to Ansible facts within Kolla Ansible from using individual fact variables to using the items in the ansible_facts dictionary. This allows users to disable fact variable injection in their Ansible configuration, which may provide some performance improvement. This change disables fact variable injection in the ansible configuration used in CI, to catch any attempts to use the injected variables. [0] https://docs.ansible.com/ansible/latest/reference_appendices/config.html#inject-facts-as-vars Change-Id: I7e9d5c9b8b9164d4aee3abb4e37c8f28d98ff5d1 Partially-Implements: blueprint performance-improvements
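In practice the change is a referencing switch like the one below (illustrative task); with inject_facts_as_vars = False in ansible.cfg, only the ansible_facts form keeps working.

```yaml
# Before: "{{ ansible_default_ipv4.address }}" (injected variable)
# After:  reference the fact through the ansible_facts dictionary
- name: Show the default IPv4 address
  ansible.builtin.debug:
    msg: "{{ ansible_facts.default_ipv4.address | default('no default IPv4 fact') }}"
```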
- May 30, 2021
Radosław Piliszek authored
Makes nova-libvirt container always run in 'host' CgroupnsMode to ensure it works. Change-Id: I75105baf434977c68bc5c8ca1f5213e602c52c8c
- Mar 02, 2021
Michał Nasiadka authored
Change-Id: Ib6719a033b37be3e248b682795b7243c60b22b84
- Dec 14, 2020
Mark Goddard authored
This reverts commit 9cae59be. Reason for revert: This patch was found to introduce issues with fluentd customisation. The underlying issue is not currently fully understood, but could be a sign of other obscure issues. Change-Id: Ia4859c23d85699621a3b734d6cedb70225576dfc Closes-Bug: #1906288
- Oct 27, 2020
Radosław Piliszek authored
Makes 'import_tasks' not change behaviour compared to 'include_tasks'. Change-Id: I600be7c3bd763b3b924bd4a45b4e7b4dca7a33e3
Radosław Piliszek authored
Main plays are action-redirect-stubs, ideal for import_tasks. This avoids 'include' penalty and makes logs/ara look nicer. Fixes haproxy and rabbitmq not to check the host group as well. Change-Id: I46136fc40b815e341befff80b54a91ef431eabc0 Partially-Implements: blueprint performance-improvements
- Oct 12, 2020
Radosław Piliszek authored
Config plays do not need to check containers. This avoids skipping tasks during the genconfig action. Ironic and Glance rolling upgrades are handled specially. Swift and Bifrost do not use the handlers at all. Partially-Implements: blueprint performance-improvements Change-Id: I140bf71d62e8f0932c96270d1f08940a5ba4542a
- Oct 05, 2020
Michal Nasiadka authored
This change enables the use of Docker healthchecks for core OpenStack services. Also check-failures.sh has been updated to treat containers with unhealthy status as failed. Implements: blueprint container-health-check Change-Id: I79c6b11511ce8af70f77e2f6a490b59b477fefbb
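A hedged sketch of what a per-service healthcheck definition looks like in this scheme; the key names and the healthcheck_curl helper are as recalled from the kolla-ansible defaults and may differ in detail.

```yaml
# Sketch of a service entry with a Docker healthcheck; values are illustrative.
nova-api:
  container_name: nova_api
  enabled: true
  healthcheck:
    interval: 30
    retries: 3
    start_period: 5
    test: ["CMD-SHELL", "healthcheck_curl http://{{ api_interface_address }}:{{ nova_api_listen_port }}"]
    timeout: 30
```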
- Sep 21, 2020
Radosław Piliszek authored
via KOLLA_SKIP and KOLLA_UNSET Change-Id: I7d9af21c2dd8c303066eb1ee4dff7a72bca24283 Related-Bug: #1837551