- Mar 10, 2022
-
-
Mark Goddard authored
In Kolla Ansible OpenStack deployments, by default, libvirt is configured to allow read-write access via an unauthenticated, unencrypted TCP connection, using the internal API network. This is to facilitate migration between hosts.

By default, Kolla Ansible does not use encryption for services on the internal network (and did not support it until Ussuri). However, most other services on the internal network are at least authenticated (usually via passwords), ensuring that they cannot be used by anyone with access to the network, unless they have credentials.

The main issue here is the lack of authentication. Any client with access to the internal network is able to connect to the libvirt TCP port and make arbitrary changes to the hypervisor. This could include starting a VM, modifying an existing VM, etc. Given the flexibility of the domain options, it could be seen as equivalent to having root access to the hypervisor.

Kolla Ansible has supported libvirt TLS [1] since the Train release, using client and server certificates for mutual authentication and encryption. However, this feature is not enabled by default, and requires certificates to be generated for each compute host.

This change adds support for libvirt SASL authentication, and enables it by default. This provides a base level of security. Deployments requiring further security should use libvirt TLS.

[1] https://docs.openstack.org/kolla-ansible/latest/reference/compute/libvirt-guide.html#libvirt-tls

Depends-On: https://review.opendev.org/c/openstack/kolla/+/833021
Closes-Bug: #1964013
Change-Id: Ia91ceeb609e4cdb144433122b443028c0278b71e
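A minimal globals.yml sketch of the two options discussed above; `libvirt_tls` matches the libvirt TLS guide linked at [1], while `libvirt_enable_sasl` is assumed to be the toggle introduced by this change (check the release note for the exact name):

```yaml
# globals.yml -- sketch only; libvirt_enable_sasl is assumed to be the new
# SASL toggle added by this change, libvirt_tls is the existing TLS option.
libvirt_enable_sasl: "yes"   # username/password authentication on the TCP transport
libvirt_tls: "yes"           # mutual TLS; requires certificates on each compute host
```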
-
Zuul authored
-
- Mar 09, 2022
-
-
Zuul authored
-
- Mar 08, 2022
- Mar 07, 2022
-
-
Zuul authored
-
Mark Goddard authored
While I8bb398e299aa68147004723a18d3a1ec459011e5 stopped setting the net.ipv4.ip_forward sysctl, this change explicitly removes the option from the Kolla sysctl config file. In the absence of another source for this sysctl, it should revert to the default of 0 after the next reboot. A deployer looking to more aggressively change the value may set neutron_l3_agent_host_ipv4_ip_forward to 0. Any deployments still relying on the previous value may set neutron_l3_agent_host_ipv4_ip_forward to 1. Related-Bug: #1945453 Change-Id: I9b39307ad8d6c51e215fe3d3bc56aab998d218ec
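A deployment that still relies on forwarding being enabled on network hosts could pin the value in globals.yml using the variable named above; a minimal sketch:

```yaml
# globals.yml -- keep net.ipv4.ip_forward set to 1 on hosts running the L3 agent
neutron_l3_agent_host_ipv4_ip_forward: 1
```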
-
Radosław Piliszek authored
Since [1] we are not running keepalived directly on the CI network, and are therefore safeguarded against such collisions. [1] 8e406291 Change-Id: Ie25b2d6d48f10c6b295795b3c82c1f8a213f2a8c
-
Radosław Piliszek authored
In Ironic jobs with Tenks, we saw issues with IPMI commands failing, resulting in job failures: "Error setting Chassis Boot Parameter 5". A metal3.io commit [1] was found that fixes the issue by moving IPMI retries from ironic to ipmitool, which has a side-effect of increasing the timeout. This change applies the same configuration. This change has been adapted from an analogous change in kayobe-config-dev. [2] [1] https://github.com/metal3-io/ironic-image/commit/6bc1499d8bb04c2c859b970b3739c3a8ed66ae2a [2] Ib4fce74cebebe85c31049eafe2eeb6b28dfab041 Co-Authored-By: Mark Goddard <mark@stackhpc.com> Change-Id: I552417b9da03b8dfc9406e0ff644092579bc7122
-
- Mar 05, 2022
-
-
Mark Goddard authored
Installs Tenks [1] and uses it to create virtual machines to pose as bare metal compute nodes. The nodes are registered in Ironic, and used to provision instances. [1] https://docs.openstack.org/tenks/latest/ Depends-On: https://review.opendev.org/c/openstack/tenks/+/830182 Depends-On: https://review.opendev.org/c/openstack/tenks/+/830675 Depends-On: https://review.opendev.org/c/openstack/kolla-ansible/+/831055 Change-Id: Idfb8fbb50dc7442225967b2a2ec38ae7114f3c11 Co-Authored-By: Radosław Piliszek <radoslaw.piliszek@gmail.com>
-
- Mar 04, 2022
-
-
Radosław Piliszek authored
Ironic is dropping default_boot_option, and the new default has been around for quite a while now, so let's remove this old scary comment. Change-Id: I80d645cb97251ac63e04d7ec1c87d4600d17d4ee
-
Radosław Piliszek authored
Since I30c2ad2bf2957ac544942aefae8898cdc8a61ec6 this container is always enabled and thus the port should always be checked. Change-Id: I94a70d89123611899872061bd69593280d0a68c4
-
- Mar 03, 2022
-
-
Zuul authored
-
Zuul authored
-
Zuul authored
-
Zuul authored
-
Michal Nasiadka authored
Depends-On: https://review.opendev.org/c/openstack/ansible-collection-kolla/+/831642 Change-Id: I70dcd2d0cade52a23b3e219b7e0aaa31193ec938
-
- Mar 02, 2022
-
-
IDerr authored
Change-Id: I4cf48620f03d67ea4a9ef327afbf3b1ebe28550b Closes-Bug: #1946506
-
- Feb 28, 2022
- Feb 25, 2022
-
-
Radosław Piliszek authored
Ironic has changed the default PXE to be iPXE (as opposed to plain PXE) in Yoga. Kolla Ansible supports either one or the other, and we tend to stick to upstream defaults, so this change enables iPXE instead of plain PXE by default. Users are allowed to change back, but they need to take one other action as well, so it is good to remind them via upgrade notes either way. Change-Id: If14ec83670d2212906c6e22c7013c475f3c4748a
-
- Feb 24, 2022
-
-
Zuul authored
-
Juan Pablo Suazo authored
Closes-Bug: #1961795 Change-Id: I5547cce5c389846ed216bb898b78e45b8f231e1e
-
- Feb 23, 2022
-
-
Zuul authored
-
Piotr Parczewski authored
Closes-bug: 1959781 Change-Id: If574d2242aa6a875dcf624d95495e6cec6fefddd
-
- Feb 22, 2022
-
-
Zuul authored
-
Mark Goddard authored
TrivialFix Change-Id: Id85a5d69e1222b616705e24885252425c92af527
-
Pierre Riteau authored
These configuration settings were removed in Grafana 6.2. Instead we can use [remote_cache], but it is not required since it will use database settings by default. Change-Id: I37966027aea9039b2ecba4214444507e9d87f513
-
Zuul authored
-
- Feb 21, 2022
-
-
Zuul authored
-
Zuul authored
-
Zuul authored
-
Doug Szumski authored
When OpenStack is deployed with Kolla-Ansible, by default there are no durable queues or exchanges created by the OpenStack services in RabbitMQ. In Rabbit terminology, not being durable is referred to as `transient`, and this means that the queue is generally held in memory. Whether OpenStack services create durable or transient queues is traditionally controlled by the Oslo Notification config option: `amqp_durable_queues`. In Kolla-Ansible, this remains set to the default of `False` in all services. The only `durable` objects are the `amq*` exchanges which are internal to RabbitMQ.

More recently, Oslo Notification has introduced support for Quorum queues [7]. These are a successor to durable classic queues, however it isn't yet clear if they are a good fit for OpenStack in general [8].

For clustered RabbitMQ deployments, Kolla-Ansible configures all queues as `replicated` [1]. Replication occurs over all nodes in the cluster. RabbitMQ refers to this as 'mirroring of classic queues'.

In summary, this means that a multi-node Kolla-Ansible deployment will end up with a large number of transient, mirrored queues and exchanges. However, the RabbitMQ documentation warns against this, stating that 'for replicated queues, the only reasonable option is to use durable queues' [2]. This is discussed further in the following bug report: [3].

Whilst we could try enabling the `amqp_durable_queues` option for each service (this is suggested in [4]), there are a number of complexities with this approach, not limited to:

1) RabbitMQ is planning to remove classic queue mirroring in favor of 'Quorum queues' in a forthcoming release [5].
2) Durable queues will be written to disk, which may cause performance problems at scale. Note that this includes Quorum queues, which are always durable.
3) Potential for race conditions and other complexity discussed recently on the mailing list under: `[ops] [kolla] RabbitMQ High Availability`.

The remaining option, proposed here, is to use classic non-mirrored queues everywhere, and rely on services to recover if the node hosting a queue or exchange they are using fails. There is some discussion of this approach in [6]. The downside of potential message loss needs to be weighed against the real upsides of increasing the performance of RabbitMQ, and moving to a configuration which is officially supported and hopefully more stable. In the future, we can then consider promoting specific queues to quorum queues, in cases where message loss can result in failure states which are hard to recover from.

[1] https://www.rabbitmq.com/ha.html
[2] https://www.rabbitmq.com/queues.html
[3] https://github.com/rabbitmq/rabbitmq-server/issues/2045
[4] https://wiki.openstack.org/wiki/Large_Scale_Configuration_Rabbit
[5] https://blog.rabbitmq.com/posts/2021/08/4.0-deprecation-announcements/
[6] https://fuel-ccp.readthedocs.io/en/latest/design/ref_arch_1000_nodes.html#replication
[7] https://bugs.launchpad.net/oslo.messaging/+bug/1942933
[8] https://www.rabbitmq.com/quorum-queues.html#use-cases

Partial-Bug: #1954925
Change-Id: I91d0e23b22319cf3fdb7603f5401d24e3b76a56e
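For illustration only, the shape of the classic-queue mirroring ("ha-all") policy described above, written in RabbitMQ definitions style. With this change no such policy is applied, so queues remain plain, non-mirrored classic queues. Field names follow the RabbitMQ policy schema, not necessarily the exact Kolla-Ansible template:

```yaml
# Sketch of an "ha-all" mirroring policy (no longer applied after this change).
policies:
  - vhost: "/"
    name: "ha-all"
    pattern: ".*"
    apply-to: "all"
    definition:
      ha-mode: "all"
```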
-
Pierre Riteau authored
The Prometheus HTTP API is reachable under /api/v1. Without this fix, CloudKitty receives 404 errors from Prometheus. Change-Id: Ie872da5ccddbcb8028b8b57022e2427372ed474e
-
Mark Goddard authored
This change adds an Ansible Galaxy requirements file including the openstack.kolla collection. A new 'kolla-ansible install-deps' command is provided to install the requirements. With the new collection in place, this change also switches to using the baremetal role from the openstack.kolla collection, and removes the baremetal role from this repository. Depends-On: https://review.opendev.org/c/openstack/ansible-collection-kolla/+/820168 Change-Id: I9708f57b4bb9d64eb4903c253684fe0d9147bd4a
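A sketch of what such a Galaxy requirements file can look like; the collection name and repository come from the message and the Depends-On, while the exact pinning used in the repository may differ:

```yaml
# requirements.yml -- install with: kolla-ansible install-deps
collections:
  - name: openstack.kolla
    source: https://opendev.org/openstack/ansible-collection-kolla
    type: git
    version: master
```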
-
Zuul authored
-
- Feb 18, 2022
-
-
Pierre Riteau authored
Without this configuration, all mount points are reporting the same utilisation metrics [1]. With the rslave option, all root mounts from the host are visible in the container, so we can remove the bind mounts for /proc and /sys. [1] https://github.com/prometheus/node_exporter#docker Change-Id: I4087dc81f9d1fa5daa24b9df6daf1f9e1ccd702f Closes-Bug: #1961438
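The mount arrangement recommended by the node_exporter documentation [1], sketched here in generic container YAML; kolla-ansible's actual role variables differ in name:

```yaml
# Bind the host root read-only with rslave propagation and point node_exporter
# at it, instead of mounting /proc and /sys individually.
volumes:
  - "/:/host:ro,rslave"
command:
  - "--path.rootfs=/host"
```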
-
Zuul authored
-
alecorps authored
An FCD (First Class Disk), also known as an Improved Virtual Disk (IVD) or Managed Virtual Disk, is a named virtual disk independent of a virtual machine. Using FCDs for Cinder volumes eliminates the need for shadow virtual machines. This patch adds Kolla Ansible support. Change-Id: Ic0b66269e6d32762e786c95cf6da78cb201d2765
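A hedged globals.yml sketch of enabling the new backend; treat the backend flag name as hypothetical and check this change's release note for the real toggle:

```yaml
# globals.yml -- illustrative only; the backend flag name below is an assumption.
enable_cinder: "yes"
cinder_backend_vmware_vstorage_object: "yes"
```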
-