- Dec 12, 2024
Michal Nasiadka authored
It has been added in the magnum-cluster-api driver in v0.15.0 [1].

[1]: https://github.com/vexxhost/magnum-cluster-api/commit/2a53f3e340524deee3ddbf08b41071fba070d7d3#diff-50c86b7ed8ac2cf95bd48334961bf0530cdc77b5a56f852c5c61b89d735fd711R53

Closes-Bug: #2047360
Change-Id: Ib02389c03ab8f61fdd3827cb30fcc18f3dc952a9
(cherry picked from commit 99e61073)
- Aug 12, 2024
Roman Krček authored
For possible config options see the docs:
https://docs.openstack.org/keystonemiddleware/latest/middlewarearchitecture.html#memcache-protection

Closes-Bug: #1850733
Signed-off-by: Roman Krček <roman.krcek@tietoevry.com>
Change-Id: I169e27899f7350f5eb8adb1f81a062c51e6cbdfc
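This concerns keystonemiddleware's memcache protection for cached token data. A minimal sketch of the relevant options; the server address and secret are placeholders:

```ini
[keystone_authtoken]
memcached_servers = 192.168.1.10:11211
# MAC signs the cached data, ENCRYPT signs and encrypts it
memcache_security_strategy = ENCRYPT
# placeholder secret used to derive the MAC/encryption keys
memcache_secret_key = CHANGE_ME
```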
- Jan 02, 2024
Michal Nasiadka authored
Closes-Bug: #2047360
Change-Id: I73490d84da39a74ea7ac493c7dd41fe7bfe2f578
- Nov 30, 2023
Sven Kieske authored
This implements a global toggle `om_enable_rabbitmq_quorum_queues` to enable quorum queues for each service in RabbitMQ, similar to what was done for HA [0]. Quorum queues are enabled by default. Quorum queues are more reliable, safer, simpler and faster than replicated mirrored classic queues [1]. Mirrored classic queues are deprecated and scheduled for removal in RabbitMQ 4.0 [2].

Notice that we do not need a new policy in the RabbitMQ definitions template, because quorum queue usage is enabled on the client side and can't be set using a policy [3]. Notice also that quorum queues are not yet enabled in oslo.messaging for the reply_ and fanout_ queues (transient queues). This will change once [4] is merged.

[0]: https://review.opendev.org/c/openstack/kolla-ansible/+/867771
[1]: https://www.rabbitmq.com/quorum-queues.html
[2]: https://blog.rabbitmq.com/posts/2021/08/4.0-deprecation-announcements/
[3]: https://www.rabbitmq.com/quorum-queues.html#declaring
[4]: https://review.opendev.org/c/openstack/oslo.messaging/+/888479

Signed-off-by: Sven Kieske <kieske@osism.tech>
Change-Id: I6c033d460a5c9b93c346e9e47e93b159d3c27830
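Since quorum queues are declared by the client rather than via a broker policy, the toggle boils down to a single oslo.messaging option rendered into each service's config; a minimal sketch:

```ini
[oslo_messaging_rabbit]
# declare RPC queues as quorum queues (client-side; cannot be set by policy)
rabbit_quorum_queue = true
```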
- Jan 13, 2023
Matt Crees authored
A combination of durable queues and classic queue mirroring can be used to provide high availability of RabbitMQ. However, these options should only be used together, otherwise the system will become unstable. Using the flag ``om_enable_rabbitmq_high_availability`` will either enable both options at once, or neither of them.

There are some queues that should not be mirrored:
* ``reply`` queues (these have a single consumer and TTL policy)
* ``fanout`` queues (these have a TTL policy)
* ``amq`` queues (these are auto-delete queues, with a single consumer)

An exclusionary pattern is used in the classic mirroring policy. This pattern is ``^(?!(amq\\.)|(.*_fanout_)|(reply_)).*``

Change-Id: I51c8023b260eb40b2eaa91bd276b46890c215c25
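The durable half of the flag maps to one oslo.messaging option per service (the mirroring policy itself lives on the broker side); a sketch, assuming the flag is enabled:

```ini
[oslo_messaging_rabbit]
# declare queues/exchanges as durable so they survive broker restarts;
# must be paired with the classic mirroring policy on the RabbitMQ side
amqp_durable_queues = true
```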
- Jan 05, 2023
Matt Crees authored
The ``[oslo_messaging_rabbit] heartbeat_in_pthread`` config option is set to ``true`` for wsgi applications to allow the RabbitMQ heartbeats to function. For non-wsgi applications it is set to ``false`` as it may otherwise break the service [1].

[1] https://docs.openstack.org/releasenotes/oslo.messaging/zed.html#upgrade-notes

Change-Id: Id89bd6158aff42d59040674308a8672c358ccb3c
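What this renders to for a wsgi-deployed API service, as a sketch:

```ini
[oslo_messaging_rabbit]
# run the AMQP heartbeat in a native thread so it keeps firing even when
# the wsgi server suspends the eventlet hub; set false for non-wsgi services
heartbeat_in_pthread = true
```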
- Jul 12, 2022
Michal Arbet authored
Rendering {{ openstack_service_workers }} for the workers of each OpenStack service is not enough. Several services need more workers because more requests are sent to them. This patch just adds a default value for the workers of each service, with {{ openstack_service_workers }} as the default, so the value can be overridden in host vars per server. Nothing changes for the normal user.

Change-Id: Ifa5863f8ec865bbf8e39c9b2add42c92abe40616
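A sketch of the resulting template pattern; the per-service variable name `magnum_api_workers` and the fallback form are illustrative (kolla may define the default in role defaults instead):

```ini
[api]
# hypothetical per-service knob falling back to the global default
workers = {{ magnum_api_workers | default(openstack_service_workers) }}
```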
- Jun 20, 2022
Radosław Piliszek authored
Per comments on [1].

[1] https://review.opendev.org/c/openstack/kolla-ansible/+/843727

Change-Id: I60162b54bc06e158534d29311d4474b34750c64d
- Jun 09, 2022
Will Szumski authored
Fixes an issue where access rules failed to validate:

    Cannot validate request with restricted access rules. Set service_type
    in [keystone_authtoken] to allow access rule validation.

I've used the values from the endpoint. This was mostly a straightforward copy and paste, except:
- versioned endpoints, e.g. cinderv3, where I stripped the version
- monasca has multiple endpoints associated with a single service. For this, I concatenated logging and monitoring to be logging-monitoring.

Closes-Bug: #1965111
Change-Id: Ic4b3ab60abad8c3dd96cd4923a67f2a8f9d195d7
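For magnum this amounts to naming its catalog service type in the auth_token middleware section; a sketch (container-infra is magnum's registered service type):

```ini
[keystone_authtoken]
# must match the service type registered in the Keystone catalog
service_type = container-infra
```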
- May 28, 2022
Radosław Piliszek authored
Following up on [1]. The 3 variables are only introducing noise after we removed the reliance on Keystone's admin port.

[1] I5099b08953789b280c915a6b7a22bdd4e3404076

Change-Id: I3f9dab93042799eda9174257e604fd1844684c1c
- Jun 23, 2021
Mark Goddard authored
Magnum has various sections in its configuration file for OpenStack clients. When internal TLS is enabled, these may need a CA certificate to be specified. This change adds a CA certificate configuration, based on openstack_cacert, for all clients using internal endpoints.

Note: we are explicitly not adding the configuration for the [magnum_client] ca_file and [drivers] openstack_ca_file options, since these use the public endpoint by default. These options may be provided via custom configuration if necessary.

Change-Id: Ie59b3777c0a2c142b580addd67e279bc4b2f2c90
Co-Authored-By: Kyle Dean
Closes-Bug: #1919389
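A sketch of what one such client section gains, assuming openstack_cacert points at the deployment's CA bundle:

```ini
[cinder_client]
# CA bundle used to verify the internal TLS endpoint
ca_file = {{ openstack_cacert }}
```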
wu.chunyang authored
Follow-up to: https://review.opendev.org/c/openstack/kolla-ansible/+/791980

Change-Id: I7231ae0b2702d56879092a2c34b7f8bb3b07f50b
- Jun 22, 2021
Michal Arbet authored
Closes-Bug: #1933025
Change-Id: Ib67d715ddfa986a5b70a55fdda39e6d0e3333162
- Sep 22, 2020
Pierre Riteau authored
When the internal VIP is moved in the event of a failure of the active controller, OpenStack services can become unresponsive as they try to talk with MariaDB using connections from the SQLAlchemy pool.

It has been argued that OpenStack doesn't really need to use connection pooling with MariaDB [1]. This commit reduces the use of connection pooling via two configuration options:

- max_pool_size is set to 1 to allow only a single connection in the pool (it is not possible to disable connection pooling entirely via oslo.db, and max_pool_size = 0 means unlimited pool size)
- lower connection_recycle_time from the default of one hour to 10 seconds, which means the single connection in the pool will be recreated regularly

These settings have shown better reactivity of the system in the event of a failover.

[1] http://lists.openstack.org/pipermail/openstack-dev/2015-April/061808.html

Change-Id: Ib6a62d4428db9b95569314084090472870417f3d
Closes-Bug: #1896635
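The resulting [database] section, as a sketch:

```ini
[database]
# keep a single pooled connection (0 would mean unlimited, not disabled)
max_pool_size = 1
# recreate that connection every 10 seconds instead of the default hour
connection_recycle_time = 10
```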
- Sep 17, 2020
Mark Goddard authored
This change adds support for encryption of communication between OpenStack services and RabbitMQ. Server certificates are supported, but currently client certificates are not.

The kolla-ansible certificates command has been updated to support generating certificates for RabbitMQ for development and testing.

RabbitMQ TLS is enabled in the all-in-one source CI jobs, or when the Zuul 'tls_enabled' variable is true.

Change-Id: I4f1d04150fb2b5af085b762890092f87ae6076b5
Implements: blueprint message-queue-ssl-support
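On the client side this is a pair of oslo.messaging options; a sketch, with the CA path as a placeholder:

```ini
[oslo_messaging_rabbit]
# enable TLS for the AMQP connection to RabbitMQ
ssl = true
# CA bundle used to verify the broker's server certificate (placeholder path)
ssl_ca_file = /etc/ssl/certs/ca-certificates.crt
```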
- Aug 03, 2020
likui authored
Deprecated: Option "cafile" from group "keystone_authtoken" is deprecated. Use option "cafile" from group "keystone_auth". Change-Id: Ia372b1b73afc0bea6a68dcd156cf963c01e3f3ab
- Jul 01, 2020
Bharat Kunwar authored
While all other clients should use internalURL, the Magnum client itself and the Keystone interface for trustee credentials should be publicly accessible (the upstream default when no config is specified), since instances need to be able to reach them.

Closes-Bug: #1885420
Change-Id: I74359cec7147a80db24eb4aa4156c35d31a026bf
- Jun 25, 2020
Bharat Kunwar authored
Magnum, Cinder and Octavia clients in Magnum now use an endpoint_type of internalURL by default, consistent with the other clients used by the conductor. Additionally, they also use the globally defined `openstack_region_name` for region_name.

Closes-Bug: #1885096
Change-Id: Ibec511013760cc4f681a2ec1b769b532be3daf2d
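A sketch of one of the affected client sections, assuming the stock RegionOne region:

```ini
[octavia_client]
# talk to the internal endpoint, like the other conductor clients
endpoint_type = internalURL
region_name = RegionOne
```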
Pierre Riteau authored
Change-Id: I7214ef38ea529f7585d7a0c75b8b0498ea4c58a2
Closes-Bug: #1885078
- Apr 03, 2020
Mark Goddard authored
The use of default(omit) is for module parameters, not templates. We define a default value for openstack_cacert, so it should never be undefined anyway.

Change-Id: Idfa73097ca168c76559dc4f3aa8bb30b7113ab28
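For context: `omit` is an Ansible module-parameter sentinel, so inside a config template it renders as a literal placeholder string instead of dropping the line. A sketch of the direct reference; the section name is illustrative:

```ini
[glance_client]
# reference the variable directly; it always has a defined default
ca_file = {{ openstack_cacert }}
```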
- Jan 13, 2020
James Kirsch authored
Include a reference to the globally configured Certificate Authority in all services. Services use the CA to verify HTTPS connections.

Change-Id: I38da931cdd7ff46cce1994763b5c713652b096cc
Partially-Implements: blueprint support-trusted-ca-certificate-file
- Oct 16, 2019
Radosław Piliszek authored
Introduce kolla_address filter.
Introduce put_address_in_context filter.
Add AF config to vars.

Address contexts:
- raw (default): <ADDR>
- memcache: inet6:[<ADDR>]
- url: [<ADDR>]

Other changes:
- globals.yml - mention just IP in comment
- prechecks/port_checks (api_intf) - kolla_address handles validation
- 3x interface conditional (swift configs: replication/storage)
- 2x interface variable definition with hostname (haproxy listens; api intf)
- 1x interface variable definition with hostname with bifrost exclusion (baremetal pre-install /etc/hosts; api intf)
- neutron's ml2 'overlay_ip_version' set to 6 for IPv6 on tunnel network
- basic multinode source CI job for IPv6
- prechecks for rabbitmq and qdrouterd use proper NSS database now
- MariaDB Galera Cluster WSREP SST mariabackup workaround (socat and IPv6)
- Ceph naming workaround in CI (TODO: probably needs documenting)
- RabbitMQ IPv6-only proto_dist
- Ceph ms switch to IPv6 mode
- Remove neutron-server ml2_type_vxlan/vxlan_group setting as it is not used (let's avoid any confusion) and could break setups without proper multicast routing if it started working (also IPv4-only)
- haproxy upgrade checks for slaves based on ipv6 addresses

TODO:
- ovs-dpdk grabs ipv4 network address (w/ prefix len / submask) - not supported, invalid by default because neutron_external has no address. No idea whether ovs-dpdk works at all atm.
- ml2 for xenapi - Xen is not supported too well. This would require working with XenAPI facts.
- rp_filter setting - this would require meddling with ip6tables (there is no sysctl param). By default nothing is dropped. Unlikely we really need it.
- ironic dnsmasq is configured IPv4-only - dnsmasq needs DHCPv6 options and testing in vivo.

KNOWN ISSUES (beyond us):
- One cannot use an IPv6 address to reference the image for docker like we currently do, see: https://github.com/moby/moby/issues/39033 (docker_registry; docker API 400 - invalid reference format). Workaround: use hostname/FQDN.
- RabbitMQ may fail to bind to IPv6 if the hostname also resolves to IPv4. This is due to old RabbitMQ versions available in images; IPv4 is preferred by default and may fail in the IPv6-only scenario. This should be no problem in real life as IPv6-only is indeed IPv6-only. Also, when new RabbitMQ (3.7.16/3.8+) makes it into images, this will no longer be relevant as we supply all the necessary config. See: https://github.com/rabbitmq/rabbitmq-server/pull/1982
- For reliable runs, at least Ansible 2.8 is required (2.8.5 confirmed to work well). Older Ansible versions are known to miss IPv6 addresses in interface facts. This may affect redeploys, reconfigures and upgrades which run after the VIP address is assigned. See: https://github.com/ansible/ansible/issues/63227
- Bifrost Train does not support IPv6 deployments. See: https://storyboard.openstack.org/#!/story/2006689

Change-Id: Ia34e6916ea4f99e9522cd2ddde03a0a4776f7e2c
Implements: blueprint ipv6-control-plane
Signed-off-by: Radosław Piliszek <radoslaw.piliszek@gmail.com>
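To make the address contexts concrete, a sketch of what the memcache context yields for a hypothetical IPv6 VIP fd00::1:

```ini
[cache]
# memcache context: inet6:[<ADDR>], plus the port
memcache_servers = inet6:[fd00::1]:11211
```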
- Aug 15, 2019
Rafael Weingärtner authored
After all of the discussions we had on "https://review.opendev.org/#/c/670626/2", I studied all projects that have an "oslo_messaging" section. Afterwards, I applied the same method that is already used in the "oslo_messaging" section in Nova, Cinder, and others. This guarantees that we have a consistent method to enable/disable notifications across projects based on components (e.g. Ceilometer) being enabled or disabled.

Here follows the list of components, and the respective changes:

* Aodh: The section is declared, but it is not used. Therefore, it will be removed in an upcoming PR.
* Congress: The section is declared, but it is not used. Therefore, it will be removed in an upcoming PR.
* Cinder: It was already properly configured.
* Octavia: The section is declared, but it is not used. Therefore, it will be removed in an upcoming PR.
* Heat: It was already using a similar scheme; I just modified it a little bit to be the same as we have in all other components.
* Ceilometer: Ceilometer publishes some messages in RabbitMQ. However, the default driver is "messagingv2", and not '' (empty) as defined in Oslo; these configurations are defined in ceilometer/publisher/messaging.py. Therefore, we do not need to do anything for the "oslo_messaging_notifications" section in Ceilometer.
* Tacker: It was already using a similar scheme; I just modified it a little bit to be the same as we have in all other components.
* Neutron: It was already properly configured.
* Nova: It was already properly configured. However, we found another issue with its configuration. Kolla-ansible does not configure nova notifications as it should. If 'searchlight' is not installed (enabled), the 'notification_format' should be 'unversioned'. The default is 'both', so nova will send a notification to the queue versioned_notifications, but that queue has no consumer when 'searchlight' is disabled. In our case, the queue got 511k messages. The huge amount of "stuck" messages made the RabbitMQ cluster unstable.
  https://bugzilla.redhat.com/show_bug.cgi?id=1478274
  https://bugs.launchpad.net/ceilometer/+bug/1665449
* Nova_hyperv: I added the same configurations as in the Nova project.
* Vitrage: It was already using a similar scheme; I just modified it a little bit to be the same as we have in all other components.
* Searchlight: I created a mechanism similar to what we have in AODH, Cinder, Nova, and others.
* Ironic: I created a mechanism similar to what we have in AODH, Cinder, Nova, and others.
* Glance: It was already properly configured.
* Trove: It was already using a similar scheme; I just modified it a little bit to be the same as we have in all other components.
* Blazar: It was already using a similar scheme; I just modified it a little bit to be the same as we have in all other components.
* Sahara: It was already using a similar scheme; I just modified it a little bit to be the same as we have in all other components.
* Watcher: I created a mechanism similar to what we have in AODH, Cinder, Nova, and others.
* Barbican: I created a mechanism similar to what we have in Cinder, Nova, and others. I also added a configuration to the 'keystone_notifications' section. Barbican needs its own queue to capture events from Keystone. Otherwise, it has an impact on Ceilometer and other systems that are connected to the "notifications" default queue.
* Keystone: Keystone is the system that triggered this work with the discussions that followed on https://review.opendev.org/#/c/670626/2 . After a long discussion, we agreed to apply the same approach that we have in Nova, Cinder and other systems in Keystone. That is what we did. Moreover, we introduce a new topic "barbican_notifications" when Barbican is enabled. We also removed the "variable" enable_cadf_notifications, as it is obsolete, and the default in Keystone is CADF.
* Mistral: It was hardcoded to "noop" as the driver. However, that does not seem a good practice. Instead, I applied the same standard of using the driver and pushing to the "notifications" queue if Ceilometer is enabled.
* Cyborg: I created a mechanism similar to what we have in AODH, Cinder, Nova, and others.
* Murano: It was already using a similar scheme; I just modified it a little bit to be the same as we have in all other components.
* Senlin: It was already using a similar scheme; I just modified it a little bit to be the same as we have in all other components.
* Manila: It was already using a similar scheme; I just modified it a little bit to be the same as we have in all other components.
* Zun: The section is declared, but it is not used. Therefore, it will be removed in an upcoming PR.
* Designate: It was already using a similar scheme; I just modified it a little bit to be the same as we have in all other components.
* Magnum: It was already using a similar scheme; I just modified it a little bit to be the same as we have in all other components.

Closes-Bug: #1838985
Change-Id: I88bdb004814f37c81c9a9c4e5e491fac69f6f202
Signed-off-by: Rafael Weingärtner <rafael@apache.org>
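The consistent pattern the message refers to, sketched as the rendered section when a consumer such as Ceilometer is enabled:

```ini
[oslo_messaging_notifications]
# messagingv2 when a consumer (e.g. Ceilometer) is enabled, noop otherwise
driver = messagingv2
topics = notifications
```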
- Mar 06, 2019
Jim Rollenhagen authored
We're duplicating code to build the keystone URLs in nearly every config, where we've already done it in group_vars. Replace the redundancy with a variable that does the same thing.

Change-Id: I207d77870e2535c1cdcbc5eaf704f0448ac85a7a
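A sketch of the deduplicated template pattern, assuming the group_vars variable is named keystone_internal_url:

```ini
[keystone_authtoken]
# previously assembled inline from protocol, VIP and port in every template
www_authenticate_uri = {{ keystone_internal_url }}
```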
- Aug 15, 2018
Murali Annamneni authored
To create a magnum cluster, it is required to specify 'default_docker_volume_type' with some default value (the default cinder volume type). It also enables users to select different cinder volume types for their volumes.

Change-Id: I50b4c436875e4daac48a14fc1e119136eb5fd844
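A sketch of the resulting option; the volume type name is a placeholder:

```ini
[cinder]
# must name an existing cinder volume type
default_docker_volume_type = lvmdriver-1
```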
- Aug 07, 2018
ZhongShengping authored
Option auth_uri from group keystone_authtoken is deprecated [1]. Use option www_authenticate_uri from group keystone_authtoken.

[1] https://review.openstack.org/#/c/508522/

Co-Authored-By: confi-surya <singh.surya64mnnit@gmail.com>
Change-Id: Ifd8527d404f1df807ae8196eac2b3849911ddc26
Closes-Bug: #1761907
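The rename in config terms, sketched with a placeholder endpoint:

```ini
[keystone_authtoken]
# replaces the deprecated auth_uri option; same value, new name
www_authenticate_uri = http://192.168.1.10:5000
```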
- Jun 01, 2018
Zhangfei Gao authored
Currently osprofiler only chooses elasticsearch, which is only supported on x86. On other platforms like aarch64 osprofiler cannot be used, since there is no elasticsearch package.

Enable osprofiler with enable_osprofiler: "yes", which chooses elasticsearch by default. Choose redis with enable_redis: "yes" & osprofiler_backend: "redis". On platforms without elasticsearch support like aarch64, set enable_elasticsearch: "no".

Change-Id: I68fe7a33e11d28684962fc5d0b3d326e90784d78
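A sketch of the [profiler] section this produces with the redis backend selected; the address and key are placeholders:

```ini
[profiler]
enabled = true
# backend chosen via osprofiler_backend; redis shown here
connection_string = redis://192.168.1.10:6379
hmac_keys = SECRET_KEY
```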
- Apr 18, 2018
Kevin TIBI authored
If SSL is enabled, the API of multiple services returns the wrong external URL, without the https prefix. Remove the condition for the deletion of the http header.

Change-Id: I4264e04d0d6b9a3e11ef7dd7add6c5e166cf9fb4
Closes-Bug: #1749155
Closes-Bug: #1717491
- Mar 09, 2018
ZhongShengping authored
Remove duplicated [oslo_policy] in magnum.conf.

Change-Id: I69c82e31d7041d7e8f9c31ba1bf54f0906f2a6dc
Closes-Bug: #1754593
- Jan 22, 2018
Dai Dang Van authored
- Heat
- Ironic
- Magnum
- Manila
- Mistral

This will copy only yaml or json policy files if they exist.

Change-Id: I1ab71e2758dc99dd6654d433ece79600f0c44ce8
Implements: blueprint support-custom-policy-yaml
Co-authored-By: Duong Ha-Quang <duonghq@vn.fujitsu.com>
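When a custom policy file is copied in, the service also needs to be pointed at it; a sketch of the corresponding option (path illustrative):

```ini
[oslo_policy]
# only set when a custom policy file has been supplied
policy_file = /etc/magnum/policy.yaml
```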
- Jan 12, 2018
Pierre Blanc authored
In several templates the variable topics is configured between single quotes. It is better to remove them to use the OpenStack default value.

Change-Id: I418c714240b38b2853a5c746203eac31588e841a
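In an INI file the quotes become part of the value, so the fix is simply (sketch):

```ini
[oslo_messaging_notifications]
# before: topics = 'notifications'  (quotes end up in the topic name)
topics = notifications
```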
- Nov 22, 2017
Andrew Smith authored
This commit separates the messaging rpc and notify transports in order to support separate and different oslo.messaging backends.

This patch:
* add rpc and notify variables
* update service role conf templates
* add example to globals.yaml
* add release note

Implements: blueprint hybrid-messaging
Change-Id: I34691c2895c8563f1f322f0850ecff98d11b5185
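A sketch of the split in a rendered config, with placeholder credentials; the two transport_url values may point at different backends:

```ini
[DEFAULT]
# RPC transport
transport_url = rabbit://openstack:PASSWORD@192.168.1.10:5672//

[oslo_messaging_notifications]
# notifications may use a separate backend, e.g. amqp:// via qdrouterd
transport_url = rabbit://openstack:PASSWORD@192.168.1.10:5672//
```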
- Jul 13, 2017
Bertrand Lallau authored
This enables cluster_user_trust customization, which is needed to get Kubernetes integration with Cinder and Neutron LBaaS.
https://github.com/openstack/magnum/blob/master/releasenotes/notes/CVE-2016-7404-f53e62a4a40e4d30.yaml#L5

Change-Id: Ib3243b110d2c592f3bf6467b086738335799c853
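The magnum option in question, as a sketch (note the linked CVE advisory before enabling it):

```ini
[trust]
# lets cluster VMs use a Keystone trust to reach OpenStack APIs;
# read the CVE-2016-7404 release note before enabling
cluster_user_trust = true
```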
- Jul 06, 2017
Bertrand Lallau authored
As described here:
https://github.com/openstack/keystone/blob/master/keystone/resource/core.py#L841
https://github.com/openstack/keystone/blob/master/keystone/conf/identity.py#L21

* default project domain name MUST be named 'Default'
* default project domain id MUST be named 'default'
* default project user name MUST be named 'Default'
* default project user id MUST be named 'default'

Change-Id: I610a0416647fdea31bb04889364da5395d8c8d74
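A sketch of how this shows up in a service's auth section, assuming it authenticates against the stock Keystone default domain:

```ini
[keystone_authtoken]
# domain *name* is capitalized 'Default'; domain *id* is lowercase 'default'
project_domain_name = Default
user_domain_name = Default
```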
- Jul 04, 2017
Bertrand Lallau authored
* add an additional option called 'endpoint_type' for each of the config groups related to OpenStack clients used by Magnum
* add Glance, Neutron and Nova config groups

Change-Id: Ie74979e05c4f5763674ba2fc5b9f07bd51ad9454
- Jun 02, 2017
Eduardo Gonzalez authored
OSProfiler allows users/devs to trace OpenStack requests.

Implements: blueprint enable-osprofiler
Co-Authored-By: Bertrand Lallau <bertrand.lallau@gmail.com>
Change-Id: I82ea85d726011ef6cbf99380f395452d6d7f8053
- May 23, 2017
Bertrand Lallau authored
The useful api_interface_address variable has been defined here:
https://github.com/openstack/kolla-ansible/blob/master/ansible/group_vars/all.yml#L57
In order to simplify the codebase we should use it as much as possible.

Change-Id: I18fec19bf69e05a22a4142a9cd1165eccd022455
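A sketch of the simplification in a config template; the section and the pre-existing long form are illustrative:

```ini
[api]
# before (illustrative): host = {{ hostvars[inventory_hostname]['ansible_' + api_interface]['ipv4']['address'] }}
host = {{ api_interface_address }}
```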
- Apr 12, 2017
Bertrand Lallau authored
Magnum can send RPC notifications to Ceilometer, as defined here:
https://github.com/openstack/ceilometer/blob/master/ceilometer/pipeline/data/event_definitions.yaml#L554
The oslo_messaging_notifications section MUST be managed in the magnum.conf file.

Change-Id: I6cafa6666bcb1fc15bf08ef049f0044e788eb98b
Closes-Bug: #1677655
- Mar 10, 2017
Bertrand Lallau authored
Change-Id: I8df89250d8430cf5abe3d0bd6387a3966591e435
Closes-Bug: #1671777
- Jan 24, 2017
Cornelio Hopmann authored
Change-Id: Icef8d2ec95629a78ba761778df2f92ef9494d166
Closes-Bug: #1657894