- Nov 25, 2024
songwenping authored
Fix bug: cyborg-api on the controller does not work at kolla_external_fqdn on port 6666 because there is no haproxy configuration for the Cyborg module. Closes-Bug: #2020088 Change-Id: I9d0b332420d97f4c5b02d50ce153fabd4bfd6b10 (cherry picked from commit 2adc1488)
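For context, kolla-ansible service definitions carry a haproxy block from which frontend/backend configuration is generated. A minimal sketch of such a block for cyborg-api follows; the keys and values are assumptions in the usual kolla-ansible style (only port 6666 comes from the commit message), not the literal content of this patch.

```yaml
cyborg_services:
  cyborg-api:
    group: cyborg-api
    enabled: true
    haproxy:
      cyborg_api:
        enabled: true
        mode: "http"
        external: false
        port: "6666"
      cyborg_api_external:
        enabled: true
        mode: "http"
        external: true
        port: "6666"
```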
- Jul 19, 2024
Michal Arbet authored
The Kolla project supports building images with user-defined prefixes. However, Kolla-ansible is unable to use those images for installation. This patch fixes that issue. Closes-Bug: #2073541 Change-Id: Ia8140b289aa76fcd584e0e72686e3786215c5a99
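As a hedged illustration of the point above, this is how an image reference might be composed so that a user-defined prefix carries through to the name kolla-ansible pulls; the variable names below are placeholders, not necessarily those introduced by the patch.

```yaml
image_name_prefix: "myorg-"   # placeholder for the user-defined prefix
cyborg_api_image: "{{ docker_registry }}/{{ docker_namespace }}/{{ image_name_prefix }}cyborg-api"
cyborg_api_image_full: "{{ cyborg_api_image }}:{{ openstack_tag }}"
```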
- Aug 29, 2023
Pierre Riteau authored
Cyborg endpoints were fixed with a /v2 suffix [1], but this was accidentally reverted by the single external frontend patch [2]. [1] https://review.opendev.org/c/openstack/kolla-ansible/+/883495 [2] https://review.opendev.org/c/openstack/kolla-ansible/+/823395 Related-Bug: #2020080 Change-Id: Ic2cfaf739a65fbb02577be08c081a2a88360d5f5
- Jun 28, 2023
Michal Nasiadka authored
Use case: exposing a single external HTTPS frontend and load balancing services using FQDNs. Support different ports for internal and external endpoints. Introduced the kolla_url filter to normalize URLs like:
- https://magnum.external:443/v1
- http://magnum.external:80/v1
Change-Id: I9fb03fe1cebce5c7198d523e015280c69f139cd0 Co-Authored-By: Jakub Darmach <jakub@stackhpc.com>
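A hedged usage sketch of the filter; the argument list shown here is an assumption based on the examples in the commit message, and the expected rendering assumes the filter drops default ports.

```yaml
# Assumed signature: kolla_url(fqdn, protocol, port, path)
magnum_public_endpoint: "{{ 'magnum.external' | kolla_url('https', 443, '/v1') }}"
# expected rendering: https://magnum.external/v1 (default HTTPS port omitted)
```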
- Jun 07, 2023
Maksim Malchuk authored
According to the documentation [1], the type of the Cyborg service should be 'accelerator' and the description 'Acceleration Service'. Also, this change fixes incorrect endpoint URLs and does not configure an admin endpoint [2], because the documentation [1] has not been updated yet. 1. https://docs.openstack.org/cyborg/latest/install/common.html 2. Icf3bf08deab2c445361f0a0124d87ad8b0e4e9d9 Closes-Bug: #2020080 Change-Id: I002db50cbad5a90e479498e605bdeab343e129c7 Signed-off-by: Maksim Malchuk <maksim.malchuk@gmail.com>
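A hedged sketch of the kind of Keystone registration data a kolla-ansible role keeps; the variable name and layout are assumptions, while the 'accelerator' type and 'Acceleration Service' description come from this commit message and the /v2 suffix from the Aug 29, 2023 entry above.

```yaml
cyborg_ks_services:
  - name: "cyborg"
    type: "accelerator"
    description: "Acceleration Service"
    endpoints:
      - {'interface': 'internal', 'url': '{{ cyborg_internal_base_endpoint }}/v2'}
      - {'interface': 'public', 'url': '{{ cyborg_public_base_endpoint }}/v2'}
```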
- Dec 21, 2022
Matt Crees authored
Regularly, we experience issues in Kolla Ansible deployments because we use wrong options in OpenStack configuration files. This is because OpenStack services ignore unknown options. We also need to keep on top of deprecated options that may be removed in the future. Integrating oslo-config-validator into Kolla Ansible will greatly help. Adds a shared role to run oslo-config-validator on each service. Takes into account that services have multiple containers, and these may also use multiple config files. Service roles are extended to use this shared role. Executed with the new command ``kolla-ansible validate-config``. Change-Id: Ic10b410fc115646d96d2ce39d9618e7c46cb3fbc
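A hedged sketch of how a service role might hand its containers over to the shared validation role; the role and variable names below are placeholders, only the ``kolla-ansible validate-config`` entry point is stated in the commit message.

```yaml
- name: Validate cyborg config files with oslo-config-validator
  include_role:
    name: service-config-validate   # placeholder name for the shared role
  vars:
    service_config_validate_services: "{{ cyborg_services }}"   # one entry per container
```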
- Sep 21, 2022
Michal Nasiadka authored
Mainly fixes related to Jinja spacing and jinja[invalid]. Change-Id: I6f52f2b0c1ef76de626657d79486d31e0f47f384
- Aug 09, 2022
Michal Arbet authored
Depends-On: https://review.opendev.org/c/openstack/kolla/+/769385 Depends-On: https://review.opendev.org/c/openstack/kolla/+/765781 Change-Id: I3c4182a6556dafd2c936eaab109a068674058fca
- May 23, 2022
Radosław Piliszek authored
Change-Id: Ib4b15ed4feac82d8492b1c0f0238a752eac668e6
- Apr 20, 2022
Marcin Juszkiewicz authored
We now have only one value for install_type, so it is removed from image names. Change-Id: I8bf95fd7aa9dd26b80d618ca0fcb097003b4cb0a
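Illustrative only: with a single install type, image references no longer carry an install-type segment (the names below are examples, not the literal change).

```yaml
# before (example): "kolla/centos-source-cyborg-api:{{ openstack_tag }}"
cyborg_api_image_full: "kolla/cyborg-api:{{ openstack_tag }}"
```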
- Dec 31, 2021
Pierre Riteau authored
Role vars have a higher precedence than role defaults. This allows importing default vars from another role via vars_files without overriding project_name (see the related bug for details). Change-Id: I3d919736e53d6f3e1a70d1267cf42c8d2c0ad221 Related-Bug: #1951785
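A minimal sketch of the precedence point: Ansible evaluates role vars (roles/&lt;role&gt;/vars/main.yml) at higher precedence than role defaults, so a project_name kept in vars/ survives even when another role's defaults are loaded via vars_files. The paths below are examples.

```yaml
- hosts: control
  vars_files:
    - roles/nova/defaults/main.yml   # another role's defaults pulled in for reuse
  roles:
    - cyborg                         # project_name from roles/cyborg/vars/main.yml still wins
```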
- Dec 21, 2021
Dr. Jens Harbott authored
The admin interface for endpoints never had any real use; the functionality was the same as for the public or internal endpoints, except for Keystone. Even for Keystone with API v3 it would no longer really be needed, but it is still required by some libraries that cannot be changed in order to stay backwards compatible. Signed-off-by: Dr. Jens Harbott <harbott@osism.tech> Change-Id: Icf3bf08deab2c445361f0a0124d87ad8b0e4e9d9
- Jun 23, 2021
Mark Goddard authored
By default, Ansible injects a variable for every fact, prefixed with ansible_. This can result in a large number of variables for each host, which at scale can incur a performance penalty. Ansible provides a configuration option [0] that can be set to False to prevent this injection of facts. In this case, facts should be referenced via ansible_facts.<fact>. This change updates all references to Ansible facts within Kolla Ansible from using individual fact variables to using the items in the ansible_facts dictionary. This allows users to disable fact variable injection in their Ansible configuration, which may provide some performance improvement. This change disables fact variable injection in the ansible configuration used in CI, to catch any attempts to use the injected variables. [0] https://docs.ansible.com/ansible/latest/reference_appendices/config.html#inject-facts-as-vars Change-Id: I7e9d5c9b8b9164d4aee3abb4e37c8f28d98ff5d1 Partially-Implements: blueprint performance-improvements
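A small sketch of the fact-reference style this change moves to; the debug task is illustrative, while the ansible_facts dictionary and the inject_facts_as_vars setting are standard Ansible.

```yaml
# before: "{{ ansible_os_family }}"        (injected fact variable)
# after:  "{{ ansible_facts.os_family }}"  (works even with inject_facts_as_vars = False
#                                           in the [defaults] section of ansible.cfg)
- name: Show the host OS family via the ansible_facts dictionary
  debug:
    msg: "{{ ansible_facts.os_family }}"
```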
- Mar 03, 2021
wuchunyang authored
This change enables the use of Docker healthchecks for cyborg services. Implements: blueprint container-health-check Change-Id: I5326b142eaa826f97c32498cd2a9a0cba65be698
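A hedged sketch of what a healthcheck definition for cyborg-api could look like; the variable name, keys and test command follow the common kolla-ansible pattern but are assumptions here, not the literal patch (only port 6666 appears elsewhere in this log).

```yaml
cyborg_api_healthcheck:
  interval: 30        # seconds, example values
  retries: 3
  start_period: 5
  test: ["CMD-SHELL", "healthcheck_curl http://localhost:6666"]
  timeout: 30
```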
- Aug 10, 2020
Mark Goddard authored
Previously we mounted /etc/timezone if the kolla_base_distro is debian or ubuntu. This would fail prechecks if debian or ubuntu images were deployed on CentOS. While this is not a supported combination, for correctness we should fix the condition to reference the host OS rather than the container OS, since that is where the /etc/timezone file is located. Change-Id: Ifc252ae793e6974356fcdca810b373f362d24ba5 Closes-Bug: #1882553
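A hedged sketch of the corrected behaviour: the mount is keyed off the host OS facts rather than kolla_base_distro, which describes the container OS. The variable name is an assumption.

```yaml
cyborg_api_default_volumes:
  - "/etc/localtime:/etc/localtime:ro"
  - "{{ '/etc/timezone:/etc/timezone:ro' if ansible_facts.os_family == 'Debian' else '' }}"
```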
- Apr 22, 2020
ya.wang authored
Add privileged capability to cyborg agent. Change-Id: Id237df1acb1b44c4e6442b39838058be1a95fcc6 Closes-bug: #1873715
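Illustrative only: the privileged flag in the usual kolla-ansible service-dict style; the keys shown are assumptions.

```yaml
cyborg-agent:
  container_name: cyborg_agent
  group: cyborg-agent
  privileged: true
```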
- Apr 09, 2020
Dincer Celik authored
Some services look for /etc/timezone on Debian/Ubuntu, so we should introduce it to the containers. In addition, added prechecks for /etc/localtime and /etc/timezone. Closes-Bug: #1821592 Change-Id: I9fef14643d1bcc7eee9547eb87fa1fb436d8a6b3
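A hedged sketch of the kind of precheck described above; the task wording is illustrative, not the literal patch.

```yaml
- name: Check if /etc/timezone exists
  stat:
    path: /etc/timezone
  register: etc_timezone

- name: Fail if /etc/timezone is missing on a Debian-family host
  fail:
    msg: "/etc/timezone is required on Debian/Ubuntu hosts"
  when:
    - ansible_facts.os_family == 'Debian'
    - not etc_timezone.stat.exists
```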
- Jan 10, 2020
Mark Goddard authored
For the CentOS 7 to 8 transition, we will have a period where both CentOS 7 and 8 images are available. We differentiate these images via a tag - the CentOS 8 images will have a tag of train-centos8 (or master-centos8 temporarily). To achieve this, and maintain backwards compatibility for the openstack_release variable, we introduce a new 'openstack_tag' variable. This variable is based on openstack_release, but has a suffix of 'openstack_tag_suffix', which is empty except on CentOS 8 where it has a value of '-centos8'. Change-Id: I12ce4661afb3c255136cdc1aabe7cbd25560d625 Partially-Implements: blueprint centos-rhel-8
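A sketch of the variables described above; the names come from the commit message, the default values shown are illustrative.

```yaml
openstack_release: "train"
openstack_tag_suffix: ""        # "-centos8" on CentOS 8
openstack_tag: "{{ openstack_release }}{{ openstack_tag_suffix }}"
```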
- Oct 16, 2019
Radosław Piliszek authored
Introduce kolla_address filter. Introduce put_address_in_context filter. Add AF config to vars.

Address contexts:
- raw (default): <ADDR>
- memcache: inet6:[<ADDR>]
- url: [<ADDR>]

Other changes:
- globals.yml - mention just IP in comment
- prechecks/port_checks (api_intf) - kolla_address handles validation
- 3x interface conditional (swift configs: replication/storage)
- 2x interface variable definition with hostname (haproxy listens; api intf)
- 1x interface variable definition with hostname with bifrost exclusion (baremetal pre-install /etc/hosts; api intf)
- neutron's ml2 'overlay_ip_version' set to 6 for IPv6 on tunnel network
- basic multinode source CI job for IPv6
- prechecks for rabbitmq and qdrouterd use proper NSS database now
- MariaDB Galera Cluster WSREP SST mariabackup workaround (socat and IPv6)
- Ceph naming workaround in CI (TODO: probably needs documenting)
- RabbitMQ IPv6-only proto_dist
- Ceph ms switch to IPv6 mode
- Remove neutron-server ml2_type_vxlan/vxlan_group setting as it is not used (let's avoid any confusion) and could break setups without proper multicast routing if it started working (also IPv4-only)
- haproxy upgrade checks for slaves based on ipv6 addresses

TODO:
- ovs-dpdk grabs ipv4 network address (w/ prefix len / submask): not supported, invalid by default because neutron_external has no address. No idea whether ovs-dpdk works at all atm.
- ml2 for xenapi: Xen is not supported too well. This would require working with XenAPI facts.
- rp_filter setting: this would require meddling with ip6tables (there is no sysctl param). By default nothing is dropped. Unlikely we really need it.
- ironic dnsmasq is configured IPv4-only: dnsmasq needs DHCPv6 options and testing in vivo.

KNOWN ISSUES (beyond us):
- One cannot use an IPv6 address to reference the image for docker like we currently do, see: https://github.com/moby/moby/issues/39033 (docker_registry; docker API 400 - invalid reference format). Workaround: use hostname/FQDN.
- RabbitMQ may fail to bind to IPv6 if the hostname resolves also to IPv4. This is due to old RabbitMQ versions available in images. IPv4 is preferred by default and may fail in the IPv6-only scenario. This should be no problem in real life as IPv6-only is indeed IPv6-only. Also, when new RabbitMQ (3.7.16/3.8+) makes it into images, this will no longer be relevant as we supply all the necessary config. See: https://github.com/rabbitmq/rabbitmq-server/pull/1982
- For reliable runs, at least Ansible 2.8 is required (2.8.5 confirmed to work well). Older Ansible versions are known to miss IPv6 addresses in interface facts. This may affect redeploys, reconfigures and upgrades which run after the VIP address is assigned. See: https://github.com/ansible/ansible/issues/63227
- Bifrost Train does not support IPv6 deployments. See: https://storyboard.openstack.org/#!/story/2006689

Change-Id: Ia34e6916ea4f99e9522cd2ddde03a0a4776f7e2c Implements: blueprint ipv6-control-plane Signed-off-by: Radosław Piliszek <radoslaw.piliszek@gmail.com>
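A hedged sketch of the two filters named above; the argument forms are assumptions based on the commit message, not verified signatures.

```yaml
api_address: "{{ 'api' | kolla_address }}"                                    # address on the API interface, AF-aware
memcached_address: "{{ api_address | put_address_in_context('memcache') }}"   # inet6:[<ADDR>] for IPv6
url_address: "{{ api_address | put_address_in_context('url') }}"              # [<ADDR>] for IPv6
```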
- Sep 17, 2019
Mark Goddard authored
Use upstream Ansible modules for registration of services, endpoints, users, projects, roles, and role grants. Change-Id: I7c9138d422cc91c177fd8992347176bb54156b5a
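A hedged sketch of registering the Cyborg service with the upstream modules referred to above (module and parameter names as in the Ansible 2.x OpenStack modules; the auth variable is a placeholder).

```yaml
- name: Register the cyborg service in Keystone
  os_keystone_service:
    name: cyborg
    service_type: accelerator
    description: Acceleration Service
    auth: "{{ openstack_auth }}"   # placeholder for the credentials dict
```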
- Aug 15, 2019
Rafael Weingärtner authored
After all of the discussions we had on "https://review.opendev.org/#/c/670626/2", I studied all projects that have an "oslo_messaging" section. Afterwards, I applied the same method that is already used in the "oslo_messaging" section in Nova, Cinder, and others. This guarantees that we have a consistent method to enable/disable notifications across projects based on components (e.g. Ceilometer) being enabled or disabled. Here follows the list of components and the respective changes:
- Aodh: The section is declared, but it is not used. Therefore, it will be removed in an upcoming PR.
- Congress: The section is declared, but it is not used. Therefore, it will be removed in an upcoming PR.
- Cinder: It was already properly configured.
- Octavia: The section is declared, but it is not used. Therefore, it will be removed in an upcoming PR.
- Heat: It was already using a similar scheme; I just modified it a little bit to be the same as we have in all other components.
- Ceilometer: Ceilometer publishes some messages in the RabbitMQ. However, the default driver is "messagingv2", and not '' (empty) as defined in Oslo; these configurations are defined in ceilometer/publisher/messaging.py. Therefore, we do not need to do anything for the "oslo_messaging_notifications" section in Ceilometer.
- Tacker: It was already using a similar scheme; I just modified it a little bit to be the same as we have in all other components.
- Neutron: It was already properly configured.
- Nova: It was already properly configured. However, we found another issue with its configuration. Kolla-ansible does not configure nova notifications as it should. If 'searchlight' is not installed (enabled) the 'notification_format' should be 'unversioned'. The default is 'both'; so nova will send a notification to the queue versioned_notifications, but that queue has no consumer when 'searchlight' is disabled. In our case, the queue got 511k messages. The huge amount of "stuck" messages made the RabbitMQ cluster unstable. https://bugzilla.redhat.com/show_bug.cgi?id=1478274 https://bugs.launchpad.net/ceilometer/+bug/1665449
- Nova_hyperv: I added the same configurations as in the Nova project.
- Vitrage: It was already using a similar scheme; I just modified it a little bit to be the same as we have in all other components.
- Searchlight: I created a mechanism similar to what we have in Aodh, Cinder, Nova, and others.
- Ironic: I created a mechanism similar to what we have in Aodh, Cinder, Nova, and others.
- Glance: It was already properly configured.
- Trove: It was already using a similar scheme; I just modified it a little bit to be the same as we have in all other components.
- Blazar: It was already using a similar scheme; I just modified it a little bit to be the same as we have in all other components.
- Sahara: It was already using a similar scheme; I just modified it a little bit to be the same as we have in all other components.
- Watcher: I created a mechanism similar to what we have in Aodh, Cinder, Nova, and others.
- Barbican: I created a mechanism similar to what we have in Cinder, Nova, and others. I also added a configuration to the 'keystone_notifications' section. Barbican needs its own queue to capture events from Keystone. Otherwise, it has an impact on Ceilometer and other systems that are connected to the "notifications" default queue.
- Keystone: Keystone is the system that triggered this work with the discussions that followed on https://review.opendev.org/#/c/670626/2 . After a long discussion, we agreed to apply the same approach that we have in Nova, Cinder and other systems in Keystone. That is what we did. Moreover, we introduce a new topic "barbican_notifications" when Barbican is enabled. We also removed the variable enable_cadf_notifications, as it is obsolete, and the default in Keystone is CADF.
- Mistral: It was hardcoded "noop" as the driver. However, that does not seem a good practice. Instead, I applied the same standard of using the driver and pushing to the "notifications" queue if Ceilometer is enabled.
- Cyborg: I created a mechanism similar to what we have in Aodh, Cinder, Nova, and others.
- Murano: It was already using a similar scheme; I just modified it a little bit to be the same as we have in all other components.
- Senlin: It was already using a similar scheme; I just modified it a little bit to be the same as we have in all other components.
- Manila: It was already using a similar scheme; I just modified it a little bit to be the same as we have in all other components.
- Zun: The section is declared, but it is not used. Therefore, it will be removed in an upcoming PR.
- Designate: It was already using a similar scheme; I just modified it a little bit to be the same as we have in all other components.
- Magnum: It was already using a similar scheme; I just modified it a little bit to be the same as we have in all other components.

Closes-Bug: #1838985 Change-Id: I88bdb004814f37c81c9a9c4e5e491fac69f6f202 Signed-off-by: Rafael Weingärtner <rafael@apache.org>
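A hedged sketch of the enable/disable pattern described above, where notifications are turned on only when a consumer such as Ceilometer is enabled; the variable names follow the common kolla-ansible style but are assumptions here.

```yaml
cyborg_notification_topics:
  - name: notifications
    enabled: "{{ enable_ceilometer | bool }}"

cyborg_enabled_notification_topics: "{{ cyborg_notification_topics | selectattr('enabled', 'equalto', true) | list }}"
```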
- May 17, 2019
binhong.hua authored
When integrating a third-party component into OpenStack with kolla-ansible, it may be necessary to mount extra volumes into a container. Change-Id: I69108209320edad4c4ffa37dabadff62d7340939 Implements: blueprint support-extra-volumes
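A hedged sketch of the extra-volumes hook; the variable names follow the usual kolla-ansible pattern but are assumptions here.

```yaml
cyborg_extra_volumes: "{{ default_extra_volumes }}"   # assumed global default, empty unless overridden
cyborg_api_default_volumes:
  - "kolla_logs:/var/log/kolla/"
cyborg_api_volumes: "{{ cyborg_api_default_volumes + cyborg_extra_volumes }}"
```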
- Mar 08, 2019
Bai Yongjun authored
Kolla-ansible does not have a Cyborg role, so this adds one. Implements: blueprint add-cyborg-to-kolla-ansible Depends-On: I497e67e3a754fccfd2ef5a82f13ccfaf890a6fcd Change-Id: I6f7ae86f855c5c64697607356d0ff3161f91b239