- Dec 09, 2019
Doug Szumski authored
This allows users to supply an Elasticsearch Curator actions file to manage log retention [1]. Curator then runs on a cron job, which defaults to every day. A default curator actions file is provided, which can be customised by the end user if required. [1] https://www.elastic.co/guide/en/elasticsearch/client/curator/current/actionfile.html Change-Id: Ide9baea9190ae849e61b9d8b6cff3305bdcdd534
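For illustration, a minimal Curator actions file might look like the sketch below; the "flog-" index prefix and the 30-day retention are assumptions, not defaults shipped by this change.

```yaml
# Sketch of an Elasticsearch Curator actions file.
# Index prefix and retention period are illustrative.
actions:
  1:
    action: delete_indices
    description: Delete log indices older than 30 days
    options:
      ignore_empty_list: true
    filters:
      - filtertype: pattern
        kind: prefix
        value: flog-
      - filtertype: age
        source: name
        direction: older
        timestring: '%Y.%m.%d'
        unit: days
        unit_count: 30
```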
- Oct 25, 2019
Jan Vondra authored
Adds the rabbitmq_server_additional_erl_args variable, which is appended to the RABBITMQ_SERVER_ADDITIONAL_ERL_ARGS environment variable in the RabbitMQ server startup script. This can be used to configure the Erlang schedulers. Docs attached. Change-Id: Id683c8cc6dac61354ffd94f3b460335b42136ba2 Co-authored-by: Radosław Piliszek <radoslaw.piliszek@gmail.com> Related-bug: #1846467
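As a sketch, the variable could be set in globals.yml like this; the "+S 2:2" value (two Erlang schedulers, two online) is an illustrative assumption, not a recommendation.

```yaml
# globals.yml (sketch): pass extra Erlang VM arguments to the
# RabbitMQ server; "+S Schedulers:SchedulersOnline" tunes scheduling.
rabbitmq_server_additional_erl_args: "+S 2:2"
```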
- Oct 21, 2019
Eduardo Gonzalez authored
Tacker requires config for storing CSAR VNF packages. This patch adds it, as well as relevant docs. Only one Tacker Conductor is deployed by default due to the lack of a shared filesystem. Change-Id: Iad391f35105e79fa9319502256528990915df9b7 Co-authored-by: Radosław Piliszek <radoslaw.piliszek@gmail.com> Closes-Bug: #1845142
- Oct 20, 2019
Radosław Piliszek authored
This also enables Placement when Zun is enabled like Kolla Ansible already does with Nova. Change-Id: Id2a09f702e8503b49d2b9e73e06b2ce9f4d168a9 Closes-bug: #1840573
- Oct 17, 2019
Mark Goddard authored
Add documentation about deploying nova with multiple cells. Change-Id: I89ee276917e5b9170746e07b7f644c7593b03da1 Depends-On: https://review.opendev.org/#/c/675659/ Related: blueprint bp/support-nova-cells
- Oct 16, 2019
Radosław Piliszek authored
Introduce kolla_address filter. Introduce put_address_in_context filter. Add AF config to vars.

Address contexts:
- raw (default): <ADDR>
- memcache: inet6:[<ADDR>]
- url: [<ADDR>]

Other changes:
- globals.yml - mention just IP in comment
- prechecks/port_checks (api_intf) - kolla_address handles validation
- 3x interface conditional (swift configs: replication/storage)
- 2x interface variable definition with hostname (haproxy listens; api intf)
- 1x interface variable definition with hostname with bifrost exclusion (baremetal pre-install /etc/hosts; api intf)
- neutron's ml2 'overlay_ip_version' set to 6 for IPv6 on tunnel network
- basic multinode source CI job for IPv6
- prechecks for rabbitmq and qdrouterd use proper NSS database now
- MariaDB Galera Cluster WSREP SST mariabackup workaround (socat and IPv6)
- Ceph naming workaround in CI (TODO: probably needs documenting)
- RabbitMQ IPv6-only proto_dist
- Ceph ms switch to IPv6 mode
- Remove neutron-server ml2_type_vxlan/vxlan_group setting as it is not used (let's avoid any confusion) and could break setups without proper multicast routing if it started working (also IPv4-only)
- haproxy upgrade checks for slaves based on ipv6 addresses

TODO:
- ovs-dpdk grabs ipv4 network address (w/ prefix len / submask) - not supported, invalid by default because neutron_external has no address. No idea whether ovs-dpdk works at all atm.
- ml2 for xenapi - Xen is not supported too well. This would require working with XenAPI facts.
- rp_filter setting - this would require meddling with ip6tables (there is no sysctl param). By default nothing is dropped. Unlikely we really need it.
- ironic dnsmasq is configured IPv4-only - dnsmasq needs DHCPv6 options and testing in vivo.

KNOWN ISSUES (beyond us):
- One cannot use an IPv6 address to reference the image for docker like we currently do, see: https://github.com/moby/moby/issues/39033 (docker_registry; docker API 400 - invalid reference format). Workaround: use hostname/FQDN.
- RabbitMQ may fail to bind to IPv6 if the hostname resolves also to IPv4. This is due to old RabbitMQ versions available in images. IPv4 is preferred by default and may fail in the IPv6-only scenario. This should be no problem in real life as IPv6-only is indeed IPv6-only. Also, when new RabbitMQ (3.7.16/3.8+) makes it into images, this will no longer be relevant as we supply all the necessary config. See: https://github.com/rabbitmq/rabbitmq-server/pull/1982
- For reliable runs, at least Ansible 2.8 is required (2.8.5 confirmed to work well). Older Ansible versions are known to miss IPv6 addresses in interface facts. This may affect redeploys, reconfigures and upgrades which run after the VIP address is assigned. See: https://github.com/ansible/ansible/issues/63227
- Bifrost Train does not support IPv6 deployments. See: https://storyboard.openstack.org/#!/story/2006689

Change-Id: Ia34e6916ea4f99e9522cd2ddde03a0a4776f7e2c Implements: blueprint ipv6-control-plane Signed-off-by: Radosław Piliszek <radoslaw.piliszek@gmail.com>
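A minimal globals.yml sketch for an IPv6 control plane follows; the variable name is an assumption based on the address-family config this change mentions, not confirmed by the message above.

```yaml
# globals.yml (sketch): select the address family for control
# plane traffic. Variable name assumed, not confirmed above.
network_address_family: "ipv6"
```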
- Oct 08, 2019
Mark Goddard authored
Adds a top-level guide for Nova, with links off to the various virt driver guides. Generalises the libvirt TLS guide into a libvirt guide, and adds info on hardware virtualisation and qemu vs. kvm. Adds information on configuring consoles. Change-Id: I36beaaee313bdbc4bcf8cc15c41dda245a5a81ba
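As a hedged example, virt driver and console selection might look like this in globals.yml; both variable names are assumptions based on existing Kolla Ansible conventions, not taken from this change.

```yaml
# globals.yml (sketch): variable names assumed.
# qemu suits hosts without hardware virtualisation support.
nova_compute_virt_type: "kvm"
nova_console: "novnc"
```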
- Sep 30, 2019
Joseph M authored
Add coordination backend configuration to designate.conf which is required in multinode environments. Fixes warning from designate: WARNING designate.coordination [-] No coordination backend configured, assuming we are the only worker. Please configure a coordination backend Change-Id: I23c4d2de7e3f9368795c423000a4f9a6c3a431e2 Closes-Bug: #1843842 Related-Bug: #1840070
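Assuming Redis serves as the coordination backend (an assumption; this change does not name one), a globals.yml sketch could be:

```yaml
# globals.yml (sketch): enable a coordination backend for
# designate workers; Redis is an assumed choice.
enable_redis: "yes"
```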
- Sep 26, 2019
Michal Nasiadka authored
Add Neutron reference docs, especially a note around using OVS native firewall driver on recent (4.3+) kernels [1]. [1]: https://docs.openstack.org/neutron/latest/admin/config-ovsfwdriver.html Change-Id: I6994e364c116234b46f5d5e9f0a4666b83f86375 Closes-Bug: #1653987
- Sep 24, 2019
Dincer Celik authored
Change-Id: I8bb39eaf8a4239c37fcbf91b55ec8003542e2506
Alexis Deberg authored
The current tasks use a hardcoded list that deploys only the required files. When using multiple custom policies, additional object-*.builder and object*.gz files need to be deployed as well. This adds a new default-empty variable that can be overridden when needed. Change-Id: I29c8e349c7cc83e3a2e01ff702d235a0cd97340e Closes-Bug: #1844752
- Sep 19, 2019
Kris Lindgren authored
To securely support live migration between compute nodes we should enable TLS, with certificate auth, instead of TCP with no auth support. Implements: blueprint libvirt-tls Change-Id: I22ea6233933c840b853fdcc8e03400b2bf577271
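A globals.yml sketch for enabling this, assuming the blueprint's 'libvirt_tls' switch:

```yaml
# globals.yml (sketch): variable name assumed from the
# libvirt-tls blueprint; enables TLS with cert auth for libvirt.
libvirt_tls: "yes"
```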
- Sep 18, 2019
Mark Goddard authored
We have agreed to remove support for Oracle Linux. http://lists.openstack.org/pipermail/openstack-discuss/2019-June/006896.html Change-Id: If11b4ff37af936a0cfd34443e8babb952307882b
- Sep 12, 2019
Scott Solkhon authored
This commit adds the necessary configuration to the Swift account, container and object configuration files to enable the Swift recon CLI. In order to give the object server on each Swift host access to the recon files, a Docker volume is mounted into each container that generates them. The volume is then mounted read-only into the object server container. Note that multiple containers append to the same file. This should not be a problem, since Swift uses a lock when appending. Change-Id: I343d8f45a78ebc3c11ed0c68fe8bec24f9ea7929 Co-authored-by: Doug Szumski <doug@stackhpc.com>
- Sep 10, 2019
Hongbin Lu authored
After the integration with placement [1], we need to configure how zun-compute works with nova-compute:
* If zun-compute and nova-compute run on the same compute node, we need to set 'host_shared_with_nova' to true so that Zun will use the resource provider (compute node) created by nova. In this mode, containers and VMs can claim allocations against the same resource provider.
* If zun-compute runs on a node without nova-compute, no extra configuration is needed. By default, each zun-compute creates a resource provider in placement to represent the compute node it manages.
[1] https://blueprints.launchpad.net/zun/+spec/use-placement-resource-management Change-Id: I2d85911c4504e541d2994ce3d48e2fbb1090b813
- Sep 05, 2019
Marcin Juszkiewicz authored
Instead of changing the Docker daemon command line, let's change the Docker configuration in the /etc/docker/daemon.json file, as it should be. Custom Docker options can be set with the 'docker_custom_config' variable. The old 'docker_custom_option' is still present but should be avoided. Co-Authored-By: Radosław Piliszek <radoslaw.piliszek@gmail.com> Change-Id: I1215e04ec15b01c0b43bac8c0e81293f6724f278
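For example, a globals.yml entry such as the sketch below would be rendered into /etc/docker/daemon.json; the option values are illustrative.

```yaml
# globals.yml (sketch): rendered to /etc/docker/daemon.json.
docker_custom_config:
  debug: false
  log-opts:
    max-file: "5"
    max-size: "50m"
```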
- Aug 23, 2019
Michal Nasiadka authored
ceph-ansible by default generates what we call nova.keyring as openstack.keyring - adding a note to avoid confusing users. Change-Id: I3992a037ab8e7947e35521b5c721a89bd954fdcd
- Aug 16, 2019
Radosław Piliszek authored
Change-Id: Icf3f01516185afb7b9f642407b06a0204c36ecbe Closes-Bug: #1840315 Signed-off-by: Radosław Piliszek <radoslaw.piliszek@gmail.com>
- Aug 15, 2019
Kien Nguyen authored
Masakari provides an Instances High Availability Service for OpenStack clouds by automatically recovering failed Instances. Depends-On: https://review.openstack.org/#/c/615469/ Change-Id: I0b3457232ee86576022cff64eb2e227ff9bbf0aa Implements: blueprint ansible-masakari Co-Authored-By: Gaëtan Trellu <gaetan.trellu@incloudus.com>
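A hedged globals.yml sketch, assuming the service follows the usual 'enable_*' convention:

```yaml
# globals.yml (sketch): variable name assumed from the
# standard enable_* pattern.
enable_masakari: "yes"
```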
- Aug 14, 2019
Scott Solkhon authored
This feature is disabled by default, and can be enabled by setting 'enable_swift_s3api' to 'true' in globals.yml. Two middlewares are required for Swift S3 - s3api and s3token. Additionally, we need to configure the authtoken middleware to delay auth decisions to give s3token a chance to authorise requests using EC2 credentials. Change-Id: Ib8e8e3a1c2ab383100f3c60ec58066e588d3b4db
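Enabling it in globals.yml is then a one-liner, per the commit message:

```yaml
# globals.yml: switch on the s3api and s3token middlewares.
enable_swift_s3api: "true"
```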
- Aug 06, 2019
Mark Goddard authored
Docker is now always installed using the community edition (CE) packages. Change-Id: I8c3fe44fd9d2da99b5bb1c0ec3472d7e1b5fb295
- Jul 16, 2019
Michal Nasiadka authored
* Ubuntu ships with nfs-ganesha 2.6.0, which requires an rpcbind UDP test on startup (this was fixed later)
* Add the rpcbind package to be installed by kolla-ansible bootstrap when ceph_nfs is enabled
* Update the Ceph deployment docs with a note
Change-Id: Ic19264191a0ed418fa959fdc122cef543446fbe5
- Jul 15, 2019
chenxing authored
Change-Id: I6974858a0a44d85a065502ed7b3a8e2797be7228 Closes-Bug: #1832979
- Jul 10, 2019
Raimund Hook authored
Updated the docs to refer to the openstack client, rather than the (old) neutron client. TrivialFix Change-Id: I82011175f7206f52570a0f7d1c6863ad8fa08fd0
chenxing authored
The "backup_driver" option should be configured to cinder.backup.drivers.ceph.CephBackupDriver instead of cinder.backup.drivers.ceph. Change-Id: I22457023c6ad76b508bcbe05e37517c18f1ffc81 Closes-Bug: #1832878
- Jul 04, 2019
Mark Goddard authored
There are now several good tools for deploying Ceph, including Ceph Ansible and ceph-deploy. Maintaining our own Ceph deployment is a significant maintenance burden, and we should focus on our core mission to deploy OpenStack. Given that this is currently a significant part of Kolla Ansible, we will need a long deprecation period and a migration path to another tool. Change-Id: Ic603c85c04d8794580a19f9efaa7a8589565f4f6 Partially-Implements: blueprint remove-ceph
- Jun 24, 2019
chenxing authored
The Hitachi NAS Platform iSCSI driver was marked as not supported by Cinder in the Ocata release [1]. [1] https://review.opendev.org/#/c/444287/ Change-Id: I1a25789374fddaefc57bc59badec06f91ee6a52a Closes-Bug: #1832821
- Jun 20, 2019
Doug Szumski authored
This commit should help guide people migrating to Kolla Monasca through the murky depths of the migration process. Since Kolla did not support Monasca in Queens, some of these steps which could be automated are not. Change-Id: I79051cca27178c3cf1671f5c603e38baf929c55c
- Jun 17, 2019
chenxing authored
This ensures we have version-specific references to other projects [1]. Note that this doesn't mean the URLs are actually valid - we need to do more work (linkcheck?) here, but it's an improvement nonetheless. [1] https://docs.openstack.org/openstackdocstheme/latest/#external-link-helper Change-Id: I118e4d211617c5df66ff04dc04e308a1d2fc67ad
- Jun 07, 2019
Carlos Goncalves authored
The project has been retired and there will be no Train release [1]. This patch removes Neutron LBaaS support in Kolla. [1] https://review.opendev.org/#/c/658494/ Change-Id: Ic0d3da02b9556a34d8c27ca21a1ebb3af1f5d34c
- Jun 05, 2019
Gaetan Trellu authored
- Remove trusted_cidrs that has just been removed from Qinling code.
- Remove use_api_certificate because it's true by default.
- Improve list syntax.
- Add etcd section.
Change-Id: I0426a9d61fbeaa23a1affbc7e981a78283e88263
- May 31, 2019
Gaetan Trellu authored
Qinling is an OpenStack project to provide "Function as a Service". This project aims to provide a platform to support serverless functions. Change-Id: I239a0130f8c8b061b531dab530d65172b0914d7c Implements: blueprint ansible-qinling-support Story: 2005760 Task: 33468
- May 30, 2019
ZijianGuo authored
Change-Id: I75955012a839e52281e9a409eeab4a2c8d778cd2 Signed-off-by: ZijianGuo <guozijn@gmail.com>
- May 17, 2019
Mark Goddard authored
Right now every controller rotates fernet keys. This is nice because should any controller die, we know the remaining ones will rotate the keys. However, we are currently over-rotating the keys. When we over-rotate keys, we get logs like this:

  This is not a recognized Fernet token <token> TokenNotFound

Most clients can recover and get a new token, but some clients (like Nova passing tokens to other services) can't do that because they don't have the password to regenerate a new token.

With three controllers, in the keystone-fernet crontab we see the once-a-day rotation correctly staggered across the three controllers:

  ssh ctrl1 sudo cat /etc/kolla/keystone-fernet/crontab
  0 0 * * * /usr/bin/fernet-rotate.sh
  ssh ctrl2 sudo cat /etc/kolla/keystone-fernet/crontab
  0 8 * * * /usr/bin/fernet-rotate.sh
  ssh ctrl3 sudo cat /etc/kolla/keystone-fernet/crontab
  0 16 * * * /usr/bin/fernet-rotate.sh

Currently with three controllers we have this keystone config:

  [token]
  expiration = 86400 (although the keystone default is one hour)
  allow_expired_window = 172800 (this is the keystone default)

  [fernet_tokens]
  max_active_keys = 4

Currently, kolla-ansible configures key rotation according to:

  rotation_interval = token_expiration / num_hosts

This means we rotate keys more quickly the more hosts we have, which doesn't make much sense. The keystone docs state:

  max_active_keys = ((token_expiration + allow_expired_window) / rotation_interval) + 2

For details see: https://docs.openstack.org/keystone/stein/admin/fernet-token-faq.html

Rotation is based on pushing out a staging key, so should any server start using that key, other servers will consider it valid. Then each server in turn starts using the staging key, in turn demoting the existing primary key to a secondary key. Eventually you prune the secondary keys when there is no token in the wild that would need to be decrypted using that key. So this all makes sense.

This change adds new variables for fernet_token_allow_expired_window and fernet_key_rotation_interval, so that we can correctly calculate the required number of active keys. We now set the default rotation interval so as to minimise the number of active keys to 3 - one primary, one secondary, one buffer.

This change also fixes the fernet cron job generator, which was broken in the following cases:
* requesting an interval of more than 1 day resulted in no jobs
* requesting an interval of more than 60 minutes, unless an exact multiple of 60 minutes, resulted in no jobs

It should now be possible to request any interval up to a week divided by the number of hosts.

Change-Id: I10c82dc5f83653beb60ddb86d558c5602153341a Closes-Bug: #1809469
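As a sketch, the new knobs might be combined like this in globals.yml; the two new variable names come from this change, fernet_token_expiry is assumed from existing Kolla Ansible config, and the values are illustrative, chosen only to satisfy the max_active_keys formula quoted above.

```yaml
# globals.yml (sketch): illustrative values only.
fernet_token_expiry: 86400                 # 1 day
fernet_token_allow_expired_window: 172800  # 2 days (keystone default)
fernet_key_rotation_interval: 604800       # 1 week
# max_active_keys = ceil((86400 + 172800) / 604800) + 2 = 3
```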
- Apr 08, 2019
Doug Szumski authored
The recent addition of this flag makes the configuration of stand-alone Monasca slightly simpler. Change-Id: Ib4c03926daa3f0f3de0fa4412cd785d87ed5500c
- Mar 14, 2019
Scott Solkhon authored
Adds support to separate Swift access and replication traffic from other storage traffic. In a deployment where both Ceph and Swift have been deployed, this change adds functionality to optionally separate storage network traffic. This adds two new network interfaces, 'swift_storage_interface' and 'swift_replication_interface', which maintain backwards compatibility. The Swift access network interface is configured via 'swift_storage_interface', which defaults to 'storage_interface'. The Swift replication network interface is configured via 'swift_replication_interface', which defaults to 'swift_storage_interface'. If a separate replication network is used, Kolla Ansible now deploys separate replication servers for the accounts, containers and objects, which listen on this network. In this case, these services handle only replication traffic, and the original account-, container- and object-servers handle only storage user requests. Change-Id: Ib39e081574e030126f2d08f51de89641ddb0d42e
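For example (the NIC names are placeholders):

```yaml
# globals.yml (sketch): dedicate NICs to Swift access and
# replication traffic; interface names are illustrative.
swift_storage_interface: "eth1"
swift_replication_interface: "eth2"
```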
- Mar 08, 2019
Doug Szumski authored
In some scenarios it may be useful to perform custom formatting of logs before forwarding them. For example, the JSON formatter plugin can be used to convert an event to JSON. Change-Id: I3dd9240c5910a9477456283b392edc9566882dcd
- Mar 07, 2019
Arkadiy Shinkarev authored
When using custom storage backends with a cinder.conf overrides file, the precheck stage in kolla-ansible fails. This commit adds the option 'skip_cinder_backend_check' (default: False) to the cinder role. Change-Id: Ifee138ad8b281903ea2365441aada044c80c46f0
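Per the commit message, the precheck can then be skipped via:

```yaml
# globals.yml: disable the cinder backend precheck when only
# custom backends from cinder.conf overrides are in use.
skip_cinder_backend_check: true
```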
- Feb 28, 2019
Mark Goddard authored
To avoid links to OpenStack docs getting out of date in our docs, use the latest version. Ideally after cutting each stable branch we should change these links to use the current release. Co-Authored-By: Isaiah Inuwa Change-Id: Ia1e3c720f4e688861b8f76874a3943b0f4e50b17
- Feb 25, 2019
Christian Berendt authored
Change-Id: Id8276448c6e779b2b4a0aafee45d953c4f009fc1