  1. Dec 21, 2022
    • Integrate oslo-config-validator · 6c2aace8
      Matt Crees authored
      Regularly, we experience issues in Kolla Ansible deployments because
      we use incorrect options in OpenStack configuration files. These
      mistakes go unnoticed because OpenStack services ignore unknown
      options. We also need to keep on top of deprecated options that may
      be removed in future releases. Integrating oslo-config-validator
      into Kolla Ansible will greatly help.
      
      Adds a shared role to run oslo-config-validator on each service. Takes
      into account that services have multiple containers, and these may also
      use multiple config files. Service roles are extended to use this shared
      role. Executed with the new command ``kolla-ansible validate-config``.
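
      For illustration, a minimal sketch of how a service role might hook
      into the shared role (role and variable names here are illustrative,
      not the exact implementation):

        - name: Validate nova config
          include_role:
            name: service-config-validate
          vars:
            service_config_validate_services: "{{ nova_services }}"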
      
      Change-Id: Ic10b410fc115646d96d2ce39d9618e7c46cb3fbc
  2. Nov 02, 2022
  3. Oct 28, 2022
  4. Aug 09, 2022
  5. Jul 27, 2022
  6. Jul 25, 2022
    • Fix var-spacing · dcf5a8b6
      Michal Nasiadka authored
      ansible-lint introduced var-spacing - let's fix our code.
      
      Change-Id: I0d8aaf3c522a5a6a5495032f6dbed8a2be0251f0
  7. Mar 24, 2022
  8. Mar 18, 2022
  9. Feb 21, 2022
    • Remove classic queue mirroring for internal RabbitMQ · 6bfe1927
      Doug Szumski authored
      When OpenStack is deployed with Kolla-Ansible, by default there
      are no durable queues or exchanges created by the OpenStack
      services in RabbitMQ. In Rabbit terminology, not being durable
      is referred to as `transient`, and this means that the queue
      is generally held in memory.
      
      Whether OpenStack services create durable or transient queues is
      traditionally controlled by the Oslo Notification config option:
      `amqp_durable_queues`. In Kolla-Ansible, this remains set to
      the default of `False` in all services. The only `durable`
      objects are the `amq*` exchanges which are internal to RabbitMQ.
      
      More recently, Oslo Notification has introduced support for
      Quorum queues [7]. These are a successor to durable classic
      queues, however it isn't yet clear if they are a good fit for
      OpenStack in general [8].
      
      For clustered RabbitMQ deployments, Kolla-Ansible configures all
      queues as `replicated` [1]. Replication occurs over all nodes
      in the cluster. RabbitMQ refers to this as 'mirroring of classic
      queues'.
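
      For reference, mirroring is driven by a policy of roughly this shape
      (a sketch in RabbitMQ definitions JSON, not the exact policy
      Kolla-Ansible applies):

        {"name": "ha-all", "pattern": ".*", "apply-to": "all",
         "definition": {"ha-mode": "all"}}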
      
      In summary, this means that a multi-node Kolla-Ansible deployment
      will end up with a large number of transient, mirrored queues
      and exchanges. However, the RabbitMQ documentation warns against
      this, stating that "For replicated queues, the only reasonable
      option is to use durable queues" [2]. This is discussed further in
      the following bug report: [3].
      
      Whilst we could try enabling the `amqp_durable_queues` option
      for each service (this is suggested in [4]), there are
      a number of complexities with this approach, not limited to:
      
      1) RabbitMQ is planning to remove classic queue mirroring in
         favor of 'Quorum queues' in a forthcoming release [5].
      2) Durable queues will be written to disk, which may cause
         performance problems at scale. Note that this includes
         Quorum queues which are always durable.
      3) Potential for race conditions and other complexity
         discussed recently on the mailing list under:
         `[ops] [kolla] RabbitMQ High Availability`
      
      The remaining option, proposed here, is to use classic
      non-mirrored queues everywhere, and rely on services to recover
      if the node hosting a queue or exchange they are using fails.
      There is some discussion of this approach in [6]. The downside
      of potential message loss needs to be weighed against the real
      upsides of increasing the performance of RabbitMQ, and moving
      to a configuration which is officially supported and hopefully
      more stable. In the future, we can then consider promoting
      specific queues to quorum queues, in cases where message loss
      can result in failure states which are hard to recover from.
      
      [1] https://www.rabbitmq.com/ha.html
      [2] https://www.rabbitmq.com/queues.html
      [3] https://github.com/rabbitmq/rabbitmq-server/issues/2045
      [4] https://wiki.openstack.org/wiki/Large_Scale_Configuration_Rabbit
      [5] https://blog.rabbitmq.com/posts/2021/08/4.0-deprecation-announcements/
      [6] https://fuel-ccp.readthedocs.io/en/latest/design/ref_arch_1000_nodes.html#replication
      [7] https://bugs.launchpad.net/oslo.messaging/+bug/1942933
      [8] https://www.rabbitmq.com/quorum-queues.html#use-cases
      
      Partial-Bug: #1954925
      Change-Id: I91d0e23b22319cf3fdb7603f5401d24e3b76a56e
  10. Jan 09, 2022
    • Support enable/disable rabbitmq prometheus plugins · 1f3dcce5
      LinPeiWen authored
      RabbitMQ has had built-in Prometheus support since 3.8.0, with the
      prometheus plugins enabled by default. When the environment sets
      ``enable_prometheus`` to "no", the rabbitmq role will now disable
      the prometheus plugins.
      
      Closes-Bug: #1885106
      
      Change-Id: I4d694d6224c813285d228d6bc7eece5731db1078
  11. Aug 10, 2021
    • Refactor and optimise image pulling · 9ff2ecb0
      Radosław Piliszek authored
      We get a nice optimisation by using a filtered loop instead
      of task skipping per service with 'when'.
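
      As a sketch (service schema and variable names illustrative):

        # Before: one task per service, skipped via 'when'
        - name: Pull glance image
          kolla_docker:
            action: pull_image
            image: "{{ glance_api_image_full }}"
          when: enable_glance | bool

        # After: a single task looping over a pre-filtered list
        - name: Pull images
          kolla_docker:
            action: pull_image
            image: "{{ item.image }}"
          loop: "{{ services.values() | selectattr('enabled') | list }}"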
      
      Partially-Implements: blueprint performance-improvements
      Change-Id: I8f68100870ab90cb2d6b68a66a4c97df9ea4ff52
  12. Jun 23, 2021
    • Use ansible_facts to reference facts · ade5bfa3
      Mark Goddard authored
      By default, Ansible injects a variable for every fact, prefixed with
      ansible_. This can result in a large number of variables for each host,
      which at scale can incur a performance penalty. Ansible provides a
      configuration option [0] that can be set to False to prevent this
      injection of facts. In this case, facts should be referenced via
      ansible_facts.<fact>.
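
      For example:

        # Before: injected per-fact variable
        - debug:
            msg: "{{ ansible_hostname }}"

        # After: the same fact via the ansible_facts dictionary
        - debug:
            msg: "{{ ansible_facts.hostname }}"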
      
      This change updates all references to Ansible facts within Kolla Ansible
      from using individual fact variables to using the items in the
      ansible_facts dictionary. This allows users to disable fact variable
      injection in their Ansible configuration, which may provide some
      performance improvement.
      
      This change disables fact variable injection in the ansible
      configuration used in CI, to catch any attempts to use the injected
      variables.
      
      [0] https://docs.ansible.com/ansible/latest/reference_appendices/config.html#inject-facts-as-vars
      
      Change-Id: I7e9d5c9b8b9164d4aee3abb4e37c8f28d98ff5d1
      Partially-Implements: blueprint performance-improvements
  13. Apr 14, 2021
  14. Dec 14, 2020
    • Revert "Performance: Use import_tasks in the main plays" · db4fc85c
      Mark Goddard authored
      This reverts commit 9cae59be.
      
      Reason for revert: This patch was found to introduce issues with
      fluentd customisation. The underlying issue is not currently fully
      understood, but could be a sign of other obscure issues.
      
      Change-Id: Ia4859c23d85699621a3b734d6cedb70225576dfc
      Closes-Bug: #1906288
  15. Nov 19, 2020
  16. Oct 27, 2020
    • Performance: Use import_tasks in the main plays · 9cae59be
      Radosław Piliszek authored
      Main plays are action-redirect-stubs, ideal for import_tasks.
      
      This avoids 'include' penalty and makes logs/ara look nicer.
      
      Also fixes haproxy and rabbitmq so that they no longer check the
      host group.
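
      In outline (file name illustrative):

        # Before: a dynamic include, evaluated per host at runtime
        - include_tasks: deploy.yml

        # After: a static import, resolved once when the play is parsed
        - import_tasks: deploy.yml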
      
      Change-Id: I46136fc40b815e341befff80b54a91ef431eabc0
      Partially-Implements: blueprint performance-improvements
  17. Oct 12, 2020
    • Performance: optimize genconfig · 3411b9e4
      Radosław Piliszek authored
      Config plays do not need to check containers. This avoids skipping
      tasks during the genconfig action.
      
      Ironic and Glance rolling upgrades are handled specially.
      
      Swift and Bifrost do not use the handlers at all.
      
      Partially-Implements: blueprint performance-improvements
      Change-Id: I140bf71d62e8f0932c96270d1f08940a5ba4542a
  18. Sep 17, 2020
    • Support TLS encryption of RabbitMQ client-server traffic · 761ea9a3
      Mark Goddard authored
      This change adds support for encryption of communication between
      OpenStack services and RabbitMQ. Server certificates are supported, but
      currently client certificates are not.
      
      The kolla-ansible certificates command has been updated to support
      generating certificates for RabbitMQ for development and testing.
      
      RabbitMQ TLS is enabled in the all-in-one source CI jobs, or when
      the Zuul 'tls_enabled' variable is true.
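
      A minimal globals.yml sketch (variable name as introduced by this
      change; the default certificate layout is assumed):

        rabbitmq_enable_tls: "yes"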
      
      Change-Id: I4f1d04150fb2b5af085b762890092f87ae6076b5
      Implements: blueprint message-queue-ssl-support
  19. Aug 28, 2020
  20. Jul 28, 2020
    • Performance: use import_tasks for check-containers.yml · 9702d4c3
      Mark Goddard authored
      Including tasks has a performance penalty when compared with importing
      tasks. If the include has a condition associated with it, then the
      overhead of the include may be lower than the overhead of skipping all
      imported tasks. In the case of the check-containers.yml include, the
      included file only has a single task, so the overhead of skipping this
      task will not be greater than the overhead of the task import. It
      therefore makes sense to switch to use import_tasks there.
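
      In outline:

        # A conditional include: only the include task itself is skipped
        # when the condition is false
        - include_tasks: check-containers.yml
          when: kolla_action != "config"

        # A static import: the file's single task is loaded at parse time
        # and inherits any condition placed on the import
        - import_tasks: check-containers.yml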
      
      Partially-Implements: blueprint performance-improvements
      
      Change-Id: I65d911670649960708b9f6a4c110d1a7df1ad8f7
  21. Mar 02, 2020
  22. Feb 28, 2020
    • Add Ansible group check to prechecks · 49fb55f1
      Mark Goddard authored
      We assume that all groups are present in the inventory, and quite
      obscure errors can result if any are not.
      
      This change adds a precheck that checks for the presence of all expected
      groups in the inventory for each service. It also introduces a common
      service-precheck role that we can use for other common prechecks.
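
      The shape of the check (group name illustrative):

        - name: Fail if the service group is missing from the inventory
          fail:
            msg: "Group 'rabbitmq' not found in inventory"
          when: "'rabbitmq' not in groups"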
      
      Change-Id: Ia0af1e7df4fff7f07cd6530e5b017db8fba530b3
      Partially-Implements: blueprint improve-prechecks
  23. Feb 16, 2020
  24. Jan 13, 2020
  25. Oct 17, 2019
    • Refactor NSS database var · 75862bc7
      Radosław Piliszek authored
      IPv6 control plane implementation [1] follow-up.
      
      [1] Ia34e6916ea4f99e9522cd2ddde03a0a4776f7e2c
      
      Change-Id: I4c2bd81e77fc09a04838a62f008e5d6c5dc1483d
  26. Oct 16, 2019
    • Implement IPv6 support in the control plane · bc053c09
      Radosław Piliszek authored
      Introduce kolla_address filter.
      Introduce put_address_in_context filter.
      
      Add AF config to vars.
      
      Address contexts:
      - raw (default): <ADDR>
      - memcache: inet6:[<ADDR>]
      - url: [<ADDR>]
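
      Hypothetical usage of the two filters (the address shown in the
      comments is illustrative):

        # raw: fd00::10
        api_interface_address: "{{ 'api' | kolla_address }}"
        # inet6:[fd00::10]
        memcache_node: "{{ api_interface_address | put_address_in_context('memcache') }}"
        # url: [fd00::10]
        url_host: "{{ api_interface_address | put_address_in_context('url') }}"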
      
      Other changes:
      
      globals.yml - mention just IP in comment
      
      prechecks/port_checks (api_intf) - kolla_address handles validation
      
      3x interface conditional (swift configs: replication/storage)
      
      2x interface variable definition with hostname
      (haproxy listens; api intf)
      
      1x interface variable definition with hostname with bifrost exclusion
      (baremetal pre-install /etc/hosts; api intf)
      
      neutron's ml2 'overlay_ip_version' set to 6 for IPv6 on tunnel network
      
      basic multinode source CI job for IPv6
      
      prechecks for rabbitmq and qdrouterd use proper NSS database now
      
      MariaDB Galera Cluster WSREP SST mariabackup workaround
      (socat and IPv6)
      
      Ceph naming workaround in CI
      TODO: probably needs documenting
      
      RabbitMQ IPv6-only proto_dist
      
      Ceph ms switch to IPv6 mode
      
      Remove neutron-server ml2_type_vxlan/vxlan_group setting
      as it is not used (let's avoid any confusion)
      and could break setups without proper multicast routing
      if it started working (also IPv4-only)
      
      haproxy upgrade checks for slaves based on ipv6 addresses
      
      TODO:
      
      ovs-dpdk grabs the IPv4 network address (with prefix length /
      netmask); this is not supported and invalid by default because
      neutron_external has no address. No idea whether ovs-dpdk works at
      all at the moment.
      
      ml2 for xenapi
      Xen is not supported very well.
      This would require working with XenAPI facts.
      
      rp_filter setting
      This would require meddling with ip6tables (there is no sysctl param).
      By default nothing is dropped.
      Unlikely we really need it.
      
      ironic dnsmasq is configured IPv4-only
      dnsmasq needs DHCPv6 options and testing in vivo.
      
      KNOWN ISSUES (beyond us):
      
      One cannot use IPv6 address to reference the image for docker like we
      currently do, see: https://github.com/moby/moby/issues/39033
      (docker_registry; docker API 400 - invalid reference format)
      workaround: use hostname/FQDN
      
      RabbitMQ may fail to bind to IPv6 if hostname resolves also to IPv4.
      This is due to old RabbitMQ versions available in images.
      IPv4 is preferred by default and may fail in the IPv6-only scenario.
      This should be no problem in real life as IPv6-only is indeed IPv6-only.
      Also, when new RabbitMQ (3.7.16/3.8+) makes it into images, this will
      no longer be relevant as we supply all the necessary config.
      See: https://github.com/rabbitmq/rabbitmq-server/pull/1982
      
      For reliable runs, at least Ansible 2.8 is required (2.8.5 confirmed
      to work well). Older Ansible versions are known to miss IPv6 addresses
      in interface facts. This may affect redeploys, reconfigures and
      upgrades which run after VIP address is assigned.
      See: https://github.com/ansible/ansible/issues/63227
      
      Bifrost Train does not support IPv6 deployments.
      See: https://storyboard.openstack.org/#!/story/2006689
      
      
      
      Change-Id: Ia34e6916ea4f99e9522cd2ddde03a0a4776f7e2c
      Implements: blueprint ipv6-control-plane
      Signed-off-by: Radosław Piliszek <radoslaw.piliszek@gmail.com>
  27. Sep 26, 2019
    • Add a job that *only* deploys updated containers · 2fe0d98e
      Kris Lindgren authored
      Sometimes as cloud admins, we want to only update code that is
      running in a cloud, without doing anything else. This adds an action
      to kolla-ansible that allows us to do that.
      
      Change-Id: I904f595c69f7276e71692696471e32fd1f88e6e8
      Implements: blueprint deploy-containers-action
  28. Jul 18, 2019
    • Fix handling of docker restart policy · 6a737b19
      Radosław Piliszek authored
      Docker has no restart policy named 'never'. It has 'no'.
      This has bitten us already (see [1]) and might bite us again whenever
      we want to change the restart policy to 'no'.
      
      This patch makes our docker integration honor all valid restart policies
      and only valid restart policies.
      All relevant docker restart policy usages are patched as well.
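
      For example (container name illustrative):

        - kolla_docker:
            action: start_container
            name: "some_container"
            # 'no' must be quoted: a bare no is YAML for boolean false
            restart_policy: "no"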
      
      I added some FIXMEs around which are relevant to kolla-ansible docker
      integration. They are not fixed in here to not alter behavior.
      
      [1] https://review.opendev.org/667363
      
      
      
      Change-Id: I1c9764fb9bbda08a71186091aced67433ad4e3d6
      Signed-off-by: Radosław Piliszek <radoslaw.piliszek@gmail.com>
  29. Jun 06, 2019
    • Use become for all docker tasks · b123bf66
      Mark Goddard authored
      Many tasks that use Docker have become specified already, but
      not all. This change ensures all tasks that use the following
      modules have become:
      
      * kolla_docker
      * kolla_ceph_keyring
      * kolla_toolbox
      * kolla_container_facts
      
      It also adds become for 'command' tasks that use docker CLI.
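
      For example (service lookup illustrative):

        - name: Get container facts
          become: true
          kolla_container_facts:
            name: "{{ service.container_name }}"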
      
      Change-Id: I4a5ebcedaccb9261dbc958ec67e8077d7980e496
  30. May 02, 2019
    • Updating Jinja filters to conform to Ansible 2.5+ · 84ea42bd
      Raimund Hook authored
      Since Ansible 2.5, the use of jinja tests as filters has been
      deprecated.
      
      I've run the script provided by the ansible team to 'fix' the
      jinja filters to conform to the newer syntax.
      
      This fixes the deprecation warnings.
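
      For example:

        # Deprecated: a jinja test used as a filter
        when: result | succeeded

        # Ansible 2.5+ syntax: used as a test
        when: result is succeeded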
      
      Change-Id: I844ecb7bec94e561afb09580f58b1bf83a6d00bd
      Closes-bug: #1827370
  31. Apr 02, 2019
    • Fix up config file permissions on the host · a4bb8567
      Mark Goddard authored
      Several config file permissions are incorrect on the host. In general,
      files should be 0660, and directories and executables 0770.
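
      For example (paths illustrative):

        - name: Copying over config.json
          become: true
          template:
            src: "service.json.j2"
            dest: "{{ node_config_directory }}/service/config.json"
            mode: "0660"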
      
      Change-Id: Id276ac1864f280554e98b937f2845bb424d521de
      Closes-Bug: #1821579
  32. Feb 15, 2019
    • Fix rabbitmq reconfigure, simplify role · 1e2a1a8f
      Mark Goddard authored
      Since Id724b44a3edd951fa8b06c9f2c347e9ed8c5ffd9, there is a reference to a
      non-existent variable, rabbitmq_confs, that causes deployment to fail if
      rabbitmq configuration other than config.json is changed.
      
      I'm taking this opportunity to simplify the role, since we can use the Ansible
      handler notification system to determine when handlers need to run, without
      registering and checking variables. This simpler approach was used in the
      haproxy refactor.
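
      With handler notification, a config task simply notifies its handler
      and Ansible decides whether it needs to run (a sketch, not the exact
      tasks):

        - name: Copying over rabbitmq-env.conf
          template:
            src: "rabbitmq-env.conf.j2"
            dest: "{{ node_config_directory }}/rabbitmq/rabbitmq-env.conf"
          notify:
            - Restart rabbitmq container

        # handlers/main.yml
        - name: Restart rabbitmq container
          kolla_docker:
            action: recreate_or_restart_container
            name: rabbitmq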
      
      Change-Id: Ibe0e7fda93afff741243ff9c350db1c8c6e1e6d3
      Closes-Bug: #1816053
  33. Dec 14, 2018
  34. Nov 26, 2018
    • Support stop specific containers · 1a682fab
      Eduardo Gonzalez authored
      With this change, an operator is able to stop a single service's
      containers without stopping all services on a host. This change is
      the starting point for fast-forward upgrade support. In subsequent
      changes, new flags will be introduced to avoid stopping data-plane
      services during upgrades.
      
      Change-Id: Ifde7a39d7d8596ef0d7405ecf1ac1d49a459d9ef
      Implements: blueprint support-stop-containers
  35. Sep 26, 2018
    • Refactor haproxy config (split by service) V2.0 · f1c81365
      Adam Harwell authored
      Having all services in one giant haproxy file makes altering
      configuration for a service both painful and dangerous. Each service
      should be configured with a simple set of variables and rendered with a
      single unified template.
      
      Two new templates are available:
      
      * haproxy_single_service_listen.cfg.j2: close to the original style, but
      only one service per file
      * haproxy_single_service_split.cfg.j2: using the newer haproxy syntax
      for separated frontend and backend
      
      For now the default will be the single listen block, for ease of
      transition.
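
      Each service then carries a small variable block that the unified
      template renders; the schema is sketched here from the style of this
      change, with illustrative values:

        haproxy:
          nova_api:
            enabled: "{{ enable_nova }}"
            mode: "http"
            external: false
            port: "8774"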
      
      Change-Id: I6e237438fbc0aa3c89a3c8bd706a53b74e71904b
  36. Sep 21, 2018
  37. Aug 21, 2018
    • Temporarily remove the rabbitmq clusterer plugin · 0d03fc27
      Paul Bourke authored
      In order to migrate to the latest release of rabbitmq (3.7), we need to
      first remove this deprecated plugin which is no longer supported (the
      problems it solved are now addressed in rabbitmq itself).
      
      This avoids a circular dependency in CI where the new images depend on
      the new clustering and the new clustering depends on the new images.
      
      Change-Id: I921459f3e40b9e0d4af9497384e49aabf0abe79b
  38. Jul 25, 2018
  39. Jul 23, 2018
  40. Jun 08, 2018