  1. Apr 16, 2020
  2. Apr 13, 2020
    • Fix Designate not to use etcd coordination backend · 3c234603
      Radosław Piliszek authored
      etcd via tooz does not support group membership required by
      Designate coordination.
      The best k-a can do is not to configure etcd in Designate.
      
      Change-Id: I2f64f928e730355142ac369d8868cf9f65ca357e
      Closes-bug: #1872205
      Related-bug: #1840070
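      A sketch of the resulting designate.conf coordination section, assuming
      a Redis coordination backend (the URL is illustrative):

          # [coordination] needs a tooz backend with group membership
          # support, e.g. redis; etcd via tooz lacks it, so k-a leaves
          # this section unset when only etcd is available.
          [coordination]
          backend_url = redis://192.0.2.5:6379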
  3. Apr 11, 2020
    • Allow operators to use "ceilometer-upgrade" parameters · 6fcccdae
      Rafael Weingärtner authored
      Allow operators to use custom parameters with the ceilometer-upgrade
      command. This is quite useful when using the dynamic pollster subsystem;
      that subsystem provides the flexibility to create and edit pollster
      configs, which affects the Gnocchi resource-type configurations. However,
      Ceilometer uses default, hard-coded resource-type configurations; if
      operators customize some of the default resource-types, they can run
      into trouble during upgrades. So far the only way to work around this
      has been the "--skip-gnocchi-resource-types" flag. This change gives
      operators a way to apply such customizations, and others if needed.
      
      Depends-On: https://review.opendev.org/#/c/718190/
      Change-Id: I92f0edba92c9e3707d89b3ff4033ac886b29cf6d
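      A minimal sketch of how an operator might pass such parameters (the
      variable name here is illustrative; check the role defaults introduced
      by this change):

          # globals.yml
          ceilometer_upgrade_params: "--skip-gnocchi-resource-types"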
  4. Apr 10, 2020
  5. Apr 09, 2020
    • Introduce /etc/timezone to Debian/Ubuntu containers · 4b5df0d8
      Dincer Celik authored
      Some services look for /etc/timezone on Debian/Ubuntu, so we should
      introduce it to the containers.
      
      In addition, added prechecks for /etc/localtime and /etc/timezone.
      
      Closes-Bug: #1821592
      Change-Id: I9fef14643d1bcc7eee9547eb87fa1fb436d8a6b3
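      The prechecks amount to verifying that the files exist on the host
      before they are mounted into containers; a sketch (task wording
      illustrative, not copied from the change):

          - name: Checking that /etc/timezone exists
            stat:
              path: /etc/timezone
            register: etc_timezone
            failed_when: not etc_timezone.stat.exists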
    • Fix live migration to use migration int. address · 628c27ce
      John Garbutt authored
      In kolla ansible we typically configure services to communicate via IP
      addresses rather than hostnames. One accidental exception to this was
      live migration, which used the hostname of the destination even when
      not required (i.e. TLS not being used for libvirt).
      
      To make such hostnames work, k-a adds entries to /etc/hosts in the
      bootstrap-servers command. Alternatively users may provide DNS.
      
      One problem with using /etc/hosts is that, if a new compute host is
      added to the cloud, or an IP address is changed, that will not be
      reflected in the /etc/hosts file of other hosts. This would cause live
      migration to the new host from an old host to fail, as the name cannot
      be resolved.
      
      The workaround for this was to update the /etc/hosts file (perhaps via
      bootstrap-servers) on all hosts after adding new compute hosts. Then the
      nova_libvirt container had to be restarted to pick up the change.
      
      Similarly, if the user has overridden the migration_interface, the
      hostname used could resolve to a wrong address, on which libvirt
      would not be listening.
      
      This change adds the live_migration_inbound_addr option to nova.conf. If
      TLS is not in use for libvirt, this will be set to the IP address of the
      host on the migration network. If TLS is enabled for libvirt,
      live_migration_inbound_addr will be set to migration_hostname, since
      certificates will typically reference the hostname rather than the
      host's IP. With libvirt TLS enabled, DNS is recommended to avoid the
      /etc/hosts issue which is likely the case in production deployments.
      
      Change-Id: I0201b46a9fbab21433a9f53685131aeb461543a8
      Closes-Bug: #1729566
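      The resulting nova.conf logic can be sketched as follows (variable
      names are illustrative of k-a conventions, not copied from the change):

          [libvirt]
          {% if libvirt_tls | bool %}
          # Certificates typically reference the hostname.
          live_migration_inbound_addr = {{ migration_hostname }}
          {% else %}
          live_migration_inbound_addr = {{ migration_interface_address }}
          {% endif %}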
    • Add support for encrypting backend Keystone HAProxy traffic · b475643c
      James Kirsch authored
      This patch introduces an optional backend encryption for Keystone
      service. When used in conjunction with enabling TLS for service API
      endpoints, network communication will be encrypted end to end, from
      client through HAProxy to the Keystone service.
      
      Change-Id: I6351147ddaff8b2ae629179a9bc3bae2ebac9519
      Partially-Implements: blueprint add-ssl-internal-network
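      On the HAProxy side, backend encryption boils down to directives along
      these lines (a sketch, not the rendered template; names and paths are
      illustrative):

          backend keystone_internal_back
              # Re-encrypt traffic towards the Keystone backend and
              # verify its certificate against the internal CA.
              server ctl01 192.0.2.11:5000 ssl verify required ca-file /etc/haproxy/ca.crt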
    • OVN Support · 8a0740df
      Michal Nasiadka authored
      Implement OVN Ansible role.
      
      Implements: blueprint ovn-controller-neutron-ansible
      
      Depends-On: https://review.opendev.org/713422
      Change-Id: Icd425dea85d58db49c838839d8f0b864b4a89a78
  6. Apr 08, 2020
    • Perform host configuration during upgrade · 1d70f509
      Mark Goddard authored
      This is a follow up to I001defc75d1f1e6caa9b1e11246abc6ce17c775b. To
      maintain previous behaviour, and ensure we catch any host configuration
      changes, we should perform host configuration during upgrade.
      
      Change-Id: I79fcbf1efb02b7187406d3c3fccea6f200bcea69
      Related-Bug: #1860161
  7. Apr 06, 2020
  8. Apr 05, 2020
    • manila share container name variable · fa161909
      linpeiwen authored
      The manila-share container name is hard-coded in some places, while in
      the defaults directory it is defined by the container_name variable. If
      the manila-share container_name variable is changed during deployment,
      the new value is not used; the container still gets the fixed
      'manila_share' name.
      
      Change-Id: Iea23c62518add8d6820b76b16edd3221906b0ffb
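      The fix pattern is to reference the variable wherever the container is
      addressed, e.g. (a sketch; the exact lookup path differs in the role):

          - name: Restart manila-share container
            kolla_docker:
              action: restart_container
              name: "{{ manila_share.container_name }}"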
  9. Apr 04, 2020
    • Update hacking for Python3 · 45448976
      Andreas Jaeger authored
      The repo is Python 3 now, so update hacking to version 3.0 which
      supports Python 3.
      
      Fix problems found by updated hacking version.
      
      Remove hacking and friends from lower-constraints, they are not needed
      during installation.
      
      Change-Id: I7ef5ac8a89e94f5da97780198619b6facc86ecfe
  10. Apr 03, 2020
  11. Apr 02, 2020
    • Separate per-service host configuration tasks · fdea19a3
      Mark Goddard authored
      Currently there are a few services that perform host configuration
      tasks. This is done in config.yml. This means that these changes are
      performed during 'kolla-ansible genconfig', when we might expect not to
      be making any changes to the remote system.
      
      This change separates out these host configuration tasks into a
      config-host.yml file, which is included directly from deploy.yml.
      
      One change in behaviour is that this prevents these tasks from running
      during an upgrade or genconfig. This is probably what we want, but we
      should be careful when any of these host configuration tasks are
      changed, to ensure they are applied during an upgrade if necessary.
      
      Change-Id: I001defc75d1f1e6caa9b1e11246abc6ce17c775b
      Closes-Bug: #1860161
    • Avoid unconditional fact gathering · e0ba55a8
      Mark Goddard authored
      One way to improve the performance of Ansible is through fact caching.
      Rather than gather facts in every play, we can configure Ansible to
      cache them in a persistent store. An example Ansible configuration for
      doing this is as follows:
      
      [defaults]
      gathering = smart
      fact_caching = jsonfile
      fact_caching_connection = ./facts
      fact_caching_timeout = 86400
      
      This does not affect Kolla Ansible however, since we use the setup
      module which unconditionally gathers facts regardless of the state of
      the cache. This gets worse with large inventories limited to a small
      batch of hosts via --limit or serial, since the limited hosts must
      gather facts for all others.
      
      One way to detect whether facts exist for a host is via the
      'module_setup' variable, which exists only when facts exist. This change
      uses the 'module_setup' fact to determine whether facts need to be
      gathered for hosts outside of the batch. For hosts in the batch, we
      switch from using the setup module to gather_facts on the play, which
      can use the 'smart' gathering logic.
      
      Change-Id: I04841fb62b2e1d9e97ce4b75ce3a7349b9c74036
      Partially-Implements: blueprint performance-improvements
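      The out-of-batch gathering can be sketched as follows (illustrative,
      not the exact tasks):

          # Gather facts only for hosts that have none cached;
          # 'module_setup' is defined once facts exist for a host.
          - name: Gather facts for hosts outside the batch
            setup:
            delegate_to: "{{ item }}"
            delegate_facts: true
            when: hostvars[item].module_setup is not defined
            with_items: "{{ groups['all'] | difference(ansible_play_batch) }}"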
  12. Apr 01, 2020
    • Fix ovs fw driver for the other ovs agent · c033ddca
      Radosław Piliszek authored
      In [1] only neutron-openvswitch-agent was fixed and not xenapi.
      That merged in Ussuri and went cleanly into Train.
      In Stein and Rocky, the backport was not clean and
      accidentally fixed xenapi instead of the regular one.
      
      Neither the original bug nor its incomplete fix were released,
      except for Rocky. :-(
      Hence this patch also removes the confusing reno instead of
      adding a new one.
      
      [1] https://review.opendev.org/713129
      
      Change-Id: I331417c8d61ba6f180bcafa943be697418326645
      Closes-bug: #1869832
      Related-bug: #1867506
  13. Mar 30, 2020
    • Support setting Kafka storage volume · b7588834
      Doug Szumski authored
      Not everyone wants Kafka data stored on a Docker volume. This
      change allows a user to flexibly control where the data is stored.
      
      Change-Id: I2ba8c7a85c7bf2564f954a43c6e6dbb3257fe902
  14. Mar 27, 2020
    • keystone roles container name variable · 56591770
      linpeiwen authored
      The keystone and keystone_fernet container names are hard-coded in some
      places, while in the defaults directory they are defined by
      container_name variables. If these variables are changed during
      deployment, the new values are not used; the containers still get the
      fixed 'keystone' and 'keystone_fernet' names.
      
      Change-Id: Ifc8ac69e6abc4586f0e4fd820b9022aea9f76396
  15. Mar 26, 2020
    • kolla-toolbox container name variable · 8721ca35
      LinPeiWen authored
      The kolla-toolbox container name is hard-coded in some places, while in
      the defaults directory it is defined by the container_name variable. If
      the kolla-toolbox container_name variable is changed during deployment,
      the new value is not used; the container still gets the fixed
      'kolla-toolbox' name.
      
      Change-Id: I9579017761ff47477dba597282be9ae6fab4242a
    • Add clients ca_file in heat · 34a331ab
      Jeffrey Zhang authored
      This patch fixes a failure when creating stack resources in Heat.
      
      Change-Id: I00c23f8b89765e266d045cc463ce4d863d0d6089
      Closes-Bug: #1869137
    • Add glance_ca_certificates_file when using self-signed cert in glance · 04382c80
      Jeffrey Zhang authored
      Change-Id: I9395ae32378f4ff1fd57be78d7daec7745579e04
      Closes-Bug: #1869133
  16. Mar 25, 2020
    • Fix HAProxy prechecks during scale-out with limit · f3350d4e
      Mark Goddard authored
      Deploy HAProxy on one or more servers. Add another server to the
      inventory in the haproxy group, and run the following:
      
      kolla-ansible prechecks --limit <new host>
      
      The following task will fail:
      
          TASK [haproxy : Checking if kolla_internal_vip_address and
          kolla_external_vip_address are not pingable from any node]
      
      This happens because ansible does not execute on hosts where
      haproxy/keepalived is running, and therefore does not know that the VIP
      should be active.
      
      This change skips VIP prechecks when not all HAProxy hosts are in the
      play.
      
      Closes-Bug: #1868986
      
      Change-Id: Ifbc73806b768f76f803ab01c115a9e5c2e2492ac
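      The skip condition amounts to checking that every HAProxy host takes
      part in the play, roughly (a sketch, not the exact task):

          - name: Checking if kolla_internal_vip_address is not pingable
            command: ping -c 3 {{ kolla_internal_vip_address }}
            register: ping_output
            # The VIP must not answer before HAProxy is deployed.
            failed_when: ping_output.rc != 1
            when: groups['haproxy'] | difference(ansible_play_hosts) | length == 0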
    • mariadb container name variable · 8a206699
      LinPeiWen authored
      The mariadb container name is hard-coded in some places, while in the
      defaults directory it is defined by the container_name variable. If the
      mariadb container_name variable is changed during deployment, the new
      value is not used; the container still gets the fixed 'mariadb' name.
      
      Change-Id: Ie8efa509953d5efa5c3073c9b550be051a7f4f9b
  17. Mar 23, 2020
    • Fix kolla-ansible stop with heterogeneous hosts · 89df07e8
      Mark Goddard authored
      The 'kolla-ansible stop' command can be used to stop the services
      running on hosts. However, if you run this command in an environment
      with heterogeneous nodes (most real world scenarios have at least
      control/compute), then it fails. This is because it only checks
      whether a container is enabled, and not whether the host is in the
      correct group. For example, it fails with nova-libvirt:
      
          No such container: nova_libvirt to stop.
      
      This change fixes the issue by only attempting to stop containers on
      hosts to which they are mapped.
      
      Change-Id: Ibecac60d1417269bbe25a280996ca9de6e6d018f
      Closes-Bug: #1868596
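      The fix boils down to an additional group check, roughly (a sketch;
      names illustrative):

          - name: Stop nova-libvirt container
            kolla_docker:
              action: stop_container
              name: nova_libvirt
            when:
              - enable_nova | bool
              # Only stop the container where the service is mapped.
              - inventory_hostname in groups['compute']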
  18. Mar 21, 2020
  19. Mar 20, 2020
    • Support disabling Prometheus server · 505cded2
      Doug Szumski authored
      This is useful to people who manage their Prometheus Server
      externally to Kolla Ansible, or want to use the exporters with
      another framework such as Monasca.
      
      Change-Id: Ie3f61e2e186c8e77e21a7b53d2bd7d2a27eee18e
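      For operators this amounts to a toggle along these lines (the variable
      name follows k-a's enable_* convention; check the defaults introduced
      by this change):

          # globals.yml: keep the exporters, skip the server itself.
          enable_prometheus: "yes"
          enable_prometheus_server: "no"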
  20. Mar 18, 2020
  21. Mar 17, 2020
    • Make Fluentd config folders readable · c92378d7
      Doug Szumski authored
      Currently, config folders lack the execute bit so Fluentd
      cannot read the config and just does nothing when it starts up. This
      change explicitly sets the execute bit on folders which need it,
      rather than doing it in a more generic way which is more risky from
      a security perspective.
      
      Change-Id: Ia840f4b67043df4eaa654f47673dcdc973f13d9c
      Closes-Bug: #1867754
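      The essence of the change is adding the execute (search) bit on the
      config directories, e.g. (a sketch; the path is illustrative):

          # Directories need the execute bit to be traversable;
          # without it Fluentd cannot read the files inside.
          - name: Ensuring fluentd config directories have correct permissions
            file:
              path: /etc/kolla/fluentd/conf
              state: directory
              mode: "0750"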
  22. Mar 16, 2020
  23. Mar 15, 2020
  24. Mar 12, 2020
  25. Mar 11, 2020
    • Host OS prechecks follow up · 96151a35
      Mark Goddard authored
      We only log the release in the 'Checking host OS release or version'
      precheck, but we allow either the release or version to be included in
      the list. For example, on CentOS 7:
      
          CentOS release Core is not supported. Supported releases are: 8
      
      Include the version in the failure message too.
      
      Change-Id: I0302cd4fc94a0c3a6aa1dbac7b9fedf37c11b81e
      Related: blueprint improve-prechecks
  26. Mar 10, 2020
    • support ipv6 for grafana.ini.j2 · 3e582a05
      yj.bai authored
      
      The grafana.ini.j2 template did not support IPv6 addresses.
      
      Closes-Bug: #1866141
      
      Change-Id: Ia89a9283e70c10a624f25108b487528dbb370ee4
      Signed-off-by: yj.bai <bai.yongjun@99cloud.net>
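      For IPv6 the address must be bracketed wherever a URL is built; a
      sketch of the kind of template fix involved (the filter usage is
      illustrative of k-a conventions, not copied from the change):

          [server]
          http_addr = {{ api_interface_address }}
          root_url = http://{{ api_interface_address | put_address_in_context('url') }}:{{ grafana_server_port }}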
    • Use macro to avoid repetition · a1c51b73
      Will Szumski authored
      I didn't use a for loop, as omitting the comma after the final
      element would complicate the logic.
      
      Change-Id: Id29d5deebcc5126d69a1bd8395e0df989f2081f0
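      As an illustration of the pattern (not the actual template), a macro
      keeps the repeated element in one place while the caller writes the
      separating commas explicitly:

          {% macro member(host, port) -%}
          {"host": "{{ host }}", "port": {{ port }}}
          {%- endmacro %}
          [{{ member('ctl1', 9092) }}, {{ member('ctl2', 9092) }}]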
    • Check supported host OS distributions in prechecks · d20c65ed
      Mark Goddard authored
      This should help to ensure that users are running tested and supported
      host OS distributions.
      
      Change-Id: I6ee76463d284ad4f3646af1c7ec2b7e50e2f3b15
      Partially-Implements: blueprint improve-prechecks
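      Such a precheck can be sketched as follows (the supported list is
      illustrative, not the one shipped by the change):

          - name: Checking host OS distribution
            vars:
              supported_distros: ["CentOS", "Debian", "Ubuntu"]
            fail:
              msg: >-
                {{ ansible_distribution }} is not supported. Supported
                distributions are: {{ supported_distros | join(', ') }}
            when: ansible_distribution not in supported_distros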
    • Fix HAProxy monitor VIP precheck · 93a4dcc1
      Mark Goddard authored
      If haproxy is running somewhere in the cluster and listening on the VIP,
      but not running locally, then the following precheck may fail:
      
         TASK [haproxy : Checking free port for HAProxy monitor (vip interface)]
      
         msg: Timeout when waiting for 192.0.2.10:61313 to stop.
      
      This change fixes the issue by skipping the check if HAProxy is running
      on any host.
      
      Change-Id: I831eb2f700ef3fcf65b7e08382c3b4fcc4ce8d8d
      Closes-Bug: #1866617