  3. Jun 11, 2021
    • Remove support for panko · ccf8cc5d
      Matthias Runge authored
      The project is deprecated and in the process of being removed
      from OpenStack upstream.
      
      Change-Id: I9d5ebed293a5fb25f4cd7daa473df152440e8b50
  4. Jun 10, 2021
    • Disable docker's ip-forward when iptables disabled · 0fa4ee56
      Radosław Piliszek authored
      With the new default since Wallaby, starting Docker enables
      IP forwarding without filtering it at all.
      This may pose a security risk and should be mitigated.
      
      Closes-Bug: #1931615
      Change-Id: I5129136c066489fdfaa4d93741c22e5010b7e89d
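      A minimal sketch of such a mitigation in Docker's daemon.json — `ip-forward`
      and `iptables` are real dockerd options, but the exact settings kolla-ansible
      applies are an assumption here:
      
      ```json
      {
        "ip-forward": false,
        "iptables": false
      }
      ```
      
      With `ip-forward` set to false, dockerd no longer enables net.ipv4.ip_forward
      on startup, so disabling its iptables management does not leave forwarding
      open and unfiltered.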
  5. Jun 08, 2021
    • Fix RabbitMQ restart ordering · 0cd5b027
      Mark Goddard authored
      The host list order seen during Ansible handlers may differ from the
      usual play host list order, due to race conditions in notifying
      handlers. This means that restart_services.yml for RabbitMQ may be
      included in a different order than the rabbitmq group, resulting in a
      node other than the 'first' being restarted first. This can cause some
      nodes to fail to join the cluster. The include_tasks loop was
      introduced in [1].
      
      This change fixes the issue by splitting the handler into two tasks, and
      restarting the first node before all others.
      
      [1] https://review.opendev.org/c/openstack/kolla-ansible/+/763137
      
      Change-Id: I1823301d5889589bfd48326ed7de03c6061ea5ba
      Closes-Bug: #1930293
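      The split described above could be sketched as two handler tasks along
      these lines (the task names are hypothetical, not the exact kolla-ansible
      code):
      
      ```yaml
      # Restart the 'first' rabbitmq node before any other, regardless of
      # the order in which handlers were notified.
      - name: Restart first rabbitmq container
        include_tasks: restart_services.yml
        when: inventory_hostname == groups['rabbitmq'] | first
      
      - name: Restart remaining rabbitmq containers
        include_tasks: restart_services.yml
        when: inventory_hostname != groups['rabbitmq'] | first
      ```
      
      Anchoring the first restart on the group's first member makes the
      ordering deterministic even when handler notification order races.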
  6. Jun 07, 2021
    • Add forgotten 'Restart container' handler for swift · 5c19f9a5
      Maksim Malchuk authored
      Since I0474324b60a5f792ef5210ab336639edf7a8cd9e the swift role uses the
      new service-cert-copy role introduced in
      I6351147ddaff8b2ae629179a9bc3bae2ebac9519, but the swift role itself
      doesn't contain the handler used by service-cert-copy. Right now,
      restarting the swift container isn't necessary, but the handler should
      exist. We also fix the name of the service used.
      
      Closes-Bug: #1931097
      Change-Id: I2d0615ce6914e1f875a2647c8a95b86dd17eeb22
      Signed-off-by: Maksim Malchuk <maksim.malchuk@gmail.com>
    • Reduce RabbitMQ busy waiting, lowering CPU load · 70f6f8e4
      John Garbutt authored
      On machines with many cores, we were seeing excessive CPU load on systems
      that were not very busy. With the following Erlang VM argument we saw
      RabbitMQ CPU usage drop from about 150% to around 20%, on a system with
      40 hyperthreads.
      
          +S 2:2
      
      By default RabbitMQ starts N schedulers where N is the number of CPU
      cores, including hyper-threaded cores. This is fine when you assume all
      your CPUs are dedicated to RabbitMQ. It's not a good idea in a typical
      Kolla Ansible setup. Here we go for two scheduler threads.
      More details can be found here:
      https://www.rabbitmq.com/runtime.html#scheduling
      and here:
      https://erlang.org/doc/man/erl.html#emulator-flags
      
          +sbwt none
      
      This stops busy waiting of the scheduler; for more details see:
      https://www.rabbitmq.com/runtime.html#busy-waiting
      Newer versions of RabbitMQ may need additional flags:
      "+sbwt none +sbwtdcpu none +sbwtdio none"
      But this patch should be backportable to older versions of RabbitMQ
      used in Train and Stein.
      
      Note that information on this tuning was found by looking at data from:
      rabbitmq-diagnostics runtime_thread_stats
      More details on that can be found here:
      https://www.rabbitmq.com/runtime.html#thread-stats
      
      Related-Bug: #1846467
      
      Change-Id: Iced014acee7e590c10848e73feca166f48b622dc
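      Combined, the flags above would typically be passed to the Erlang VM via
      RabbitMQ's standard environment variable, e.g. in rabbitmq-env.conf or the
      container environment (a sketch, not the exact kolla-ansible template):
      
      ```ini
      # +S 2:2     -> two scheduler threads (instead of one per CPU core)
      # +sbwt none -> disable scheduler busy waiting
      RABBITMQ_SERVER_ADDITIONAL_ERL_ARGS="+S 2:2 +sbwt none"
      ```
      
      `rabbitmq-diagnostics runtime_thread_stats` can then be used to confirm
      that scheduler time spent in "other" (busy-wait) states has dropped.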
  7. Jun 02, 2021
    • chrony: allow to remove the container · 84ac7b30
      Mark Goddard authored
      The chrony container is deprecated in Wallaby, and disabled by default.
      This change allows the container to be removed if chrony is disabled.
      
      Change-Id: I1c4436072c2d47a95625e64b731edb473384b395
  13. May 14, 2021
    • baremetal: Don't start Docker after install on Debian/Ubuntu · bc961791
      Michał Nasiadka authored
      docker-ce on Debian/Ubuntu gets started just after installation, before
      the baremetal role configures daemon.json - which results in iptables
      rules being implemented, but not removed on Docker engine restart.
      
      Closes-Bug: #1923203
      
      Change-Id: Ib1faa092e0b8f0668d1752490a34d0c2165d58d2
  16. May 10, 2021
    • Use @type instead of type · fe664774
      John Garbutt authored
      This is a follow up on the change with the following ID:
      
      I337f42e174393f68b43e876ef075a74c887a5314
      
      TrivialFix
      
      Change-Id: Ibb67811d7b086ef9ef4c695ae589171af0c4d657
    • cleanup no longer needed task for cinder · f94c7bea
      wu.chunyang authored
      We don't need this task anymore.
      
      Change-Id: I1ba60fa51ecc86c74d05898b897d7b84c70707ef
    • Do not write octavia_amp_ssh_key if auto_config disabled · 41fe771b
      Michal Arbet authored
      This task writes the private key from passwords to
      /etc/kolla/octavia-worker/{{ octavia_amp_ssh_key_name }} even
      if the user has disabled Octavia auto configure.
      
      This patch adds a conditional to this task, skipping
      it if octavia_auto_configure: "no".
      
      Closes-Bug: #1927727
      
      Change-Id: Ib993b387d681921d804f654bea780a1481b2b0d0
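      The conditional described above can be sketched as follows — the task
      name and the `octavia_amp_ssh_key.private_key` lookup are assumptions
      for illustration; the variable names otherwise follow the commit message:
      
      ```yaml
      # Skip writing the amphora SSH key when auto configuration is disabled.
      - name: Copy Octavia amphora SSH private key
        copy:
          content: "{{ octavia_amp_ssh_key.private_key }}"
          dest: "/etc/kolla/octavia-worker/{{ octavia_amp_ssh_key_name }}"
        when: octavia_auto_configure | bool
      ```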
  27. Apr 07, 2021
    • Refactor mariadb to support shards · 09b3c6ca
      Michal Arbet authored
      Kolla-ansible currently installs a mariadb
      cluster on hosts defined in group['mariadb']
      and renders the haproxy configuration for these hosts.
      
      This is not enough if a user wants to have several
      service databases in several mariadb clusters (shards).
      
      Spreading service databases over multiple clusters (shards)
      is useful especially for databases with high load
      (neutron, nova).
      
      How it works:
      
      It works exactly the same as now, but the group reference 'mariadb'
      is now used as the group where all mariadb clusters (shards)
      are located, and mariadb clusters are installed into
      dynamic groups created by group_by and the host variable
      'mariadb_shard_id'.
      
      It also adds a special user 'shard_X' which will be used
      for creating users and databases, but only if haproxy
      is not used as the load-balancing solution.
      
      This patch will not affect users who have all databases
      on the same db cluster on hosts in group 'mariadb'; the host
      variable 'mariadb_shard_id' is set to 0 if not defined.
      
      Mariadb's task in loadbalancer.yml (haproxy) configures
      the mariadb default shard hosts as haproxy backends. If the mariadb
      role is used to install several clusters (shards), only
      the default one is load-balanced via haproxy.
      
      Mariadb's backup works only for the default shard (cluster)
      when using haproxy as the mariadb load balancer; if proxysql
      is used, all shards are backed up.
      
      After this patch is merged, there will be a way for proxysql
      patches which will implement L7 SQL balancing based on
      users and schemas.
      
      Example of inventory:
      
      [mariadb]
      server1
      server2
      server3 mariadb_shard_id=1
      server4 mariadb_shard_id=1
      server5 mariadb_shard_id=2
      server6 mariadb_shard_id=3
      
      Extra:
      wait_for_loadbalancer is removed instead of modified, as its role
      is already served by the check. The relevant refactor is applied as
      well.
      
      Change-Id: I933067f22ecabc03247ea42baf04f19100dffd08
      Co-Authored-By: Radosław Piliszek <radoslaw.piliszek@gmail.com>
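      The dynamic shard groups described above can be sketched with Ansible's
      group_by module — the task name and group key prefix are hypothetical,
      while 'mariadb_shard_id' and its default of 0 follow the commit message:
      
      ```yaml
      # Place each host into a dynamic group per shard, e.g. the inventory
      # above yields groups mariadb_shard_0 .. mariadb_shard_3.
      - name: Group mariadb hosts by shard
        group_by:
          key: "mariadb_shard_{{ mariadb_shard_id | default('0') }}"
      ```
      
      Subsequent cluster tasks can then target each shard group independently,
      while the default group (shard 0) remains the one exposed via haproxy.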
    • masakari: fix minor issues with instance monitor · 0b0dd358
      Mark Goddard authored
      * Don't generate masakari.conf for instance monitor
      * Don't generate masakari-monitors.conf for API or engine
      * Use a consistent name for dimensions -
        masakari_instancemonitor_dimensions
      * Fix source code paths in dev mode
      
      Change-Id: I551f93c9bf1ad6712b53c316074ae1df84e4352b
  28. Apr 06, 2021
    • Drop the NTP service precheck · 04315751
      Radosław Piliszek authored
      We can't check this with timedatectl as it is not aware
      of any "non-native" NTP daemon.
      
      This could be a warning-level message but we don't have
      such messages from the prechecks.
      
      Closes-Bug: #1922721
      Change-Id: I6db37576118cf5cff4ba7a63e179f0ab37467d22