  1. Oct 16, 2019
    • Merge "CI: Increase job run attempts to 5" · 5bf83cfe
      Zuul authored
    • Merge "Fixes glance image cache deployment." · 7bde217a
      Zuul authored
    • CI: Increase timeout for upgrade jobs by 30 minutes · f69a8a9b
      Radosław Piliszek authored
      Upgrade jobs tend to time out within the 2-hour window when they
      must build their images.
      This increase is already applied in ceph jobs.
      
      Change-Id: Ic1118760d9192cc15e1ebf37fb8adf3440f18a78
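
      Such a bump is a one-line change in the Zuul job configuration. A
      minimal sketch, assuming a 2-hour (7200 s) base timeout; the job name
      below is made up for illustration, not the actual job touched here:

        # Illustrative only; not the actual job definition.
        - job:
            name: kolla-ansible-upgrade-example
            timeout: 9000  # 7200 s (2 h) + 1800 s (30 min)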
    • Implement IPv6 support in the control plane · bc053c09
      Radosław Piliszek authored
      Introduce kolla_address filter.
      Introduce put_address_in_context filter.
      
      Add AF config to vars.
      
      Address contexts:
      - raw (default): <ADDR>
      - memcache: inet6:[<ADDR>]
      - url: [<ADDR>]
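
      A minimal sketch of how these contexts could be applied; the sample
      address and the filter arguments are assumptions for illustration,
      not copied from this change:

        # Hypothetical usage, assuming the API address resolves to fd00::10.
        api_interface_address: "{{ 'api' | kolla_address }}"                                 # fd00::10
        memcache_addr: "{{ api_interface_address | put_address_in_context('memcache') }}"    # inet6:[fd00::10]
        url_addr: "{{ api_interface_address | put_address_in_context('url') }}"              # [fd00::10]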
      
      Other changes:
      
      globals.yml - mention just IP in comment
      
      prechecks/port_checks (api_intf) - kolla_address handles validation
      
      3x interface conditional (swift configs: replication/storage)
      
      2x interface variable definition with hostname
      (haproxy listens; api intf)
      
      1x interface variable definition with hostname with bifrost exclusion
      (baremetal pre-install /etc/hosts; api intf)
      
      neutron's ml2 'overlay_ip_version' set to 6 for IPv6 on tunnel network
      
      basic multinode source CI job for IPv6
      
      prechecks for rabbitmq and qdrouterd now use the proper NSS database
      
      MariaDB Galera Cluster WSREP SST mariabackup workaround
      (socat and IPv6)
      
      Ceph naming workaround in CI
      TODO: probably needs documenting
      
      RabbitMQ IPv6-only proto_dist
      
      Ceph ms switch to IPv6 mode
      
      Remove the neutron-server ml2_type_vxlan/vxlan_group setting
      as it is not used (to avoid any confusion) and, if it ever
      started working, could break setups without proper multicast
      routing (it is also IPv4-only)
      
      haproxy upgrade checks for slaves based on ipv6 addresses
      
      TODO:
      
      ovs-dpdk grabs the ipv4 network address (with prefix length / netmask);
      not supported, and invalid by default because neutron_external has no address.
      No idea whether ovs-dpdk works at all at the moment.
      
      ml2 for xenapi
      Xen is not supported very well.
      This would require working with XenAPI facts.
      
      rp_filter setting
      This would require meddling with ip6tables (there is no sysctl param).
      By default nothing is dropped.
      Unlikely we really need it.
      
      ironic dnsmasq is configured IPv4-only
      dnsmasq needs DHCPv6 options and testing in vivo.
      
      KNOWN ISSUES (beyond us):
      
      One cannot use an IPv6 address to reference the image for docker as we
      currently do, see: https://github.com/moby/moby/issues/39033
      (docker_registry; docker API 400 - invalid reference format)
      workaround: use a hostname/FQDN
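
      In globals.yml terms the workaround amounts to something like the
      following (the registry name and port are made up for illustration):

        # Illustrative only: refer to the registry by name, not by IPv6 literal.
        # docker_registry: "[fd00::5]:4000"   # rejected: invalid reference format
        docker_registry: "registry.example.org:4000"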
      
      RabbitMQ may fail to bind to IPv6 if the hostname also resolves to IPv4.
      This is due to the old RabbitMQ versions available in images.
      IPv4 is preferred by default and may fail in the IPv6-only scenario.
      This should be no problem in real life as IPv6-only is indeed IPv6-only.
      Also, when new RabbitMQ (3.7.16/3.8+) makes it into images, this will
      no longer be relevant as we supply all the necessary config.
      See: https://github.com/rabbitmq/rabbitmq-server/pull/1982
      
      For reliable runs, at least Ansible 2.8 is required (2.8.5 confirmed
      to work well). Older Ansible versions are known to miss IPv6 addresses
      in interface facts. This may affect redeploys, reconfigures and
      upgrades which run after the VIP address is assigned.
      See: https://github.com/ansible/ansible/issues/63227
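
      For context, the kind of interface-fact lookup involved looks roughly
      like this; the task and variable names are illustrative, not the
      project's actual checks:

        # Illustrative only: IPv6 addresses of an interface as seen in Ansible facts.
        - name: Show IPv6 addresses on the API interface
          debug:
            msg: "{{ hostvars[inventory_hostname]['ansible_' ~ api_interface]['ipv6'] | default([]) }}"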
      
      Bifrost Train does not support IPv6 deployments.
      See: https://storyboard.openstack.org/#!/story/2006689
      
      
      
      Change-Id: Ia34e6916ea4f99e9522cd2ddde03a0a4776f7e2c
      Implements: blueprint ipv6-control-plane
      Signed-off-by: Radosław Piliszek <radoslaw.piliszek@gmail.com>
    • CI: Increase job run attempts to 5 · f3f4a93e
      Radosław Piliszek authored
      Attempts affect pre-run failures.
      This means we can increase the stability of jobs by rejecting nodes
      that fail pre without failing the run at the same time (unless we
      are really unlucky and hit broken nodes 5 times in a row).
      
      Change-Id: I17b7f878c742fa8db66f738526855a02ab9f1905
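
      In Zuul this is a single job attribute; Zuul retries a job whose
      pre-run phase fails, up to 'attempts' times (default 3). A minimal
      sketch with a made-up job name:

        # Illustrative only; not the actual base job modified here.
        - job:
            name: kolla-ansible-base-example
            attempts: 5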
  2. Oct 07, 2019
    • CI: Use any_errors_fatal in pre.yml and run.yml · fac16704
      Mark Goddard authored
      This ensures that failure of a single host fails the whole play at that
      task. This can avoid confusing errors such as when the task
      "Assert that the nodepool private IPv4 address is assigned" fails on one
      host, causing subsequent errors on other hosts.
      
      Note that this only affects the Zuul playbooks, not Kolla Ansible's
      playbooks.
      
      Change-Id: I77a6534dd2ddd188f795e17d17a44be249d01f31
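
      A minimal sketch of the play-level setting involved; the hosts pattern
      and the assert condition are assumptions, not copied from the Zuul
      playbooks:

        # Illustrative only: with any_errors_fatal, one host failing the assert
        # aborts the play for every host instead of letting the others continue.
        - hosts: all
          any_errors_fatal: true
          tasks:
            - name: Assert that the nodepool private IPv4 address is assigned
              assert:
                that: nodepool_private_ipv4 is defined  # condition is illustrative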
    • Fix swift-proxy-server memcached configuration · 3488479d
      Mark Goddard authored
      Currently, swift-proxy config uses hosts in the swift-proxy-server group
      to generate the list of memcached servers. However, memcached is
      deployed to hosts in the memcached group.
      
      This change fixes the memcached_servers option for swift-proxy to match
      the other services.
      
      Change-Id: Ib850a1bb2a504ac3e1396846ca3f1d9a30e8fca0
      Closes-Bug: #1774313
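
      A hedged sketch of the shape of the fix: derive the option from the
      hosts in the memcached group rather than the swift-proxy-server group.
      The variable and fact names below are illustrative, not the project's
      actual template:

        # Illustrative only.
        memcached_servers: >-
          {% for host in groups['memcached'] -%}
          {{ hostvars[host]['ansible_host'] }}:{{ memcached_port }}{{ ',' if not loop.last else '' }}
          {%- endfor %}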