  1. Apr 08, 2020
    • Remove support for CentOS 7 · f4e20a1f
      Mark Goddard authored
      CentOS 8 support is now fairly complete - time to drop CentOS 7.
      
      Partially-Implements: blueprint centos-rhel-8
      
      Change-Id: I940b1d3eceb98e16fa366c243672f588b1412d70
  2. Mar 19, 2020
  3. Feb 21, 2020
  4. Feb 20, 2020
  5. Feb 11, 2020
  6. Feb 06, 2020
    • CI: Use auto-detected python interpreter except on CentOS 7 · 5b38fbfc
      Mark Goddard authored
      This switches to python 3 as the remote python interpreter on
      Debian/Ubuntu jobs, with CentOS 7 as the only exception using python 2.
      
      Also switch to auto-detection of the interpreter except on CentOS 7;
      the detected interpreter should be based on the one used by
      ansible-playbook (python 3).
      
      Change-Id: Ie4aff6123dfc7267fe78f4bd736565fb72fe135e
      Partially-Implements: python-3
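
      As a rough illustration of the interpreter settings this implies, a
      minimal inventory group_vars sketch follows; the values are assumptions
      for illustration, not the exact CI configuration.

        # Rely on interpreter auto-detection (python 3 where available).
        ansible_python_interpreter: auto
        # CentOS 7 hosts would instead pin the legacy interpreter:
        # ansible_python_interpreter: /usr/bin/python2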
    • CentOS 8: Add deploy jobs in CI · 287adab0
      Radosław Piliszek authored
      Adds new CI job definitions for CentOS 8:
      
      - kolla-ansible-centos8-source
      - kolla-ansible-centos8-binary
      - kolla-ansible-centos8-source-ceph-ansible
      - kolla-ansible-centos8-source-cinder-lvm
      - kolla-ansible-centos8-source-mariadb
      - kolla-ansible-centos8-source-bifrost
      - kolla-ansible-centos8-source-zun
      - kolla-ansible-centos8-source-swift
      - kolla-ansible-centos8-source-scenario-nfv
      - kolla-ansible-centos8-source-ironic
      - kolla-ansible-centos8-binary-ironic
      - kolla-ansible-centos8-source-masakari
      - kolla-ansible-centos8-source-cells
      
      The following jobs are added to the check pipeline:
      
      - kolla-ansible-centos8-source
      - kolla-ansible-centos8-binary
      - kolla-ansible-centos8-source-cinder-lvm
      - kolla-ansible-centos8-source-mariadb
      - kolla-ansible-centos8-source-zun
      - kolla-ansible-centos8-source-swift
      - kolla-ansible-centos8-source-scenario-nfv
      - kolla-ansible-centos8-source-ironic
      - kolla-ansible-centos8-binary-ironic
      - kolla-ansible-centos8-source-cells
      
      The following jobs are not yet passing so are not added to the check
      pipeline:
      
      - kolla-ansible-centos8-source-ceph-ansible
      - kolla-ansible-centos8-source-bifrost
      - kolla-ansible-centos8-source-masakari
      
      The kolla-ansible-centos8-source job is added to the gate.
      
      Upgrade jobs will be added when CentOS 8 support exists in Train.
      
      Depends-On: https://review.opendev.org/704337
      Depends-On: https://review.opendev.org/704848
      Depends-On: https://review.opendev.org/704965
      
      
      
      Co-Authored-By: Mark Goddard <mark@stackhpc.com>
      
      Change-Id: Ibd806feee71721b122b77d7eff33228ca1cc2853
      Partially-Implements: blueprint centos-rhel-8
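
      For orientation, an abbreviated sketch of the resulting Zuul project
      stanza follows; only a subset of the jobs listed above is shown and the
      exact layout of the zuul.d files is not reproduced here.

        - project:
            check:
              jobs:
                - kolla-ansible-centos8-source
                - kolla-ansible-centos8-binary
                - kolla-ansible-centos8-source-ironic
            gate:
              jobs:
                - kolla-ansible-centos8-source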
  7. Feb 05, 2020
  8. Jan 29, 2020
    • External Ceph: add ceph_*_user variables · fdf3729f
      Michal Nasiadka authored
      To make the configuration easier for the user, and to allow non-standard
      Ceph authentication IDs, introduce ceph_*_user variables.
      
      Change-Id: I24e01c43c826b62b6748d93a498f4b7d8ce9e309
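
      A hypothetical globals.yml excerpt illustrating the pattern; the
      specific variable names below are assumptions that follow the
      ceph_*_user convention and are not an exhaustive list.

        # Override the Ceph client (cephx) user per service when it is
        # non-standard.
        ceph_cinder_user: "cinder"
        ceph_glance_user: "glance"
        ceph_nova_user: "nova"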
  9. Jan 28, 2020
    • CI: Add TLS tests · 6404d0e0
      generalfuzz authored
      Add a TLS scenario in Zuul to generate self-signed certificates and
      to enable TLS in the OpenStack deployment.
      
      Change-Id: If10a23dfa67212e843ef26486c9523074cc920e7
      Partially-Implements: blueprint custom-cacerts
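
      As a rough sketch, enabling TLS on the external VIP in globals.yml
      looks something like the following; the variable names reflect
      kolla-ansible conventions of the time and the certificate path is a
      placeholder.

        kolla_enable_tls_external: "yes"
        # Combined certificate/key bundle served by HAProxy (placeholder path):
        kolla_external_fqdn_cert: "/etc/kolla/certificates/haproxy.pem"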
  10. Jan 24, 2020
  11. Jan 10, 2020
    • CentOS 8: Support variable image tag suffix · 9755c924
      Mark Goddard authored
      For the CentOS 7 to 8 transition, we will have a period where both
      CentOS 7 and 8 images are available. We differentiate these images via a
      tag - the CentOS 8 images will have a tag of train-centos8 (or
      master-centos8 temporarily).
      
      To achieve this, and maintain backwards compatibility for the
      openstack_release variable, we introduce a new 'openstack_tag' variable.
      This variable is based on openstack_release, but has a suffix of
      'openstack_tag_suffix', which is empty except on CentOS 8 where it has a
      value of '-centos8'.
      
      Change-Id: I12ce4661afb3c255136cdc1aabe7cbd25560d625
      Partially-Implements: blueprint centos-rhel-8
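
      A minimal sketch of the derivation described above; the real defaults
      and the CentOS 8 condition live in kolla-ansible's group_vars, and the
      condition shown here is a simplification.

        openstack_release: "master"
        # Empty everywhere except CentOS 8 (simplified condition):
        openstack_tag_suffix: "{{ '-centos8' if ansible_distribution_major_version == '8' else '' }}"
        openstack_tag: "{{ openstack_release }}{{ openstack_tag_suffix }}"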
  12. Dec 10, 2019
  13. Dec 09, 2019
    • CI: Use python 3 for local kolla-ansible execution · a5408f42
      Mark Goddard authored
      This change switches the CI jobs to use python 3 for local execution of
      the kolla-ansible commands.
      
      For upgrades, we use python 2 for the previous (Train) deploy, then
      reinstall using python 3 for the (Ussuri) upgrade.
      
      NOTE: This is separate from the python interpreter used on remote hosts,
      which is configured via ansible_python_interpreter.
      
      Partially Implements: blueprint python-3
      Related: blueprint drop-py2-support
      
      Change-Id: I5bdc165f68b7bde1f9ef30fe8216f2a44e6d4706
  14. Nov 26, 2019
    • CI: Refactor a lot · a2fc6841
      Radosław Piliszek authored
      Separate upgrade logic into an is_upgrade job var and rename
      scenarios to match.
      
      Rename "ACTION" to "SCENARIO" (as it is a scenario).
      
      Separate testing of the dashboard (aka Horizon) and increase
      its timeout to 5 minutes (CentOS 7 is slow as always).
      
      Separate initialization of core OpenStack.
      
      Use the gate setup script from ./tests/.
      
      Remove useless tox setupenv.
      
      Do not deploy Heat when it is not really necessary.
      
      Change-Id: I4fca319ccc3de7188f8b7b44c9c71321e3899467
  15. Nov 14, 2019
  16. Nov 07, 2019
  17. Oct 25, 2019
  18. Oct 20, 2019
  19. Oct 16, 2019
    • Support multiple nova cells · 78a828ef
      Doug Szumski authored
      
      This patch adds initial support for deploying multiple Nova cells.
      
      Splitting a nova-cell role out from the Nova role allows a more granular
      approach to deploying and configuring Nova services.
      
      A new enable_cells flag has been added that enables support for
      multiple cells via the introduction of a super conductor in addition to
      cell-specific conductors. When this flag is not set (the default), nova
      is configured in the same manner as before, with a single conductor
      (see the sketch after this entry).
      
      The nova role now deploys the global services:
      
      * nova-api
      * nova-scheduler
      * nova-super-conductor (if enable_cells is true)
      
      The nova-cell role handles services specific to a cell:
      
      * nova-compute
      * nova-compute-ironic
      * nova-conductor
      * nova-libvirt
      * nova-novncproxy
      * nova-serialproxy
      * nova-spicehtml5proxy
      * nova-ssh
      
      This patch does not support using a single cell controller for managing
      more than one cell. Support for sharing a cell controller will be added
      in a future patch.
      
      This patch should be backwards compatible and is tested by existing CI
      jobs. A new CI job has been added that tests a multi-cell environment.
      
      ceph-mon has been removed from the play hosts list as it is not
      necessary - delegate_to does not require the host to be in the play.
      
      Documentation will be added in a separate patch.
      
      Partially Implements: blueprint support-nova-cells
      Co-Authored-By: Mark Goddard <mark@stackhpc.com>
      Change-Id: I810aad7d49db3f5a7fd9a2f0f746fd912fe03917
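
      An illustrative globals.yml toggle for the flag introduced here; the
      default ("no") keeps the previous single-conductor layout.

        # Deploy a super conductor plus per-cell conductors.
        enable_cells: "yes"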
    • Implement IPv6 support in the control plane · bc053c09
      Radosław Piliszek authored
      Introduce kolla_address filter.
      Introduce put_address_in_context filter.
      
      Add AF config to vars.
      
      Address contexts:
      - raw (default): <ADDR>
      - memcache: inet6:[<ADDR>]
      - url: [<ADDR>]
      
      Other changes:
      
      globals.yml - mention just IP in comment
      
      prechecks/port_checks (api_intf) - kolla_address handles validation
      
      3x interface conditional (swift configs: replication/storage)
      
      2x interface variable definition with hostname
      (haproxy listens; api intf)
      
      1x interface variable definition with hostname with bifrost exclusion
      (baremetal pre-install /etc/hosts; api intf)
      
      neutron's ml2 'overlay_ip_version' set to 6 for IPv6 on tunnel network
      
      basic multinode source CI job for IPv6
      
      prechecks for rabbitmq and qdrouterd use proper NSS database now
      
      MariaDB Galera Cluster WSREP SST mariabackup workaround
      (socat and IPv6)
      
      Ceph naming workaround in CI
      TODO: probably needs documenting
      
      RabbitMQ IPv6-only proto_dist
      
      Ceph ms switch to IPv6 mode
      
      Remove neutron-server ml2_type_vxlan/vxlan_group setting
      as it is not used (let's avoid any confusion)
      and could break setups without proper multicast routing
      if it started working (also IPv4-only)
      
      haproxy upgrade checks for slaves based on ipv6 addresses
      
      TODO:
      
      ovs-dpdk grabs an ipv4 network address (with prefix length / submask);
      this is not supported and invalid by default because neutron_external
      has no address. No idea whether ovs-dpdk works at all at the moment.
      
      ml2 for xenapi
      Xen is not supported very well.
      This would require working with XenAPI facts.
      
      rp_filter setting
      This would require meddling with ip6tables (there is no sysctl param).
      By default nothing is dropped.
      Unlikely we really need it.
      
      ironic dnsmasq is configured IPv4-only
      dnsmasq needs DHCPv6 options and testing in vivo.
      
      KNOWN ISSUES (beyond us):
      
      One cannot use an IPv6 address to reference the image for docker as we
      currently do, see: https://github.com/moby/moby/issues/39033
      (docker_registry; docker API 400 - invalid reference format)
      Workaround: use a hostname/FQDN.
      
      RabbitMQ may fail to bind to IPv6 if hostname resolves also to IPv4.
      This is due to old RabbitMQ versions available in images.
      IPv4 is preferred by default and may fail in the IPv6-only scenario.
      This should be no problem in real life as IPv6-only is indeed IPv6-only.
      Also, when new RabbitMQ (3.7.16/3.8+) makes it into images, this will
      no longer be relevant as we supply all the necessary config.
      See: https://github.com/rabbitmq/rabbitmq-server/pull/1982
      
      For reliable runs, at least Ansible 2.8 is required (2.8.5 confirmed
      to work well). Older Ansible versions are known to miss IPv6 addresses
      in interface facts. This may affect redeploys, reconfigures and
      upgrades which run after VIP address is assigned.
      See: https://github.com/ansible/ansible/issues/63227
      
      Bifrost Train does not support IPv6 deployments.
      See: https://storyboard.openstack.org/#!/story/2006689
      
      
      
      Change-Id: Ia34e6916ea4f99e9522cd2ddde03a0a4776f7e2c
      Implements: blueprint ipv6-control-plane
      Signed-off-by: Radosław Piliszek <radoslaw.piliszek@gmail.com>
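
      A hypothetical usage sketch of the address contexts listed above; the
      variable names and the exact filter invocation are assumptions based on
      this commit message rather than the final API.

        # 'api_address' is a placeholder variable holding an IPv6 address.
        memcached_entry: "{{ api_address | put_address_in_context('memcache') }}"  # inet6:[<ADDR>]
        url_entry: "{{ api_address | put_address_in_context('url') }}"             # [<ADDR>]
        raw_entry: "{{ api_address }}"                                             # <ADDR> (default context)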
  20. Oct 01, 2019
  21. Sep 23, 2019
    • CI: Reinstate use of Docker registry mirror · 5c9a7983
      Mark Goddard authored
      After modernising the docker configuration
      (I1215e04ec15b01c0b43bac8c0e81293f6724f278), we lost our
      registry-mirrors configuration in CI, which lets us use a mirror of
      Docker Hub.
      
      This change uses the new docker_custom_config variable to configure the
      registry mirror.
      
      Change-Id: I1430413c12e9d0b59e4f216ff66372de0f3a4f21
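
      For reference, a sketch of the kind of globals.yml override this
      enables; the mirror URL is a placeholder.

        # Extra options merged into the Docker daemon configuration.
        docker_custom_config:
          registry-mirrors:
            - "http://dockerhub-mirror.example.org:4000"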
  22. Sep 19, 2019
  23. Sep 18, 2019
  24. Sep 14, 2019
  25. Sep 10, 2019
    • Configure Zun for Placement (Train+) · 0f5e0658
      Hongbin Lu authored
      After the integration with placement [1], we need to configure how
      zun-compute is going to work with nova-compute.
      
      * If zun-compute and nova-compute run on the same compute node,
        we need to set 'host_shared_with_nova' to true so that Zun
        will use the resource provider (compute node) created by nova.
        In this mode, containers and VMs can claim allocations against
        the same resource provider.
      * If zun-compute runs on a node without nova-compute, no extra
        configuration is needed. By default, each zun-compute will create
        a resource provider in placement to represent the compute node
        it manages.
      
      [1] https://blueprints.launchpad.net/zun/+spec/use-placement-resource-management
      
      Change-Id: I2d85911c4504e541d2994ce3d48e2fbb1090b813
  26. Sep 05, 2019
  27. Aug 16, 2019
    • CI: Zun jobs · d4de1d75
      Radosław Piliszek authored
      - Test Zun on CentOS too
      - Make etcd changes also trigger Zun jobs (like kuryr and zun changes do)
      - Test multinode Zun deployments instead of AIO
        (more likely to break)
      - In the Zun scenario, stop configuring docker for legacy swarm mode
        (Zun does not use swarm)
      - Separate the test-zun.sh testing script
      - Show the appcontainer to see which node it has been started on
      
      Change-Id: I289b1009fe00aedb9b78cbd83298b14da5fd9670
      Depends-On: https://review.opendev.org/676736
      
      
      Signed-off-by: Radosław Piliszek <radoslaw.piliszek@gmail.com>
  28. Aug 14, 2019
  29. Jul 26, 2019
  30. Jul 18, 2019
    • Fix handling of docker restart policy · 6a737b19
      Radosław Piliszek authored
      Docker has no restart policy named 'never'. It has 'no'.
      This has bitten us already (see [1]) and might bite us again whenever
      we want to change the restart policy to 'no'.
      
      This patch makes our docker integration honor all valid restart policies
      and only valid restart policies.
      All relevant docker restart policy usages are patched as well.
      
      I added some FIXMEs in places relevant to the kolla-ansible docker
      integration. They are not fixed here, to avoid altering behavior.
      
      [1] https://review.opendev.org/667363
      
      
      
      Change-Id: I1c9764fb9bbda08a71186091aced67433ad4e3d6
      Signed-off-by: Radosław Piliszek <radoslaw.piliszek@gmail.com>
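
      A brief sketch of a kolla_docker task using a valid policy; note that
      'no' must be quoted in YAML or it is parsed as a boolean. The container
      parameters are placeholders.

        - name: Start an example container (sketch)
          kolla_docker:
            action: "start_container"
            name: "example"
            image: "example.registry/example:latest"
            restart_policy: "no"   # valid: no, on-failure, always, unless-stopped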
  31. Jul 16, 2019
  32. Jul 01, 2019
  33. Jun 21, 2019
  34. Jun 11, 2019
    • Add CI job for ironic · 845040ad
      Mark Goddard authored
      Adds four new CI jobs for testing centos/ubuntu binary/source deploys
      with ironic enabled. These are run only when there are changes to the
      ironic role.
      
      Performs some simple testing by creating a node using the fake-hardware
      hardware type and creating a server.
      
      Change-Id: Ie669e57ce2af53257b4ca05f45193cb73f48827a
      Depends-On: https://review.opendev.org/664011
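
      A rough sketch of how such a conditional job can be expressed in Zuul;
      the job and parent names here are illustrative rather than the exact
      ones added by this change.

        - job:
            name: kolla-ansible-centos-source-ironic
            parent: kolla-ansible-centos-source
            files:
              - ^ansible/roles/ironic/.*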
  35. Jun 03, 2019
    • Test Ceph upgrade in CI · 78ee0287
      Mark Goddard authored
      Add CI jobs for testing an upgrade of a multinode system with Ceph
      enabled. As with the existing upgrade job, we upgrade from the previous
      release to the current release.
      
      Change-Id: I931772ca4c63757769467a57c80dc0726a11167a
      Depends-On: https://review.opendev.org/658163