  1. Feb 14, 2020
  2. Feb 12, 2020
    • Upgrade virtualenv in pre · 241e3474
      Michal Nasiadka authored
      Since virtualenv 20.0, six version >1.12.0 is required (amongst
      other changes). This change upgrades virtualenv and six in pre, to
      be reverted once the infra CentOS images are sorted out.
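
      A minimal sketch of how such an upgrade could look as an Ansible
      pre task, assuming the pip module and pip3 are used (the task is
      illustrative, not the actual change):

        - name: Upgrade virtualenv and six before creating the venv (sketch)
          become: true
          pip:
            name:
              - virtualenv
              - six
            state: latest
            executable: pip3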
      
      Change-Id: I0ca0347bb6ebc5d8f5d22f708211e01221165262
  3. Feb 11, 2020
  4. Feb 06, 2020
    • CentOS 8: Add deploy jobs in CI · 287adab0
      Radosław Piliszek authored
      Adds new CI job definitions for CentOS 8:
      
      - kolla-ansible-centos8-source
      - kolla-ansible-centos8-binary
      - kolla-ansible-centos8-source-ceph-ansible
      - kolla-ansible-centos8-source-cinder-lvm
      - kolla-ansible-centos8-source-mariadb
      - kolla-ansible-centos8-source-bifrost
      - kolla-ansible-centos8-source-zun
      - kolla-ansible-centos8-source-swift
      - kolla-ansible-centos8-source-scenario-nfv
      - kolla-ansible-centos8-source-ironic
      - kolla-ansible-centos8-binary-ironic
      - kolla-ansible-centos8-source-masakari
      - kolla-ansible-centos8-source-cells
      
      The following jobs are added to the check pipeline:
      
      - kolla-ansible-centos8-source
      - kolla-ansible-centos8-binary
      - kolla-ansible-centos8-source-cinder-lvm
      - kolla-ansible-centos8-source-mariadb
      - kolla-ansible-centos8-source-zun
      - kolla-ansible-centos8-source-swift
      - kolla-ansible-centos8-source-scenario-nfv
      - kolla-ansible-centos8-source-ironic
      - kolla-ansible-centos8-binary-ironic
      - kolla-ansible-centos8-source-cells
      
      The following jobs are not yet passing so are not added to the check
      pipeline:
      
      - kolla-ansible-centos8-source-ceph-ansible
      - kolla-ansible-centos8-source-bifrost
      - kolla-ansible-centos8-source-masakari
      
      The kolla-ansible-centos8-source job is added to the gate.
      
      Upgrade jobs will be added when CentOS 8 support exists in Train.
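
      For illustration only, a Zuul layout along these lines could look
      as follows; the parent job, nodeset and variables are assumptions,
      not the real definitions:

        - job:
            name: kolla-ansible-centos8-source
            parent: kolla-ansible-base        # hypothetical parent job
            nodeset: kolla-ansible-centos8    # hypothetical nodeset
            vars:
              base_distro: centos
              install_type: source

        - project:
            check:
              jobs:
                - kolla-ansible-centos8-source
                - kolla-ansible-centos8-binary
            gate:
              jobs:
                - kolla-ansible-centos8-source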
      
      Depends-On: https://review.opendev.org/704337
      Depends-On: https://review.opendev.org/704848
      Depends-On: https://review.opendev.org/704965
      
      
      
      Co-Authored-By: Mark Goddard <mark@stackhpc.com>
      
      Change-Id: Ibd806feee71721b122b77d7eff33228ca1cc2853
      Partially-Implements: blueprint centos-rhel-8
  5. Feb 05, 2020
  6. Jan 29, 2020
    • External Ceph: add ceph_*_user variables · fdf3729f
      Michal Nasiadka authored
      To make configuration easier for the user, and to allow non-standard
      Ceph authentication IDs, introduce ceph_*_user variables.
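
      For example, a deployment using non-default Ceph client names could
      set values such as the following in globals.yml (the cinder, glance
      and nova variable names follow the ceph_*_user pattern and are shown
      only as an illustration):

        ceph_cinder_user: "cinder-prod"
        ceph_glance_user: "glance-prod"
        ceph_nova_user: "nova-prod"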
      
      Change-Id: I24e01c43c826b62b6748d93a498f4b7d8ce9e309
  7. Jan 28, 2020
    • CI: Add TLS tests · 6404d0e0
      generalfuzz authored
      Add a TLS scenario in Zuul to generate self-signed certificates and
      to enable TLS in the OpenStack deployment.
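
      In a deployment, this scenario corresponds roughly to settings such
      as the following in globals.yml (a sketch; the certificate path is
      illustrative):

        kolla_enable_tls_external: "yes"
        kolla_external_fqdn_cert: "{{ node_config }}/certificates/haproxy.pem"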
      
      Change-Id: If10a23dfa67212e843ef26486c9523074cc920e7
      Partially-Implements: blueprint custom-cacerts
  8. Jan 24, 2020
  9. Jan 10, 2020
    • CentOS 8: Support variable image tag suffix · 9755c924
      Mark Goddard authored
      For the CentOS 7 to 8 transition, we will have a period where both
      CentOS 7 and 8 images are available. We differentiate these images via a
      tag - the CentOS 8 images will have a tag of train-centos8 (or
      master-centos8 temporarily).
      
      To achieve this, and maintain backwards compatibility for the
      openstack_release variable, we introduce a new 'openstack_tag' variable.
      This variable is based on openstack_release, but has a suffix of
      'openstack_tag_suffix', which is empty except on CentOS 8 where it has a
      value of '-centos8'.
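
      A sketch of the relationship described above, with illustrative
      values and an assumed form of the suffix conditional:

        openstack_release: "master"
        openstack_tag_suffix: "{{ '-centos8' if ansible_distribution_major_version == '8' else '' }}"
        openstack_tag: "{{ openstack_release }}{{ openstack_tag_suffix }}"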
      
      Change-Id: I12ce4661afb3c255136cdc1aabe7cbd25560d625
      Partially-Implements: blueprint centos-rhel-8
  10. Jan 09, 2020
  11. Jan 08, 2020
    • Configure Cinder to use lioadm on CentOS/RHEL 8 · 350bb171
      Mark Goddard authored
      In CentOS/RHEL 8 there is no scsi-target-utils package, nor is it
      available in EPEL. It is removed from kolla in [1]. In RHEL 7 and beyond
      the LIO kernel subsystem can be used instead of the tgtd daemon.
      
      This change removes support for the SCSI target daemon on CentOS/RHEL 8.
      The 'tgtd' image is no longer available for CentOS/RHEL 8.
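
      A sketch of how the target helper selection could be expressed; the
      variable name cinder_target_helper and the exact condition are
      assumptions:

        # lioadm on CentOS/RHEL 8, tgtadm elsewhere (sketch)
        cinder_target_helper: "{{ 'lioadm' if ansible_distribution_major_version | int >= 8 else 'tgtadm' }}"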
      
      [1] https://review.openstack.org/#/c/613815/5
      
      Change-Id: I718fc16cde2dd177b2a1c2f79b932426034897fe
      Related: blueprint centos-rhel-8
  12. Dec 19, 2019
  13. Dec 11, 2019
    • Drop python 2 support from action plugins · 3f10f708
      Mark Goddard authored
      These are executed on the local host where we run ansible-playbook,
      and we have agreed to drop Python 2 support there.
      
      Partially Implements: blueprint drop-py2-support
      Change-Id: Id2190c3a22a56f4f048afbf0f7200daa8f41a292
  14. Dec 10, 2019
  15. Dec 09, 2019
    • CI: Use python 3 for local kolla-ansible execution · a5408f42
      Mark Goddard authored
      This change switches the CI jobs to use python 3 for local execution of
      the kolla-ansible commands.
      
      For upgrades, we use python 2 for the previous (Train) deploy, then
      reinstall using python 3 for the (Ussuri) upgrade.
      
      NOTE: This is separate from the python interpreter used on remote hosts,
      which is configured via ansible_python_interpreter.
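
      A sketch of installing kolla-ansible into a Python 3 virtualenv for
      local execution; the variable and paths are illustrative, not the
      actual CI tasks:

        - name: Install kolla-ansible into a Python 3 virtualenv (sketch)
          pip:
            name: "{{ kolla_ansible_src_dir }}"   # hypothetical variable
            virtualenv: /tmp/kolla-venv
            virtualenv_command: python3 -m venv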
      
      Partially Implements: blueprint python-3
      Related: blueprint drop-py2-support
      
      Change-Id: I5bdc165f68b7bde1f9ef30fe8216f2a44e6d4706
    • CI: Move ansible installation & configuration to Ansible · c320077f
      Mark Goddard authored
      Continue to reduce the scope of setup_gate.sh. This allows us to
      select python 2 or 3 more easily.
      
      Change-Id: If2eeeacbbbdf58afb765b4a39772b5a1af7b952b
      Partially Implements: blueprint python-3
  16. Dec 08, 2019
  17. Dec 01, 2019
  18. Nov 28, 2019
    • Support configuration of Docker client timeout · 01050dc0
      Mark Goddard authored
      Adds support for configuration of the Docker client timeout via
      'docker_client_timeout'.
      
      This change also increases the default timeout to 120 seconds, as we
      sometimes see timeouts in CI and heavily loaded or underpowered
      environments. Increasing 'docker_client_timeout' further may be helpful
      in cases where Docker reports 'Read timed out'.
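
      For example, to raise the timeout further in such an environment,
      set the following in globals.yml (180 is just an illustrative
      value):

        docker_client_timeout: 180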
      
      Change-Id: I73745771078cb2c0ebae2b1d87ba2c4c12958d82
      Closes-Bug: #1809844
  19. Nov 26, 2019
    • CI: Refactor a lot · a2fc6841
      Radosław Piliszek authored
      Separate upgrade logic into the is_upgrade job var and rename
      scenarios to match (sketched below).

      Rename "ACTION" to "SCENARIO" (as it is a scenario).

      Separate testing of the dashboard (aka Horizon) and increase
      its timeout to 5 minutes (CentOS 7 is slow as always).

      Separate initialization of core OpenStack.

      Use the gate setup script from ./tests/.

      Remove the useless tox setupenv.

      Do not deploy Heat when not really necessary.
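
      For illustration, the renamed job variables might be wired into a
      Zuul job roughly like this (the job name, parent and values are
      hypothetical):

        - job:
            name: kolla-ansible-centos-source-upgrade   # hypothetical
            parent: kolla-ansible-base                  # hypothetical
            vars:
              is_upgrade: yes
              scenario: core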
      
      Change-Id: I4fca319ccc3de7188f8b7b44c9c71321e3899467
  20. Nov 21, 2019
    • CI: Wait for Zun to delete the test container · a3c8a848
      Radosław Piliszek authored
      We fail randomly in check-failure.sh, which checks for
      containers being down.
      Since we share Docker with Zun, the script sees the Zun test
      container and may fail when it is stopped but not yet removed.
      
      Change-Id: If8b001f7507663e49e8e535f1889592e5f428ab5
      Closes-bug: #1853452
  21. Nov 18, 2019
  22. Nov 15, 2019
  23. Nov 14, 2019
    • Attempt to pull image before stopping and removing container · 64d07c0b
      Mark Goddard authored
      Steps to reproduce:

      * Deploy services using kolla-ansible deploy
      * Reconfigure the image for one or more services to use an invalid
        config
      * Deploy/reconfigure services using kolla-ansible reconfigure

      The invalid config could be a wrong docker registry, wrong image
      name, wrong tag, etc.

      Expected result: the restart handler for the service fails, and the
      old container is left running.

      Actual result: the restart handler for the service fails, and the
      old container is stopped and removed. This leaves the service in a
      broken state.
      
      This change fixes the issue by pulling the image if necessary prior to
      stopping and removing the container.
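
      A sketch of the resulting ordering in the handler, using the
      kolla_docker module; the parameter values are illustrative:

        - name: Pull the image first so a bad reference fails early (sketch)
          kolla_docker:
            action: pull_image
            common_options: "{{ docker_common_options }}"
            image: "{{ service.image }}"

        - name: Only then stop, remove and recreate the container
          kolla_docker:
            action: recreate_or_restart_container
            common_options: "{{ docker_common_options }}"
            name: "{{ service.container_name }}"
            image: "{{ service.image }}"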
      
      Change-Id: I85b2a1b224d4c4d85c32c4922a2cd2c41171a1dc
      Closes-Bug: #1852572
    • CI: Remove Stein upgrade support from CI · 6f876254
      Mark Goddard authored
      Resolves a number of TODOs in the CI configuration that provide support
      for upgrading from the Stein release.
      
      Change-Id: I9bac5c230b82ac7c097fe6ca2556e428abda31a1
      Depends-On: https://review.opendev.org/694254
  24. Nov 07, 2019
  25. Oct 25, 2019
  26. Oct 23, 2019
  27. Oct 20, 2019
  28. Oct 16, 2019
    • Support multiple nova cells · 78a828ef
      Doug Szumski authored
      
      This patch adds initial support for deploying multiple Nova cells.
      
      Splitting a nova-cell role out from the Nova role allows a more granular
      approach to deploying and configuring Nova services.
      
      A new enable_cells flag has been added that enables support for
      multiple cells via the introduction of a super conductor in addition
      to cell-specific conductors. When this flag is not set (the
      default), nova is configured in the same manner as before, with a
      single conductor.
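
      With this change, opting in to the multi-cell layout is a matter of
      setting the flag in globals.yml, e.g.:

        enable_cells: "yes"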
      
      The nova role now deploys the global services:
      
      * nova-api
      * nova-scheduler
      * nova-super-conductor (if enable_cells is true)
      
      The nova-cell role handles services specific to a cell:
      
      * nova-compute
      * nova-compute-ironic
      * nova-conductor
      * nova-libvirt
      * nova-novncproxy
      * nova-serialproxy
      * nova-spicehtml5proxy
      * nova-ssh
      
      This patch does not support using a single cell controller for managing
      more than one cell. Support for sharing a cell controller will be added
      in a future patch.
      
      This patch should be backwards compatible and is tested by existing CI
      jobs. A new CI job has been added that tests a multi-cell environment.
      
      ceph-mon has been removed from the play hosts list as it is not
      necessary - delegate_to does not require the host to be in the play.
      
      Documentation will be added in a separate patch.
      
      Partially Implements: blueprint support-nova-cells
      Co-Authored-By: Mark Goddard <mark@stackhpc.com>
      Change-Id: I810aad7d49db3f5a7fd9a2f0f746fd912fe03917
    • Implement IPv6 support in the control plane · bc053c09
      Radosław Piliszek authored
      Introduce kolla_address filter.
      Introduce put_address_in_context filter.
      
      Add AF config to vars.
      
      Address contexts:
      - raw (default): <ADDR>
      - memcache: inet6:[<ADDR>]
      - url: [<ADDR>]
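
      A sketch of how these filters might be combined in a template; the
      variable name and filter arguments are illustrative:

        api_address: "{{ 'api' | kolla_address | put_address_in_context('url') }}"
        # e.g. fd00::10 becomes [fd00::10] in the 'url' context and
        # inet6:[fd00::10] in the 'memcache' context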
      
      Other changes:
      
      globals.yml - mention just IP in comment
      
      prechecks/port_checks (api_intf) - kolla_address handles validation
      
      3x interface conditional (swift configs: replication/storage)
      
      2x interface variable definition with hostname
      (haproxy listens; api intf)
      
      1x interface variable definition with hostname with bifrost exclusion
      (baremetal pre-install /etc/hosts; api intf)
      
      neutron's ml2 'overlay_ip_version' set to 6 for IPv6 on tunnel network
      
      basic multinode source CI job for IPv6
      
      prechecks for rabbitmq and qdrouterd use proper NSS database now
      
      MariaDB Galera Cluster WSREP SST mariabackup workaround
      (socat and IPv6)
      
      Ceph naming workaround in CI
      TODO: probably needs documenting
      
      RabbitMQ IPv6-only proto_dist
      
      Ceph ms switch to IPv6 mode
      
      Remove neutron-server ml2_type_vxlan/vxlan_group setting
      as it is not used (let's avoid any confusion)
      and could break setups without proper multicast routing
      if it started working (also IPv4-only)
      
      haproxy upgrade checks for slaves based on ipv6 addresses
      
      TODO:
      
      ovs-dpdk grabs the IPv4 network address (w/ prefix len / netmask);
      not supported, invalid by default because neutron_external has no
      address. No idea whether ovs-dpdk works at all atm.
      
      ml2 for xenapi
      Xen is not supported too well.
      This would require working with XenAPI facts.
      
      rp_filter setting
      This would require meddling with ip6tables (there is no sysctl param).
      By default nothing is dropped.
      Unlikely we really need it.
      
      ironic dnsmasq is configured IPv4-only
      dnsmasq needs DHCPv6 options and testing in vivo.
      
      KNOWN ISSUES (beyond us):
      
      One cannot use an IPv6 address to reference the image for docker as
      we currently do, see: https://github.com/moby/moby/issues/39033
      (docker_registry; docker API 400 - invalid reference format).
      Workaround: use a hostname/FQDN.
      
      RabbitMQ may fail to bind to IPv6 if hostname resolves also to IPv4.
      This is due to old RabbitMQ versions available in images.
      IPv4 is preferred by default and may fail in the IPv6-only scenario.
      This should be no problem in real life as IPv6-only is indeed IPv6-only.
      Also, when new RabbitMQ (3.7.16/3.8+) makes it into images, this will
      no longer be relevant as we supply all the necessary config.
      See: https://github.com/rabbitmq/rabbitmq-server/pull/1982
      
      For reliable runs, at least Ansible 2.8 is required (2.8.5 confirmed
      to work well). Older Ansible versions are known to miss IPv6 addresses
      in interface facts. This may affect redeploys, reconfigures and
      upgrades which run after VIP address is assigned.
      See: https://github.com/ansible/ansible/issues/63227
      
      Bifrost Train does not support IPv6 deployments.
      See: https://storyboard.openstack.org/#!/story/2006689
      
      
      
      Change-Id: Ia34e6916ea4f99e9522cd2ddde03a0a4776f7e2c
      Implements: blueprint ipv6-control-plane
      Signed-off-by: Radosław Piliszek <radoslaw.piliszek@gmail.com>