  1. Oct 16, 2019
    • Implement IPv6 support in the control plane · bc053c09
      Radosław Piliszek authored
      Introduce kolla_address filter.
      Introduce put_address_in_context filter.
      
      Add AF config to vars.
      
      Address contexts:
      - raw (default): <ADDR>
      - memcache: inet6:[<ADDR>]
      - url: [<ADDR>]
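
      For example, template usage looks like this (illustrative;
      api_interface_address is a pre-existing variable):

         {{ api_interface_address | put_address_in_context('url') }}

      which for an address fd00::1 renders [fd00::1].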
      
      Other changes:
      
      globals.yml - mention just IP in comment
      
      prechecks/port_checks (api_intf) - kolla_address handles validation
      
      3x interface conditional (swift configs: replication/storage)
      
      2x interface variable definition with hostname
      (haproxy listens; api intf)
      
      1x interface variable definition with hostname with bifrost exclusion
      (baremetal pre-install /etc/hosts; api intf)
      
      neutron's ml2 'overlay_ip_version' set to 6 for IPv6 on tunnel network
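
      i.e. in ml2_conf.ini:

         [ml2]
         overlay_ip_version = 6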
      
      basic multinode source CI job for IPv6
      
      prechecks for rabbitmq and qdrouterd use proper NSS database now
      
      MariaDB Galera Cluster WSREP SST mariabackup workaround
      (socat and IPv6)
      
      Ceph naming workaround in CI
      TODO: probably needs documenting
      
      RabbitMQ IPv6-only proto_dist
      
      Ceph ms switch to IPv6 mode
      
      Remove neutron-server ml2_type_vxlan/vxlan_group setting
      as it is not used (let's avoid any confusion)
      and could break setups without proper multicast routing
      if it started working (also IPv4-only)
      
      HAProxy upgrade checks for slaves based on IPv6 addresses
      
      TODO:
      
      ovs-dpdk grabs the IPv4 network address (with prefix length / netmask)
      not supported, invalid by default because neutron_external has no address
      No idea whether ovs-dpdk works at all atm.
      
      ml2 for xenapi
      Xen is not supported very well.
      This would require working with XenAPI facts.
      
      rp_filter setting
      This would require meddling with ip6tables (there is no sysctl param).
      By default nothing is dropped.
      Unlikely we really need it.
      
      ironic dnsmasq is configured IPv4-only
      dnsmasq needs DHCPv6 options and testing in vivo.
      
      KNOWN ISSUES (beyond us):
      
      One cannot use an IPv6 address to reference the image for docker as we
      currently do, see: https://github.com/moby/moby/issues/39033
      (docker_registry; docker API 400 - invalid reference format)
      workaround: use hostname/FQDN
      
      RabbitMQ may fail to bind to IPv6 if hostname resolves also to IPv4.
      This is due to old RabbitMQ versions available in images.
      IPv4 is preferred by default and may fail in the IPv6-only scenario.
      This should be no problem in real life as IPv6-only is indeed IPv6-only.
      Also, when new RabbitMQ (3.7.16/3.8+) makes it into images, this will
      no longer be relevant as we supply all the necessary config.
      See: https://github.com/rabbitmq/rabbitmq-server/pull/1982
      
      For reliable runs, at least Ansible 2.8 is required (2.8.5 confirmed
      to work well). Older Ansible versions are known to miss IPv6 addresses
      in interface facts. This may affect redeploys, reconfigures and
      upgrades which run after the VIP address is assigned.
      See: https://github.com/ansible/ansible/issues/63227
      
      Bifrost Train does not support IPv6 deployments.
      See: https://storyboard.openstack.org/#!/story/2006689
      
      
      
      Change-Id: Ia34e6916ea4f99e9522cd2ddde03a0a4776f7e2c
      Implements: blueprint ipv6-control-plane
      Signed-off-by: Radosław Piliszek <radoslaw.piliszek@gmail.com>
  2. Oct 08, 2019
    • Docs: improve Nova documentation · e91186c6
      Mark Goddard authored
      Adds a top-level guide for Nova, with links off to the various virt
      driver guides.
      
      Generalises the libvirt TLS guide into a libvirt guide, and adds info on
      hardware virtualisation and qemu vs. kvm.
      
      Adds information on configuring consoles.
      
      Change-Id: I36beaaee313bdbc4bcf8cc15c41dda245a5a81ba
  3. Sep 30, 2019
    • [designate] Add coordination backend for designate workers · 9cae6083
      Joseph M authored
      Add coordination backend configuration to designate.conf which is
      required in multinode environments. Fixes warning from designate:
      
      WARNING designate.coordination [-] No coordination backend configured,
      assuming we are the only worker. Please configure a coordination backend
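
      The configuration lands in designate.conf along these lines (Redis is
      shown only as an example backend):

         [coordination]
         backend_url = redis://<host>:<port>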
      
      Change-Id: I23c4d2de7e3f9368795c423000a4f9a6c3a431e2
      Closes-Bug: #1843842
      Related-Bug: #1840070
  4. Sep 19, 2019
    • Add support for libvirt+tls · f8cfccb9
      Kris Lindgren authored
      To securely support live migration between compute nodes we should
      enable TLS, with certificate auth, instead of TCP with no auth support.
      
      Implements: blueprint libvirt-tls
      
      Change-Id: I22ea6233933c840b853fdcc8e03400b2bf577271
  5. Sep 12, 2019
    • Enable Swift Recon · d463d3f7
      Scott Solkhon authored
      
      This commit adds the necessary configuration to the Swift account,
      container and object configuration files to enable the Swift recon
      cli.
      
      In order to give the object server on each Swift host access to the
      recon files, a Docker volume is mounted into each container that
      generates them. The volume is then mounted read-only into the object
      server container. Note that multiple containers append to the same
      file. This should not be a problem since Swift uses a lock when
      appending.
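
      Once enabled, stats can be queried with the swift-recon CLI, e.g.
      replication and disk usage stats for the object server:

         swift-recon object -r -d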
      
      Change-Id: I343d8f45a78ebc3c11ed0c68fe8bec24f9ea7929
      Co-authored-by: Doug Szumski <doug@stackhpc.com>
  6. Sep 10, 2019
    • Configure Zun for Placement (Train+) · 0f5e0658
      Hongbin Lu authored
      After the integration with placement [1], we need to configure how
      zun-compute is going to work with nova-compute.
      
      * If zun-compute and nova-compute run on the same compute node,
        we need to set 'host_shared_with_nova' as true so that Zun
        will use the resource provider (compute node) created by nova.
        In this mode, containers and VMs could claim allocations against
        the same resource provider.
      * If zun-compute runs on a node without nova-compute, no extra
        configuration is needed. By default, each zun-compute will create
        a resource provider in placement to represent the compute node
        it manages.
      
      [1] https://blueprints.launchpad.net/zun/+spec/use-placement-resource-management
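
      For the shared-host case this amounts to the following in zun.conf
      (section name per Zun's configuration reference):

         [compute]
         host_shared_with_nova = true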
      
      Change-Id: I2d85911c4504e541d2994ce3d48e2fbb1090b813
  7. Aug 14, 2019
    • Add support for Swift S3 API · d72b27f2
      Scott Solkhon authored
      This feature is disabled by default, and can be enabled by setting
      'enable_swift_s3api' to 'true' in globals.yml.
      
      Two middlewares are required for Swift S3 - s3api and s3token. Additionally, we
      need to configure the authtoken middleware to delay auth decisions to give
      s3token a chance to authorise requests using EC2 credentials.
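
      i.e. in globals.yml:

         enable_swift_s3api: "true"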
      
      Change-Id: Ib8e8e3a1c2ab383100f3c60ec58066e588d3b4db
  8. Jul 16, 2019
    • ceph-nfs: Add rpcbind to Ubuntu host bootstrap · efcaf400
      Michal Nasiadka authored
      * Ubuntu ships with nfs-ganesha 2.6.0, which requires an rpcbind
      UDP test on startup (this was fixed in later versions)
      * Add rpcbind package to be installed by kolla-ansible bootstrap when
      ceph_nfs is enabled
      * Update Ceph deployment docs with a note
      
      Change-Id: Ic19264191a0ed418fa959fdc122cef543446fbe5
  9. Jul 04, 2019
    • Deprecate Ceph deployment · e6d0e610
      Mark Goddard authored
      There are now several good tools for deploying Ceph, including Ceph
      Ansible and ceph-deploy. Maintaining our own Ceph deployment is a
      significant maintenance burden, and we should focus on our core mission
      to deploy OpenStack. Given that this is currently a significant part of
      Kolla Ansible, we will need a long deprecation period and a migration
      path to another tool.
      
      Change-Id: Ic603c85c04d8794580a19f9efaa7a8589565f4f6
      Partially-Implements: blueprint remove-ceph
  10. Jun 20, 2019
    • Add some notes for users Migrating to Kolla Monasca · c4f488ad
      Doug Szumski authored
      This commit should help guide people migrating to Kolla Monasca
      through the murky depths of the migration process. Since Kolla
      did not support Monasca in Queens, some of these steps which could
      otherwise be automated are not.
      
      Change-Id: I79051cca27178c3cf1671f5c603e38baf929c55c
  11. Jun 05, 2019
    • Improve Qinling documentation · 557193a7
      Gaetan Trellu authored
      - Remove trusted_cidrs, which has just been removed from the
      Qinling code.
      - Remove use_api_certificate because it's true by default
      - Improve list syntax
      - Add etcd section
      
      Change-Id: I0426a9d61fbeaa23a1affbc7e981a78283e88263
  12. May 31, 2019
    • Adds Qinling Ansible role · edb34898
      Gaetan Trellu authored
      Qinling is an OpenStack project to provide "Function as a Service".
      This project aims to provide a platform to support serverless functions.
      
      Change-Id: I239a0130f8c8b061b531dab530d65172b0914d7c
      Implements: blueprint ansible-qinling-support
      Story: 2005760
      Task: 33468
  13. May 17, 2019
    • Fix keystone fernet key rotation scheduling · 6c1442c3
      Mark Goddard authored
      Right now every controller rotates fernet keys. This is nice because
      should any controller die, we know the remaining ones will rotate the
      keys. However, we are currently over-rotating the keys.
      
      When we over rotate keys, we get logs like this:
      
       This is not a recognized Fernet token <token> TokenNotFound
      
      Most clients can recover and get a new token, but some clients (like
      Nova passing tokens to other services) can't do that because they don't
      have the password to regenerate a new token.
      
      With three controllers, in the keystone-fernet crontab we see the
      once-a-day rotation correctly staggered across the three controllers:
      
      ssh ctrl1 sudo cat /etc/kolla/keystone-fernet/crontab
      0 0 * * * /usr/bin/fernet-rotate.sh
      ssh ctrl2 sudo cat /etc/kolla/keystone-fernet/crontab
      0 8 * * * /usr/bin/fernet-rotate.sh
      ssh ctrl3 sudo cat /etc/kolla/keystone-fernet/crontab
      0 16 * * * /usr/bin/fernet-rotate.sh
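
      i.e. the generator spreads the daily rotation evenly across hosts:

         offset(host n of 3) = n * 24h / 3 = 0h, 8h, 16h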
      
      Currently with three controllers we have this keystone config:
      
      [token]
      expiration = 86400 (although the keystone default is one hour)
      allow_expired_window = 172800 (this is the keystone default)
      
      [fernet_tokens]
      max_active_keys = 4
      
      Currently, kolla-ansible configures key rotation according to the following:
      
         rotation_interval = token_expiration / num_hosts
      
      This means we rotate keys more quickly the more hosts we have, which doesn't
      make much sense.
      
      Keystone docs state:
      
         max_active_keys =
           ((token_expiration + allow_expired_window) / rotation_interval) + 2
      
      For details see:
      https://docs.openstack.org/keystone/stein/admin/fernet-token-faq.html
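
      Worked example with the values above, assuming a 3-day rotation
      interval:

         max_active_keys = ((86400 + 172800) / 259200) + 2 = 3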
      
      Rotation is based on pushing out a staging key, so should any server
      start using that key, other servers will consider it valid. Then each
      server in turn starts using the staging key, demoting the existing
      primary key to a secondary key. Eventually you prune the secondary
      keys when there is no token in the wild that would need to be
      decrypted using that key. So this all makes sense.
      
      This change adds new variables for fernet_token_allow_expired_window and
      fernet_key_rotation_interval, so that we can calculate the correct
      number of active keys. We now set the default rotation interval so as
      to minimise the number of active keys to 3 - one primary, one
      secondary, one buffer.
      
      This change also fixes the fernet cron job generator, which was broken
      in the following cases:
      
      * requesting an interval of more than 1 day resulted in no jobs
      * requesting an interval of more than 60 minutes, unless an exact
        multiple of 60 minutes, resulted in no jobs
      
      It should now be possible to request any interval up to a week divided
      by the number of hosts.
      
      Change-Id: I10c82dc5f83653beb60ddb86d558c5602153341a
      Closes-Bug: #1809469
  14. Mar 14, 2019
    • Support separate Swift storage networks · a781c643
      Scott Solkhon authored
      Adds support to separate Swift access and replication traffic from other storage traffic.
      
      In a deployment where both Ceph and Swift have been deployed,
      this change adds functionality to support optional separation
      of storage network traffic. This adds two new network interfaces,
      'swift_storage_interface' and 'swift_replication_interface', which
      maintain backwards compatibility.
      
      The Swift access network interface is configured via 'swift_storage_interface',
      which defaults to 'storage_interface'. The Swift replication network
      interface is configured via 'swift_replication_interface', which
      defaults to 'swift_storage_interface'.
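
      Expressed as variable defaults (illustrative):

         swift_storage_interface: "{{ storage_interface }}"
         swift_replication_interface: "{{ swift_storage_interface }}"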
      
      If a separate replication network is used, Kolla Ansible now deploys separate
      replication servers for the accounts, containers and objects, which listen on
      this network. In this case, these services handle only replication traffic, and
      the original account-, container- and object- servers only handle storage
      user requests.
      
      Change-Id: Ib39e081574e030126f2d08f51de89641ddb0d42e
  15. Mar 08, 2019
    • Support customising Fluentd formatting · c8a22f10
      Doug Szumski authored
      In some scenarios it may be useful to perform custom formatting of logs
      before forwarding them. For example, the JSON formatter plugin can be
      used to convert an event to JSON.
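
      For example, using Fluentd's standard JSON formatter in an output
      section:

         <format>
           @type json
         </format>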
      
      Change-Id: I3dd9240c5910a9477456283b392edc9566882dcd
  16. Mar 07, 2019
    • Added ability to skip enabled backends pre-check · 1d9f4f9f
      Arkadiy Shinkarev authored
      When using custom storage backends with a cinder.conf overrides file,
      the precheck stage in kolla-ansible fails. This commit adds the option
      'skip_cinder_backend_check' (default: False) to the cinder role.
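
      e.g. in globals.yml:

         skip_cinder_backend_check: true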
      
      Change-Id: Ifee138ad8b281903ea2365441aada044c80c46f0
  17. Feb 28, 2019
    • Update links in docs to latest · fba5e1ce
      Mark Goddard authored
      To avoid links to OpenStack docs getting out of date in our docs, use
      the latest version.
      
      Ideally after cutting each stable branch we should change these links to
      use the current release.
      
      Co-Authored-By: Isaiah Inuwa
      Change-Id: Ia1e3c720f4e688861b8f76874a3943b0f4e50b17