  1. Jun 23, 2021
  2. Jun 22, 2021
  3. Sep 22, 2020
    • Reduce the use of SQLAlchemy connection pooling · c8177202
      Pierre Riteau authored
      When the internal VIP is moved in the event of a failure of the active
      controller, OpenStack services can become unresponsive as they try to
      talk with MariaDB using connections from the SQLAlchemy pool.
      
      It has been argued that OpenStack doesn't really need to use connection
      pooling with MariaDB [1]. This commit reduces the use of connection
      pooling via two configuration options:
      
      - max_pool_size is set to 1 to allow only a single connection in the
        pool (it is not possible to disable connection pooling entirely via
        oslo.db, and max_pool_size = 0 means unlimited pool size)
      - connection_recycle_time is lowered from the default of one hour to
        10 seconds, which means the single connection in the pool will be
        recreated regularly
      
      These settings have been shown to improve the system's reactivity in
      the event of a failover.
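      
      In oslo.db terms, the generated [database] section then contains
      (values taken directly from the options above):
      
          [database]
          max_pool_size = 1
          connection_recycle_time = 10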
      
      [1] http://lists.openstack.org/pipermail/openstack-dev/2015-April/061808.html
      
      Change-Id: Ib6a62d4428db9b95569314084090472870417f3d
      Closes-Bug: #1896635
  4. Sep 17, 2020
    • Support TLS encryption of RabbitMQ client-server traffic · 761ea9a3
      Mark Goddard authored
      This change adds support for encryption of communication between
      OpenStack services and RabbitMQ. Server certificates are supported, but
      currently client certificates are not.
      
      The kolla-ansible certificates command has been updated to support
      generating certificates for RabbitMQ for development and testing.
      
      RabbitMQ TLS is enabled in the all-in-one source CI jobs, or when
      the Zuul 'tls_enabled' variable is true.
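      
      On the oslo.messaging side, this corresponds to something like the
      following in each service's configuration (a sketch; the generated
      config also points services at the CA used to verify the server
      certificate):
      
          [oslo_messaging_rabbit]
          ssl = true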
      
      Change-Id: I4f1d04150fb2b5af085b762890092f87ae6076b5
      Implements: blueprint message-queue-ssl-support
  5. Aug 03, 2020
    • Update conf for magnum · 908845d3
      likui authored
      Deprecated: Option "cafile" from group "keystone_authtoken" is deprecated.
      Use option "cafile" from group "keystone_auth".
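      
      In magnum.conf, the CA file therefore moves to the new group, roughly
      (the path is illustrative):
      
          [keystone_auth]
          cafile = /etc/ssl/certs/ca-bundle.crt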
      
      Change-Id: Ia372b1b73afc0bea6a68dcd156cf963c01e3f3ab
  6. Jul 01, 2020
    • Use public interface for Magnum client and trustee Keystone interface · 78bb5942
      Bharat Kunwar authored
      While all other clients should use internalURL, the Magnum client
      itself and the Keystone interface for trustee credentials should be
      publicly accessible (the upstream default when no config is
      specified), since instances need to be able to reach them.
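      
      A sketch of the resulting magnum.conf layout (group and option names
      follow Magnum's client config; the cinder_client group stands in for
      the other clients):
      
          [cinder_client]
          endpoint_type = internalURL
      
          # [magnum_client] and [trust] set no interface/endpoint override,
          # so the upstream public default applies and instances can reach them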
      
      Closes-Bug: #1885420
      Change-Id: I74359cec7147a80db24eb4aa4156c35d31a026bf
  7. Jun 25, 2020
  8. Apr 03, 2020
  9. Jan 13, 2020
    • Configure services to use Certificate Authority · c15dc203
      James Kirsch authored
      Add a reference to the globally configured Certificate Authority to
      all services. Services use the CA to verify HTTPS connections.
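      
      In a service config this amounts to something like the following
      (openstack_cacert as the name of the global CA variable is an
      assumption):
      
          [keystone_authtoken]
          cafile = {{ openstack_cacert }}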
      
      Change-Id: I38da931cdd7ff46cce1994763b5c713652b096cc
      Partially-Implements: blueprint support-trusted-ca-certificate-file
  10. Oct 16, 2019
    • Implement IPv6 support in the control plane · bc053c09
      Radosław Piliszek authored
      Introduce kolla_address filter.
      Introduce put_address_in_context filter.
      
      Add AF config to vars.
      
      Address contexts:
      - raw (default): <ADDR>
      - memcache: inet6:[<ADDR>]
      - url: [<ADDR>]
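      
      For example, the address fd00::1 renders in each context as:
      
          raw:      fd00::1
          memcache: inet6:[fd00::1]
          url:      [fd00::1]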
      
      Other changes:
      
      globals.yml - mention just IP in comment
      
      prechecks/port_checks (api_intf) - kolla_address handles validation
      
      3x interface conditional (swift configs: replication/storage)
      
      2x interface variable definition with hostname
      (haproxy listens; api intf)
      
      1x interface variable definition with hostname with bifrost exclusion
      (baremetal pre-install /etc/hosts; api intf)
      
      neutron's ml2 'overlay_ip_version' set to 6 for IPv6 on tunnel network
      
      basic multinode source CI job for IPv6
      
      prechecks for rabbitmq and qdrouterd use proper NSS database now
      
      MariaDB Galera Cluster WSREP SST mariabackup workaround
      (socat and IPv6)
      
      Ceph naming workaround in CI
      TODO: probably needs documenting
      
      RabbitMQ IPv6-only proto_dist
      
      Ceph ms switch to IPv6 mode
      
      Remove neutron-server ml2_type_vxlan/vxlan_group setting
      as it is not used (let's avoid any confusion)
      and could break setups without proper multicast routing
      if it started working (also IPv4-only)
      
      haproxy upgrade checks for slaves based on ipv6 addresses
      
      TODO:
      
      ovs-dpdk grabs ipv4 network address (w/ prefix len / submask);
      not supported, invalid by default because neutron_external has no
      address. No idea whether ovs-dpdk works at all atm.
      
      ml2 for xenapi
      Xen is not supported very well.
      This would require working with XenAPI facts.
      
      rp_filter setting
      This would require meddling with ip6tables (there is no sysctl param).
      By default nothing is dropped.
      Unlikely we really need it.
      
      ironic dnsmasq is configured IPv4-only
      dnsmasq needs DHCPv6 options and testing in vivo.
      
      KNOWN ISSUES (beyond us):
      
      One cannot use an IPv6 address to reference the image for docker as
      we currently do, see: https://github.com/moby/moby/issues/39033
      (docker_registry; docker API 400 - invalid reference format)
      workaround: use hostname/FQDN
      
      RabbitMQ may fail to bind to IPv6 if hostname resolves also to IPv4.
      This is due to old RabbitMQ versions available in images.
      IPv4 is preferred by default and may fail in the IPv6-only scenario.
      This should be no problem in real life as IPv6-only is indeed IPv6-only.
      Also, when new RabbitMQ (3.7.16/3.8+) makes it into images, this will
      no longer be relevant as we supply all the necessary config.
      See: https://github.com/rabbitmq/rabbitmq-server/pull/1982
      
      For reliable runs, at least Ansible 2.8 is required (2.8.5 confirmed
      to work well). Older Ansible versions are known to miss IPv6 addresses
      in interface facts. This may affect redeploys, reconfigures and
      upgrades which run after VIP address is assigned.
      See: https://github.com/ansible/ansible/issues/63227
      
      Bifrost Train does not support IPv6 deployments.
      See: https://storyboard.openstack.org/#!/story/2006689
      
      
      
      Change-Id: Ia34e6916ea4f99e9522cd2ddde03a0a4776f7e2c
      Implements: blueprint ipv6-control-plane
      Signed-off-by: Radosław Piliszek <radoslaw.piliszek@gmail.com>
  11. Aug 15, 2019
    • Standardize the configuration of "oslo_messaging" section · 22a6223b
      Rafael Weingärtner authored
      After all of the discussions we had on
      "https://review.opendev.org/#/c/670626/2", I studied all projects that
      have an "oslo_messaging" section. Afterwards, I applied the same method
      that is already used in "oslo_messaging" section in Nova, Cinder, and
      others. This guarantees that we have a consistent method to
      enable/disable notifications across projects based on components (e.g.
      Ceilometer) being enabled or disabled. Here follows the list of
      components and the respective changes I made (a sketch of the common
      pattern follows the list).
      
      * Aodh:
      The section is declared, but it is not used. Therefore, it will
      be removed in an upcoming PR.
      
      * Congress:
      The section is declared, but it is not used. Therefore, it will
      be removed in an upcoming PR.
      
      * Cinder:
      It was already properly configured.
      
      * Octavia:
      The section is declared, but it is not used. Therefore, it will
      be removed in an upcoming PR.
      
      * Heat:
      It was already using a similar scheme; I just modified it a little bit
      to be the same as we have in all other components
      
      * Ceilometer:
      Ceilometer publishes some messages to RabbitMQ. However, the
      default driver is "messagingv2", and not '' (empty) as defined in
      Oslo; these configurations are defined in
      ceilometer/publisher/messaging.py. Therefore, we do not need to do
      anything for the "oslo_messaging_notifications" section in Ceilometer.
      
      * Tacker:
      It was already using a similar scheme; I just modified it a little bit
      to be the same as we have in all other components
      
      * Neutron:
      It was already properly configured.
      
      * Nova
      It was already properly configured. However, we found another issue
      with its configuration. Kolla-ansible does not configure nova
      notifications as it should. If 'searchlight' is not installed
      (enabled), the 'notification_format' should be 'unversioned'. The
      default is 'both', so nova will send notifications to the
      versioned_notifications queue; but that queue has no consumer when
      'searchlight' is disabled. In our case, the queue accumulated 511k
      messages. The huge amount of "stuck" messages made the RabbitMQ
      cluster unstable.
      
      https://bugzilla.redhat.com/show_bug.cgi?id=1478274
      https://bugs.launchpad.net/ceilometer/+bug/1665449
      
      * Nova_hyperv:
      I added the same configurations as in Nova project.
      
      * Vitrage
      It was already using a similar scheme; I just modified it a little bit
      to be the same as we have in all other components
      
      * Searchlight
      I created a mechanism similar to what we have in AODH, Cinder, Nova,
      and others.
      
      * Ironic
      I created a mechanism similar to what we have in AODH, Cinder, Nova,
      and others.
      
      * Glance
      It was already properly configured.
      
      * Trove
      It was already using a similar scheme; I just modified it a little bit
      to be the same as we have in all other components
      
      * Blazar
      It was already using a similar scheme; I just modified it a little bit
      to be the same as we have in all other components
      
      * Sahara
      It was already using a similar scheme; I just modified it a little bit
      to be the same as we have in all other components
      
      * Watcher
      I created a mechanism similar to what we have in AODH, Cinder, Nova,
      and others.
      
      * Barbican
      I created a mechanism similar to what we have in Cinder, Nova,
      and others. I also added a configuration to 'keystone_notifications'
      section. Barbican needs its own queue to capture events from Keystone.
      Otherwise, it has an impact on Ceilometer and other systems that are
      connected to the "notifications" default queue.
      
      * Keystone
      Keystone is the system that triggered this work with the discussions
      that followed on https://review.opendev.org/#/c/670626/2. After a
      long discussion, we agreed to apply the same approach that we have in
      Nova, Cinder and other systems in Keystone. That is what we did.
      Moreover, we introduced a new topic "barbican_notifications" when
      barbican is enabled. We also removed the variable
      enable_cadf_notifications, as it is obsolete, and the default in
      Keystone is CADF.
      
      * Mistral:
      "noop" was hardcoded as the driver. However, that does not seem a
      good practice. Instead, I applied the same standard of using the
      driver and pushing to the "notifications" queue if Ceilometer is
      enabled.
      
      * Cyborg:
      I created a mechanism similar to what we have in AODH, Cinder, Nova,
      and others.
      
      * Murano
      It was already using a similar scheme; I just modified it a little bit
      to be the same as we have in all other components
      
      * Senlin
      It was already using a similar scheme; I just modified it a little bit
      to be the same as we have in all other components
      
      * Manila
      It was already using a similar scheme; I just modified it a little bit
      to be the same as we have in all other components
      
      * Zun
      The section is declared, but it is not used. Therefore, it will
      be removed in an upcomming PR.
      
      * Designate
      It was already using a similar scheme; I just modified it a little bit
      to be the same as we have in all other components
      
      * Magnum
      It was already using a similar scheme; I just modified it a little bit
      to be the same as we have in all other components
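      
      A sketch of the common pattern applied across the service templates
      (variable names vary per service; the enable_ceilometer check and the
      noop fallback are illustrative):
      
          [oslo_messaging_notifications]
          transport_url = {{ notify_transport_url }}
          {% if enable_ceilometer | bool %}
          driver = messagingv2
          topics = notifications
          {% else %}
          driver = noop
          {% endif %}
      
      For Keystone with Barbican enabled, the topics would additionally
      include barbican_notifications, per the Barbican and Keystone notes
      above.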
      
      Closes-Bug: #1838985
      
      Change-Id: I88bdb004814f37c81c9a9c4e5e491fac69f6f202
      Signed-off-by: Rafael Weingärtner <rafael@apache.org>
  12. Mar 06, 2019
    • Use keystone_*_url var in all configs · 2e4e6050
      Jim Rollenhagen authored
      We're duplicating code to build the keystone URLs in nearly every
      config, where we've already done it in group_vars. Replace the
      redundancy with a variable that does the same thing.
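      
      A sketch of the idea: one definition in group_vars, reused everywhere
      (the exact URL composition is illustrative):
      
          # group_vars/all.yml
          keystone_internal_url: "{{ internal_protocol }}://{{ kolla_internal_fqdn }}:5000"
      
          # in a service config template
          auth_url = {{ keystone_internal_url }}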
      
      Change-Id: I207d77870e2535c1cdcbc5eaf704f0448ac85a7a
  13. Aug 15, 2018
    • Include default_docker_volume_type for magnum.conf · e1c5bbd9
      Murali Annamneni authored
      To create a magnum cluster, it is required to specify
      'default_docker_volume_type' with some default value (the default
      cinder volume type). It also enables users to select different
      cinder volume types for their volumes.
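      
      In magnum.conf this looks roughly like the following (the Jinja
      variable name is illustrative):
      
          [cinder]
          default_docker_volume_type = {{ default_docker_volume_type }}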
      
      Change-Id: I50b4c436875e4daac48a14fc1e119136eb5fd844
  14. Aug 07, 2018
  15. Jun 01, 2018
    • osprofiler support redis · ce809aea
      Zhangfei Gao authored
      Currently osprofiler only chooses elasticsearch,
      which is only supported on x86.
      On other platforms like aarch64, osprofiler
      cannot be used since there is no elasticsearch package.
      
      Enable osprofiler with enable_osprofiler: "yes",
      which chooses elasticsearch by default.
      Choose redis with enable_redis: "yes" & osprofiler_backend: "redis".
      On platforms without elasticsearch support, like aarch64,
      set enable_elasticsearch: "no".
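      
      In globals.yml, per the above:
      
          enable_osprofiler: "yes"
          enable_redis: "yes"
          osprofiler_backend: "redis"
          # on platforms without elasticsearch support, e.g. aarch64:
          enable_elasticsearch: "no"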
      
      Change-Id: I68fe7a33e11d28684962fc5d0b3d326e90784d78
  16. Apr 18, 2018
    • Fix SSL api for multiple services · a81a5d5d
      Kevin TIBI authored
      If SSL is enabled, the APIs of multiple services return the
      wrong external URL, without the https prefix.
      
      Remove the condition for deletion of the http header.
      
      Change-Id: I4264e04d0d6b9a3e11ef7dd7add6c5e166cf9fb4
      Closes-Bug: #1749155
      Closes-Bug: #1717491
  17. Mar 09, 2018
    • Duplicated [oslo_policy] · af87ad7c
      ZhongShengping authored
      Remove duplicated [oslo_policy] in magnum.conf.
      
      Change-Id: I69c82e31d7041d7e8f9c31ba1bf54f0906f2a6dc
      Closes-Bug: #1754593
  18. Jan 22, 2018
  19. Jan 12, 2018
  20. Nov 22, 2017
    • Add support for hybrid messaging backends · fd1d3af0
      Andrew Smith authored
      This commit separates the messaging rpc and notify transports in
      order to support separate and different oslo.messaging backends.
      
      This patch:
      * add rpc and notify variables
      * update service role conf templates
      * add example to globals.yml
      * add release note
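      
      In a service conf template, the separation looks roughly like this
      (rpc_transport_url and notify_transport_url stand for the new rpc and
      notify variables):
      
          [DEFAULT]
          transport_url = {{ rpc_transport_url }}
      
          [oslo_messaging_notifications]
          transport_url = {{ notify_transport_url }}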
      
      Implements: blueprint hybrid-messaging
      Change-Id: I34691c2895c8563f1f322f0850ecff98d11b5185
  21. Jul 13, 2017
  22. Jul 06, 2017
  23. Jul 04, 2017
    • Magnum: update clients config groups · fdc75cdd
      Bertrand Lallau authored
      * add an additional option called 'endpoint_type' to each of the
      config groups related to the openstack clients used by Magnum.
      * add Glance, Neutron and Nova config groups.
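      
      A sketch of the resulting magnum.conf groups (the endpoint_type value
      is illustrative):
      
          [glance_client]
          endpoint_type = internalURL
      
          [neutron_client]
          endpoint_type = internalURL
      
          [nova_client]
          endpoint_type = internalURL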
      
      Change-Id: Ie74979e05c4f5763674ba2fc5b9f07bd51ad9454
  24. Jun 02, 2017
  25. May 23, 2017
  26. Apr 12, 2017
  27. Mar 10, 2017
  28. Jan 24, 2017
  29. Jan 12, 2017
  30. Nov 06, 2016
  31. Oct 07, 2016
    • Fix genconfig and reconfigure for magnum · 4fa2508e
      Martin Matyáš authored
      Genconfig and reconfigure were failing for magnum.
      Change magnum trust configuration parameters
      to user/domain names instead of IDs so they don't
      depend on the register.yml task anymore.
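      
      The [trust] section then references names rather than IDs, roughly
      (values illustrative):
      
          [trust]
          trustee_domain_name = magnum
          trustee_domain_admin_name = magnum_trustee_domain_admin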
      
      Change-Id: I55fddf48eafc44892fd0ab96835bfb0b51849d37
      Closes-bug: #1630248
  32. Sep 28, 2016
    • Fix Magnum trustee issues · 3c456251
      Vikram Hosakote authored
      
      This patch set fixes all Magnum issues in kolla master.
      
      The [trust] section is set in magnum.conf
      using the trustee domain and user created for Magnum
      in ansible/roles/magnum/tasks/register.yml via the Ansible
      OpenStack modules.
      
      Bump shade to 1.5.0 in kolla-toolbox because of
      the os_user_role ansible module dependency.
      
      Certificate storage is changed from 'local' (non-production)
      to magnum's internal storage (x509keypair) or barbican.
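      
      For the certificate storage change, magnum.conf carries roughly
      (a sketch):
      
          [certificates]
          cert_manager_type = x509keypair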
      
      Co-Authored-By: Martin Matyas <martinx.maty@intel.com>
      Change-Id: Ifcb016c0bc4c8c3fc20e063fa05dc8838aae838c
      Closes-Bug: #1551992
  33. Aug 25, 2016
  34. Apr 11, 2016
    • Set db connection retry to infinity · 67333e4d
      Ryan Hallisey authored
      Make sure that all the services will attempt to
      connect to the database an infinite number of times.
      If the database ever disappears for some reason we
      want the services to try to reconnect more than just
      10 times.
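      
      In oslo.db terms this maps to an infinite retry count:
      
          [database]
          max_retries = -1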
      
      Closes-bug: #1505636
      Change-Id: I77abbf72ce5bfd68faa451bb9a72bd2544963f4b
  35. Mar 19, 2016
    • Add memcached_servers to keystone_auth section · d4535b6d
      SamYaple authored
      The in-process cache for keystone tokens has been deprecated due to
      "inconsistent results and high memory usage", with the expectation
      that we switch to memcached_servers if we want to stay performant.
      
      Add memcache_servers to the [cache] section of the appropriate
      services, as the [DEFAULT]/memcache_servers option was deprecated.
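      
      A sketch of the resulting sections (the server list is illustrative):
      
          [keystone_authtoken]
          memcached_servers = 192.0.2.10:11211,192.0.2.11:11211
      
          [cache]
          backend = oslo_cache.memcache_pool
          memcache_servers = 192.0.2.10:11211,192.0.2.11:11211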
      
      TrivialFix
      Related-Id: Ied2b88c8cefe5655a88d0c2f334de04e588fa75a
      
      Change-Id: Ic971bdddc0be3338b15924f7cc0f97d4a3ad2440
  36. Feb 26, 2016
    • Change kolla_internal_address variable · d3cfb205
      SamYaple authored
      Due to poor planning of our variable names we have a situation where
      we have "internal_address", which must be a VIP, and
      "external_address", which should be a DNS name. Now, with two VIPs,
      "external_vip_address" is a new variable.
      
      This corrects that issue by deprecating kolla_internal_address and
      replacing it with 4 nicely named variables.
      
      kolla_internal_vip_address
      kolla_internal_fqdn
      kolla_external_vip_address
      kolla_external_fqdn
      
      The default behaviour will remain the same, and the way the variable
      inheritance is set up, the kolla_internal_address variable can still
      be set in globals.yml and propagate out to these 4 new variables like
      it normally would, but all references to kolla_internal_address have
      been completely removed.
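      
      A sketch of plausible group_vars defaults implementing that
      inheritance (an assumption, not the verbatim change):
      
          kolla_internal_fqdn: "{{ kolla_internal_vip_address }}"
          kolla_external_vip_address: "{{ kolla_internal_vip_address }}"
          kolla_external_fqdn: "{{ kolla_external_vip_address }}"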
      
      Change-Id: I4556dcdbf4d91a8d2751981ef9c64bad44a719e5
      Partially-Implements: blueprint ssl-kolla
  37. Feb 19, 2016
  38. Feb 15, 2016
    • Use variables to specify http or https when constructing URLs · 1cedf77f
      Dave McCowan authored
      To allow for TLS to protect the service endpoints, the protocol
      in the URLs for the endpoints will be either http or https.
      
      This patch removes the hardcoded values of http and replaces them
      with variables that can be adjusted accordingly in future patches.
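      
      For example, a hardcoded endpoint such as (URL and port illustrative):
      
          auth_url = http://{{ kolla_internal_fqdn }}:35357
      
      becomes:
      
          auth_url = {{ internal_protocol }}://{{ kolla_internal_fqdn }}:35357
      
      where internal_protocol (an assumed variable name) is later switched
      to https when TLS is enabled.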
      
      Change-Id: Ibca6f8aac09c65115d1ac9957410e7f81ac7671e
      Partially-implements: blueprint ssl-kolla
  39. Jan 20, 2016