  1. Sep 22, 2020
• Reduce the use of SQLAlchemy connection pooling · c8177202
      Pierre Riteau authored
      When the internal VIP is moved in the event of a failure of the active
      controller, OpenStack services can become unresponsive as they try to
      talk with MariaDB using connections from the SQLAlchemy pool.
      
      It has been argued that OpenStack doesn't really need to use connection
      pooling with MariaDB [1]. This commit reduces the use of connection
      pooling via two configuration options:
      
- max_pool_size is set to 1 to allow only a single connection in the
  pool (it is not possible to disable connection pooling entirely via
  oslo.db, and max_pool_size = 0 means an unlimited pool size)
- connection_recycle_time is lowered from the default of one hour to 10
  seconds, which means the single connection in the pool is recreated
  regularly
      
These settings have been shown to improve the reactivity of the system
in the event of a failover.
      
      [1] http://lists.openstack.org/pipermail/openstack-dev/2015-April/061808.html
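
A minimal sketch of the resulting oslo.db options, as they would appear
in a service's [database] section:

    [database]
    # Allow only one connection in the SQLAlchemy pool; 0 would mean
    # an unlimited pool size, so 1 is the practical minimum.
    max_pool_size = 1
    # Recreate the pooled connection every 10 seconds instead of the
    # default one hour.
    connection_recycle_time = 10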
      
      Change-Id: Ib6a62d4428db9b95569314084090472870417f3d
      Closes-Bug: #1896635
  2. Aug 19, 2020
• Standardize use and construction of endpoint URLs · f425c067
      Rafael Weingärtner authored
      
The goal of this change is to normalize the construction and use
of internal, external, and admin URLs. While extending Kolla-ansible
to enable a more flexible method to manage external URLs, we noticed
that the same URL was constructed multiple times in different parts
of the code. Over time, this makes the code harder to work with and
can introduce inconsistencies in a large code base. Therefore, we
propose using a single Kolla-ansible variable per endpoint URL, which
makes it easier to override or extend these URLs.
      
As an example, we extended Kolla-ansible to facilitate overriding
public (external) URLs with the standard
"<component/serviceName>.<companyBaseUrl>".
The "NAT/redirect" in the SSL termination system (HAProxy, httpd, or
another) is then done via the service name rather than the port.
This allows operators to easily and automatically create friendlier
URL names. To develop that feature, we first applied this patch, which
we are now sending to the community, in order to reduce the surface of
changes in Kolla-ansible.
      
Another example is the integration of Kolla-ansible and Consul, which
we also implemented internally and which also requires URL changes.
Therefore, this change is essential to reduce code duplication and to
make it easier for users and developers to customize service URLs.
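
As an illustration of the one-variable-per-endpoint pattern, an
override might look like this (the variable and base-URL names are
hypothetical, not taken from the patch):

    # globals.yml -- one overridable variable per endpoint URL
    heat_public_endpoint: "https://heat.{{ company_base_url }}"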
      
      Change-Id: I73d483e01476e779a5155b2e18dd5ea25f514e93
Signed-off-by: Rafael Weingärtner <rafael@apache.org>
  3. May 14, 2020
• Fix Heat WSGI Logging · 67a31fd2
      generalfuzz authored
      Fix Heat WSGI logging directives and correct access log name.
      
      Change-Id: Iac09e481ae46934fc26300eba8c5d81ccd0504e8
      Partially-Implements: blueprint add-ssl-internal-network
  4. Apr 03, 2020
  5. Mar 26, 2020
• Add clients ca_file in heat · 34a331ab
      Jeffrey Zhang authored
This patch fixes a failure when creating stack resources in heat.
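
A sketch of the kind of setting this adds to heat.conf (the path is
illustrative):

    [clients]
    # CA certificate bundle used by heat's service clients to verify
    # TLS connections to other OpenStack APIs.
    ca_file = /etc/pki/ca-bundle.crt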
      
      Change-Id: I00c23f8b89765e266d045cc463ce4d863d0d6089
      Closes-Bug: #1869137
  6. Jan 13, 2020
• Configure services to use Certificate Authority · c15dc203
      James Kirsch authored
Include a reference to the globally configured Certificate Authority in
all services. Services use the CA to verify HTTPS connections.
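
For instance, a service's Keystone middleware section would point at
the CA bundle along these lines (the variable name is an assumption,
not quoted from the patch):

    [keystone_authtoken]
    cafile = {{ openstack_cacert }}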
      
      Change-Id: I38da931cdd7ff46cce1994763b5c713652b096cc
      Partially-Implements: blueprint support-trusted-ca-certificate-file
  7. Nov 16, 2019
  8. Oct 16, 2019
• Implement IPv6 support in the control plane · bc053c09
      Radosław Piliszek authored
      Introduce kolla_address filter.
      Introduce put_address_in_context filter.
      
      Add AF config to vars.
      
      Address contexts:
      - raw (default): <ADDR>
      - memcache: inet6:[<ADDR>]
      - url: [<ADDR>]
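
A sketch of how the two filters might combine in a config template
(the exact invocation is an assumption, not quoted from this commit):

    {# Resolve this host's API address, then bracket it for a URL. #}
    {{ 'api' | kolla_address | put_address_in_context('url') }}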
      
      Other changes:
      
      globals.yml - mention just IP in comment
      
      prechecks/port_checks (api_intf) - kolla_address handles validation
      
      3x interface conditional (swift configs: replication/storage)
      
      2x interface variable definition with hostname
      (haproxy listens; api intf)
      
      1x interface variable definition with hostname with bifrost exclusion
      (baremetal pre-install /etc/hosts; api intf)
      
      neutron's ml2 'overlay_ip_version' set to 6 for IPv6 on tunnel network
      
      basic multinode source CI job for IPv6
      
      prechecks for rabbitmq and qdrouterd use proper NSS database now
      
      MariaDB Galera Cluster WSREP SST mariabackup workaround
      (socat and IPv6)
      
      Ceph naming workaround in CI
      TODO: probably needs documenting
      
      RabbitMQ IPv6-only proto_dist
      
      Ceph ms switch to IPv6 mode
      
      Remove neutron-server ml2_type_vxlan/vxlan_group setting
      as it is not used (let's avoid any confusion)
      and could break setups without proper multicast routing
      if it started working (also IPv4-only)
      
      haproxy upgrade checks for slaves based on ipv6 addresses
      
      TODO:
      
      ovs-dpdk grabs ipv4 network address (w/ prefix len / submask)
      not supported, invalid by default because neutron_external has no address
      No idea whether ovs-dpdk works at all atm.
      
      ml2 for xenapi
      Xen is not supported too well.
      This would require working with XenAPI facts.
      
      rp_filter setting
      This would require meddling with ip6tables (there is no sysctl param).
      By default nothing is dropped.
      Unlikely we really need it.
      
      ironic dnsmasq is configured IPv4-only
      dnsmasq needs DHCPv6 options and testing in vivo.
      
      KNOWN ISSUES (beyond us):
      
One cannot use an IPv6 address to reference the image for docker like we
      currently do, see: https://github.com/moby/moby/issues/39033
      (docker_registry; docker API 400 - invalid reference format)
      workaround: use hostname/FQDN
      
      RabbitMQ may fail to bind to IPv6 if hostname resolves also to IPv4.
      This is due to old RabbitMQ versions available in images.
      IPv4 is preferred by default and may fail in the IPv6-only scenario.
      This should be no problem in real life as IPv6-only is indeed IPv6-only.
      Also, when new RabbitMQ (3.7.16/3.8+) makes it into images, this will
      no longer be relevant as we supply all the necessary config.
      See: https://github.com/rabbitmq/rabbitmq-server/pull/1982
      
      For reliable runs, at least Ansible 2.8 is required (2.8.5 confirmed
      to work well). Older Ansible versions are known to miss IPv6 addresses
      in interface facts. This may affect redeploys, reconfigures and
upgrades which run after the VIP address is assigned.
      See: https://github.com/ansible/ansible/issues/63227
      
      Bifrost Train does not support IPv6 deployments.
      See: https://storyboard.openstack.org/#!/story/2006689
      
      
      
      Change-Id: Ia34e6916ea4f99e9522cd2ddde03a0a4776f7e2c
      Implements: blueprint ipv6-control-plane
Signed-off-by: Radosław Piliszek <radoslaw.piliszek@gmail.com>
  9. Sep 20, 2019
• Remove some deprecated config options · e127627d
      Mark Goddard authored
Heat's [DEFAULT] deferred_auth_method is deprecated, and we were only
setting it to its default value of 'trusts'.
      
      Glance's [DEFAULT] registry_host is deprecated, and we do not deploy a
      registry.
      
      Change-Id: I80024907c575982699ce323cd9a93bab94c988d3
  10. Aug 15, 2019
• Standardize the configuration of "oslo_messaging" section · 22a6223b
      Rafael Weingärtner authored
After all of the discussions we had on
"https://review.opendev.org/#/c/670626/2", I studied all projects that
have an "oslo_messaging" section. Afterwards, I applied the same method
that is already used in the "oslo_messaging" section in Nova, Cinder,
and others. This guarantees that we have a consistent method to
enable/disable notifications across projects based on components (e.g.
Ceilometer) being enabled or disabled. Here follows the list of
components and the respective changes; a sketch of the standardized
section follows the list.
      
      * Aodh:
      The section is declared, but it is not used. Therefore, it will
be removed in an upcoming PR.
      
      * Congress:
      The section is declared, but it is not used. Therefore, it will
be removed in an upcoming PR.
      
      * Cinder:
      It was already properly configured.
      
      * Octavia:
      The section is declared, but it is not used. Therefore, it will
be removed in an upcoming PR.
      
      * Heat:
      It was already using a similar scheme; I just modified it a little bit
      to be the same as we have in all other components
      
      * Ceilometer:
      Ceilometer publishes some messages in the rabbitMQ. However, the
      default driver is "messagingv2", and not ''(empty) as defined in Oslo;
      these configurations are defined in ceilometer/publisher/messaging.py.
      Therefore, we do not need to do anything for the
      "oslo_messaging_notifications" section in Ceilometer
      
      * Tacker:
      It was already using a similar scheme; I just modified it a little bit
      to be the same as we have in all other components
      
      * Neutron:
      It was already properly configured.
      
* Nova
It was already properly configured. However, we found another issue
with its configuration. Kolla-ansible does not configure nova
notifications as it should. If 'searchlight' is not installed
(enabled), the 'notification_format' should be 'unversioned'. The
default is 'both', so nova will send notifications to the queue
versioned_notifications, but that queue has no consumer when
'searchlight' is disabled. In our case, the queue accumulated 511k
messages. The huge number of "stuck" messages made the RabbitMQ
cluster unstable.
      
      https://bugzilla.redhat.com/show_bug.cgi?id=1478274
      https://bugs.launchpad.net/ceilometer/+bug/1665449
      
      * Nova_hyperv:
      I added the same configurations as in Nova project.
      
      * Vitrage
      It was already using a similar scheme; I just modified it a little bit
      to be the same as we have in all other components
      
      * Searchlight
      I created a mechanism similar to what we have in AODH, Cinder, Nova,
      and others.
      
      * Ironic
      I created a mechanism similar to what we have in AODH, Cinder, Nova,
      and others.
      
      * Glance
      It was already properly configured.
      
      * Trove
      It was already using a similar scheme; I just modified it a little bit
      to be the same as we have in all other components
      
      * Blazar
      It was already using a similar scheme; I just modified it a little bit
      to be the same as we have in all other components
      
      * Sahara
      It was already using a similar scheme; I just modified it a little bit
      to be the same as we have in all other components
      
      * Watcher
      I created a mechanism similar to what we have in AODH, Cinder, Nova,
      and others.
      
      * Barbican
      I created a mechanism similar to what we have in Cinder, Nova,
      and others. I also added a configuration to 'keystone_notifications'
      section. Barbican needs its own queue to capture events from Keystone.
      Otherwise, it has an impact on Ceilometer and other systems that are
      connected to the "notifications" default queue.
      
* Keystone
Keystone is the system that triggered this work with the discussions
that followed on https://review.opendev.org/#/c/670626/2. After a long
discussion, we agreed to apply the same approach that we have in Nova,
Cinder and other systems in Keystone. That is what we did. Moreover, we
introduced a new topic "barbican_notifications" when Barbican is
enabled. We also removed the variable enable_cadf_notifications, as
it is obsolete, and the default in Keystone is CADF.
      
* Mistral:
The driver was hardcoded to "noop". However, that does not seem like
good practice. Instead, I applied the same standard of using the driver
and pushing to the "notifications" queue if Ceilometer is enabled.
      
      * Cyborg:
      I created a mechanism similar to what we have in AODH, Cinder, Nova,
      and others.
      
      * Murano
      It was already using a similar scheme; I just modified it a little bit
      to be the same as we have in all other components
      
      * Senlin
      It was already using a similar scheme; I just modified it a little bit
      to be the same as we have in all other components
      
      * Manila
      It was already using a similar scheme; I just modified it a little bit
      to be the same as we have in all other components
      
      * Zun
      The section is declared, but it is not used. Therefore, it will
be removed in an upcoming PR.
      
      * Designate
      It was already using a similar scheme; I just modified it a little bit
      to be the same as we have in all other components
      
      * Magnum
      It was already using a similar scheme; I just modified it a little bit
      to be the same as we have in all other components
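
A sketch of the standardized section as it might be rendered in a
service template (variable and topic names are illustrative, not
quoted from the patch):

    [oslo_messaging_notifications]
    {% if enable_ceilometer | bool %}
    # A consumer (Ceilometer) exists, so actually publish notifications.
    driver = messagingv2
    topics = notifications
    {% else %}
    # No consumer: drop notifications instead of filling a queue.
    driver = noop
    {% endif %}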
      
      Closes-Bug: #1838985
      
      Change-Id: I88bdb004814f37c81c9a9c4e5e491fac69f6f202
Signed-off-by: Rafael Weingärtner <rafael@apache.org>
• Use internal API for heat -> heat communication · d54c8fbd
      Mark Goddard authored
      Heat has a new option (server_keystone_endpoint_type), which can be used
      to set the keystone endpoint used by instances to make callbacks to
      heat. This needs to be public, since we can't assume users have access
      to the internal API. However, the current method of setting
      [clients_heat] endpoint_type means that communication from heat to its
      own API (e.g. when a stack is a resource in another stack) uses the
      public network also, and this might not work if TLS is enabled.
      
This change uses server_keystone_endpoint_type to keep instance traffic
on the public API, and removes the [clients_heat] endpoint_type option,
falling back to the [clients] endpoint_type default of internalURL.
      
      This feature was added to heat in https://review.opendev.org/#/c/650967.
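
Per the options named above, the resulting heat.conf arrangement is
roughly:

    [DEFAULT]
    # Endpoint type handed to instances for callbacks to heat.
    server_keystone_endpoint_type = public

    [clients]
    # heat's own client traffic (e.g. nested stacks) stays internal.
    endpoint_type = internalURL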
      
      Change-Id: I932ea55a3c2a411557c34361db08bcb3a2b27eaf
      Closes-Bug: #1812864
      Related-Bug: #1762754
      Related-Bug: #1688331
  11. Jun 13, 2019
  12. Mar 06, 2019
• Allow heat services to use independent hostnames · d0fc1ec2
      Jim Rollenhagen authored
      This allows heat service endpoints to use custom hostnames, and adds the
      following variables:
      
      * heat_internal_fqdn
      * heat_external_fqdn
      * heat_cfn_internal_fqdn
      * heat_cfn_external_fqdn
      
      These default to the old values of kolla_internal_fqdn or
      kolla_external_fqdn.
      
      This also adds heat_api_listen_port and heat_api_cfn_listen_port
      options, which default to heat_api_port and heat_api_cfn_port for
      backward compatibility.
      
      These options allow the user to differentiate between the port the
      service listens on, and the port the service is reachable on. This is
      useful for external load balancers which live on the same host as the
      service itself.
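
For example, an operator running an external load balancer on the same
host might set (values illustrative):

    heat_external_fqdn: "heat.example.com"
    heat_api_port: 8004          # port the service is reachable on
    heat_api_listen_port: 8005   # port the service itself binds to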
      
      Change-Id: Ifb8bb55799703883d81be6a55641be7b2474fd4e
      Implements: blueprint service-hostnames
• Use keystone_*_url var in all configs · 2e4e6050
      Jim Rollenhagen authored
      We're duplicating code to build the keystone URLs in nearly every
      config, where we've already done it in group_vars. Replace the
      redundancy with a variable that does the same thing.
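
In a service config template this turns hand-built URLs into a single
variable reference, e.g. (section and option shown for illustration):

    [keystone_authtoken]
    auth_url = {{ keystone_admin_url }}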
      
      Change-Id: I207d77870e2535c1cdcbc5eaf704f0448ac85a7a
  13. Feb 21, 2019
• Configure region_name_for_services in heat.conf · 54203843
      Mark Goddard authored
      
      backport: rocky
      
Not including this means that SoftwareDeployments do not have a
configured region (it's set to 'null') and therefore cannot communicate
back to the heat API. In particular, this breaks Magnum with the
following error in the journal on the deployed servers:

publicURL endpoint for orchestration service in null region not found
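
A sketch of the addition to heat.conf, assuming kolla-ansible's region
variable:

    [DEFAULT]
    # Region stamped into SoftwareDeployment metadata so deployed
    # servers can find the heat API.
    region_name_for_services = {{ openstack_region_name }}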
      
      Change-Id: Ia2c18ef10727391812368c958262a92385374ace
Co-Authored-By: John Garbutt <john@stackhpc.com>
      Closes-Bug: #1817051
  14. Aug 07, 2018
  15. Jul 03, 2018
  16. Jun 01, 2018
• osprofiler support redis · ce809aea
      Zhangfei Gao authored
Currently osprofiler only chooses Elasticsearch, which is only
supported on x86. On other platforms, like aarch64, osprofiler cannot
be used since there is no Elasticsearch package.

Enable osprofiler with enable_osprofiler: "yes", which chooses
Elasticsearch by default.
Choose Redis with enable_redis: "yes" and osprofiler_backend: "redis".
On platforms without Elasticsearch support, like aarch64, set
enable_elasticsearch: "no".
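
Putting those options together, a Redis-backed globals.yml looks like:

    enable_osprofiler: "yes"
    enable_redis: "yes"
    osprofiler_backend: "redis"
    # On platforms without an elasticsearch package (e.g. aarch64):
    enable_elasticsearch: "no"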
      
      Change-Id: I68fe7a33e11d28684962fc5d0b3d326e90784d78
  17. May 04, 2018
• kolla-ansible fix to correct magnum k8s deployment · c20c69ee
      Bharat Kunwar authored
Magnum was unable to fire up a k8s cluster because heat-container-agent
inside kube-master was pointing to the internal keystone endpoint
instead of the public endpoint. This fix tells kolla-ansible to set
clients_keystone auth_uri to the public endpoint so that
heat-container-agent communication with heat is successfully
authenticated by keystone.
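
A sketch of the resulting heat.conf section (the variable standing in
for the public keystone URL is an assumption):

    [clients_keystone]
    auth_uri = {{ keystone_public_url }}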
      
      Change-Id: Ida49528f88685710b5e6b8f3c4d4622506af5ae1
      Closes-Bug: #1762754
  18. Apr 18, 2018
• Fix SSL api for multiple services · a81a5d5d
      Kevin TIBI authored
If SSL is enabled, the API of multiple services returns the wrong
external URL, missing the https prefix.

Remove the condition guarding deletion of the HTTP header.
      
      Change-Id: I4264e04d0d6b9a3e11ef7dd7add6c5e166cf9fb4
      Closes-Bug: #1749155
      Closes-Bug: #1717491
  19. Jan 22, 2018
  20. Jan 12, 2018
  21. Nov 22, 2017
• Add support for hybrid messaging backends · fd1d3af0
      Andrew Smith authored
This commit separates the messaging RPC and notification transports in
order to support separate and different oslo.messaging backends (see
the sketch after the list below).
      
      This patch:
      * add rpc and notify variables
      * update service role conf templates
* add example to globals.yml
      * add release note
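
With oslo.messaging this split maps to two transport URLs, roughly
(hosts and credentials illustrative):

    [DEFAULT]
    # RPC transport
    transport_url = rabbit://openstack:secret@rpc-host:5672//

    [oslo_messaging_notifications]
    # Separate transport (and possibly backend) for notifications
    transport_url = rabbit://openstack:secret@notify-host:5672//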
      
      Implements: blueprint hybrid-messaging
      Change-Id: I34691c2895c8563f1f322f0850ecff98d11b5185
  22. Jul 18, 2017
  23. Jul 06, 2017
  24. Jun 02, 2017
  25. May 05, 2017
• Fix heat ec2 keystone auth · de31cdc7
      Eduardo Gonzalez authored
heat-api-cfn needs to point to the keystone v3 API.
Otherwise heat fails while authenticating for scaling policies:

``AWS authentication failure.``
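
A sketch of the relevant heat.conf section (the URL variable is an
assumption):

    [ec2authtoken]
    auth_uri = {{ keystone_internal_url }}/v3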
      
      Change-Id: I1950cd7359d8ad589feced870de76f02ef2c8a76
      Closes-Bug: #1672431
  26. May 04, 2017
  27. Mar 22, 2017
  28. Mar 10, 2017
  29. Mar 01, 2017
• Enable heat-api proxy header parsing · 63e5c444
      pomac authored
heat-api kept redirecting clients to use http:// instead of https://
when communicating with our https://-only load balancer.

Please examine the logic for enabling it carefully; it's hard to know
whether it should be enabled or not, and potentially it could be a
security risk.
      
      Based on openstack-ansible-os_heat:
      commit 4033a0f854cba6719c61812ef5b553e932a6c6c2
      Author: Kyle L. Henderson <kyleh@us.ibm.com>
      
          Enable oslo_middleware proxy header parsing
      
      "Heat has moved to using oslo_middleware for the http proxy header
      parsing, however the default is to not parse the headers.  When
      the external protocol differs from the internal protocol this
      parsing is required in order for heat to work properly since it
      will return 302 redirects to the client during some operations
      (such as delete stack).
      
      An example of this is when using haproxy with https configured
      for the external protocol and http for the internal protocol.
      If the oslo_middleware does not parse the headers, then any
      302 redirects would specify a url with http rather than
      correctly specifying https and the heat client would fail to
      connect on the redirect url."
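
The oslo.middleware switch in question is, roughly:

    [oslo_middleware]
    # Trust X-Forwarded-Proto and friends so redirects preserve https.
    enable_proxy_headers_parsing = True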
      
      Change-Id: I38661a0bc2163a7f72febd98b7ae6f51c5d45ad5
  30. Jan 25, 2017
  31. Jan 10, 2017
  32. Oct 10, 2016
  33. Sep 28, 2016
• Fix wrong heat trustee configuration · 57ba2cd2
      Martin Matyáš authored
      "project_domain_id" and "project_name"
      cannot be specified [trustee] section or keystone will throw a
      "cannot be scoped to multiple targets" error when we attempt to get
      a token scoped to a trust.
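
A sketch of a valid [trustee] section, carrying only user credentials
and no project scoping (values illustrative):

    [trustee]
    auth_type = password
    auth_url = http://keystone.example.com:35357
    username = heat
    password = secret
    user_domain_id = default
    # No project_name or project_domain_id: the trust supplies the scope.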
      
      Change-Id: I167c0e31835d05b8069fd931ef76fb337dd99207
      Closes-Bug: #1628353
  34. Sep 19, 2016
  35. Sep 12, 2016
  36. Aug 25, 2016
  37. Jul 27, 2016
• Use a lower number of the workers · 3c3b0288
      Jeffrey Zhang authored
Use a lower number of workers rather than the default value, which is
equal to the number of CPUs. Otherwise, in an environment with many
CPUs, the number of processes will be very high.

In this PS, we use min(5, <number of CPUs>) as the default worker
count.
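
In Ansible/Jinja terms, the default might be expressed as (the
variable name is an assumption):

    # Cap the per-service worker count at 5, or fewer on small hosts.
    openstack_service_workers: "{{ [ansible_processor_vcpus, 5] | min }}"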
      
      Closes-Bug: #1582254
      Change-Id: I1c32cf0db794b43b8fb8be18f39190422ca5846f
  38. Apr 11, 2016
• Set db connection retry to infinity · 67333e4d
      Ryan Hallisey authored
Make sure that all the services will attempt to
connect to the database an infinite number of times.
If the database ever disappears for some reason, we
want the services to try to reconnect more than just
10 times.
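
With oslo.db this corresponds to, roughly:

    [database]
    # -1 retries forever; the previous setting gave up after 10 attempts.
    max_retries = -1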
      
      Closes-bug: #1505636
      Change-Id: I77abbf72ce5bfd68faa451bb9a72bd2544963f4b