  1. Oct 01, 2019
    • Copy Nova role as a basis for the Nova cell role · 952b5308
      Doug Szumski authored
      The idea is to factor out a role for deploying Nova related services
      to cells. Since all deployments use cells, this role can be used
      in both regular deployments which have just cell0 and cell1,
      and deployments with many cells.
      
      Partially Implements: blueprint support-nova-cells
      Change-Id: Ib1f36ec0a773c384f2c1eac1843782a3e766045a
    • Add service-rabbitmq role · 039cc2be
      Mark Goddard authored
      This role can be used by other roles to register RabbitMQ resources.
      Currently, support is provided for creating virtual hosts and users.
      
      Change-Id: Ie1774a10b4d629508584af679b8aa9e372847804
      Partially Implements: blueprint support-nova-cells
      Depends-On: https://review.opendev.org/684742
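      A consuming role might invoke it roughly as follows. This is a minimal
      sketch: the variable names (service_rabbitmq_users and its keys) are
      illustrative assumptions, not the role's actual interface.

      ```yaml
      # Sketch only: variable names below are assumptions, not the real role API.
      - name: Register RabbitMQ resources for a service
        include_role:
          name: service-rabbitmq
        vars:
          service_rabbitmq_users:
            - user: nova
              password: "{{ nova_rabbitmq_password }}"
              vhost: "nova"
      ```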
  2. Sep 30, 2019
  3. Sep 29, 2019
  4. Sep 26, 2019
  5. Sep 25, 2019
  6. Sep 24, 2019
    • Switch default cloudkitty storage backend to influxdb · 27f4876e
      Mark Goddard authored
      Backport: stein
      
      In the Stein release, cloudkitty switched the default storage backend
      from sqlalchemy to influxdb. In kolla-ansible stein configuration, we
      did not explicitly set the storage backend, and so we automatically
      picked up this change. However, prior to
      https://review.opendev.org/#/c/615928/ we did not have full support for
      InfluxDB as a storage backend, and so this has broken the Rocky-Stein
      upgrade (https://bugs.launchpad.net/kolla-ansible/+bug/1838641), which
      fails with this during the DB sync:
      
      ERROR cloudkitty InfluxDBClientError: get_list_retention_policies()
      requires a database as a parameter or the client to be using a database
      
      This change synchronises our default with cloudkitty's (influxdb), and
      also provides an upgrade transition to create the influxdb database.
      
      We also move the cloudkitty_storage_backend variable to
      group_vars/all.yml, since it is used to determine whether to enable
      influxdb.
      
      Finally, the section name in cloudkitty.conf was incorrect: it was
      storage_influx, but should be storage_influxdb.
      
      Change-Id: I71f2ed11bd06f58e141d222e2709835b7ddb2c71
      Closes-Bug: #1838641
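      The corrected configuration might look like the following sketch of
      cloudkitty.conf; the connection values are placeholders.

      ```ini
      # Sketch of cloudkitty.conf after this change; values are placeholders.
      [storage]
      backend = influxdb

      [storage_influxdb]
      host = 10.0.0.10
      port = 8086
      database = cloudkitty
      ```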
    • Create and grant all keystone roles in service-ks-register · 741f6d9b
      Mark Goddard authored
      This ensures we execute the keystone os_* modules in one place.
      
      Also rework some of the task names and loop item display.
      
      Change-Id: I6764a71e8147410e7b24b0b73d0f92264f45240c
    • Swift: add swift_extra_ring_files variable to handle multi-policies deployment · 0adbbb26
      Alexis Deberg authored
      The current tasks use a hardcoded list, deploying only the required files.
      When using multiple custom policies, additional object-*.builder and
      object*.gz files must be deployed as well.
      This adds a new empty-by-default variable that can be overridden when needed.
      
      Change-Id: I29c8e349c7cc83e3a2e01ff702d235a0cd97340e
      Closes-Bug: #1844752
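      For example, a deployment with two extra storage policies might override
      the variable in globals.yml like this (the ring file names are
      illustrative and depend on the policies defined):

      ```yaml
      # Hypothetical override; exact ring file names depend on the policies.
      swift_extra_ring_files:
        - object-1.builder
        - object-1.ring.gz
        - object-2.builder
        - object-2.ring.gz
      ```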
  7. Sep 23, 2019
    • Ensure keepalived is restarted during upgrade · 6f05f1b8
      Mark Goddard authored
      During upgrade, we stop all slave keepalived containers. However, if the
      keepalived container configuration has not changed, we never restart
      them.
      
      This change fixes the issue by notifying the restart handler when the
      containers are stopped.
      
      Change-Id: Ibe094b0c14a70a0eb811182d96f045027aa02c2a
      Closes-Bug: #1836368
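      The fix amounts to adding a notify to the stop task, roughly as in this
      sketch (task details are simplified from the real role):

      ```yaml
      # Sketch: stopping slave containers now notifies the restart handler,
      # so containers with unchanged configuration are still restarted.
      - name: Stopping all slave keepalived containers
        kolla_docker:
          action: "stop_container"
          name: "keepalived"
        when: inventory_hostname != groups['haproxy'][0]
        notify:
          - Restart keepalived container
      ```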
    • Add <project>_install_type for all projects · cc555c41
      Mark Goddard authored
      This allows the install type for a project to be different from
      kolla_install_type.
      
      This can be used to avoid hitting bug 1786238, since kuryr only supports
      the source type.
      
      Change-Id: I2b6fc85bac092b1614bccfd22bee48442c55dda4
      Closes-Bug: #1786238
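      For example, to work around the kuryr limitation, globals.yml can pin
      just that one project to the source type:

      ```yaml
      # Example globals.yml: kuryr built from source, everything else binary.
      kolla_install_type: "binary"
      kuryr_install_type: "source"
      ```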
    • [prometheus] Added support for extra options · 5ff7bab4
      Dincer Celik authored
      This change introduces a way to pass extra options to Prometheus.

      Currently, Prometheus runs with nearly default options, and as clouds
      grow bigger, extra parameters need to be passed to Prometheus.
      
      Change-Id: Ic773c0b73062cf3b2285343bafb25d5923911834
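      Usage might look like the following sketch. The variable name is an
      assumption based on this change's description; the flags shown are
      standard Prometheus command-line options.

      ```yaml
      # Illustrative only: the variable name may differ from the implementation.
      prometheus_cmdline_extras: "--storage.tsdb.retention.time=30d --log.level=debug"
      ```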
  8. Sep 20, 2019
    • Remove some deprecated config options · e127627d
      Mark Goddard authored
      Heat's [DEFAULT] deferred_auth_method is deprecated, and we are setting
      the default value of 'trusts'.
      
      Glance's [DEFAULT] registry_host is deprecated, and we do not deploy a
      registry.
      
      Change-Id: I80024907c575982699ce323cd9a93bab94c988d3
    • Add retries to keystone resource registration tasks · 2ddf1fbf
      Mark Goddard authored
      Sometimes things go wrong. We shouldn't fail a Kolla Ansible run because
      of a temporary failure when creating keystone resources.
      
      This change adds retries to the tasks in the service-ks-register role.
      The default is 5 retries with a 10 second delay, as used in OpenStack
      Ansible.
      
      Change-Id: Ib692062fb93ba330bb9c8a35c684ad06652be8a2
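      The retry pattern is the standard Ansible until/retries/delay construct,
      roughly as in this sketch:

      ```yaml
      # Sketch of the retry pattern applied to a keystone registration task.
      - name: Creating the service project
        os_project:
          name: "service"
        register: result
        until: result is success
        retries: 5
        delay: 10
      ```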
  9. Sep 19, 2019
  10. Sep 18, 2019
    • Remove support for OracleLinux · 15e35333
      Mark Goddard authored
      We have agreed to remove support for Oracle Linux.
      
      http://lists.openstack.org/pipermail/openstack-discuss/2019-June/006896.html
      
      Change-Id: If11b4ff37af936a0cfd34443e8babb952307882b
    • Adding Prometheus blackbox exporter · b22375eb
      Scott Solkhon authored
      
      This commit follows up the work in Kolla to deploy and configure the
      Prometheus blackbox exporter.
      
      An example blackbox-exporter module has been added (disabled by default)
      called os_endpoint. This allows for the probing of endpoints over HTTP
      and HTTPS. This can be used to monitor that OpenStack endpoints return a status
      code of either 200 or 300, and the word 'versions' in the payload.
      
      This change introduces a new variable `prometheus_blackbox_exporter_endpoints`.
      Currently no defaults are specified because the configuration is heavily
      dependent on the deployment.
      
      Co-authored-by: Jack Heskett <Jack.Heskett@gresearch.co.uk>
      Change-Id: I36ad4961078d90e2fd70c9a3368f5157d6fd89cd
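      An override might look like the following sketch; the exact data
      structure expected by the prometheus template is an assumption here,
      and the target URL is a placeholder.

      ```yaml
      # Illustrative only: the expected structure may differ.
      prometheus_blackbox_exporter_endpoints:
        - name: "keystone"
          module: "os_endpoint"
          target: "https://keystone.example.com:5000"
      ```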
  11. Sep 17, 2019
  12. Sep 16, 2019
    • Catch errors and changes in kolla_toolbox module · 70b515bf
      Mark Goddard authored
      The kolla_toolbox Ansible module executes ad-hoc Ansible commands in the
      kolla_toolbox container, and parses the output to make it look as if
      ansible-playbook executed the command. Currently, however, this module
      sometimes fails to catch failures of the underlying command, and also
      sometimes shows tasks as 'ok' when the underlying command made changes.
      This has been tested both before and after the upgrade to ansible 2.8.
      
      This change fixes this issue by configuring ansible to emit output in
      JSON format, to make parsing simpler. We can now pick up errors and
      changes, and signal them to the caller.
      
      This change also adds an ansible playbook, tests/test-kolla-toolbox.yml,
      that can be executed to test the module. It's not currently integrated
      with any CI jobs.
      
      Note that this change cannot be backported as the JSON output callback
      plugin was added in Ansible 2.5.
      
      Change-Id: I8236dd4165f760c819ca972b75cbebc62015fada
      Closes-Bug: #1844114
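      As an illustration, a task using the module looks roughly like this
      sketch; with the fix, a failure or a change reported by the underlying
      module now propagates to the caller.

      ```yaml
      # Sketch of a kolla_toolbox invocation (parameter names assumed from
      # the module's interface; path is a placeholder).
      - name: Ensure a directory exists via the kolla_toolbox container
        kolla_toolbox:
          module_name: file
          module_args: path=/var/lib/example state=directory
      ```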
    • Add custom filters for checking services · af2e7fd7
      Mark Goddard authored
      These filters can be used to capture a lot of the logic that we
      currently have in 'when' statements, about which services are enabled
      for a particular host.
      
      In order to use these filters, it is necessary to install the
      kolla_ansible python module, and not just the dependencies listed in
      requirements.txt. The CI test and quickstart install from source
      documentation has been updated accordingly.
      
      Ansible is not currently in OpenStack global requirements, so for unit
      tests we avoid a direct dependency on Ansible and provide fakes where
      necessary.
      
      Change-Id: Ib91cac3c28e2b5a834c9746b1d2236a309529556
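      The intended usage is to collapse multi-condition 'when' statements into
      a single filter, along these lines (the filter name here is illustrative
      of the approach, not necessarily the name added by this change):

      ```yaml
      # Sketch: one filter replaces several enabled/host-mapping conditions.
      - name: Decide whether to deploy nova-api on this host
        debug:
          msg: "deploying nova-api"
        when: nova_services['nova-api'] | service_enabled_and_mapped_to_host
      ```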
    • Update "openstack_release" variable to static branch name · 4eceb48d
      chenxing authored
      Since we use the release name as the default tag to publish images
      to Dockerhub, we should use this by default.
      
      This change also removes support for the magic value "auto".
      
      Change-Id: I5610cc7729e9311709147ba5532199a033dfd156
      Closes-Bug: #1843518
  13. Sep 15, 2019
  14. Sep 14, 2019
  15. Sep 13, 2019
    • Fix prometheus-alertmanager cluster bug · 01eb7a63
      Mark Flynn authored
      
      Edited the
      ansible/roles/prometheus/templates/prometheus-alertmanager.json.j2 file
      to change mesh.peer and mesh.listen-address to cluster.peer and
      cluster.listen-address. This stopped alertmanager from crashing with the
      error "--mesh.peer is an invalid flag".
      
      Change-Id: Ia0447674b9ec377a814f37b70b4863a2bd1348ce
      Signed-off-by: Mark Flynn <markandrewflynn@gmail.com>
  16. Sep 12, 2019
    • Sync enable flags in globals.yml · fd1fcdc4
      Mark Goddard authored
      Change-Id: I593b06c447d156c7a981d1c617f4f9baa82884de
      Closes-Bug: #1841175
    • Enable Swift Recon · d463d3f7
      Scott Solkhon authored
      
      This commit adds the necessary configuration to the Swift account,
      container and object configuration files to enable the Swift recon
      cli.
      
      In order to give the object server on each Swift host access to the
      recon files, a Docker volume is mounted into each container which
      generates them. The volume is then mounted read only into the object
      server container. Note that multiple containers append to the same
      file. This should not be a problem since Swift uses a lock when
      appending.
      
      Change-Id: I343d8f45a78ebc3c11ed0c68fe8bec24f9ea7929
      Co-authored-by: Doug Szumski <doug@stackhpc.com>
  17. Sep 11, 2019
  18. Sep 10, 2019
    • Fixes default volumes config for masakari-instancemonitor · 04975cea
      liyingjun authored
      Change-Id: Idee76f6da357c600d52b4280d29b685ed443191a
    • Configure Zun for Placement (Train+) · 0f5e0658
      Hongbin Lu authored
      After the integration with placement [1], we need to configure how
      zun-compute is going to work with nova-compute.
      
      * If zun-compute and nova-compute run on the same compute node,
        we need to set 'host_shared_with_nova' as true so that Zun
        will use the resource provider (compute node) created by nova.
        In this mode, containers and VMs could claim allocations against
        the same resource provider.
      * If zun-compute runs on a node without nova-compute, no extra
        configuration is needed. By default, each zun-compute will create
        a resource provider in placement to represent the compute node
        it manages.
      
      [1] https://blueprints.launchpad.net/zun/+spec/use-placement-resource-management
      
      Change-Id: I2d85911c4504e541d2994ce3d48e2fbb1090b813
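      On a shared node, the resulting zun.conf fragment would look roughly
      like this sketch (the section placement is assumed from the description
      above):

      ```ini
      # Sketch: zun-compute co-located with nova-compute.
      [compute]
      host_shared_with_nova = true
      ```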
  19. Sep 09, 2019
  20. Sep 06, 2019
    • Add [nova] section to ironic.conf · 8489a753
      Mark Goddard authored
      In the Train cycle, ironic added a [nova] section to its configuration.
      This is used to configure access to Nova API, for sending power state
      callbacks.
      
      This change adds the [nova] section to ironic.conf.
      
      Change-Id: Ib891af1db2a2c838c887e858ea0721f5e6a4fab0
      Closes-Bug: #1843070
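      The new section carries standard keystoneauth credentials; a sketch with
      placeholder values:

      ```ini
      # Sketch of the [nova] section in ironic.conf; values are placeholders.
      [nova]
      auth_url = http://controller:5000
      auth_type = password
      project_name = service
      username = ironic
      password = secret
      user_domain_name = Default
      project_domain_name = Default
      ```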
    • Fix removed and deprecated options in ironic.conf · 3da05319
      Mark Goddard authored
      The ironic configuration in ironic.conf uses several options which have
      been removed in the Train cycle:
      
      [glance] glance_api_servers was removed in https://review.opendev.org/#/c/665929.
      [neutron] url was removed in https://review.opendev.org/#/c/672971.
      
      We should use the endpoint catalog instead of specifying the endpoint
      for both of these, and also ironic inspector. region_name and
      valid_interfaces have been added for that purpose.
      
      Other options are deprecated.
      
      [conductor] api_url: Use [service_catalog] section to lookup ironic API
      endpoint instead.
      
      [inspector] enabled: No longer used.
      
      Change-Id: If07c4ff9bfea7d780aeff5c3295a0ace7d10ecdc
      Closes-Bug: #1843067
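      After this change the endpoints are discovered from the service catalog;
      a sketch of the replacement options with placeholder values:

      ```ini
      # Sketch: endpoint discovery via the service catalog instead of
      # hard-coded URLs.
      [glance]
      region_name = RegionOne
      valid_interfaces = internal

      [neutron]
      region_name = RegionOne
      valid_interfaces = internal

      [inspector]
      region_name = RegionOne
      valid_interfaces = internal
      ```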
    • Fix misspelled word · dd6a9d7d
      Q.hongtao authored
      Change-Id: I124cba4bfe85e76f732ae618619594004a5c911f