  1. Sep 14, 2019
  2. Sep 12, 2019
    • Sync enable flags in globals.yml · fd1fcdc4
      Mark Goddard authored
      Change-Id: I593b06c447d156c7a981d1c617f4f9baa82884de
      Closes-Bug: #1841175
    • Enable Swift Recon · d463d3f7
      Scott Solkhon authored
      
      This commit adds the necessary configuration to the Swift account,
      container and object configuration files to enable the Swift recon
      cli.
      
      In order to give the object server on each Swift host access to the
      recon files, a Docker volume is mounted into each container which
      generates them. The volume is then mounted read-only into the object
      server container. Note that multiple containers append to the same
      file. This should not be a problem, since Swift uses a lock when
      appending.
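      As a sketch, enabling the recon CLI amounts to adding the recon
      middleware to each server's pipeline. The fragment below follows the
      upstream Swift account-server example; the exact pipeline and cache
      path in this change are assumptions:

```ini
[pipeline:main]
pipeline = healthcheck recon account-server

[filter:recon]
use = egg:swift#recon
# Shared cache directory, backed by the Docker volume described above
recon_cache_path = /var/cache/swift
```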
      
      Change-Id: I343d8f45a78ebc3c11ed0c68fe8bec24f9ea7929
      Co-authored-by: Doug Szumski <doug@stackhpc.com>
  3. Sep 11, 2019
  4. Sep 10, 2019
    • Fixes default volumes config for masakari-instancemonitor · 04975cea
      liyingjun authored
      Change-Id: Idee76f6da357c600d52b4280d29b685ed443191a
    • Configure Zun for Placement (Train+) · 0f5e0658
      Hongbin Lu authored
      After the integration with placement [1], we need to configure how
      zun-compute is going to work with nova-compute.
      
      * If zun-compute and nova-compute run on the same compute node,
        we need to set 'host_shared_with_nova' as true so that Zun
        will use the resource provider (compute node) created by nova.
        In this mode, containers and VMs could claim allocations against
        the same resource provider.
      * If zun-compute runs on a node without nova-compute, no extra
        configuration is needed. By default, each zun-compute will create
        a resource provider in placement to represent the compute node
        it manages.
      
      [1] https://blueprints.launchpad.net/zun/+spec/use-placement-resource-management
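      As a sketch, the co-located case described above might translate into
      a zun.conf fragment like the following. The section and option names
      are taken from the Zun placement work referenced in [1]; treat them as
      an assumption rather than the exact content of this change:

```ini
[compute]
# Only when zun-compute shares the node with nova-compute: reuse the
# resource provider that nova-compute already registered in placement.
host_shared_with_nova = true
```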
      
      Change-Id: I2d85911c4504e541d2994ce3d48e2fbb1090b813
  5. Sep 06, 2019
  6. Sep 05, 2019
  7. Sep 04, 2019
    • Improve admin-openrc · f8c3dccd
      Xing Zhang authored
      Clear the old environment variables before setting new ones.
      Set the openstack client to use internalURL.
      Set the manila client to use internalURL.
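      A hypothetical admin-openrc fragment along these lines; the variable
      names follow common OpenStack client conventions and are not copied
      from the patch:

```shell
# Clear any OS_* variables left over from a previously sourced openrc.
for key in $(set | awk -F= '/^OS_/ {print $1}'); do unset "$key"; done

# Pin the clients to the internal endpoints.
export OS_INTERFACE=internal
export OS_ENDPOINT_TYPE=internalURL
export MANILA_ENDPOINT_TYPE=internalURL
```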
      
      Change-Id: I263fa11ff5439b28d63a6a9ce7ba460cb56fb8e2
  8. Sep 03, 2019
    • Fix Nova cell search · 7b636033
      Doug Szumski authored
      The output from `nova-manage cell_v2 list_cells --verbose` contains
      an extra column, stating whether the cell is enabled or not. This means
      that the regex never matches, so existing_cells is always empty.
      
      This fix updates the regex by adding a match group for this field which
      may be used in a later change.
      
      Unfortunately the CLI doesn't output in JSON format, which would make
      this a lot less messy.
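      To illustrate the fix, here is a hedged Python sketch of a regex with
      an extra group for the trailing enabled/disabled column. The sample
      table and its column layout are assumptions modelled on typical
      nova-manage output, not taken from the patch:

```python
import re

# Fabricated example of the tabular output of
# `nova-manage cell_v2 list_cells --verbose` (layout assumed).
output = """
+-------+--------------------------------------+------------------------+--------------------------------+----------+
|  Name |                 UUID                 |     Transport URL      |      Database Connection       | Disabled |
+-------+--------------------------------------+------------------------+--------------------------------+----------+
| cell1 | 68ffb8bb-8872-4f95-befb-0f0f28c3f47b | rabbit://nova:****@ctl | mysql+pymysql://nova:****@ctl/ | False    |
+-------+--------------------------------------+------------------------+--------------------------------+----------+
"""

# The extra match group for the trailing column is what the fix adds;
# without it the pattern never matches the row, so the cell list is empty.
cell_re = re.compile(
    r"\|\s+(?P<name>[^|\s]+)\s+"
    r"\|\s+(?P<uuid>[0-9a-f-]+)\s+"
    r"\|\s+(?P<url>[^|\s]+)\s+"
    r"\|\s+(?P<db>[^|\s]+)\s+"
    r"\|\s+(?P<disabled>[^|\s]+)\s+\|"
)

existing_cells = [m.groupdict() for m in cell_re.finditer(output)]
print(existing_cells[0]["name"], existing_cells[0]["disabled"])  # → cell1 False
```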
      
      Closes-Bug: #1842460
      Change-Id: Ib6400b33785f3ef674bffc9329feb3e33bd3f9a3
  9. Sep 02, 2019
  10. Aug 30, 2019
    • [nova] Fix service catalog lookup of Neutron endpoint · 096555dc
      Joseph M authored
      nova.conf currently uses the [neutron] "url" parameter which has been
      deprecated since 17.0.0. In multi-region environments this can
      cause Nova to look up the Neutron endpoint for a different region.
      Remove this parameter and set region_name and
      valid_interfaces to allow the correct lookup to be performed.
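      A sketch of the resulting nova.conf fragment; the region name is a
      placeholder and the option names follow the Nova [neutron]
      configuration reference:

```ini
[neutron]
# "url" is removed; the endpoint is discovered via the service catalog.
region_name = RegionOne
valid_interfaces = internal
```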
      
      Change-Id: I1bbc73728439a460447bc8edd264f9f2d3c814e0
      Closes-Bug: #1836952
    • Use net_default_mac in ansible/roles/ironic/templates/ironic_pxe_uefi.default.j2 · 870cb1be
      Jan Horstmann authored
      Upstream ironic went from $net_default_ip to $net_default_mac in
      ironic/drivers/modules/master_grub_cfg.txt with
      https://review.opendev.org/#/c/578959/
      
      This commit makes the same change for
      ansible/roles/ironic/templates/ironic_pxe_uefi.default.j2
      
      Using $net_default_ip breaks ironic standalone deployments with
      [dhcp]dhcp_provider = none
      
      Change-Id: I2ca9a66d2bdb0aab5cd9936c8be8206e6ade3bd5
      Closes-Bug: #1842078
  11. Aug 29, 2019
  12. Aug 26, 2019
    • [octavia] Add region-specific catalog lookups · 51033d9b
      Joseph M authored
      octavia.conf is missing configuration values required to do service
      catalog lookups in multiple region environments. Without them Octavia
      can try to contact a service in a different region than its own. Specify
      region_name and endpoint_type for the glance, neutron, and nova services
      to prevent this from happening.
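      A sketch of the kind of octavia.conf fragment implied above. The
      section and option names are assumed from the Octavia configuration
      reference, and RegionOne is a placeholder:

```ini
[glance]
region_name = RegionOne
endpoint_type = internal

[neutron]
region_name = RegionOne
endpoint_type = internal

[nova]
region_name = RegionOne
endpoint_type = internal
```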
      
      Change-Id: I753cf443c1506bbd7b69fc47e2e0a9b39857509c
      Closes-Bug: #1841479
  13. Aug 23, 2019
  14. Aug 22, 2019
    • Implement TLS encryption for internal endpoints · b0ecd8b6
      Krzysztof Klimonda authored
      This review is the first in a series of patches; it introduces
      optional encryption for internal OpenStack endpoints, implementing
      part of the add-ssl-internal-network spec.
      
      Change-Id: I6589751626486279bf24725f22e71da8cd7f0a43
    • Don't assume etcd group exists in baremetal role · 331d373b
      Mark Goddard authored
      The baremetal role currently makes few assumptions about the
      inventory, and in Kayobe the seed is deployed using a very minimal
      inventory.
      
      Icf3f01516185afb7b9f642407b06a0204c36ecbe added a reference to the etcd
      group in the baremetal role, which causes kayobe seed deployment to fail
      with the following error:
      
          AnsibleUndefinedVariable: 'dict object' has no attribute 'etcd'
      
      This change defaults the group lookup to an empty list.
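      In Jinja terms the fix boils down to defaulting the group lookup. A
      hypothetical task illustrating the pattern (the task and message are
      invented for illustration):

```yaml
- name: Example of a lookup that tolerates a missing etcd group
  debug:
    msg: "This host is an etcd member"
  when: inventory_hostname in (groups['etcd'] | default([]))
```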
      
      Change-Id: Ib3252143a97652c5cf70b56cbfd7c7ce69f93a55
      Closes-Bug: #1841073
    • Use fluentd image labels · 4180bee0
      Michal Nasiadka authored
      In order to orchestrate a smooth transition to fluentd 0.14.x
      (aka the 1.0 stable branch, aka td-agent 3 from the td-agent
      repository), use image labels (fluentd_version and fluentd_binary).
      
      Depends-On: https://review.opendev.org/676411
      Change-Id: Iab8518c34ef876056c6abcdb5f2e9fc9f1f7dbdd
    • Remove stale nova-consoleauth variables · 67c59b1c
      Mark Goddard authored
      Nova-consoleauth support was removed in
      I099080979f5497537e390f531005a517ab12aa7a, but these variables were
      left.
      
      Change-Id: I1ce1631119bba991225835e8e409f11d53276550
  15. Aug 21, 2019
    • Add --force to ceph mgr dashboard enablement · 361f61d4
      Michal Nasiadka authored
      Sometimes mgr dashboard enablement fails with the following message:
      "Error ENOENT: all mgr daemons do not support module 'dashboard',
      pass --force to force enablement"
      
      Change-Id: Ie7052dbdccb855e02da849dbc207b5d1778e2c82
    • Add meta for some roles · 74edd54b
      ljhuang authored
      The meta directory is missing from some roles; this patch set adds it.
      
      Change-Id: Ib7e39820a48659202ddd1c1f91b2e8c3f0529443
  16. Aug 20, 2019
    • Fix import of horizon custom_local_settings on python3 · 120e8080
      Dincer Celik authored
      Change-Id: I71f3e8ab50426246b595755a8f3298ba7ca0a50d
      Closes-Bug: #1803029
    • Fix HAProxy check for MariaDB · d34147b8
      Doug Szumski authored
      The MariaDB role HAProxy config section exposes MariaDB on the
      mariadb_port which may not always be the same as database_port. The
      HAProxy role checks that the database_port is free, and not the
      mariadb_port. This could mean that the check passes, but the actual
      port which HAProxy will attempt to use is taken.
      
      This change configures HAProxy to talk to the MariaDB instances on
      the mariadb_port, and maps them to the database_port which is used by
      most services as part of the DB connection string.
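      A hedged sketch of the resulting HAProxy stanza. Addresses and server
      names are placeholders, and mariadb_port is shown overridden to 3307
      to make the frontend-to-backend mapping visible (in practice both
      ports often default to 3306):

```
listen mariadb
    # Frontend: the database_port used in service connection strings
    bind 192.0.2.10:3306
    # Backends: talk to each MariaDB instance on mariadb_port
    server ctl1 192.0.2.11:3307 check
    server ctl2 192.0.2.12:3307 check backup
```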
      
      There is a small risk that it may break someone's override config.
      
      Change-Id: I9507ee709cb21eb743112107770ed3170c61ef74
  17. Aug 19, 2019
    • Removes monasca_grafana persistent volume · ff8c24d6
      Isaac Prior authored
      The monasca_grafana docker volume currently persists across container
      builds, causing changes to installed plugins during build to be ignored.
      This change deletes the volume entirely and forces plugin changes to be
      applied via rebuild.
      
      Change-Id: I36e62235a085e5c1955fdb5ae31f603be8ba69bf
    • Set default timeout to 60 seconds for docker stop · 33efcb81
      Mark Goddard authored
      The previous default timeout was 10 seconds, which does not always
      allow services enough time to shut down safely.
      
      Change-Id: I54eff91567108a7e5d99f067829ae4a6900cd859
  18. Aug 18, 2019
  19. Aug 16, 2019
  20. Aug 15, 2019
    • Standardize the configuration of "oslo_messaging" section · 22a6223b
      Rafael Weingärtner authored
      After all of the discussions we had on
      "https://review.opendev.org/#/c/670626/2", I studied all projects that
      have an "oslo_messaging" section. Afterwards, I applied the same method
      that is already used in "oslo_messaging" section in Nova, Cinder, and
      others. This guarantees that we have a consistent method to
      enable/disable notifications across projects based on components (e.g.
      Ceilometer) being enabled or disabled. Here follows the list of
      components and the respective changes I made.
      
      * Aodh:
      The section is declared, but it is not used. Therefore, it will
      be removed in an upcoming PR.
      
      * Congress:
      The section is declared, but it is not used. Therefore, it will
      be removed in an upcoming PR.
      
      * Cinder:
      It was already properly configured.
      
      * Octavia:
      The section is declared, but it is not used. Therefore, it will
      be removed in an upcoming PR.
      
      * Heat:
      It was already using a similar scheme; I just modified it a little bit
      to be the same as we have in all other components
      
      * Ceilometer:
      Ceilometer publishes some messages to RabbitMQ. However, the
      default driver is "messagingv2", and not '' (empty) as defined in Oslo;
      these configurations are defined in ceilometer/publisher/messaging.py.
      Therefore, we do not need to do anything for the
      "oslo_messaging_notifications" section in Ceilometer
      
      * Tacker:
      It was already using a similar scheme; I just modified it a little bit
      to be the same as we have in all other components
      
      * Neutron:
      It was already properly configured.
      
      * Nova
      It was already properly configured. However, we found another issue
      with its configuration. Kolla-ansible does not configure nova
      notifications as it should. If 'searchlight' is not installed (enabled)
      the 'notification_format' should be 'unversioned'. The default is
      'both', so Nova will send notifications to the
      versioned_notifications queue, but that queue has no consumer when
      'searchlight' is disabled. In our case, the queue accumulated 511k
      messages. The huge backlog of stuck messages made the RabbitMQ
      cluster unstable.
      
      https://bugzilla.redhat.com/show_bug.cgi?id=1478274
      https://bugs.launchpad.net/ceilometer/+bug/1665449
      
      * Nova_hyperv:
      I added the same configurations as in Nova project.
      
      * Vitrage
      It was already using a similar scheme; I just modified it a little bit
      to be the same as we have in all other components
      
      * Searchlight
      I created a mechanism similar to what we have in AODH, Cinder, Nova,
      and others.
      
      * Ironic
      I created a mechanism similar to what we have in AODH, Cinder, Nova,
      and others.
      
      * Glance
      It was already properly configured.
      
      * Trove
      It was already using a similar scheme; I just modified it a little bit
      to be the same as we have in all other components
      
      * Blazar
      It was already using a similar scheme; I just modified it a little bit
      to be the same as we have in all other components
      
      * Sahara
      It was already using a similar scheme; I just modified it a little bit
      to be the same as we have in all other components
      
      * Watcher
      I created a mechanism similar to what we have in AODH, Cinder, Nova,
      and others.
      
      * Barbican
      I created a mechanism similar to what we have in Cinder, Nova,
      and others. I also added a configuration to 'keystone_notifications'
      section. Barbican needs its own queue to capture events from Keystone.
      Otherwise, it has an impact on Ceilometer and other systems that are
      connected to the "notifications" default queue.
      
      * Keystone
      Keystone is the system that triggered this work with the discussions
      that followed on https://review.opendev.org/#/c/670626/2. After a long
      discussion, we agreed to apply the same approach that we have in Nova,
      Cinder and other systems in Keystone. That is what we did. Moreover, we
      introduced a new topic "barbican_notifications" when Barbican is
      enabled. We also removed the variable enable_cadf_notifications, as
      it is obsolete; the default in Keystone is now CADF.
      
      * Mistral:
      The driver was hardcoded to "noop". However, that does not seem like
      good practice. Instead, I applied the same standard of using the driver
      and pushing to the "notifications" queue if Ceilometer is enabled.
      
      * Cyborg:
      I created a mechanism similar to what we have in AODH, Cinder, Nova,
      and others.
      
      * Murano
      It was already using a similar scheme; I just modified it a little bit
      to be the same as we have in all other components
      
      * Senlin
      It was already using a similar scheme; I just modified it a little bit
      to be the same as we have in all other components
      
      * Manila
      It was already using a similar scheme; I just modified it a little bit
      to be the same as we have in all other components
      
      * Zun
      The section is declared, but it is not used. Therefore, it will
      be removed in an upcoming PR.
      
      * Designate
      It was already using a similar scheme; I just modified it a little bit
      to be the same as we have in all other components
      
      * Magnum
      It was already using a similar scheme; I just modified it a little bit
      to be the same as we have in all other components
      
      Closes-Bug: #1838985
      
      Change-Id: I88bdb004814f37c81c9a9c4e5e491fac69f6f202
      Signed-off-by: Rafael Weingärtner <rafael@apache.org>
    • Add Masakari Ansible role · 577bb50a
      Kien Nguyen authored
      Masakari provides Instances High Availability Service for
      OpenStack clouds by automatically recovering failed Instances.
      
      Depends-On: https://review.openstack.org/#/c/615469/
      
      
      Change-Id: I0b3457232ee86576022cff64eb2e227ff9bbf0aa
      Implements: blueprint ansible-masakari
      Co-Authored-By: Gaëtan Trellu <gaetan.trellu@incloudus.com>
    • Wait for MariaDB to be accessible via HAProxy · 03cd7eb3
      Scott Solkhon authored
      Explicitly wait for the database to be accessible via the load balancer.
      Sometimes it can reject connections even when all database services are up,
      possibly due to the health check polling in HAProxy.
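      A hypothetical Ansible task expressing that wait; the module usage,
      retry counts, and variable names are a sketch, not the patch itself:

```yaml
- name: Wait for MariaDB to accept connections through the load balancer
  command: >
    mysql -h {{ kolla_internal_vip_address }} -P {{ database_port }}
    -e 'SELECT 1'
  register: result
  retries: 10
  delay: 6
  until: result.rc == 0
```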
      
      Closes-Bug: #1840145
      Change-Id: I7601bb710097a78f6b29bc4018c71f2c6283eef2
    • Allow cinder coordination backend to be configured · 03b4c706
      Radosław Piliszek authored
      
      This allows the operator to prevent enabling Redis and/or
      etcd from implicitly configuring the Cinder coordination backend.
      
      Note this change is backwards-compatible.
      
      Change-Id: Ie10be55968e43e3b9cc347b1b58771c1f7b1b910
      Related-Bug: #1840070
      Signed-off-by: Radosław Piliszek <radoslaw.piliszek@gmail.com>