1. Dec 16, 2018
• Find Monasca agent plugins locally · 10d33f82
  Bartosz Zurkowski authored
      
The find module searches paths on the managed server. Since the role path and the custom Kolla config are located on the deployment node, and the deployment node is not considered a managed server, the Monasca plugin files cannot be found. After deployment, the container running the Monasca agent collector gets stuck in a restart loop due to the missing plugin files.

The problem does not occur if the deployment was started from a managed server (e.g. OSC). It does occur if the deployment was started from a separate deployment server, which is a common case.

This change enforces running the find module locally on the deployment node.
      
      Change-Id: Ia25daafe2f82f5744646fd2eda2d255ccead814e
Signed-off-by: Bartosz Zurkowski <b.zurkowski@samsung.com>
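
A minimal sketch of the resulting task, assuming the path, pattern, and register names; the essential part is delegate_to: localhost, which runs the find on the deployment node:

    # Hedged sketch: search for the plugin files on the deployment node.
    # Path, pattern, and register names are assumptions for illustration.
    - name: Find Monasca agent plugin files
      find:
        paths: "{{ role_path }}/templates/plugins"  # assumed path
        patterns: "*"                               # assumed pattern
      delegate_to: localhost
      register: monasca_plugins
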
• Call Grafana APIs only once · c5d1e1d5
  Bartosz Zurkowski authored
      
In multinode deployments, creating the default Grafana organization failed because Ansible attempted to call the Grafana API in the context of each host in the inventory. After the organization was created via the first host, subsequent attempts via the remaining hosts failed because the organization already existed. This change enforces creating the default organization only once.

Other tasks using the Grafana API have likewise been enforced to run only once.
      
      Change-Id: I3a93a719b3c9b4e55ab226d3b22d571d9a0f489d
Signed-off-by: Bartosz Zurkowski <b.zurkowski@samsung.com>
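
A minimal sketch of such a task, assuming the variable names and payload; the essential part is run_once: true:

    # Hedged sketch: call the Grafana API from a single host only.
    # URL and credential variable names are assumptions.
    - name: Create default Grafana organization
      uri:
        url: "{{ grafana_api_url }}/api/orgs"      # assumed variable
        method: POST
        user: "{{ grafana_admin_username }}"       # assumed variable
        password: "{{ grafana_admin_password }}"   # assumed variable
        body_format: json
        body:
          name: "default"                          # assumed organization name
        force_basic_auth: true
      run_once: true
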
2. Dec 14, 2018
• Create cells before starting nova services · 365bb517
  Mark Goddard authored
      Nova services may reasonably expect cell databases to exist when they
      start. The current cell setup tasks in kolla run after the nova
      containers have started, meaning that cells may or may not exist in the
      database when they start, depending on timing. In particular, we are
      seeing issues in kolla CI currently with jobs timing out waiting for
      nova compute services to start. The following error is seen in the nova
      logs of these jobs, which may or may not be relevant:
      
      No cells are configured, unable to continue
      
      This change creates the cell0 and cell1 databases prior to starting nova
      services.
      
      In order to do this, we must create new containers in which to run the
      nova-manage commands, because the nova-api container may not yet exist.
      This required adding support to the kolla_docker module for specifying a
      command for the container to run that overrides the image's command.
      
      We also add the standard output and error to the module's result when a
      non-detached container is run. A secondary benefit of this is that the
      output of bootstrap containers is now displayed in the Ansible output if
      the bootstrapping command fails, which will help with debugging.
      
      Change-Id: I2c1e991064f9f588f398ccbabda94f69dc285e61
      Closes-Bug: #1808575
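
A hedged sketch of such a bootstrap task, based on the description above; the command parameter is the new kolla_docker override, while the image variable and container name are assumptions:

    # Hedged sketch: run nova-manage in a one-off container before the
    # nova-api container exists. Names marked below are assumptions.
    - name: Create cell0 database mapping
      kolla_docker:
        action: start_container
        name: "nova_cell0_bootstrap"              # assumed container name
        image: "{{ nova_api_image_full }}"        # assumed variable
        command: "nova-manage cell_v2 map_cell0"  # the new command override
        detach: false
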
3. Dec 07, 2018
• Fix fact gathering with --limit · 56b4352f
  Mark Goddard authored
      Prior to this change, when the --limit argument is used, each host in the
      limit gathers facts for every other host. This is clearly unnecessary, and
      can result in up to (N-1)^2 fact gathers.
      
      This change gathers facts for each host only once. Hosts that are not in
      the limit are divided between those that are in the limit, and facts are
      gathered via delegation.
      
      This change also factors out the fact gathering logic into a separate
      playbook that is imported where necessary.
      
      Change-Id: I923df5af41a7f1b7b0142d0da185a9a0979be543
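
A minimal sketch of delegated fact gathering, assuming a delegate_hosts variable that holds this host's share of the hosts outside the limit:

    # Hedged sketch: each host in the limit gathers facts on behalf of
    # its assigned share of the non-limit hosts, exactly once per host.
    - name: Gather facts for hosts outside the limit
      setup:
      delegate_to: "{{ item }}"
      delegate_facts: true
      with_items: "{{ delegate_hosts }}"          # assumed variable
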
• Scalability improvements for disabled services · 5d8403bd
  Mark Goddard authored
Currently, every service has a play in site.yml that is executed, and the role is skipped if the service is disabled. This can be slow, particularly with many hosts, since each play takes time to set up and evaluate.
      
      This change creates various Ansible groups for hosts with services
      enabled at the beginning of the playbook. If a service is disabled, this
      new group will have no hosts, and the play for that service will be a
      noop.
      
      I have tested this on a laptop using an inventory with 12 hosts (each
      pointing to my laptop via SSH), and a config file that disables every
      service. Time taken to run 'kolla-ansible deploy':
      
      Before change: 2m30s
      After change: 0m14s
      
      During development I also tried an approach using an 'include_role' task
      for each service. This was not as good, taking 1m00s.
      
      The downsides to this patch are that there is a large number of tasks at
      the beginning of the playbook to perform the grouping, and every play
      for a disabled service now outputs this warning message:
      
      [WARNING]: Could not match supplied host pattern, ignoring: enable_foo_True
      
      This is because if the service is disabled, there are no hosts in the
      group. This seems like a reasonable tradeoff.
      
      Change-Id: Ie56c270b26926f1f53a9582d451f4bb2457fbb67
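
A sketch of the grouping approach for one service (nova used as an example); the group key format matches the warning message quoted above:

    # Grouping play at the start of the playbook.
    - name: Group hosts based on enabled services
      hosts: all
      gather_facts: false
      tasks:
        - name: Group hosts for nova
          group_by:
            key: "enable_nova_{{ enable_nova | bool }}"

    # A service play then targets the group, which is empty (a noop)
    # when the service is disabled.
    - name: Apply role nova
      hosts: enable_nova_True
      roles:
        - nova
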
4. Dec 05, 2018
• Allow set tty for containers · 846c15d8
  Eduardo Gonzalez authored
This change adds support for configuring a TTY; it was enabled by default, but a recent patch removed it. Some services, such as Karaf in OpenDaylight, require a TTY during startup.
      
      Closes-Bug: #1806662
      Change-Id: Ia4335523b727d0e45505cbb1efb40ccf04c27db7
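
A hedged sketch; the tty parameter name is assumed from the commit title, and it maps to Docker's TTY allocation:

    - name: Start opendaylight container
      kolla_docker:
        action: start_container
        name: "opendaylight"
        image: "{{ opendaylight_image_full }}"    # assumed variable
        tty: true                                 # assumed parameter name
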
• Fix glance configuration when using external ceph · 6f020a04
  Jeffrey Zhang authored
When using external Ceph (enable_ceph=no and glance_backend_ceph=yes), glance.conf should enable the RBD store.
      
      Change-Id: Ia09cd57c829b00f28674cddf44fb55583e193d0f
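
The resulting glance.conf fragment might look like the following; the option names are standard glance_store settings, while the pool and user values are assumptions:

    [glance_store]
    stores = rbd
    default_store = rbd
    rbd_store_pool = images
    rbd_store_user = glance
    rbd_store_ceph_conf = /etc/ceph/ceph.conf
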
5. Nov 26, 2018
• Fix karbor upgrade · 209d9c76
  Farid Da Encarnacao authored
Remove mode "0660", because mode is not a supported parameter for kolla_docker.
      
      Change-Id: I1e3d690eb3cb5d61b1c88f6da2f9b10e2c5f3603
      Closes-Bug: #1804702
• Support stop specific containers · 1a682fab
  Eduardo Gonzalez authored
With this change, an operator is able to stop a service's containers without stopping all services on a host. This change is the starting point for fast-forward upgrade support. In subsequent changes, new flags will be introduced to avoid stopping data plane services during upgrades.
      
      Change-Id: Ifde7a39d7d8596ef0d7405ecf1ac1d49a459d9ef
      Implements: blueprint support-stop-containers
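
A hedged sketch of stopping a single service's container via the kolla_docker stop_container action (the container name is just an example):

    - name: Stop nova_compute container
      kolla_docker:
        action: stop_container
        name: "nova_compute"
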
6. Nov 22, 2018
• Add new option to perform an on-demand backup of MariaDB · f704a780
  Nick Jones authored
      blueprint database-backup-recovery
      
      Introduce a new option, mariadb_backup, which takes a backup of all
      databases hosted in MariaDB.
      
      Backups are performed using XtraBackup, the output of which is saved to
      a dedicated Docker volume on the target host (which defaults to the
      first node in the MariaDB cluster).
      
      It supports either full (the default) or incremental backups.
      
      Change-Id: Ied224c0d19b8734aa72092aaddd530155999dbc3
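
Assumed usage, based on the option name given above (any additional flags, e.g. for incremental backups, are not shown):

    kolla-ansible mariadb_backup
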
7. Nov 21, 2018
• Add glance-cache support · cc9dae4d
  Eduardo Gonzalez authored
Glance cache is used to keep a locally cached copy of an image in the glance_api service. It is useful when an image is commonly used, as it reduces the time between pulling from the storage backend and sending to nova.
      
      Change-Id: I8e684cc10e4fee1cb52c17a126e3b11f69576cf6
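
A hedged sketch of the globals.yml settings this might introduce; both variable names are assumptions:

    enable_glance_image_cache: "yes"          # assumed variable
    glance_cache_max_size: "10737418240"      # assumed variable, 10 GiB
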
8. Nov 19, 2018
• Set "no_log" for "databases user and setting permissions" tasks · 03788e17
  Christian Berendt authored
      
      At the moment the "databases user and setting permissions" task for
      designate and nova leaks the database_password because of the use
      of with_items:
      
      ---snip---
      TASK [nova : Creating Nova databases user and setting permissions] *********************************************************
      ok: [x -> y] => (item={u'database_password': u'password', u'database_name': u'nova', u'database_username': u'nova'})
      ok: [x -> y] => (item={u'database_password': u'password', u'database_name': u'nova_cell0', u'database_username': u'nova'})
      ok: [x -> y] => (item={u'database_password': u'password', u'database_name': u'nova_api', u'database_username': u'nova_api'})
      ---snap---
      
      Change-Id: I141e4153223c8772c82a31d81e58057ce266c0b9
Co-authored-by: Bernd Müller <mueller@b1-systems.de>
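
A minimal sketch of the fix, assuming the module and variable used by the task; no_log suppresses the per-item output shown above:

    - name: Creating Nova databases user and setting permissions
      mysql_user:                                 # assumed module
        name: "{{ item.database_username }}"
        password: "{{ item.database_password }}"
        priv: "{{ item.database_name }}.*:ALL"
      with_items: "{{ nova_databases }}"          # assumed variable
      no_log: true
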
9. Nov 29, 2018
• Fix section trustee of sahara.conf · 4812d4a7
  Nicolas Haller authored
Tested on Rocky: /v3 needs to be appended to the auth_url variable for the trust/trustee mechanism to work. All cluster creation would fail otherwise.
      
      Closes-Bug: #1805896
      Change-Id: Ieedac124fa22e5a7ae622c16d47d482007bbec60
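
The resulting sahara.conf fragment might look like this (host and port are assumptions; the essential part is the /v3 suffix):

    [trustee]
    auth_url = http://{{ kolla_internal_fqdn }}:5000/v3
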
• Factor out OpenStack release detection playbook · fca91fe8
  Mark Goddard authored
      We copy-paste the same play into various playbooks to detect
      openstack_release. This change factors that code into a separate
      playbook that is imported.
      
      Change-Id: I5fea005642b960080bf5e43455618dc24766c386
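
A minimal sketch of the import; the playbook filename is an assumption:

    - import_playbook: detect-openstack-release.yml   # assumed filename
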
10. Nov 28, 2018
• Fix section keystone_authtoken of sahara.conf · b439d48a
  Nicolas Haller authored
Tested on Rocky: it seems there are no admin_* variables any more, and some others (username, password, ...) are missing, causing keystone to return HTTP 400 responses.
      
      Change-Id: If4a0919bfcd6b8d8a6bfd5df9001b4967e441e7e
      Closes-Bug: #1805714
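
The corrected section might look like the following; the option names are standard keystonemiddleware settings, while hosts, ports, and values are assumptions:

    [keystone_authtoken]
    auth_url = http://{{ kolla_internal_fqdn }}:35357
    auth_type = password
    project_domain_name = default
    user_domain_name = default
    project_name = service
    username = sahara
    password = {{ sahara_keystone_password }}
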
• Fix Karbor endpoints · 22284676
  Gaëtan Trellu authored
According to the Karbor documentation, endpoints should be created with "%(project_id)s" and not with "%(tenant_id)s". This is very important because of a commit in Karbor which looks for the string "project_id".
      
      Change-Id: I8fc640891d0d58541198cc8f2e942d8db6e8d02f
      Closes-Bug: #1805705
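
The corrected endpoint URL template would then take a form like this (host and port are assumptions):

    http://{{ kolla_internal_fqdn }}:8799/v1/%(project_id)s
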
• Set region_id for karbor_client · 4bb5b335
  Gaëtan Trellu authored
region_id has a default value hardcoded in the Karbor code equal to "RegionOne", which could be an issue if a different region is defined.
      
      Change-Id: Ia13496156515d0f871e8fa9bd3584940a32759e9
      Closes-Bug: #1798125
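
A hedged sketch of the karbor.conf fragment; the section name is assumed from the commit title:

    [karbor_client]
    region_id = {{ openstack_region_name }}
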
11. Nov 19, 2018
• Use correct variable for default certificate paths · 9223deee
  caoyuan authored
      The variable {{ node_config_directory }} is used for the configuration
      directory on the remote hosts, and should not be used for paths on the
      deploy host (localhost).
      
      This changes the default value of the TLS certificate and CA file to
      reference {{ CONFIG_DIR }}, in line with the directory used for
      admin-openrc.sh (as of I0709482ead4b7a67e82796e17f85bde151e71bc0).
      
      This change also introduces a variable, {{ node_config }}, that
      references {{ CONFIG_DIR | default('/etc/kolla') }}, to remove
      duplication.
      
      Change-Id: Ibd82ac78630ebfff5824c329d7399e1e900c0ee0
      Closes-Bug: #1804025
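
A sketch of the variables described above; the certificate variable and file names are assumptions:

    node_config: "{{ CONFIG_DIR | default('/etc/kolla') }}"
    kolla_external_fqdn_cert: "{{ node_config }}/certificates/haproxy.pem"  # assumed name
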
12. Nov 09, 2018
• Freezer: Update freezer driver with elasticsearch · 62222abc
  Pierre Blanc authored
By default, the driver used is elasticsearch version 2. This change updates the driver to the correct one. It also updates the backend with the name used in the documentation.
      
      Change-Id: I80f3020cb42903ae48ef65f52f67aae977c5a56b