  6. Dec 16, 2018
    • Find Monasca agent plugins locally · 10d33f82
      Bartosz Zurkowski authored
      
      The find module searches paths on the managed server. Since the role
      path and the custom Kolla config are located on the deployment node,
      and the deployment node is not considered a managed server, the Monasca
      plugin files cannot be found. As a result, after deployment the
      container running the Monasca agent collector gets stuck in a restart
      loop due to the missing plugin files.
      
      The problem does not occur if the deployment was started from a managed
      server (e.g. OSC). It does occur when the deployment was started from a
      separate deployment server, which is a common case.
      
      This change forces the find module to run locally on the deployment
      node (see the sketch after this entry).
      
      Change-Id: Ia25daafe2f82f5744646fd2eda2d255ccead814e
      Signed-off-by: Bartosz Zurkowski <b.zurkowski@samsung.com>
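      A minimal sketch of the pattern this change describes, assuming a
      hypothetical plugin path, variable names, and register name rather than
      the actual kolla-ansible role tasks:

        - name: Find custom Monasca agent plugin files
          find:
            # Role files and custom Kolla config live on the deployment node.
            paths: "{{ node_custom_config }}/monasca/agent_plugins"
            patterns: "*.py"
          # Run the find module on the deployment (control) node instead of
          # the managed server, where these paths do not exist.
          delegate_to: localhost
          register: monasca_plugin_files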
    • Call Grafana APIs only once · c5d1e1d5
      Bartosz Zurkowski authored
      
      In multinode deployments, creating the default Grafana organization
      failed because Ansible attempted to call the Grafana API in the context
      of each host in the inventory. After the organization had been created
      via the first host, the attempts via the remaining hosts failed because
      the organization already existed. This change ensures that the default
      organization is created only once (a sketch follows this entry).
      
      Other tasks that use the Grafana API are now also run only once.
      
      Change-Id: I3a93a719b3c9b4e55ab226d3b22d571d9a0f489d
      Signed-off-by: Bartosz Zurkowski <b.zurkowski@samsung.com>
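      A hedged sketch of the run_once approach; the URL, credentials, and
      organization name are placeholders, not the actual role variables:

        - name: Create the default Grafana organization
          uri:
            url: "http://{{ grafana_host }}:{{ grafana_port }}/api/orgs"
            method: POST
            body_format: json
            body:
              name: "{{ grafana_default_org }}"
            user: "{{ grafana_admin_user }}"
            password: "{{ grafana_admin_password }}"
            force_basic_auth: true
          # Call the API once per play instead of once per inventory host, so
          # later hosts do not fail on an already-existing organization.
          run_once: true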
  7. Dec 14, 2018
    • Create cells before starting nova services · 365bb517
      Mark Goddard authored
      Nova services may reasonably expect cell databases to exist when they
      start. The current cell setup tasks in kolla run after the nova
      containers have started, meaning that the cells may or may not exist in
      the database when the services start, depending on timing. In
      particular, we are currently seeing issues in kolla CI with jobs timing
      out while waiting for nova compute services to start. The following
      error, which may or may not be relevant, is seen in the nova logs of
      these jobs:
      
      No cells are configured, unable to continue
      
      This change creates the cell0 and cell1 databases prior to starting nova
      services.
      
      In order to do this, we must create new containers in which to run the
      nova-manage commands, because the nova-api container may not yet exist.
      This requires adding support to the kolla_docker module for specifying
      a command for the container to run that overrides the image's default
      command (a rough sketch follows this entry).
      
      We also add the standard output and error to the module's result when a
      non-detached container is run. A secondary benefit of this is that the
      output of bootstrap containers is now displayed in the Ansible output if
      the bootstrapping command fails, which will help with debugging.
      
      Change-Id: I2c1e991064f9f588f398ccbabda94f69dc285e61
      Closes-Bug: #1808575
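      A rough illustration of the idea, not the kolla_docker module
      interface; the image variable, docker invocation, and config mount are
      placeholders:

        - name: Create cell0 and cell1 before starting nova services
          # The docker invocation and config mount are illustrative only; the
          # real change drives this through the kolla_docker module.
          command: >
            docker run --rm
            --volume /etc/kolla/nova-api/:/etc/nova/:ro
            {{ nova_api_image }}
            nova-manage cell_v2 {{ item }}
          run_once: true
          with_items:
            - map_cell0
            - create_cell --name cell1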
  8. Dec 13, 2018
  9. Dec 12, 2018
  10. Dec 11, 2018
  11. Dec 07, 2018
    • Fix fact gathering with --limit · 56b4352f
      Mark Goddard authored
      Prior to this change, when the --limit argument is used, each host in
      the limit gathers facts for every other host. This is clearly
      unnecessary, and can result in up to (N-1)^2 fact-gathering operations.
      
      This change gathers facts for each host only once. Hosts that are not
      in the limit are divided among those that are in the limit, and their
      facts are gathered via delegation, as sketched after this entry.
      
      This change also factors out the fact gathering logic into a separate
      playbook that is imported where necessary.
      
      Change-Id: I923df5af41a7f1b7b0142d0da185a9a0979be543
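      A simplified sketch of the delegation pattern; the real playbook also
      divides the out-of-limit hosts among the hosts in the limit rather than
      using run_once, so the delegated gathers can run in parallel:

        - name: Gather facts for hosts in the limit
          setup:

        - name: Gather facts for hosts outside the limit
          setup:
          # delegate_facts stores the gathered facts against the delegated
          # host, so each host's facts are collected exactly once.
          delegate_to: "{{ item }}"
          delegate_facts: true
          run_once: true
          with_items: "{{ groups['all'] | difference(ansible_play_batch) }}"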
    • Scalability improvements for disabled services · 5d8403bd
      Mark Goddard authored
      Currently, every service has a play in site.yml that is executed, and
      the role is skipped if the service is disabled. This can be slow,
      particularly with many hosts, since each play takes time to set up and
      evaluate.
      
      This change creates, at the beginning of the playbook, an Ansible group
      for each service containing the hosts on which that service is enabled.
      If a service is disabled, its group has no hosts, and the play for that
      service is a no-op (see the sketch after this entry).
      
      I have tested this on a laptop using an inventory with 12 hosts (each
      pointing to my laptop via SSH), and a config file that disables every
      service. Time taken to run 'kolla-ansible deploy':
      
      Before change: 2m30s
      After change: 0m14s
      
      During development I also tried an approach using an 'include_role' task
      for each service. This was not as good, taking 1m00s.
      
      The downsides of this patch are that there are a large number of tasks
      at the beginning of the playbook to perform the grouping, and that
      every play for a disabled service now outputs this warning message:
      
      [WARNING]: Could not match supplied host pattern, ignoring: enable_foo_True
      
      This is because if the service is disabled, there are no hosts in the
      group. This seems like a reasonable tradeoff.
      
      Change-Id: Ie56c270b26926f1f53a9582d451f4bb2457fbb67
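      A hedged sketch of the grouping approach; 'foo' stands in for a real
      service, and the group name matches the warning quoted above:

        - name: Group hosts by whether each service is enabled
          hosts: all
          gather_facts: false
          tasks:
            - name: Group hosts for the foo service
              group_by:
                # Hosts land in enable_foo_True or enable_foo_False; when the
                # service is disabled, enable_foo_True simply has no members.
                key: "enable_foo_{{ enable_foo | bool }}"

        - name: Apply role foo
          # Intersection of the service's host group and the 'enabled' group,
          # so this play is a no-op when the service is disabled.
          hosts: foo:&enable_foo_True
          roles:
            - foo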
    • 29a2cda2
      Zuul authored
    • Merge "Allow set tty for containers" · f1be7033
      Zuul authored