  1. Jul 02, 2019
    • CI: Keep stderr in ansible logs · b9aa8b38
      Radosław Piliszek authored
      
      Otherwise ARA had only the stderr part and the logs only the
      stdout part, which made ordered analysis harder.
      
      Additionally add -vvv for the bootstrap-servers run.
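
      A minimal sketch of the kind of CI invocation this implies (the task
      name, inventory variable and log path below are illustrative, not taken
      from the actual CI definition):

        - name: Run kolla-ansible bootstrap-servers with merged output
          shell: >
            kolla-ansible -i {{ kolla_inventory_path }} -vvv bootstrap-servers
            2>&1 | tee /tmp/logs/ansible/bootstrap-servers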
      
      Change-Id: Ia42ac9b90a17245e9df277c40bda24308ebcd11d
      Signed-off-by: Radosław Piliszek <radoslaw.piliszek@gmail.com>
    • Cloudkitty InfluxDB Storage backend via Kolla-ansible · 97cb30cd
      Rafael Weingärtner authored
      This proposal will add support to Kolla-Ansible for deploying CloudKitty
      with the InfluxDB storage backend. Support for InfluxDB as a storage
      backend for CloudKitty was introduced in the following commit:
      https://github.com/openstack/cloudkitty/commit/c4758e78b49386145309a44623502f8095a2c7ee
      
      Problem Description
      ===================
      
      With the addition of support for InfluxDB in CloudKitty, which reached
      general availability in the Stein release, we need a method to easily
      configure and support this storage backend via Kolla-ansible.
      
      Kolla-ansible is already able to deploy and configure an InfluxDB
      system. Therefore, this proposal will use the InfluxDB deployment
      configured via Kolla-ansible to connect to CloudKitty and use it as a
      storage backend.
      
      If we do not provide a method for users (operators) to manage the
      CloudKitty storage backend via Kolla-ansible, they have to apply these
      configurations manually (or via some other set of automated scripts),
      which creates a distributed set of configuration files and configuration
      scripts with different versioning schemes and life cycles.
      
      Proposed Change
      ===============
      
      Architecture
      ------------
      
      We propose a flag that users can set to make Kolla-ansible configure
      CloudKitty to use InfluxDB as its storage backend. When this flag is
      enabled, Kolla-ansible will also automatically enable the deployment of
      InfluxDB.
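
      As a sketch of the intended operator experience (values are
      illustrative; the variable names are the ones proposed below)::

        # /etc/kolla/globals.yml
        enable_cloudkitty: "yes"
        cloudkitty_storage_backend: "influxdb"
        # enable_influxdb does not need to be set explicitly; it is derived
        # from the backend choice, as described in the Implementation section.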
      
      CloudKitty will be configured according to [1] and [2]. We will also
      externalize the "retention_policy", "use_ssl", and "insecure" options to
      allow fine-grained configuration by operators. These options will only
      be applied when explicitly set; otherwise, the default value/behavior
      defined in CloudKitty will be used. Moreover, when "use_ssl" is set to
      "true", the user will be able to point "cafile" at a custom trusted CA
      file. Again, if these variables are not set, CloudKitty's defaults will
      be used.
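
      The rendered cloudkitty.conf would then contain something along these
      lines (a sketch based on [1]; the InfluxDB host/port variables and exact
      option names are assumptions, not part of this spec)::

        [storage]
        backend = influxdb
        version = 2

        [storage_influxdb]
        host = {{ influxdb_address }}
        port = {{ influxdb_http_port }}
        database = {{ cloudkitty_influxdb_name }}
        {% if cloudkitty_influxdb_retention_policy is defined %}
        retention_policy = {{ cloudkitty_influxdb_retention_policy }}
        {% endif %}
        {% if cloudkitty_influxdb_use_ssl is defined %}
        use_ssl = {{ cloudkitty_influxdb_use_ssl }}
        {% endif %}
        {% if cloudkitty_influxdb_cafile is defined %}
        cafile = {{ cloudkitty_influxdb_cafile }}
        {% endif %}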
      
      Implementation
      --------------
      We need to introduce a new variable called
      `cloudkitty_storage_backend`. Valid options are `sqlalchemy` or
      `influxdb`. The default value in Kolla-ansible is `sqlalchemy` for
      backward compatibility. Then, the first step is to change the definition
      of the following variable in `/ansible/group_vars/all.yml`:
      `enable_influxdb: "{{ enable_monasca | bool }}"`
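
      One way the new definition could look (a sketch, not final wording)::

        # ansible/group_vars/all.yml
        enable_influxdb: "{{ enable_monasca | bool or (enable_cloudkitty | bool and cloudkitty_storage_backend == 'influxdb') }}"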
      
      We also need to enable InfluxDB when CloudKitty is configured to use
      it as the storage backend. Afterwards, we need to add tasks to the
      CloudKitty role that create the InfluxDB database (schema) and render
      the configuration files accordingly.
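
      A sketch of such a task, assuming the existing kolla_toolbox wrapper and
      Ansible's influxdb_database module are usable from the toolbox container
      (the group name and the host/port variables are illustrative)::

        - name: Creating CloudKitty database in InfluxDB
          become: true
          kolla_toolbox:
            module_name: influxdb_database
            module_args:
              hostname: "{{ influxdb_address }}"
              port: "{{ influxdb_http_port }}"
              database_name: "{{ cloudkitty_influxdb_name }}"
          run_once: true
          delegate_to: "{{ groups['cloudkitty-api'][0] }}"
          when: cloudkitty_storage_backend == 'influxdb'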
      
      Alternatives
      ------------
      The alternative would be to apply the configuration manually or handle
      it via a different set of scripts and configuration files, which can
      become cumbersome over time.
      
      Security Impact
      ---------------
      None identified by the author of this spec
      
      Notifications Impact
      --------------------
      Operators that are already deploying CloudKitty with InfluxDB as the
      storage backend would need to convert their configurations to
      Kolla-ansible (if they wish to adopt Kolla-ansible to execute these
      tasks).

      Also, deployments (OpenStack environments) that were created with
      CloudKitty using storage v1 will need to migrate all of their data to
      v2 before enabling InfluxDB as the storage system.
      
      Other End User Impact
      ---------------------
      None.
      
      Performance Impact
      ------------------
      None.
      
      Other Deployer Impact
      ---------------------
      New configuration options will be available for CloudKitty; a sketch of
      possible defaults follows the list.
      * cloudkitty_storage_backend
      * cloudkitty_influxdb_retention_policy
      * cloudkitty_influxdb_use_ssl
      * cloudkitty_influxdb_cafile
      * cloudkitty_influxdb_insecure_connections
      * cloudkitty_influxdb_name
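
      A sketch of how the corresponding role defaults might look (only the
      `sqlalchemy` default is mandated by this spec; the other values are
      illustrative)::

        # ansible/roles/cloudkitty/defaults/main.yml
        cloudkitty_storage_backend: "sqlalchemy"
        cloudkitty_influxdb_name: "cloudkitty"
        cloudkitty_influxdb_retention_policy: "autogen"
        cloudkitty_influxdb_use_ssl: "false"
        cloudkitty_influxdb_insecure_connections: "false"
        # cloudkitty_influxdb_cafile is left unset so CloudKitty's default applies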
      
      Developer Impact
      ----------------
      None
      
      Implementation
      ==============
      
      Assignee
      --------
      * `Rafael Weingärtner <rafaelweingartne>`
      
      Work Items
      ----------
       * Extend the InfluxDB "enable/disable" variable
       * Add new tasks to configure CloudKitty according to the new variables
       presented above
       * Write documentation and release notes
      
      Dependencies
      ============
      None
      
      Documentation Impact
      ====================
      New documentation for the feature.
      
      References
      ==========
      [1] `https://docs.openstack.org/cloudkitty/latest/admin/configuration/storage.html#influxdb-v2`
      [2] `https://docs.openstack.org/cloudkitty/latest/admin/configuration/collector.html#metric-collection`
      
      
      
      Change-Id: I65670cb827f8ca5f8529e1786ece635fe44475b0
      Signed-off-by: Rafael Weingärtner <rafael@apache.org>
    • Zuul authored · 8b1e6379
  2. Jun 28, 2019
    • Specify endpoint when creating monasca user · 9074da56
      Will Szumski authored
      otherwise I'm seeing:
      
      TASK [monasca : Creating the monasca agent user] ****************************************
      fatal: [monitor1]: FAILED! => {"changed": false, "module_stderr": "Shared connection to 172.16.3.24 closed.", "msg": "MODULE FAILURE", "rc": 1}
      module_stdout:
        Traceback (most recent call last):
          File "/tmp/ansible_I0RmxQ/ansible_module_kolla_toolbox.py", line 163, in <module>
            main()
          File "/tmp/ansible_I0RmxQ/ansible_module_kolla_toolbox.py", line 141, in main
            output = client.exec_start(job)
          File "/opt/kayobe/venvs/kolla-ansible/lib/python2.7/site-packages/docker/utils/decorators.py", line 19, in wrapped
            return f(self, resource_id, *args, **kwargs)
          File "/opt/kayobe/venvs/kolla-ansible/lib/python2.7/site-packages/docker/api/exec_api.py", line 165, in exec_start
            return self._read_from_socket(res, stream, tty)
          File "/opt/kayobe/venvs/kolla-ansible/lib/python2.7/site-packages/docker/api/client.py", line 377, in _read_from_socket
            return six.binary_type().join(gen)
          File "/opt/kayobe/venvs/kolla-ansible/lib/python2.7/site-packages/docker/utils/socket.py", line 75, in frames_iter
            n = next_frame_size(socket)
          File "/opt/kayobe/venvs/kolla-ansible/lib/python2.7/site-packages/docker/utils/socket.py", line 62, in next_frame_size
            data = read_exactly(socket, 8)
          File "/opt/kayobe/venvs/kolla-ansible/lib/python2.7/site-packages/docker/utils/socket.py", line 47, in read_exactly
            next_data = read(socket, n - len(data))
          File "/opt/kayobe/venvs/kolla-ansible/lib/python2.7/site-packages/docker/utils/socket.py", line 31, in read
            return socket.recv(n)
        socket.timeout: timed out
      
      when the monitoring nodes aren't on the public API network.
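
      A sketch of the kind of change implied, assuming the user is created via
      the kolla_toolbox wrapper around the os_user module (variable names are
      illustrative):

        - name: Creating the monasca agent user
          become: true
          kolla_toolbox:
            module_name: os_user
            module_args:
              name: "{{ monasca_agent_user }}"
              password: "{{ monasca_agent_password }}"
              auth: "{{ openstack_auth }}"
              # The fix: talk to Keystone on the internal endpoint rather than
              # the public one, which monitoring hosts may not be able to reach.
              interface: internal
          run_once: true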
      
      Change-Id: I7a93f69da0e02c9264da0b081d2e60626f899e3a
  3. Jun 27, 2019
    • Simplify handler conditionals · de00bf49
      Mark Goddard authored
      Currently, we have a lot of logic for checking whether a handler should
      run, depending on whether config files have changed and whether the
      container configuration has changed. As rm_work pointed out during
      the recent haproxy refactor, these conditionals are typically
      unnecessary - we can rely on Ansible's handler notification system
      to only trigger handlers when they need to run. This removes a lot
      of error-prone code.

      This patch removes conditional handler logic for all services. It is
      important to ensure that we no longer notify handlers unnecessarily,
      because without these checks in place a notification will trigger a
      restart of the containers.
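
      As an illustration of the pattern being removed (simplified, not taken
      verbatim from any particular role):

        # Before: the handler re-checks whether anything changed.
        - name: Restart foo container
          kolla_docker:
            action: "recreate_or_restart_container"
            name: "foo"
          when:
            - foo_config_json.changed | bool
              or foo_conf.changed | bool
              or foo_container.changed | bool

        # After: rely on notification alone; the handler only runs when a
        # config or container task reported a change and notified it.
        - name: Restart foo container
          kolla_docker:
            action: "recreate_or_restart_container"
            name: "foo"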
      
      Implements: blueprint simplify-handlers
      
      Change-Id: I4f1aa03e9a9faaf8aecd556dfeafdb834042e4cd
    • Merge "Disable and remove OracleLinux CI jobs" · 54856a87
      Zuul authored
    • Zuul authored · 85b9dabc
    • Merge "Restart all nova services after upgrade" · 651b983b
      Zuul authored
    • Merge "Format internal Fluentd logs" · e8f210a2
      Zuul authored
    • Merge "Don't drop unmatched Kolla service logs" · 01bc357d
      Zuul authored
    • Merge "Increase log coverage for Monasca" · 067e40ad
      Zuul authored
    • Merge "Enable InfluxDB TSI by default" · e7c19b74
      Zuul authored
    • Add support for neutron custom dnsmasq.conf · a3f1ded3
      Christian Berendt authored
      Change-Id: Ia7041be384ac07d0a790c2c5c68b1b31ff0e567a
    • Restart all nova services after upgrade · e6d2b922
      Mark Goddard authored
      During an upgrade, nova pins the version of RPC calls to the minimum
      seen across all services. This ensures that old services do not receive
      data they cannot handle. After the upgrade is complete, all nova
      services are supposed to be reloaded via SIGHUP so that they re-check
      the RPC versions of services and use the latest version, which should
      now be supported by all running services.
      
      Due to a bug [1] in oslo.service, sending services SIGHUP is currently
      broken. We replaced the HUP with a restart for the nova_compute
      container for bug 1821362, but not other nova services. It seems we need
      to restart all nova services to allow the RPC version pin to be removed.
      
      Testing in a Queens to Rocky upgrade, we find the following in the logs:
      
      Automatically selected compute RPC version 5.0 from minimum service
      version 30
      
      However, the service version in Rocky is 35.
      
      There is a second issue in that it takes some time for the upgraded
      services to update the nova services database table with their new
      version. We need to wait until all nova-compute services have done this
      before the restart is performed, otherwise the RPC version cap will
      remain in place. There is currently no interface in nova available for
      checking these versions [2], so as a workaround we use a configurable
      delay with a default duration of 30 seconds. Testing showed it takes
      about 10 seconds for the version to be updated, so this gives us some
      headroom.
      
      This change restarts all nova services after an upgrade, after a 30
      second delay.
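
      A sketch of the resulting upgrade step (the delay variable name below is
      hypothetical; only the 30 second default comes from this change):

        - name: Wait for nova services to register their new service version
          pause:
            seconds: "{{ nova_services_post_upgrade_delay | default(30) }}"
          run_once: true

        - name: Restart nova services to remove the RPC version pin
          become: true
          kolla_docker:
            action: "restart_container"
            name: "{{ item }}"
          with_items:
            - nova_api
            - nova_scheduler
            - nova_conductor
            - nova_compute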
      
      [1] https://bugs.launchpad.net/oslo.service/+bug/1715374
      [2] https://bugs.launchpad.net/nova/+bug/1833542
      
      Change-Id: Ia6fc9011ee6f5461f40a1307b72709d769814a79
      Closes-Bug: #1833069
      Related-Bug: #1833542
    • Don't rotate keystone fernet keys during deploy · 09e29d0d
      Mark Goddard authored
      When running deploy or reconfigure for Keystone,
      ansible/roles/keystone/tasks/deploy.yml calls init_fernet.yml,
      which runs /usr/bin/fernet-rotate.sh, which calls keystone-manage
      fernet_rotate.
      
      This means that a token can become invalid if the operator runs
      deploy or reconfigure too often.
      
      This change splits out fernet-push.sh from the fernet-rotate.sh
      script, then calls fernet-push.sh after the fernet bootstrap
      performed in deploy.
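
      A sketch of the deploy-time step after this change (the script path
      follows the naming above; the task itself is illustrative):

        - name: Distribute fernet keys without rotating them
          become: true
          command: docker exec keystone_fernet /usr/bin/fernet-push.sh
          run_once: true
          delegate_to: "{{ groups['keystone'][0] }}"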
      
      Change-Id: I824857ddfb1dd026f93994a4ac8db8f80e64072e
      Closes-Bug: #1833729