Commit 656f6cdb authored by k-s-dean, committed by Radosław Piliszek

Put openstack exporter behind HAProxy so only one is queried at a time

Closes-Bug: #1972818

Change-Id: I9e36b9169b6725bf6db953e464fc099087747778
parent 555cd39f
@@ -79,6 +79,12 @@ prometheus_services:
     image: "{{ prometheus_openstack_exporter_image_full }}"
     volumes: "{{ prometheus_openstack_exporter_default_volumes + prometheus_openstack_exporter_extra_volumes }}"
     dimensions: "{{ prometheus_openstack_exporter_dimensions }}"
+    haproxy:
+      prometheus_openstack_exporter:
+        enabled: "{{ enable_prometheus_openstack_exporter | bool }}"
+        mode: "http"
+        external: false
+        port: "{{ prometheus_openstack_exporter_port }}"
   prometheus-elasticsearch-exporter:
     container_name: prometheus_elasticsearch_exporter
     group: prometheus-elasticsearch-exporter
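The haproxy block added above is consumed by kolla-ansible's haproxy-config role when the load balancer configuration is built. As a rough, hand-written sketch (not the role's exact output; the block name, host names, addresses and port 9198 are illustrative only), the resulting proxy section would look something like the following, with every Prometheus scrape arriving at the internal VIP and being forwarded to a single backend exporter:

listen prometheus_openstack_exporter
  mode http
  # kolla_internal_vip_address:prometheus_openstack_exporter_port (values illustrative)
  bind 192.0.2.10:9198
  # one exporter answers each scrape; the others stay idle for that interval
  server controller01 192.0.2.11:9198 check
  server controller02 192.0.2.12:9198 check
  server controller03 192.0.2.13:9198 check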
@@ -100,9 +100,7 @@ scrape_configs:
     honor_labels: true
     static_configs:
       - targets:
-{% for host in groups["prometheus-openstack-exporter"] %}
-        - '{{ 'api' | kolla_address(host) | put_address_in_context('url') }}:{{ hostvars[host]['prometheus_openstack_exporter_port'] }}'
-{% endfor %}
+        - '{{ kolla_internal_vip_address | put_address_in_context('url') }}:{{ prometheus_openstack_exporter_port }}'
 {% endif %}
 {% if enable_prometheus_elasticsearch_exporter | bool %}
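Rendered, the template change above collapses the per-host target list into a single VIP target. A hypothetical before/after of the generated scrape configuration, using made-up addresses, an assumed job name and the illustrative port 9198:

# Before: one scrape target per exporter host
  - job_name: openstack_exporter
    honor_labels: true
    static_configs:
      - targets:
        - '192.0.2.11:9198'
        - '192.0.2.12:9198'
        - '192.0.2.13:9198'

# After: a single target, the internal VIP fronted by HAProxy
  - job_name: openstack_exporter
    honor_labels: true
    static_configs:
      - targets:
        - '192.0.2.10:9198'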
---
fixes:
  - |
    The Prometheus OpenStack exporters are now placed behind HAProxy, so
    only one exporter queries the OpenStack APIs in any given scrape
    interval and a single time series is recorded in the Prometheus
    database. Previously, every exporter was scraped at the same time,
    so all of them queried the OpenStack APIs simultaneously. This
    introduced unnecessary load and duplicated time series in the
    Prometheus database, because the ``instance`` label was unique to
    each exporter.
    `LP#1972818 <https://bugs.launchpad.net/kolla-ansible/+bug/1972818>`__
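To make the duplication described in the note concrete: with several directly scraped exporters, the same logical metric was stored once per exporter because each target carries its own instance label, whereas the VIP-fronted setup yields a single series. The metric name, label set and addresses below are purely illustrative:

# Before: three series for one piece of information
openstack_nova_agent_state{service="nova-compute", instance="192.0.2.11:9198"} 1
openstack_nova_agent_state{service="nova-compute", instance="192.0.2.12:9198"} 1
openstack_nova_agent_state{service="nova-compute", instance="192.0.2.13:9198"} 1

# After: one series, scraped through the internal VIP
openstack_nova_agent_state{service="nova-compute", instance="192.0.2.10:9198"} 1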