Compare revisions

Changes are shown as if the source revision was being merged into the target revision.

Target project: gitlab-org/build/omnibus-mirror/prometheus
@@ -311,10 +311,13 @@ The following meta labels are available on targets during [relabeling](#relabel_
server: <host>
[ token: <secret> ]
[ datacenter: <string> ]
[ scheme: <string> | default = "http" ]
[ username: <string> ]
[ password: <secret> ]
 
tls_config:
  [ <tls_config> ]
# A list of services for which targets are retrieved. If omitted, all services
# are scraped.
services:
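For context, a minimal scrape configuration using this block might look like the following sketch (assuming this is the `consul_sd_config` section; the server address, CA file path, service names, and relabeling rule are illustrative, not taken from the diff):

```yaml
scrape_configs:
  - job_name: 'consul-services'
    consul_sd_configs:
      - server: 'localhost:8500'
        # scheme now defaults to "http"; set it explicitly when using TLS.
        scheme: https
        tls_config:
          ca_file: /etc/prometheus/consul-ca.pem
        services: ['web', 'api']
    relabel_configs:
      # Copy the discovered Consul service name into a regular label.
      - source_labels: [__meta_consul_service]
        target_label: service
```
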
@@ -412,24 +415,49 @@ region: <string>
CAUTION: OpenStack SD is in beta: breaking changes to configuration are still
likely in future releases.
 
OpenStack SD configurations allow retrieving scrape targets from the OpenStack
Nova API.
One of the following `role` types can be configured to discover targets:

#### `hypervisor`

The `hypervisor` role discovers one target per Nova hypervisor node. The target
address defaults to the `host_ip` attribute of the hypervisor.

#### `instance`

The `instance` role discovers one target per Nova instance. The target
address defaults to the first private IP address of the instance.

The following meta labels are available on targets during [relabeling](#relabel_config):

* `__meta_openstack_instance_id`: the OpenStack instance ID.
* `__meta_openstack_instance_name`: the OpenStack instance name.
* `__meta_openstack_instance_status`: the status of the OpenStack instance.
* `__meta_openstack_instance_flavor`: the flavor of the OpenStack instance.
* `__meta_openstack_public_ip`: the public IP of the OpenStack instance.
* `__meta_openstack_private_ip`: the private IP of the OpenStack instance.
* `__meta_openstack_tag_<tagkey>`: each tag value of the instance.
 
See below for the configuration options for OpenStack discovery:
 
```yaml
# The information to access the OpenStack API.
 
# The OpenStack role of entities that should be discovered.
role: <role>
# The OpenStack Region.
region: <string>
 
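As an illustration, a scrape configuration using the `instance` role might look like the following sketch. The endpoint, credentials, and region are placeholders, and the identity-related fields (`identity_endpoint`, `username`, `password`, `domain_name`, `project_name`) are assumed to be part of the full option set that this excerpt truncates:

```yaml
scrape_configs:
  - job_name: 'openstack-instances'
    openstack_sd_configs:
      - role: instance
        region: RegionOne
        identity_endpoint: https://keystone.example.org:5000/v3
        username: prometheus
        password: secret
        domain_name: Default
        project_name: monitoring
    relabel_configs:
      # Only keep instances that are currently running.
      - source_labels: [__meta_openstack_instance_status]
        regex: ACTIVE
        action: keep
```
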
@@ -979,10 +1007,6 @@ external labels send identical alerts.
 
### `<alertmanager_config>`
 
An `alertmanager_config` section specifies Alertmanager instances the Prometheus server sends
alerts to. It also provides parameters to configure how to communicate with these Alertmanagers.
 
@@ -59,6 +59,7 @@ A simple example rules file would be:
```yaml
groups:
- name: example
  rules:
  - record: job:http_inprogress_requests:sum
    expr: sum(http_inprogress_requests) by (job)
```
@@ -78,6 +79,7 @@ rules:
### `<rule>`
 
The syntax for recording rules is:
```
# The name of the time series to output to. Must be a valid metric name.
record: <string>
@@ -93,6 +95,7 @@ labels:
```
 
The syntax for alerting rules is:
```
# The name of the alert. Must be a valid metric name.
alert: <string>
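For illustration, a complete rule group containing one alerting rule in this format might look like the following sketch (the metric name, threshold, and label values are invented for the example):

```yaml
groups:
- name: example-alerts
  rules:
  - alert: HighErrorRate
    expr: job:request_errors:rate5m{job="myjob"} > 0.05
    for: 10m
    labels:
      severity: page
    annotations:
      summary: High request error rate
```
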
@@ -2,17 +2,19 @@
# todo: internal
---
 
# Prometheus 2.0
 
Welcome to the documentation of the Prometheus server.
 
The documentation is available alongside all the project documentation at
[prometheus.io](https://prometheus.io/docs/prometheus/2.0/).
 
## Content
 
- [Getting started](getting_started.md)
- [Installation](installation.md)
- [Configuration](configuration/configuration.md)
- [Querying](querying/basics.md)
- [Storage](storage.md)
- [Federation](federation.md)
- [Migration](migration.md)
---
title: Migration
sort_rank: 7
---
# Prometheus 2.0 migration guide
 
In line with our [stability promise](https://prometheus.io/blog/2016/07/18/prometheus-1-0-released/#fine-print),
@@ -6,14 +11,15 @@ This document offers guidance on migrating from Prometheus 1.8 to Prometheus 2.0
 
## Flags
 
The format of the Prometheus command line flags has changed. Instead of a
single dash, all flags now use a double dash. Common flags (`--config.file`,
`--web.listen-address` and `--web.external-url`) are still the same but beyond
that, almost all the storage-related flags have been removed.
 
Some notable flags which have been removed:
- `-alertmanager.url` In Prometheus 2.0, the command line flags for configuring
a static Alertmanager URL have been removed. Alertmanager must now be
discovered via service discovery, see [Alertmanager service discovery](#amsd).
 
- `-log.format` In Prometheus 2.0 logs can only be streamed to standard error.
@@ -26,14 +32,14 @@ Some notable flags which have been removed:
new engine, see [Storage](#storage).
 
- `-storage.remote.*` Prometheus 2.0 has removed the already deprecated remote
storage flags, and will fail to start if they are supplied. To write to
InfluxDB, Graphite, or OpenTSDB use the relevant storage adapter.
 
## Alertmanager service discovery
 
Alertmanager service discovery was introduced in Prometheus 1.4, allowing Prometheus
to dynamically discover Alertmanager replicas using the same mechanism as scrape
targets. In Prometheus 2.0, the command line flags for static Alertmanager config
have been removed, so the following command line flag:
 
```
@@ -42,7 +48,7 @@ have been removed, so the following command line flag:
 
Would be replaced with the following in the `prometheus.yml` config file:
 
```yaml
alerting:
  alertmanagers:
  - static_configs:
@@ -50,12 +56,12 @@ alerting:
- alertmanager:9093
```
 
You can also use all the usual Prometheus service discovery integrations and
relabeling in your Alertmanager configuration. This snippet instructs
Prometheus to search for Kubernetes pods, in the `default` namespace, with the
label `name: alertmanager` and with a non-empty port.
 
```yaml
alerting:
  alertmanagers:
  - kubernetes_sd_configs:
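For illustration, a complete configuration along these lines might look as follows. The relabeling rules are a sketch based on the description above (pods in the `default` namespace, label `name: alertmanager`, non-empty port), not necessarily the exact upstream snippet:

```yaml
alerting:
  alertmanagers:
  - kubernetes_sd_configs:
      - role: pod
    relabel_configs:
      # Keep only pods labelled name=alertmanager ...
      - source_labels: [__meta_kubernetes_pod_label_name]
        regex: alertmanager
        action: keep
      # ... in the default namespace ...
      - source_labels: [__meta_kubernetes_namespace]
        regex: default
        action: keep
      # ... that expose a container port.
      - source_labels: [__meta_kubernetes_pod_container_port_number]
        regex: .+
        action: keep
```
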
@@ -94,15 +100,15 @@ ALERT FrontendRequestLatency
 
Would look like this:
 
```yaml
groups:
- name: example.rules
  rules:
  - record: job:request_duration_seconds:histogram_quantile99
    expr: histogram_quantile(0.99, sum(rate(request_duration_seconds_bucket[1m]))
      BY (le, job))
  - alert: FrontendRequestLatency
    expr: job:request_duration_seconds:histogram_quantile99{job="frontend"} > 0.1
    for: 5m
    annotations:
      summary: High frontend request latency
@@ -115,39 +121,39 @@ new format. For example:
$ promtool update rules example.rules
```
 
Note that you will need to use promtool from 2.0, not 1.8.
## Storage
 
The data format in Prometheus 2.0 has completely changed and is not backwards
compatible with 1.8. To retain access to your historic monitoring data we
recommend you run a non-scraping Prometheus instance running at least version
1.8.1 in parallel with your Prometheus 2.0 instance, and have the new server
read existing data from the old one via the remote read protocol.
 
Your Prometheus 1.8 instance should be started with the following flags and a
config file containing only the `external_labels` setting (if any):
 
```
$ ./prometheus-1.8.1.linux-amd64/prometheus -web.listen-address ":9094" -config.file old.yml
```
 
NOTE: If you used external labels in your Prometheus 2.0 config, they need to be
preserved in your Prometheus 1.8 config.

Prometheus 2.0 can then be started (on the same machine) with the following flags:
 
```
$ ./prometheus-2.0.0.linux-amd64/prometheus --config.file prometheus.yml
```
 
Where `prometheus.yml` contains, in addition to your full existing configuration, the stanza:
 
```yaml
remote_read:
- url: "http://localhost:9094/api/v1/read"
```
 
## PromQL
 
The following features have been removed from PromQL:
 
- `drop_common_labels` function - the `without` aggregation modifier should be used
instead.
@@ -163,11 +169,11 @@ details.
### Prometheus non-root user
 
The Prometheus Docker image is now built to [run Prometheus
as a non-root user](https://github.com/prometheus/prometheus/pull/2859). If you
want the Prometheus UI/API to listen on a low port number (say, port 80), you'll
need to override it. For Kubernetes, you would use the following YAML:
 
```yaml
apiVersion: v1
kind: Pod
metadata:
@@ -178,16 +184,16 @@ spec:
...
```
 
See [Configure a Security Context for a Pod or Container](https://kubernetes.io/docs/tasks/configure-pod-container/security-context/)
for more details.
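
One way to do the override is to run the container as root via the pod-level security context; a minimal sketch (the pod name, image tag, and port are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: prometheus
spec:
  securityContext:
    runAsUser: 0  # run as root so Prometheus can bind to a low port
  containers:
  - name: prometheus
    image: prom/prometheus:v2.0.0
    args: ["--web.listen-address=:80"]
    ports:
    - containerPort: 80
```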
 
If you're using Docker, then the following snippet would be used:
 
```
docker run -u root -p 80:80 prom/prometheus:v2.0.0-rc.2 --web.listen-address :80
```
 
### Prometheus lifecycle
 
If you use the Prometheus `/-/reload` HTTP endpoint to [automatically reload your
Prometheus config when it changes](configuration/configuration.md),
@@ -218,6 +218,10 @@ number of seen HTTP requests per application and group over all instances via:
 
    sum(http_requests_total) without (instance)

Which is equivalent to:

    sum(http_requests_total) by (application, group)

If we are just interested in the total of HTTP requests we have seen in **all**
applications, we could simply write:
 
@@ -79,7 +79,7 @@ The read and write protocols both use a snappy-compressed protocol buffer encodi
 
For details on configuring remote storage integrations in Prometheus, see the [remote write](configuration/configuration.md#remote_write) and [remote read](configuration/configuration.md#remote_read) sections of the Prometheus configuration documentation.
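
For orientation, the corresponding configuration stanzas are short; a sketch with placeholder adapter URLs:

```yaml
# Forward all samples to a remote storage adapter (endpoint is illustrative).
remote_write:
  - url: "http://remote-adapter.example.org:9201/write"

# Read back historical series from the same adapter (endpoint is illustrative).
remote_read:
  - url: "http://remote-adapter.example.org:9201/read"
```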
 
For details on the request and response messages, see the [remote storage protocol buffer definitions](https://github.com/prometheus/prometheus/blob/master/prompb/remote.proto).
 
Note that on the read path, Prometheus only fetches raw series data for a set of label selectors and time ranges from the remote end. All PromQL evaluation on the raw data still happens in Prometheus itself. This means that remote read queries have some scalability limit, since all necessary data needs to be loaded into the querying Prometheus server first and then processed there. However, supporting fully distributed evaluation of PromQL was deemed infeasible for the time being.
 