InfluxDB 2.x: Task: Aggregation
Sometimes you want to aggregate a large amount of data, for example for a Grafana graph panel, but the query is very slow because of the number of metrics (millions and more).
In that case it is more efficient to create an InfluxDB task that aggregates the data and saves the results back to a bucket, instead of calculating the result each time the query runs.
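The general pattern of such a downsampling task in Flux is: read a window of raw data, aggregate it, rename the measurement, and write the result back with to(). The following minimal sketch only illustrates that pattern; the task, bucket, and measurement names are placeholders, not values from this wiki.

option task = {name: "downsample-example", every: 1h}            // placeholder task name

from(bucket: "example-bucket")                                   // placeholder bucket
    |> range(start: -task.every)                                 // only process the data since the last run
    |> filter(fn: (r) => r._measurement == "example")            // placeholder measurement
    |> aggregateWindow(every: 1h, fn: mean, createEmpty: false)  // one aggregated point per hour
    |> set(key: "_measurement", value: "example_agg")            // mark the data as aggregated
    |> to(bucket: "example-bucket")                              // write the result back to a bucket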
Example: Sum of all storage
What I want
- Create a graph panel that shows the sum of all used storage we have over time.
- Because the storage is shared, each node in the same cluster collects the same storage information, so the data has to be deduplicated (unique()).
- Because I write the aggregated data back to the same bucket, I rename the _measurement of the aggregated data (adding the suffix "_agg-sum") so I can distinguish it from the raw data.
How can I achieve this
- Because I also want the historical data, I first have to aggregate the data over a larger time range by running the query manually (a one-time backfill).
- The sampling frequency in my bucket is one metric point per hour, so I also aggregate the data with a sampling frequency of one hour (see the task sketch below).
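A task for this example could look like the sketch below. Only the unique() deduplication, the hourly aggregation, and the "_agg-sum" measurement suffix come from the description above; the bucket name ("telegraf"), the measurement ("storage"), and the field ("used_space") are assumptions for illustration. For the historical backfill, the same query can be run once manually with a larger range(start: ...) instead of -task.every.

option task = {name: "storage_agg-sum", every: 1h}

from(bucket: "telegraf")                                          // assumption: source bucket name
    |> range(start: -task.every)                                  // backfill: run once manually with a larger range
    |> filter(fn: (r) => r._measurement == "storage")             // assumption: measurement name
    |> filter(fn: (r) => r._field == "used_space")                // assumption: field name
    |> aggregateWindow(every: 1h, fn: last, createEmpty: false)   // keep the 1h sampling frequency of the bucket
    |> group(columns: ["_time"])                                  // put all nodes of the same timestamp into one table
    |> unique(column: "_value")                                   // dedup: nodes in the same cluster report the same value
    |> sum()                                                      // sum of all used storage per timestamp
    |> group()                                                    // flatten back into a single output table
    |> set(key: "_measurement", value: "storage_agg-sum")         // rename with the "_agg-sum" suffix
    |> set(key: "_field", value: "used_space")                    // re-add the field column dropped by sum()
    |> to(bucket: "telegraf")                                     // write the aggregated data back to the same bucket

Note that deduplicating on the value assumes that two different clusters never report exactly the same value at the same timestamp; if that can happen in your environment, grouping on a cluster tag and taking one point per cluster would be more robust than unique().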