The averaging interval is the key. Each sensor has a configurable monitoring interval and stores a result every time that interval elapses (30s, 60s, 5m, etc.). When you graph results and select an averaging interval larger than the configured monitoring interval, all of the values within each averaging window are averaged into a single data point. See the following example:
CPU sensor polls every 30s:
2:00:00 - 25%,
2:00:30 - 27%,
2:01:00 - 33%,
2:01:30 - 26%,
2:02:00 - 45%,
2:02:30 - 47%,
2:03:00 - 25%,
2:03:30 - 27%,
2:04:00 - 85%,
2:04:30 - 25%,
2:05:00 - 26%,
2:05:30 - 26%
Looking at these results, you can see spikes to 45%-47% and 85%. With a 5-minute averaging interval, however, the first ten samples collapse into a single value of 36.5% for that period, which is not representative of the real data above. If you instead average over 1min/60sec, every two values are averaged together, which gives you a much clearer picture but also means a LOT more data points to process.
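To make the math concrete, here is a small sketch (the `average_buckets` helper is hypothetical, not part of any monitoring tool) that applies different averaging intervals to the sample data above:

```python
# The 30-second CPU samples from the example above (2:00:00 through 2:05:30)
samples = [25, 27, 33, 26, 45, 47, 25, 27, 85, 25, 26, 26]

def average_buckets(values, bucket_size):
    """Average consecutive values in non-overlapping groups of bucket_size."""
    return [sum(values[i:i + bucket_size]) / bucket_size
            for i in range(0, len(values) - bucket_size + 1, bucket_size)]

# 60s averaging: every two 30s samples become one graph point.
# The 85% spike still shows up as a 55% point.
print(average_buckets(samples, 2))   # [26.0, 29.5, 46.0, 26.0, 55.0, 26.0]

# 5min averaging: ten 30s samples become one graph point.
# Both spikes vanish into a single 36.5% value.
print(average_buckets(samples, 10))  # [36.5]
```

Note how the larger bucket completely hides the 85% spike, which is exactly why a long averaging interval is the wrong choice for spike hunting.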
The way I see it, larger averaging intervals are great for reviewing utilization trends over a long period. If you're hunting for spikes, though, lower the averaging interval and be more selective about your timeframe, or you'll end up twiddling your thumbs for 25 minutes while you wait for that graph to come back.