I've got a few bandwidth sensors set up on our routers and our Cisco core switch to monitor our bandwidth usage. Until recently we had the scanning interval set to 10 s, but we see odd spikes and drops in the graphs from time to time, and I was wondering whether we're scanning too often. The idea behind scanning more often was to get a better picture of the bandwidth at a specific point in time: a customer might spike to 500-600 Mbit/s, but the graph may not show it if we aren't scanning often enough.
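For context on where the spikes might come from: as far as I understand it, the rate on the graph is just the delta between two SNMP counter reads divided by the elapsed time. Here's a minimal sketch (Python; the function name and the wrap handling are my own illustration, not the monitoring tool's actual code) of that calculation, showing why a half-second of polling jitter distorts the result much more at a 10 s interval than at a 60 s one:

```python
# Minimal sketch: deriving a bandwidth value from two samples of an SNMP
# interface counter (e.g. ifHCInOctets). Illustrative only.

def mbit_per_sec(octets_prev: int, octets_now: int,
                 t_prev: float, t_now: float,
                 counter_bits: int = 64) -> float:
    """Rate in Mbit/s between two counter samples taken at t_prev and t_now."""
    delta = octets_now - octets_prev
    if delta < 0:                # counter wrapped around between polls
        delta += 1 << counter_bits
    seconds = t_now - t_prev     # actual elapsed time, not the nominal interval
    return delta * 8 / seconds / 1e6

# If the poller divides by the nominal 10 s interval but the real gap was
# 10.5 s, the same counter delta reads ~5% low (and the next point ~5% high):
print(mbit_per_sec(0, 625_000_000, 0.0, 10.0))   # 500.0 Mbit/s
print(mbit_per_sec(0, 625_000_000, 0.0, 10.5))   # ~476.2 Mbit/s
```

At a 60 s interval the same 0.5 s of jitter is under 1% of the divisor, which may be why the short-interval graphs look noisier.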
Just trying to figure out how to strike a happy balance here: I need good, raw data, but at the same time these anomalous data points are troublesome.
My system currently has 1,200 sensors; 185 of those are SNMP bandwidth sensors.