We're trying to build a centralised test server that pings our upstream provider's network locations around the country. The problem is that our own latency to the provider is included in each ping result, so to account for this we've created a sensor factory calculation that subtracts our latency to the area from the final area's ping result, giving a truer picture of the remote leg.
The trouble is that we're seeing negative values, and this appears to be because the two pings fire at different times: if our local network has a latency spike for a second when the first ping runs, but has recovered by the time the end-to-end ping fires (the one the first ping is subtracted from), the subtraction goes negative. For example, a 50 ms local reading subtracted from a 40 ms end-to-end reading gives −10 ms. It also has us questioning this method entirely, even though it should work in theory if we can get the timings right.
Can anyone offer us any advice on getting the timing right, so that we're subtracting "synchronised" pings from one another instead? Alternatively, is there a way to average each measurement over 3 pings and subtract one average from the other?
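In case it helps frame the question, here's a minimal sketch of the second idea in Python. It's not our production code: the hostnames are placeholders, and it times a TCP handshake as a latency proxy instead of ICMP ping (raw ICMP sockets need root). The point is the structure: sample both legs back-to-back in pairs so each local reading is taken as close in time as possible to its end-to-end reading, compare medians over 3 samples rather than single shots, and clamp at zero.

```python
import socket
import statistics
import time


def tcp_latency_ms(host: str, port: int = 443, timeout: float = 2.0) -> float:
    """Time a TCP handshake as a latency proxy (ICMP ping needs raw sockets)."""
    start = time.perf_counter()
    with socket.create_connection((host, port), timeout=timeout):
        pass
    return (time.perf_counter() - start) * 1000.0


def adjusted_latency_ms(provider_host: str, area_host: str, samples: int = 3) -> float:
    """Estimate the remote leg by sampling both legs in interleaved pairs,
    then subtracting the median local latency from the median end-to-end latency."""
    local, end_to_end = [], []
    for _ in range(samples):
        local.append(tcp_latency_ms(provider_host))  # our leg to the provider
        end_to_end.append(tcp_latency_ms(area_host))  # full path to the area
    diff = statistics.median(end_to_end) - statistics.median(local)
    return max(diff, 0.0)  # clamp: a transient spike can still invert a single pair


if __name__ == "__main__":
    # Hypothetical hosts, purely for illustration.
    print(adjusted_latency_ms("provider-edge.example.net", "area-target.example.net"))
```

We went with the median rather than the mean on the assumption that it's less distorted by a single one-off spike, but we're open to correction on that too.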