Hi, I'm trying to set up an Event Log sensor that should change to an error state when an Error event is logged in the Application log, but I don't understand how the sensor works.
1) The sensor never goes into an error state even though an event is clearly logged. I have tried several limit values, e.g. 0.0001. The New Records graph turns red, but the sensor still shows Ok?
2) Assuming I can get the sensor to work, how is the error state cleared? How does the Event Log sensor know when I have fixed an error condition on the server? As far as I can tell, it simply can't, and there is no "reset" button on the sensor.
3) Why does PRTG monitor the event log in #/sec? That seems like an absurdly precise scale. Often an error condition or some other application error occurs only sporadically, maybe once per week (hopefully very seldom, actually). That's 0.00000165343915 errors/sec????
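For what it's worth, the figure above does check out. A quick sketch of the arithmetic (my own verification, not anything PRTG outputs):

```python
# One event per week expressed as events per second.
seconds_per_week = 7 * 24 * 60 * 60  # 604800 seconds in a week
rate = 1 / seconds_per_week
print(f"{rate:.14f}")  # matches the value quoted above
```
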
I have read the documentation over and over and looked at other articles in the KB, but to no avail. I think I simply don't understand this sensor. (We have more than 1,300 sensors, so I'm not THAT new to PRTG. :) )