When a restart of the core server is required, it sends out alert notifications for active error/warning sensors. Is there a way to keep this from happening? It seems like the server should know that an alert was already sent and a followup is not necessary because of a service restart.
I'm very sorry, but this cannot be disabled or changed.
It wouldn't be good to have hundreds of sensors in a warning or error state all regenerating notifications on a restart of the core server. Is there work being done to track the state of a sensor across a restart, or can you submit this as a feature request?
Consider it on the wishlist!
I'm bumping a very old question, but I found this while looking for an answer to exactly the same problem. Eight years later, is there an option to prevent this from happening?
An option would be to pause all notification templates before you restart the server. After the restart, once all sensors are back up, you can resume the templates.
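A minimal sketch of that workaround using PRTG's HTTP API: `/api/pause.htm` with `action=0` pauses an object and `action=1` resumes it. The server URL, credentials, and object IDs below are hypothetical, and whether notification templates accept this endpoint may depend on your PRTG version, so verify against your own installation first.

```python
from urllib.parse import urlencode

def pause_url(server, obj_id, username, passhash, resume=False):
    # /api/pause.htm toggles the paused state of a pausable object:
    # action=0 pauses it, action=1 resumes it.
    params = urlencode({
        "id": obj_id,
        "action": 1 if resume else 0,
        "username": username,
        "passhash": passhash,
    })
    return f"{server}/api/pause.htm?{params}"

# Hypothetical object IDs of your notification templates
# (visible in Setup > Account Settings > Notification Templates).
TEMPLATE_IDS = [301, 302]

for tid in TEMPLATE_IDS:
    # Fetch each URL (e.g. with urllib.request.urlopen) before the restart,
    # then again with resume=True once all sensors are back up.
    print(pause_url("https://prtg.example.com", tid, "admin", "0000000000"))
```

The same two-step pattern (pause everything, restart, resume everything) could also be wrapped in a pre/post-restart maintenance script.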
Bumping this again. The same issue appears when using a one-time maintenance window, which is really annoying. On top of that, it's not just sensors in a down state that are affected, but also those in a down-acknowledged state. I set them to acknowledged to prevent future notifications, yet the acknowledged state gets cleared when entering (or leaving) the one-time maintenance window.
I get why this is happening, since the state changes. But it means that a very nice feature (the one-time maintenance window) only does half my job. I still have to manually edit notification schedules and use cumbersome customized schedules, because the easy-to-click scheduling only allows for hourly maintenance.
All in all, this is far from ideal, and it seems like such basic functionality. In 2013 Torsten Lindner said it was on the wish list. What's the deal?
Did you also try pausing the notification templates rather than the sensors/devices? Please note that a server restart resets all sensor states, so all acknowledged sensors and maintenance windows are cleared: these states are not saved in the PRTG configuration file but are only cached, and during a restart all caches are deleted, which resets the states as well. Also note that we are currently working on a new API, so no big changes are being made to the "old" API. I therefore recommend testing the workaround mentioned above and waiting for the new API. You can also check our roadmap: https://www.paessler.com/prtg/roadmap
Hi Moritz, the problem is that this doesn't fix my acknowledged alarms getting cleared. I see the new API is in the 'looking at' section, so it will be a long time before we actually get it.
There is no native option to do this. The only alternative would be to write a script that acknowledges the alarms again. However, as mentioned before, the acknowledged status is only cached and is therefore lost after a server restart.
Do you know if the acknowledged status is preserved when I run the PRTG server clustered and reboot the nodes one after the other?
Yes, if you restart both servers one after the other and both servers receive the data (e.g. via the cluster probe), the acknowledged status persists through the restart.