Collecting metric data via API

Hi.

We are using PRTG to monitor some of our infrastructure, and we use another tool for managing our alerts and data reports. We need to integrate this tool with PRTG in order to retrieve metric and alert data. From another question I posted here, I learned that the only state-change information we can retrieve via the PRTG API is through the "message" parameter with the relevant filters. Now we are left with finding a way to export the metric data. It is quite easy to export the current metric data for all sensors using the API, but is there a way to do the same for historical metric data?

Thanks, Alex

api metrics prtg

Created on Jul 17, 2017 10:30:40 AM




4 Replies


Hello Alexander,
thank you for your KB-Post.

What precisely are you attempting to achieve? If you wish to use a 3rd party application for alerting, did you consider creating a custom (HTTP or Exe/Script based) notification?

You could also continuously query /api/table.xml?content=sensors&output=xml and use filters such as filter_status to list only sensors in a Down or Warning state.
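For illustration only (the server address, credentials, and the assumption that the numeric status IDs 5 and 4 stand for Down and Warning are placeholders, not details from this thread), such a query could look like:

https://your-prtg-server/api/table.xml?content=sensors&output=xml&columns=objid,sensor,status,lastvalue&filter_status=5&filter_status=4&username=myuser&passhash=0000000000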

As for obtaining the historic data of sensors, there are the API calls /api/historicdata.xml and /api/historicdata.json for this purpose. They are documented within PRTG under Setup > PRTG API on the "Historic Data" tab.
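As a minimal sketch of such a call, assuming a Python environment with the requests library: the server URL, credentials, and sensor ID below are placeholders, and the id/sdate/edate/avg parameters and the "histdata" field of the JSON response are taken to be as described in the built-in API documentation.

import requests

PRTG = "https://your-prtg-server"                        # placeholder server URL
AUTH = {"username": "myuser", "passhash": "0000000000"}  # placeholder credentials

# Request raw (non-averaged) historic data for one sensor and one day.
params = {
    "id": 1234,                        # placeholder sensor ID
    "sdate": "2017-07-18-00-00-00",    # period start (yyyy-mm-dd-hh-mm-ss)
    "edate": "2017-07-19-00-00-00",    # period end
    "avg": 0,                          # 0 = raw values; >0 = averaging interval in seconds
    **AUTH,
}
response = requests.get(f"{PRTG}/api/historicdata.json", params=params)
response.raise_for_status()
for row in response.json().get("histdata", []):
    print(row.get("datetime"), row)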

Best Regards,
Luciano Lingnau [Paessler Support]

Created on Jul 18, 2017 1:06:28 PM by  Luciano Lingnau [Paessler]




Hello Luciano,

Thank you very much for your reply.

I am sorry if I did not phrase my question as clearly as I could have, but you understood what I need anyway. However, I have checked all the documented API calls under "Setup > PRTG API", and what is listed under the "Historic Data" tab doesn't do the trick for me.

The "/api/historicdata.json" calls require a sensor ID in order to execute. If I need to collect data from only a few sensors that would work great. However as there is a limitation for no more than 5 calls per minute with this call, we cannot really use it as that would generate a lot of call backlog (we have over 200 sensors for about 50 different devices).

The "/api/table.xml?content=sensors&output=xml" call is quite allright, but has no real historical data values either for state changes or metric data. The problem we have with that is in case the collection is paused in our external tool (due to maintenance for example), after we restart it we will not collect the metric data from the downtime period. That will really mess up the reports in the end.

I suppose we will probably have to make do with live metrics only, but if there is any alternative for collecting their historical values via the API, please let me know.

Take care and have a good day.

Best regards, Alex

Created on Jul 19, 2017 6:50:06 AM



Accepted Answer


Hello Alex, thank you for your reply.

Ok, got you. /api/historicdata.json requires a sensor ID because otherwise the amount of data would just be massive. The Core Server wouldn't be able to generate/parse a report of that size while doing all the "regular business" in the background. If you think about it, a sensor with a 60-second scanning interval produces 1440 × (number of channels) data samples per day. Multiply that by 200 sensors and you get 288,000 × (number of channels). And 200 is still a small number, so this doesn't scale at all.

"The problem we have with that is that if collection is paused in our external tool (due to maintenance, for example), after we restart it we will not collect the metric data from the downtime period."

Depending on how your tool works, you could have a "background data crawler" that goes through each sensor for which it missed data and polls it afterwards. It would take a while, but you would have all the data at the end of the day. Of course I understand the drawbacks of having to modify the existing tool, but in the end it will be the only real option.
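A rough sketch of what such a backfill crawler could look like, again assuming Python with the requests library: the server URL, credentials, and sensor IDs are placeholders, the historicdata.json parameters are as documented in the built-in API help, and a 12-second pause keeps the crawler at roughly 5 historic-data calls per minute.

import time
import requests

PRTG = "https://your-prtg-server"                        # placeholder server URL
AUTH = {"username": "myuser", "passhash": "0000000000"}  # placeholder credentials

def backfill(sensor_ids, sdate, edate):
    # Fetch the missed period for every sensor, one call at a time,
    # pausing between requests so the core server is not flooded.
    results = {}
    for sensor_id in sensor_ids:
        params = {
            "id": sensor_id,
            "sdate": sdate,   # e.g. "2017-07-18-00-00-00"
            "edate": edate,   # e.g. "2017-07-19-00-00-00"
            "avg": 0,         # raw values; a larger value lets PRTG average server-side
            **AUTH,
        }
        response = requests.get(f"{PRTG}/api/historicdata.json", params=params)
        response.raise_for_status()
        results[sensor_id] = response.json().get("histdata", [])
        time.sleep(12)        # ~5 calls per minute
    return results

Once your tool knows which period it missed, a call such as backfill(missed_sensor_ids, start, end) would return the samples per sensor for it to ingest.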

Best Regards,
Luciano Lingnau [Paessler Support]

Created on Jul 19, 2017 7:51:24 AM by  Luciano Lingnau [Paessler]




Hello again, Luciano.

Thank you for the detailed answer. I believe all is clear now - we will see what we can do about modifying the tool's collection mechanism.

Best regards, Alex

Created on Jul 19, 2017 9:02:02 AM



