What is this?

This knowledge base contains questions and answers about PRTG Network Monitor and network monitoring in general.


"Load limit exceeded"

Votes:

0

I updated from 13.1.1.1182 to 13.1.2.1448 today and am now having issues with the PRTG API. I have custom sensors/scripts that access historic data (xml/avg=540) and had no issues prior to updating. Now, however, some of the calls result in blanks in my script. So I pasted the URLs into Firefox, and it was hit and miss when I hit refresh: sometimes the XML came right up, other times it said "Firefox can't find the file at (url)". I then tried the URLs in Internet Explorer and had the same issue, though the error message there came from PRTG, stating "Load limit exceeded / Please try again later!".

I don't know why there's an issue all of a sudden. The scripts worked just fine prior to the update, and they are kind of important to our company. Will running the 13.1.1.1182 setup exe downgrade PRTG, or will that cause problems?

api historic-data load-limit-exceeded prtg

Created on Mar 6, 2013 2:45:17 AM
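For scripts like the one described, one workaround is to detect the throttling response and retry after a pause. A minimal sketch, assuming a zero-argument fetch callable supplied by the caller; the function names are illustrative, and only the "Load limit exceeded" message text comes from this thread:

```python
import time

LOAD_LIMIT_MARKER = "Load limit exceeded"

def fetch_with_retry(fetch, attempts=5, delay_seconds=15, sleep=time.sleep):
    """Call fetch() (a zero-argument function returning the response body)
    until it yields a body without the throttling error page, pausing a
    fixed delay between tries. Raises RuntimeError when every attempt fails."""
    last_error = None
    for attempt in range(attempts):
        try:
            body = fetch()
            # A throttled request may still return HTTP 200 with an error page.
            if LOAD_LIMIT_MARKER not in body:
                return body
            last_error = RuntimeError("server answered with the load-limit page")
        except OSError as exc:  # urllib.error.URLError subclasses OSError
            last_error = exc
        if attempt < attempts - 1:
            sleep(delay_seconds)
    raise RuntimeError(
        "historic-data request failed after %d attempts" % attempts
    ) from last_error
```

A real call might wrap urllib, e.g. `fetch_with_retry(lambda: urllib.request.urlopen(url, timeout=30).read().decode())`, where `url` is your historic-data API URL.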



Best Answer

Accepted Answer

Votes:

0

The "Load limit exceeded" message was shown by PRTG versions 13.x.1-13.x.2 whenever more than 5 reporting or historic-data requests were sent to the web server in less than 1 minute.

Version 13.x.3 and later no longer display this message. Requests are now pipelined: anything over 5 requests per minute is delayed rather than refused.

Created on May 6, 2013 8:33:07 AM by  Dirk Paessler [Founder Paessler AG] (11,025) 3 6



20 Replies

Votes:

0

Dear Matt,

I'm very much afraid this is due to a change in PRTG's web server. Such API requests for historic data can put a huge load on PRTG's core server, so the number of these API requests was limited to 5 per minute. It is now necessary to delay the requests so that this limit is not hit.

Best regards.

Created on Mar 6, 2013 3:43:51 PM by  Torsten Lindner [Paessler Support]
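The advice above, delaying requests so the 5-per-minute limit is never hit, can be sketched as a small client-side rate limiter. This is a minimal sketch: the class and its parameters are illustrative, and only the 5-requests-per-60-seconds figure comes from this thread.

```python
import time
from collections import deque

class RateLimiter:
    """Block before each request so that no more than max_calls requests
    are sent in any rolling period-second window."""

    def __init__(self, max_calls=5, period=60.0,
                 clock=time.monotonic, sleep=time.sleep):
        self.max_calls = max_calls
        self.period = period
        self.clock = clock      # injectable for testing
        self.sleep = sleep
        self.sent = deque()     # timestamps of recent requests

    def _evict(self, now):
        # Forget timestamps that have left the rolling window.
        while self.sent and now - self.sent[0] >= self.period:
            self.sent.popleft()

    def wait(self):
        """Sleep just long enough that one more request stays within the limit."""
        now = self.clock()
        self._evict(now)
        if len(self.sent) >= self.max_calls:
            # Wait until the oldest timestamp falls out of the window.
            self.sleep(self.period - (now - self.sent[0]))
            self._evict(self.clock())
        self.sent.append(self.clock())
```

A script would call `limiter.wait()` immediately before each historic-data or report request, so bursts are automatically spread out.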



Votes:

0

Would downgrading be a possibility? At least until I get my sensors/scripts changed?

Created on Mar 6, 2013 6:12:19 PM



Votes:

0

A rollback is always tricky, I'm afraid, but possible. With this minor version it's basically just a matter of running the old installer over your PRTG installation. But please keep in mind that it will be necessary to "downgrade" all Remote Probes manually (download and run the "Remote Probe Installer" on each Remote Probe). Also, if you are running a Cluster, that complicates things a lot: it would be necessary to shut down the Failover Node, run the downgrade installer on the Master Node, and then run it manually on the Failover Node as well.
If you'd like to do a rollback and need an installer for the old 13.1.1.1182 version, please get in contact with us via email to [email protected] and also forward us the Core Log.

Created on Mar 6, 2013 6:23:01 PM by  Torsten Lindner [Paessler Support]



Votes:

0

Would it be possible to make the maximum number of API calls per minute a setting under "Setup" so that users can configure it themselves? You could default it to 5 and show a message telling users that changing the number can impact PRTG performance. 5 is just way too low for my needs. If I were to space the calls out to fit the limit, it'd take over 6 minutes for each of my sensors to complete. I can't be the only one affected by this change.

Created on Mar 7, 2013 6:50:39 PM



Votes:

0

Sorry, but we really don't want to change this or implement a setting for it. First, a setting for this would be quite 'complicated', as too many users wouldn't know what it's actually for. That may sound plain, but it's a valid reason for keeping the number of switches and options as low as possible.
Secondly, it would open up the problem again, potentially bringing PRTG's web server into an overload situation, which is something we have to avoid by all means.

Created on Mar 7, 2013 7:48:32 PM by  Torsten Lindner [Paessler Support]



Votes:

0

Hm.

OK, "many users wouldn't know what it's actually for" - try configuring 10 daily PDF reports, all scheduled for 0:00, and you will get PDFs containing the "Load limit exceeded" text.

Reports are an often-used function, and the schedule only has hourly granularity, with no minutes - so spreading out 10 reports means spanning 10 hours.

Created on Mar 21, 2013 7:06:47 AM



Votes:

0

Yes, this can happen. Please distribute the reports a bit, so that they are not all executed at the same time.

Created on Mar 21, 2013 4:25:20 PM by  Torsten Lindner [Paessler Support]



Votes:

0

Of course that was the first thing I thought of. But the schedule granularity is 1 hour, with no minutes. Could you consider introducing minutes in the scheduler?

Created on Mar 22, 2013 6:06:25 AM



Votes:

0

We'll put this on the list.

Created on Mar 22, 2013 1:36:08 PM by  Torsten Lindner [Paessler Support]



Votes:

0

Hi,

I am in the same situation as MattG - 5 API calls per minute is simply not enough for our needs. We are an MSP with thousands of sensors which need to be reported on. Making 5 calls per minute across thousands of sensors is impractical. I know you stated that you will not be adding a setting to change this limit; however, there should be a workaround for users who require more than 5 API calls a minute, otherwise this will be a massive issue for users with thousands of sensors.

Created on Mar 28, 2013 8:17:45 PM



Votes:

0

I'm having the same problem too: when exporting to CSV with the CSV export API, there is a delay, but it is not working. Hope you can help us with this. My server ran perfectly with these reports before, so I don't think I was overloading it, or at least not for more than a minute.

Created on Apr 18, 2013 5:03:46 PM



Votes:

0

Hi,

I agree with Nav. 5 API calls per minute is definitely not enough. Our monitoring and reporting application uses the API intensively and will be seriously impacted by limiting the number of API calls to 5.

Please remove this limit with the next release.

Thanks, Thomas

Created on Apr 19, 2013 10:39:55 AM



Votes:

0

We do hear you! We will keep the limit, but add a pipeline to it, so that requests "over the limit" are queued rather than blocked/refused. This will be available with the next stable release, hopefully within the next two weeks. Please bear with us.

Created on Apr 19, 2013 12:27:53 PM by  Torsten Lindner [Paessler Support]

Last change on Jul 24, 2013 12:57:13 PM by  Torsten Lindner [Paessler Support]






Votes:

0

Is there any chance we will be able to disable this feature on our installs?

This is an absolute disaster for us.

Each night we need to make a couple thousand calls to the API to get metrics across our system. This only takes a minute or two to execute.

These metrics are extracted for two reasons:

  • to make automated adaptive changes to how we divide up our service allocations
  • to determine the accumulation of pay-for-performance payments to our employees, based on the availability and performance of our different systems

We updated to version 13, and suddenly we were no longer able to manage these portions of our business. We had to go to Veeam and restore our dedicated PRTG server, as we could no longer manage this part of our business under version 13.

I did see the argument:

"it would open up the problem again, potentially bringing PRTGs Webserver into an overload situation. Which is something we have to avoid by all means."

but quite honestly, you don't need to worry about this for me. I am happy throttling myself to ensure I don't take down my own, paid-for and on-premise, PRTG instance.

If I am throttled to 5 calls per minute, it will take roughly 7 hours to get through the 2000 API calls we make each day. Not to mention we will have to rewrite all of our connectors to presumably wait hours for "pipelined" responses to requests.

This is a massive regression in the capabilities of your product, and it is absolutely devastating us where we allowed ourselves to rely on Paessler and PRTG.

If there is any way we can get around this new limitation, please let us know.

Thank you, Chris

Created on Jul 23, 2013 7:10:23 PM



Votes:

0

Chris, I am very sorry, but currently there are no plans for an option to disable the pipelining, nor are there any ways to bypass it.

Created on Jul 24, 2013 12:59:51 PM by  Torsten Lindner [Paessler Support]



Votes:

0

Torsten, did Paessler change their mind in the last 9 months? Did you decide to remove the pipelining again, or is it still in the latest releases?

Best Regards thomas

Created on Apr 29, 2014 2:01:00 PM



Votes:

0

The pipelining is still implemented, as we still need to protect the web server. However, it is now transparent to the user and no longer causes error responses in reports or historic-data requests.

Created on Apr 29, 2014 3:12:23 PM by  Torsten Lindner [Paessler Support]



Votes:

0

Oh man, I came into work this morning and saw responses in this thread resurrected in my email. For a brief moment my heart skipped a beat thinking we could start using PRTG again as our main monitoring solution, but alas, it was just false hope.

I find it funny that this feature was added to "protect" PRTG for us users, but because of it, instead of our PRTG being protected, we had to abandon PRTG almost completely and move to Foglight at nearly 100x the cost! Even our annual maintenance for Foglight is about twenty times the one-time fee we used to pay for PRTG. Yikes!

If you guys ever change your mind and let us use your product the way we need to (we accept the risk that if we abuse the API, we will slow down the core and jeopardize our scanning schedules), please be sure to post here, and we'd be back in a heartbeat.

Created on Apr 29, 2014 3:52:30 PM



Votes:

0

Chris, I talked to the developer, and he says that if we removed the queuing, your API requests would not be faster; they would simply run in parallel instead of serialized, which might even slow them down.

One other solution is:

  • make a copy of the API endpoint that you use (e.g. historicdata_html.htm)
  • remove the <#loadlimit> placeholder
  • target your API calls to the new filename

This will remove the queuing, but we do not recommend/support this approach.

Created on Apr 30, 2014 10:21:54 AM by  Dirk Paessler [Founder Paessler AG] (11,025) 3 6
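The three steps above could be scripted roughly as follows. This is a sketch only: the webroot path argument and the target filename are assumptions (check where your PRTG installation keeps its web templates), and as noted above the whole approach is unsupported.

```python
from pathlib import Path

def make_unthrottled_copy(webroot, source="historicdata_html.htm",
                          target="historicdata_nolimit.htm"):
    """Copy a PRTG web template and strip the <#loadlimit> placeholder,
    so that requests to the copy bypass the request queue (unsupported!).
    Returns the path of the new file."""
    src = Path(webroot) / source
    dst = Path(webroot) / target
    text = src.read_text(encoding="utf-8")
    # Removing the placeholder is what disables the queuing for this copy.
    dst.write_text(text.replace("<#loadlimit>", ""), encoding="utf-8")
    return dst
```

API calls would then be directed at the target filename instead of the original. Keep in mind that a PRTG update may overwrite the template directory, so the copy would need to be recreated after each update.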




Disclaimer: The information in the Paessler Knowledge Base comes without warranty of any kind. Use at your own risk. Before applying any instructions please exercise proper system administrator housekeeping. You must make sure that a proper backup of all your data is available.