Hello,
PRTG seems to have several frustrating limitations, particularly:
- Heavy performance impact from some of the most useful sensor types, including WMI, SSH custom script, and VMware sensors, among others.
- A correspondingly heavy performance impact from shorter polling intervals.
- Performance issues once many dependencies are configured.
- Performance issues once the number of maintenance schedules passes a fairly low threshold.
- A low supported sensor count on virtualized hardware.
- A low supported sensor count in clustered environments, which drops further with every added cluster node.
These items, along with some other miscellaneous concerns, make me wonder how people support large PRTG deployments. Vertical scalability appears to be completely non-existent, so I assume people scale horizontally by running multiple independent PRTG clusters, each monitoring a subset of sensors or certain system types, and then consolidating monitoring data from all clusters into some kind of centralized presentation layer.
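To make that consolidation idea concrete, here is roughly what I picture such a layer doing: polling each cluster's HTTP API (the standard /api/table.json endpoint) and merging alarm states into a single view. This is only a sketch; the hostnames, credentials, and column list below are placeholders, not anything from an actual deployment.

```python
#!/usr/bin/env python3
"""Sketch of a consolidation layer: poll several independent PRTG clusters
over the HTTP API and merge their Warning/Down sensors into one view.
Hostnames and credentials are placeholders."""

import requests

# One entry per independent PRTG cluster (placeholder values).
CLUSTERS = [
    {"name": "cluster-a", "host": "https://prtg-a.example.com",
     "username": "apiuser", "passhash": "0000000000"},
    {"name": "cluster-b", "host": "https://prtg-b.example.com",
     "username": "apiuser", "passhash": "0000000000"},
]

def fetch_alarms(cluster):
    """Return sensors in Warning (4) or Down (5) state from one cluster."""
    url = cluster["host"] + "/api/table.json"
    params = {
        "content": "sensors",
        "columns": "objid,device,sensor,status,message",
        "filter_status": ["4", "5"],   # repeated parameter: Warning, Down
        "count": "500",
        "username": cluster["username"],
        "passhash": cluster["passhash"],
    }
    resp = requests.get(url, params=params, timeout=30)
    resp.raise_for_status()
    return resp.json().get("sensors", [])

def main():
    for cluster in CLUSTERS:
        for sensor in fetch_alarms(cluster):
            # Prefix each row with its source cluster so the merged view
            # still shows where the alarm came from.
            print(f'{cluster["name"]:10} {sensor["device"]:25} '
                  f'{sensor["sensor"]:30} {sensor["status"]}')

if __name__ == "__main__":
    main()
```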
Does this sound about right? That's quite a bit of hardware for moderate- to large-sized installations, especially if you're recommending physical servers. What would this presentation layer look like? The PRTG Enterprise Console? Its functionality seems limited compared to the web GUI. Can PRTG integrate with other event management consoles (for example, BMC's BPPM)? If the PRTG Enterprise Console is the only option, how do the limitations imposed on individual clusters translate to performance when managing the entire monitoring environment through the EC?
How do HA and DR play into this?
Are there guidelines or best practices available for this kind of deployment? Or, more generally, is PRTG even recommended for enterprise-scale monitoring deployments?
Thanks.