We have just started to use PRTG to monitor free disk space on our virtual Windows cluster environment. It had been working fine, but last night we failed over to the other host in the cluster, and since then all the drives (except for one) are reporting that they could not be found. The sensor is monitored via the cluster's DNS name, if that helps.
What you can try is to check what is returned by the WMI Tester when you run the following query:
SELECT DriveType,DeviceID,FreeSpace,Size,VolumeName FROM Win32_LogicalDisk
Make sure to run this query against the IP address you are using to monitor the cluster in PRTG, and also make sure to run the WMI Tester from the PRTG server itself.
You may also want to run the query again when the other cluster node is active, to see whether there is a difference in the drive letters that are presented. PRTG tracks which drive it is monitoring by drive letter, so if the letters change depending on which cluster node is active, that would account for the sensors going into an error status.
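To illustrate the comparison described above, here is a minimal sketch of diffing two Win32_LogicalDisk scans by their DeviceID (drive letter). The sample rows are made up for illustration and are not real cluster output; any drive letter that appears in one scan but not the other is a drive PRTG would report as missing after a failover.

```python
# Hypothetical helper: compare the drive letters returned by two WMI scans
# (e.g. one taken while node1 is active, one while node2 is active).

def diff_drive_letters(before, after):
    """Return (missing, new) drive letters between two scans of Win32_LogicalDisk."""
    before_ids = {row["DeviceID"] for row in before}
    after_ids = {row["DeviceID"] for row in after}
    # Drives that disappeared are what PRTG would flag as "could not be found".
    return sorted(before_ids - after_ids), sorted(after_ids - before_ids)

# Made-up example scans: the F: drive is only visible while node1 owns it.
node1_scan = [
    {"DeviceID": "C:", "FreeSpace": 120, "Size": 500},
    {"DeviceID": "E:", "FreeSpace": 80, "Size": 200},
    {"DeviceID": "F:", "FreeSpace": 50, "Size": 100},
]
node2_scan = [
    {"DeviceID": "C:", "FreeSpace": 110, "Size": 500},
    {"DeviceID": "E:", "FreeSpace": 80, "Size": 200},
]

missing, new = diff_drive_letters(node1_scan, node2_scan)
print("Missing after failover:", missing)  # drives the sensors would error on
print("New after failover:", new)
```

If the two scans produce the same set of drive letters, the sensors should stay green across a failover; any difference points at the cluster presenting storage differently per node.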
Thanks Greg. At the moment it's working fine. After swapping the nodes over, the only way we found to stop PRTG reporting the cluster drives as unavailable was to shut down/restart the former live node; doing this forced PRTG to update to the correct node.
I have run the WMI test from the PRTG core server as you mentioned and made a note of everything that's being reported. The next time we swap over nodes in the cluster I will run it again and then come back to you.
I have run the WMI Tester against the cluster name before and after we had the problem, and the results are different.
What would be the best way for PRTG to monitor disk space on a virtual failover cluster where the services/applications are split between both nodes? Would it be WMI? SNMP? Or is there another sensor designed for this?
Would it be possible for you to post the results of the query? You can also try scanning with the SNMP Tester to see if those results are more consistent, but if they aren't, there really won't be a way for PRTG to read the data from the cluster. PRTG needs the same index to return the same data regardless of which cluster node is active. In that case, you may need to change the configuration of the cluster.
The way we use our cluster, the drives are split between both nodes (e.g. the SQL drives will always be on node1 and the services will always be on node2). Every weekend each node is rebooted: all the services and drives are moved to one node, the other node is rebooted, and once it is back up all the services and drives are moved over to it so the now-empty node can be rebooted. Once both nodes have been rebooted, the services and drives are moved back to their preferred owners. For whatever reason, since this weekend PRTG and WMI can't see all the drives any more, unless you log on to each individual node.
Looking into this a bit more, I think a config change on the cluster might be needed, as both screenshots point to the same name yet show completely different values. I will keep a close eye on this for the next few weeks and come back to you.