I was wondering whether it was possible to monitor the GPU usage of our NVIDIA M10 GRID card. So I did some testing and finally came up with this working solution!
In our VMware servers we have NVIDIA M10 GRID cards installed. Each of these cards has four physical GPUs that can be shared. Within VMware we have built a complete Microsoft Server 2016 RDP Session Host environment. Our RDP Session Host servers each get a shared GPU from the NVIDIA card, which makes it possible to use OpenGL and other high-end video and DTP applications within an RDP session.
Monitoring the usage of all the NVIDIA GPUs can only be done through the nvidia-smi CLI command. So I wrote a script that reads out this GPU usage and generates output that PRTG can read. Finally, in PRTG I used the SSH Script sensor to run this script and read out its output.
This is the script:
#!/bin/sh
if test "$#" -ne 1; then
    GPUCORE=0
else
    GPUCORE=$1
fi
UTIL=$(nvidia-smi --query-gpu=utilization.gpu --format=csv,nounits,noheader --id=$GPUCORE)
echo "0:$UTIL:Utilization GPU core $GPUCORE"
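If you want to sanity-check the script's output format on a machine without a GPU, here is a sketch that stubs out nvidia-smi with a fake command (the stub and the utilization value 37 are invented for the test; the PRTG format it demonstrates is returncode:value:message):

```shell
# Create a stub nvidia-smi on PATH so the output format can be checked
# without GPU hardware. The stub and the value 37 are made up.
STUB_DIR=$(mktemp -d)
cat > "$STUB_DIR/nvidia-smi" <<'EOF'
#!/bin/sh
echo "37"
EOF
chmod +x "$STUB_DIR/nvidia-smi"
PATH="$STUB_DIR:$PATH"

# Same logic as the sensor script, with the GPU ID fixed to 2.
GPUCORE=2
UTIL=$(nvidia-smi --query-gpu=utilization.gpu --format=csv,nounits,noheader --id=$GPUCORE)
echo "0:$UTIL:Utilization GPU core $GPUCORE"
# prints: 0:37:Utilization GPU core 2
```

On the real host, nvidia-smi replaces the stub and $UTIL becomes the live utilization percentage for the selected GPU.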
Finally, within PRTG you define the right GPU ID for this sensor. Because the NVIDIA M10 has four GPUs, with IDs 0, 1, 2 and 3, use one of these as the Parameter to read out that specific GPU. For the unit string field I use: %
Notice: the Parameter field can hold only one GPU ID. So if you want to read out all four GPUs, you have to create four sensors.
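As an aside, when --id is omitted, nvidia-smi reports one line per GPU in a single call, which is why IDs 0 through 3 exist in the first place. The sketch below illustrates that per-GPU output using an invented stub with made-up utilization values (PRTG's SSH Script sensor still expects a single value, hence the four-sensor setup above):

```shell
# Stub nvidia-smi that reports four GPUs (all values are made up),
# standing in for a query without --id on the real host.
STUB_DIR=$(mktemp -d)
cat > "$STUB_DIR/nvidia-smi" <<'EOF'
#!/bin/sh
printf '12\n0\n87\n3\n'
EOF
chmod +x "$STUB_DIR/nvidia-smi"
PATH="$STUB_DIR:$PATH"

# One line per GPU comes back; number them starting at ID 0.
GPUID=0
nvidia-smi --query-gpu=utilization.gpu --format=csv,nounits,noheader |
while read -r UTIL; do
    echo "0:$UTIL:Utilization GPU core $GPUID"
    GPUID=$((GPUID + 1))
done
# prints:
# 0:12:Utilization GPU core 0
# 0:0:Utilization GPU core 1
# 0:87:Utilization GPU core 2
# 0:3:Utilization GPU core 3
```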