I have exactly the same problem, and I have a NAS with a failing HDD for testing.
Screenshot: https://www.dropbox.com/sh/1jn42d6ha90sb37/AADojy7afqXGTqb_zUxnl6hta?dl=0
Results of walk:
Paessler SNMP Tester - 20.2.4 Computername: PRTG-GRIFFIN Interface: xxx.xxx.xxx.xxx
09/03/2021 18:30:38 (1 ms) : Device: xxx.xxx.xxx.yyy
09/03/2021 18:30:38 (4 ms) : SNMP v2c
09/03/2021 18:30:38 (6 ms) : Custom OID 1.3.6.1.4.1.6574.2.1.1.5
09/03/2021 18:30:38 (11 ms) : SNMP Datatype: SNMP_EXCEPTION_NOSUCHINSTANCE
09/03/2021 18:30:38 (13 ms) : -------
09/03/2021 18:30:38 (15 ms) : Value: #N SNMP_EXCEPTION_NOSUCHINSTANCE223
09/03/2021 18:30:38 (17 ms) : Done
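The NOSUCHINSTANCE above looks expected rather than broken: 1.3.6.1.4.1.6574.2.1.1.5 is the diskStatus *column* of the disk table, and an SNMP GET needs a full instance OID (column plus row index, e.g. .0 for the first disk), otherwise a v2c agent answers noSuchInstance. A minimal sketch of the same GET using the classic pysnmp high-level API; the community string and NAS address are placeholders, not values from the logs above:

```python
from pysnmp.hlapi import (
    getCmd, SnmpEngine, CommunityData, UdpTransportTarget,
    ContextData, ObjectType, ObjectIdentity,
)

# diskStatus column OID; a GET needs the row index appended (.0, .1, ...)
DISK_STATUS = '1.3.6.1.4.1.6574.2.1.1.5'

errorIndication, errorStatus, errorIndex, varBinds = next(
    getCmd(SnmpEngine(),
           CommunityData('public'),                  # assumed community string
           UdpTransportTarget(('192.0.2.10', 161)),  # assumed NAS address
           ContextData(),
           # GET on the bare column reproduces the noSuchInstance above;
           # appending '.0' returns the first disk's status instead.
           ObjectType(ObjectIdentity(DISK_STATUS + '.0')))
)
for varBind in varBinds:
    print(' = '.join(x.prettyPrint() for x in varBind))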
Paessler SNMP Tester - 20.2.4 Computername: PRTG-GRIFFIN Interface:
09/03/2021 18:39:06 (1 ms) : Device:
09/03/2021 18:39:06 (3 ms) : SNMP v2c
09/03/2021 18:39:06 (5 ms) : Walk 1.3.6.1.4.1.6574.2.1.1.5
09/03/2021 18:39:06 (53 ms) : 1.3.6.1.4.1.6574.2.1.1.5.0 = "1" [ASN_INTEGER]
09/03/2021 18:39:06 (56 ms) : 1.3.6.1.4.1.6574.2.1.1.5.1 = "1" [ASN_INTEGER]
09/03/2021 18:39:06 (59 ms) : 1.3.6.1.4.1.6574.2.1.1.5.2 = "1" [ASN_INTEGER]
09/03/2021 18:39:06 (62 ms) : 1.3.6.1.4.1.6574.2.1.1.5.3 = "1" [ASN_INTEGER]
09/03/2021 18:39:06 (65 ms) : 1.3.6.1.4.1.6574.2.1.1.5.4 = "1" [ASN_INTEGER]
09/03/2021 18:39:06 (68 ms) : 1.3.6.1.4.1.6574.2.1.1.5.5 = "1" [ASN_INTEGER]
09/03/2021 18:39:06 (70 ms) : 1.3.6.1.4.1.6574.2.1.1.5.6 = "1" [ASN_INTEGER]
09/03/2021 18:39:06 (74 ms) : 1.3.6.1.4.1.6574.2.1.1.5.7 = "1" [ASN_INTEGER]
09/03/2021 18:39:06 (77 ms) : 1.3.6.1.4.1.6574.2.1.1.5.8 = "1" [ASN_INTEGER]
09/03/2021 18:39:06 (80 ms) : 1.3.6.1.4.1.6574.2.1.1.5.9 = "1" [ASN_INTEGER]
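For anyone scripting this outside of PRTG, the walk that produced the output above can be reproduced with pysnmp's nextCmd; lexicographicMode=False stops at the end of the diskStatus subtree. Address and community are again placeholders:

```python
from pysnmp.hlapi import (
    nextCmd, SnmpEngine, CommunityData, UdpTransportTarget,
    ContextData, ObjectType, ObjectIdentity,
)

DISK_STATUS = '1.3.6.1.4.1.6574.2.1.1.5'

for errorIndication, errorStatus, errorIndex, varBinds in nextCmd(
        SnmpEngine(),
        CommunityData('public'),                  # assumed community string
        UdpTransportTarget(('192.0.2.10', 161)),  # assumed NAS address
        ContextData(),
        ObjectType(ObjectIdentity(DISK_STATUS)),
        lexicographicMode=False):                 # stay inside this subtree
    if errorIndication or errorStatus:
        print(errorIndication or errorStatus.prettyPrint())
        break
    for oid, value in varBinds:
        print(f'{oid.prettyPrint()} = {value.prettyPrint()}')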
Yet another reply. I have a Synology with a "failing disk". Walking the MIB and checking the results against the Synology DiskStation MIB Guide (https://global.download.synology.com/download/Document/Software/DeveloperGuide/Firmware/DSM/All/enu/Synology_DiskStation_MIB_Guide.pdf),
I get the following results. 1.3.6.1.4.1.6574 = base Synology MIB.
Results: System MIB (overall system status)
1.3.6.1.4.1.6574.1.1.0 = "1" [ASN_INTEGER] - Status Normal
Results: Disk MIB (for the specific disk)
09/03/2021 19:00:23 (267 ms) : 1.3.6.1.4.1.6574.2.1.1.3.4 = "WD20EFRX-68EUZN0 " [ASN_OCTET_STR]
09/03/2021 19:00:23 (396 ms) : 1.3.6.1.4.1.6574.2.1.1.5.4 = "1" [ASN_INTEGER]
All disk statuses show as 1 = functioning normally.
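If I'm reading the linked MIB guide correctly, these are the documented diskStatus values; notably there is no code for a SMART "failing" warning, which would explain why a pre-failure disk still reports 1:

```python
# diskStatus values per the linked Synology MIB guide (my reading of it).
DISK_STATUS_NAMES = {
    1: 'Normal',                 # the disk is functioning normally
    2: 'Initialized',            # has system partition, no data
    3: 'NotInitialized',         # no system partition
    4: 'SystemPartitionFailed',  # system partition is damaged
    5: 'Crashed',                # the disk has crashed
}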
Results: RAID MIB
09/03/2021 19:03:02 (11 ms) : 1.3.6.1.4.1.6574.3.1.1.1.0 = "0" [ASN_INTEGER]
09/03/2021 19:03:02 (14 ms) : 1.3.6.1.4.1.6574.3.1.1.2.0 = "Volume 1" [ASN_OCTET_STR]
09/03/2021 19:03:02 (17 ms) : 1.3.6.1.4.1.6574.3.1.1.3.0 = "1" [ASN_INTEGER]
RAID status shows as 1 = Normal; I think this should be showing as 11 = Degrade ("Degrade is shown when a tolerable failure of disk(s) occurs").
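For reference, here is my reading of the raidStatus enumeration from the same MIB guide, with a conservative check that flags anything other than plain Normal; treat the exact list as an assumption and verify it against the PDF:

```python
# raidStatus values per the linked MIB guide; 11 (Degrade) is the state
# the post above expected to see for a tolerable disk failure.
RAID_STATUS_NAMES = {
    1: 'Normal', 2: 'Repairing', 3: 'Migrating', 4: 'Expanding',
    5: 'Deleting', 6: 'Creating', 7: 'RaidSyncing',
    8: 'RaidParityChecking', 9: 'RaidAssembling', 10: 'Canceling',
    11: 'Degrade', 12: 'Crashed',
}

def raid_needs_attention(status: int) -> bool:
    """Flag any raidStatus other than 1 (Normal)."""
    return status != 1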
The question is whether this is a bug or by design. The RAID is functioning. The disk is functioning; it just has a "this disk is failing, please back up your data" warning. I'm not sure how PRTG can deal with this: if Synology's SNMP agent doesn't report the failure, PRTG can't report it.