What surprises me the most is that you say per customer - because this sounds a lot like pretty big locations with many employees that would often have an internal IT department - but yeah - outsourcing is a pretty common practice as well :-).
Have a look here:
https://www.it-admins.com/sql-database-backup-monitoring/
The same information was posted here:
https://kb.paessler.com/en/topic/79665-sql-database-backup-monitoring
You don't need to configure both the VBS script and the SQL file - your request is simple enough that the SQL file alone would be sufficient - SQL_Database_Full_Backups.sql
This would report a) whether the number of databases on the SQL instance stays constant and b) whether they are backed up - you need to set limits on at least those two channels so you get alerts.
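To give you a rough idea of what such a check looks like, here is a minimal PowerShell sketch - not the actual script from the link - that pulls both values from sys.databases and msdb and hands them to PRTG as an EXE/Script Advanced sensor. It assumes the SqlServer PowerShell module is installed; the instance name and the 24-hour window are placeholders you would adjust:

Import-Module SqlServer

$instance = 'SQLSERVER01\INSTANCE01'   # placeholder - your SQL instance

$query = @'
SELECT
    COUNT(*) AS DatabaseCount,
    SUM(CASE WHEN b.LastFullBackup IS NULL
              OR b.LastFullBackup < DATEADD(HOUR, -24, GETDATE())
             THEN 1 ELSE 0 END) AS DatabasesWithoutRecentFullBackup
FROM sys.databases AS d
LEFT JOIN (SELECT database_name, MAX(backup_finish_date) AS LastFullBackup
             FROM msdb.dbo.backupset
            WHERE type = 'D'            -- D = full backup
            GROUP BY database_name) AS b
       ON b.database_name = d.name
WHERE d.name <> 'tempdb';
'@

$result = Invoke-Sqlcmd -ServerInstance $instance -Query $query

# XML output for a PRTG EXE/Script Advanced sensor - set limits on both channels
@"
<prtg>
  <result><channel>Database count</channel><value>$($result.DatabaseCount)</value></result>
  <result><channel>Databases without full backup (24h)</channel><value>$($result.DatabasesWithoutRecentFullBackup)</value></result>
</prtg>
"@

The linked SQL file does this more thoroughly - the sketch is only meant to show the principle.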
Please don't be offended when I say the request is simple - but databases are a far more complex topic when it comes to monitoring than simply checking whether they are online or not.
The basics you should cover:
- check the number of databases on the target SQL instance (one server can host multiple instances)
- check if all those databases are backed up - from the SQL instance's point of view
- check if the target backup file per database exists (see the sketch further below)
- check if the expected number of retention files exists
- if the database has transaction log backups - make sure they exist as defined by the database tier level
> database tier level - yeah - this is not a definition that SQL provides; you should actually define and document tier levels yourself and assign them to all your databases. Some databases will be production databases with a high volume of changes, some with a low volume of changes, some are legacy and others are development and test.
The backup needs of all those levels will vary - production with a high volume of changes would probably call for a daily full backup plus transaction log backups - possibly every 30 minutes - while a low change volume database in production might only need daily full backups ... a legacy database perhaps only needs weekly backups just in case... and DEV and TEST depend on whatever the developers need..
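For the backup-file and retention checks from the list above, a simple sketch could look like this - it assumes one folder per database underneath a backup share full of .bak files; the share path, the naming pattern and the retention count of 7 are pure assumptions you would replace with your own:

# Sketch only - checks that every database folder has a backup newer than 24
# hours and holds the expected number of retention files.
$backupRoot    = '\\BACKUPSERVER\SQLBackups\INSTANCE01'   # placeholder
$retentionDays = 7                                        # example retention

foreach ($db in (Get-ChildItem -Path $backupRoot -Directory)) {
    $files  = Get-ChildItem -Path $db.FullName -Filter '*.bak' |
              Sort-Object LastWriteTime -Descending
    $newest = $files | Select-Object -First 1

    [pscustomobject]@{
        Database          = $db.Name
        BackupWithin24h   = [bool]($newest -and $newest.LastWriteTime -gt (Get-Date).AddHours(-24))
        RetentionFiles    = @($files).Count
        RetentionExpected = $retentionDays
    }
}

In PRTG you would turn those per-database results into sensor channels (or one sensor per database) and set limits on them instead of just printing a table.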
Defining and documenting those tier levels is a challenge again - because you need to see through how databases and systems relate to each other. In my experience, documentation in IT is often non-existent or very thin. While proper documentation is so important, databases especially are often ignored. They seem to appear out of nowhere and grow wild everywhere.. like a disease. Eventually they are forgotten and left behind. I dearly hope you don't have too many of them, though I can almost guarantee you will have some of those left-behind databases.
But even all of this will only get you so far - of course you can monitor the SQL instance statistics and performance, the hosting server performance with CPU, RAM and IOPS, and the infamous free space on the database volume, log-file volume and backup volumes as well as the SQL tempdb volume (oh yes - that is actually a topic of its own).
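If you want the free space numbers straight from SQL Server instead of (or in addition to) the usual PRTG disk sensors, a small sketch like the following works - the instance name is a placeholder and the query simply lists every volume that hosts a data or log file of the instance:

Import-Module SqlServer

$query = @'
SELECT DISTINCT
    vs.volume_mount_point,
    CAST(vs.available_bytes / 1024.0 / 1024 / 1024 AS DECIMAL(10,1)) AS free_gb,
    CAST(vs.total_bytes     / 1024.0 / 1024 / 1024 AS DECIMAL(10,1)) AS total_gb
FROM sys.master_files AS mf
CROSS APPLY sys.dm_os_volume_stats(mf.database_id, mf.file_id) AS vs;
'@

Invoke-Sqlcmd -ServerInstance 'SQLSERVER01\INSTANCE01' -Query $query |
    Format-Table -AutoSize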
You still don't get any insight into the database itself. SQL literally won't reveal whether a database is actually used - unless you catch a transaction in the Activity Monitor. And the real insights from a database actually come from the data it holds. You either need a developer to define which values you can and possibly should monitor and to provide you the SQL scripts to do so - like new entries in tables, changed entries, total volume of invoices per day/week/month, total revenue, total products manufactured per hour etc... or you are lucky enough to understand a) the business processes related to the database, b) the software and database structure and c) the big picture and business need.
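Just to illustrate what such a developer-provided check could look like as a sensor - the database, table and column names below (ERP, dbo.Invoices, CreatedAt) are completely made up; the real query has to come from whoever knows the application:

Import-Module SqlServer

# Hypothetical example: count today's new invoices and expose them as a PRTG channel
$query = @'
SELECT COUNT(*) AS InvoicesToday
FROM dbo.Invoices
WHERE CreatedAt >= CAST(GETDATE() AS date);
'@

$result = Invoke-Sqlcmd -ServerInstance 'SQLSERVER01\INSTANCE01' -Database 'ERP' -Query $query

@"
<prtg>
  <result><channel>Invoices created today</channel><value>$($result.InvoicesToday)</value></result>
</prtg>
"@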
Going that deep into databases can be a pretty big deal and reveals the real value of PRTG - it is quite surprising to some people in upper management to see such graphs and they tend to love it, because it helps them see the big picture live, instead of waiting for the fiscal year report that is ready 3 months after the fiscal year ended... lol...
Okay - I did go too far - in any case I hope the SQL script I pointed out helps you - my point is - there is a lot you can monitor here - a simple "how many databases are on my SQL instance" is in my opinion not really monitoring them - I do it to monitor backups and to have indicators of what is going on. Any database that is important to the business I monitor in additional detail - looking at user transaction statistics etc. after talking with the business process owners. This way I actually know that low-priority SQL instances don't sprout new databases unnoticed and that important databases are actually in use and available as expected.
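One rough "is this database actually used" indicator - again just a sketch of mine with a placeholder instance name, not a complete usage analysis - is counting user sessions per database:

Import-Module SqlServer

$query = @'
SELECT DB_NAME(s.database_id) AS database_name,
       COUNT(*)               AS session_count
FROM sys.dm_exec_sessions AS s
WHERE s.is_user_process = 1
GROUP BY DB_NAME(s.database_id);
'@

Invoke-Sqlcmd -ServerInstance 'SQLSERVER01\INSTANCE01' -Query $query |
    Format-Table -AutoSize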
If you have questions or ideas - keep them coming - I am looking forward to discussing this further - I just see it as a more complex situation than a single PowerShell script would be able to handle - but I might have the wrong approach to it! I just thought I'd tell you how I really see and do it - but this is not necessarily the best approach either...
Regards
Florian Rossmark
www.it-admins.com