I don't know if this is best practice, but I personally try to use only SQL queries where returning any rows means one thing (good or bad) and returning no rows means the opposite.
For an Oracle DB, it might be a query like this:
select returncode, os_username, username, terminal, action_name, userhost,
       to_char(timestamp, 'dd-mon hh24:mi') timestamp
from   sys.dba_audit_trail
where  timestamp > trunc(sysdate)
and    returncode <> 0
order  by timestamp desc
The query above simply returns today's unsuccessful login attempts, with the latest row shown in the SiteScope dashboard. You could narrow the query further to show unsuccessful login attempts only for certain DB users.
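For example, a variant restricted to a couple of accounts might look like the following sketch. The usernames APP_USER and BATCH_USER are placeholders for whatever accounts matter in your environment:

```sql
-- Failed actions today, but only for specific (placeholder) accounts
select returncode, os_username, username, terminal, action_name, userhost,
       to_char(timestamp, 'dd-mon hh24:mi') timestamp
from   sys.dba_audit_trail
where  timestamp > trunc(sysdate)
and    returncode <> 0
and    username in ('APP_USER', 'BATCH_USER')  -- placeholder account names
order  by timestamp desc
```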
In our environment, we have a DB Counter monitor displaying a list of users and their status, with thresholds set to error if the status is anything but "OPEN". A DB Query monitor, dependent on that DB Counter monitor, then returns the latest unsuccessful login attempt and sends out an alert.
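A minimal sketch of the status query behind that counter monitor could be something like this, assuming the monitoring account can read dba_users (the username list is again a placeholder):

```sql
-- Account status per monitored user; threshold errors on anything but 'OPEN'
select username, account_status
from   dba_users
where  username in ('APP_USER', 'BATCH_USER')  -- placeholder account names
```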
It might sound a bit complicated, but it results in users knowing when certain critical DB accounts get locked out and why they were locked out.
The majority of our DB Queries work like that as well.
The older versions of SiteScope only supported queries that returned row/line counts, so we would work with the app owners and DBAs to craft queries that returned either 0 rows or N rows and use that simple difference as our good/bad indicator.
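As one illustration of that pattern, a query like the following returns no rows while everything is healthy and one row per bad-password attempt today (returncode 1017 corresponds to ORA-01017, invalid username/password; adjust the filter to whatever condition you care about):

```sql
-- 0 rows = good, N rows = bad: today's invalid-credential attempts
select os_username, userhost,
       to_char(timestamp, 'dd-mon hh24:mi') timestamp
from   sys.dba_audit_trail
where  timestamp > trunc(sysdate)
and    returncode = 1017  -- ORA-01017: invalid username/password
```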
More recently (version 11.x, I think) we've found that we can evaluate the text returned by these queries with the typical regex matching available in SiteScope monitors, and it is working out quite well.
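As a sketch of that approach: have the query return a status string and let the monitor's match-content field decide good/bad. The account name below is a placeholder, and the exact match-content syntax may vary by SiteScope version:

```sql
-- Returns e.g. 'APP_USER:LOCKED'; a match-content pattern such as
-- /LOCKED/ in the monitor can then flag the locked state as an error
select username || ':' || account_status status_line
from   dba_users
where  username = 'APP_USER'  -- placeholder account name
```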