Mac AV Testing: How Useful Is It?


I recently commented (on an independent AV testing-related blog) on a blog article from Intego in which Lysa Myers discussed not only the infamous Imperva pseudo-test, but also a test report from Thomas' Tech Corner.
 
On January 28, Thomas Reed returned to the fray with a further round of security product testing: Mac anti-virus testing, part 2. The good news is that Reed has responded to suggestions from the AV industry as to how his testing could be improved.
 
  • The sample set has been increased to 128 samples – not extensive, but an appreciable increase – and there has been some interaction over specific samples with the security companies whose products were tested.
  • As before, he's gone out of his way to document his methodology.
  • Anticipating criticism of the use of 'extinct' malware, he's tabulated separate % detection columns for 'extinct' and 'active' malware.
  • He makes it clear that the test is based purely on manual scanning: in other words, it's neither 'real world' on-access scanning nor whole product testing – there's no attempt to comment on such issues as 'feature sets, performance and stability'. And that's fine in principle: I'm always happy when a tester is upfront about the limitations of a test.
 
However, some issues remain problematic:
 
  • The 20 products tested are, as before, something of an uneasy mixture of different classes of security software.
  • Reed asserts that "information about what malware has been detected historically by an anti-virus engine is important for predicting future accuracy". That doesn't hold water unless you can be sure that 'extinct' malware is not removed from a product's definitions database. You can certainly argue that it shouldn't be, but that doesn't mean it never is. This also weakens his contention that the separate columns provide a guide as to which programs are improving or declining in effectiveness over time.
  • Reed also suggests that on-access testing is impractical because it would be "exceedingly time consuming with only a few samples": I assume that means 'even with only a few samples' – that's true, but it does sound rather like a plea to allow weak testing because adequate testing takes too long. Admittedly, that's the thinking behind most magazine reviews, compounded by concern about the expense of time-consuming methodology.
  • He goes on to suggest that on-access testing is meaningless "since Mac OS X is currently able to block all known malware through a variety of methods". I'm not sure that's true, even though Apple does do a pretty good job of responding to known threats. However, even if we gloss over the point for the sake of argument, that simply says to me that static testing with known, verified samples is pointless because it's only the unknown samples that exercise AV software.

Reed does say that "Testing with static samples may be less informative, but it does give valuable information about the completeness of each engine’s virus definitions database". But that again assumes that static, on-demand scanning picks up everything that on-access scanning does. That unsafe assumption is one of the problems with pseudo-testing using multi-scanner sites like VirusTotal.
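For what it's worth, this is roughly what that sort of pseudo-test boils down to: looking up each engine's stored, static verdict for a hash the service has already scanned. Here is a minimal sketch in Python, assuming a VirusTotal API v3 key in a VT_API_KEY environment variable (the function name and the placeholder hash are mine, purely for illustration):

    # Minimal sketch: look up each engine's static verdict for a known sample hash
    # via the VirusTotal v3 'files' endpoint. Assumes a VT_API_KEY environment
    # variable and a sample hash you are entitled to query.
    import os
    import requests

    def static_verdicts(sha256: str) -> dict:
        resp = requests.get(
            f"https://www.virustotal.com/api/v3/files/{sha256}",
            headers={"x-apikey": os.environ["VT_API_KEY"]},
            timeout=30,
        )
        resp.raise_for_status()
        # 'last_analysis_results' maps each engine name to its stored static result.
        results = resp.json()["data"]["attributes"]["last_analysis_results"]
        return {engine: r["category"] for engine, r in results.items()}

    if __name__ == "__main__":
        # Placeholder hash, illustrative only; substitute a real SHA-256.
        for engine, category in static_verdicts("0" * 64).items():
            print(f"{engine}: {category}")

Nothing in that kind of result tells you how the same product's on-access, behavioural or reputation layers would have handled the sample arriving on a real machine, which is exactly the gap described above.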
