Mac Product Testing: After the (Flash) Flood

This year’s Virus Bulletin conference, the ‘must-go-to’ event for most anti-malware researchers, was light on (directly) Mac-related content. Unsurprising perhaps: there has been no recent high-profile, high-volume malware event equivalent to the great Flashback Flashflood of 2012. Instead, we’ve seen an ongoing trickle of highly targeted malware. We’ve also caught glimpses of miscellaneous malware – a little of it Mac-specific, more of it in some sense Mac-capable – in non-Mac server environments, giving us a hint that Something Is Going On, but ‘what it is ain’t exactly clear’.

In fact, while there was a great deal of discussion relating to mobile device security at VB2013, it was almost entirely devoted to Android – also unsurprising, since this is currently the most malware-rich and anti-malware-rich mobile environment. There was, however, plenty of attention paid to the testing of security products, notably related to AMTSO (and particularly to its Real Time Threat List project, which aims to complement the long-standing but not-very-representative WildList as a resource for product testers).

The paper I presented this year with Lysa Myers – until recently working with Mac security specialists Intego, but now a colleague at ESET – kind of straddled all those areas, but was also the only paper to be primarily focused on OS X. Mac Hacking: the Way to Better Testing? isn’t really about hacking, but about the special difficulties imposed by recent versions of OS X on product testers. You might not expect product testing to attract the same controversy on Mac platforms as it does on Windows, given that the number of directly malicious programs for OS X is so low.

However, there are fewer prior tests on which to base a testing methodology, so establishing sound mainstream testing in such a different environment is not straightforward. The aspect that concerned us most is the way in which Apple’s work on direct detection of malware within the OS has had a major impact on testing methodology. We’re not referring to such long-standing internal countermeasures as ASLR and sandboxing. Implementation of this kind of generic protection on OS X and Windows didn’t prove to be the death of malware (or anti-malware), despite some of the optimistic predictions of the operating system vendors, but it has certainly raised the bar and made life harder for cybercriminals. Its impact on testing, however, isn’t conspicuous – except insofar as how best to accommodate different OS update and patch levels is currently a topic of tester/vendor discussion and negotiation.

In the paper, we did devote some discussion to Gatekeeper, which provides a different kind of generic protection: options to restrict the encroachment of unsigned code. While there are scenarios in which it might slightly hamper batch testing, for example, this isn’t a major concern right now, in our perception.
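That said, a tester still needs to know what Gatekeeper will do on each test machine before running samples, so that its verdicts aren’t confused with those of the product under test. The sketch below is one way to capture that; it assumes OS X 10.8 or later with the standard spctl utility, and the sample path is purely illustrative rather than anything taken from our paper.

    # Minimal sketch: record Gatekeeper's configuration and its verdict on
    # a sample bundle before a batch run. Assumes OS X 10.8+ with the
    # standard spctl utility; the sample path is purely illustrative.
    import subprocess

    def gatekeeper_status():
        """Report whether Gatekeeper assessments are enabled on this machine."""
        result = subprocess.run(["spctl", "--status"],
                                capture_output=True, text=True)
        return result.stdout.strip()  # e.g. "assessments enabled"

    def assess(path):
        """Ask Gatekeeper whether it would allow this bundle to execute."""
        result = subprocess.run(["spctl", "--assess", "--type", "execute",
                                 "--verbose", path],
                                capture_output=True, text=True)
        # spctl reports its verdict ("accepted"/"rejected") on stderr and
        # signals rejection through a non-zero exit code
        return result.returncode == 0, result.stderr.strip()

    if __name__ == "__main__":
        print(gatekeeper_status())
        allowed, verdict = assess("/Users/tester/Samples/Example.app")
        print(verdict)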

However, Apple’s intensive work on enhancing OS X security with internal signature detection of known malware has driven testers back towards the style of static testing from which (mainstream) Windows testing has – to a large extent – moved on. While we’re concerned that the degree of protection afforded by XProtect.plist signature detection is often overestimated by users – and indeed by some testers outside the mainstream, but I’ll get back to that – we’re more concerned that the close binding of signature detection into the OS hampers detection testing. How do you evaluate a product’s detection capabilities accurately when the OS blocks the execution of malcode before the anti-virus product kicks in? Does this simply mean that you don’t need AV at all?

A comparison between the ways in which Microsoft and Apple each approach known-malware detection within the operating system is instructive. Microsoft’s MSE/Windows Defender is a fairly conventional anti-malware program, even if its performance is declining by comparison with for-fee security products, to the point where Microsoft seems somewhat ambivalent about its usefulness to the end user. Apple’s XProtect.plist, however, is considerably more limited. While Apple’s increased behind-the-scenes cooperation with security companies has enabled it to improve its detection volumes and speed of update, XProtect.plist isn’t a full-blown, conventional anti-malware package. It doesn’t cover the whole range of malware, and its detection is still old-school, static and signature-based, without heuristic capabilities – reactive rather than proactive technology. So the fear remains that Mac users, always inclined to overestimate the effectiveness of integral OS security measures, will assume that it has the same capabilities as commercial software, or better.
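One practical consequence for testers is the need to know exactly which families XProtect already recognizes, so that OS-level blocking is neither credited to nor held against the product under test. The following is a minimal sketch of how that list might be enumerated, assuming the file location and layout used by the OS X releases current at the time of writing – a property-list array of signature dictionaries, each carrying a human-readable Description key. Neither the path nor the layout is guaranteed to survive future updates.

    # Minimal sketch: list the known-malware signatures XProtect carries,
    # so OS-level detections can be controlled for in a product test.
    # Assumes the path and layout used by OS X at the time of writing:
    # a plist array of signature dicts, each with a "Description" key.
    import plistlib

    XPROTECT = ("/System/Library/CoreServices/CoreTypes.bundle/"
                "Contents/Resources/XProtect.plist")

    def xprotect_signatures(path=XPROTECT):
        with open(path, "rb") as f:
            entries = plistlib.load(f)  # handles XML and binary plists
        return [entry.get("Description", "<unnamed>") for entry in entries]

    if __name__ == "__main__":
        names = xprotect_signatures()
        print("XProtect knows %d families:" % len(names))
        for name in sorted(names):
            print("  " + name)

Diffing that list against a test’s sample set is a cheap way to see, before any execution, which detections are likely to come from the OS rather than from the product being evaluated.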

As both Macs and Mac malware increase in prevalence, the importance of testing the software intended to supplement the internal security of OS X increases too. How, then, can testers make their testing more similar to real-world scenarios? And is it possible to make a test less realistic in terms of update and patch levels, yet more fair and accurate? We focused on the testing scenarios that are unique to Macs and OS X, and also looked at some of the implications for testing on mobile platforms. We’ll be running a series of blogs based on the presentation on the ESET blog site in the near future, but in the meantime, you can find the full abstract and the paper itself here.
