First published: 31st May 2008
A recent article by Pedro Bustamante, a researcher at the Spanish anti-virus company Panda Security, highlights a problem with the automation of virus definition creation. Anti-virus companies have for many years been increasing their use of heuristics and automation in sample collection, handling and analysis. This has allowed them to keep up with the huge increase in the amount of malware being generated, but it also leads to an increase in false positives on programs that share some characteristics with malware. Worse, once a small number of anti-virus companies start detecting a program as malicious, the rest follow suit, perhaps without verifying that the program is, in fact, malicious. This is driven by two pressures: the need to provide the best protection to their customers by cooperating with "rival" anti-virus companies in sharing samples, and the need to always score "perfect" detection on tests performed by researchers who are also not checking exhaustively for false positives.
A case in point is the detection of game downloaders created by Fenomen Games, a company that creates and distributes games. These downloaders share many features with trojan downloaders - they are runtime-packed, and they connect to the Internet, download something, execute it and exit - so they often trigger heuristic detection. Mr. Bustamante has analysed some of them and found nothing malicious. Assuming his analysis is correct, AVIRA's description of the downloader as a trojan is wrong. Sophos' description of it as Adware or a PUA (Potentially Unwanted Application), on the other hand, is arguably correct - in a business context, few employers would want their employees downloading and playing games.
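To see why such benign downloaders get flagged, consider a minimal sketch of feature-based heuristic scoring. This is not any vendor's actual engine; the feature names, weights and threshold below are all invented for illustration.

```python
# Hypothetical feature weights - higher means "more typical of malware".
SUSPICIOUS_FEATURES = {
    "runtime_packed": 3,        # executable is packed/compressed at runtime
    "connects_to_internet": 2,  # opens an outbound network connection
    "downloads_payload": 3,     # fetches another file from the network
    "executes_payload": 3,      # launches the downloaded file
    "exits_after_launch": 1,    # terminates once the payload is running
}

THRESHOLD = 8  # scores at or above this are flagged as likely malware


def heuristic_score(observed_features):
    """Sum the weights of the suspicious features a sample exhibits."""
    return sum(SUSPICIOUS_FEATURES.get(f, 0) for f in observed_features)


# A benign game downloader exhibits the same behaviours as a trojan
# downloader, so it crosses the threshold and becomes a false positive.
game_downloader = [
    "runtime_packed", "connects_to_internet",
    "downloads_payload", "executes_payload", "exits_after_launch",
]
print(heuristic_score(game_downloader))  # 12 - flagged despite being benign
```

The weakness the article describes falls out of the design: the scorer measures only what a program does, not whether its payload is harmful, so any legitimate program built on the download-and-execute pattern looks identical to a trojan downloader.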
Test results that do not clearly specify what was tested therefore have little value - how can you be sure they reflect your requirements?