
Higher, Faster, Stronger?

No, not the Olympics, but the effects of the latest malware. Are we really seeing higher numbers, spreading faster and doing stronger damage?

Some commentators and newspapers are already saying that 2004 will be "The Year of the Supervirus", which leaves me wondering: what will we call 2005? The Year of the Ultravirus? What will we do when we run out of superlatives?

To separate the facts from the hyperbole, we have to take a closer look. The malware that triggered these claims was W32/Mydoom.A, an email worm that poses as an email error message and launches a timed DoS attack on www.sco.com. The 'B' variant also attacks www.microsoft.com. Both SCO and Microsoft have announced large rewards for information leading to the conviction of Mydoom's author. One anti-virus vendor initially estimated that over one million computers were infected, although they did not say how the figure was derived, and their later releases merely said it "is now very widespread". They did still claim that it had surpassed Sobig to become the "fastest spreading email worm in history". At least one newspaper report unfortunately dropped the word "email" from that claim - Mydoom certainly did not spread faster than Slammer!

Other statistics have a firmer base. A well-known provider of email anti-virus services reports the number of infected messages they have stopped, and those figures are certainly higher for Mydoom than for previous email worms. However, the company has also enlarged its customer base, so they are handling more email than ever before. The breakdown of the figures by country reflects their customer base: top is the USA, with the UK second. This is not a criticism of the service provider; it is just an unavoidable limitation in their data collection if we are trying to assess the global effects of malware.

One of the anti-virus software developers also has a method of collecting statistics: their world virus-tracking centre uses data from their free online virus scanner. Their statistics for Mydoom.A reveal that about 1.15 million computers have been infected since 26th January. They list the top ten countries for infected computers; right at the top, unsurprisingly, is the USA with 333,000 infections. Very surprisingly, number two is Tajikistan with 190,000 infected computers. I am unsure whether to congratulate this central Asian republic, with a population of 6.8 million, on its rapid computing development (the CIA World Factbook reports it had just 5,000 Internet users in 2002), or to commiserate that this development has led to such an overwhelming virus problem.

So the available statistics are either slightly suspect because of biased sampling, or totally suspect because of unbelievable results. How do the risk assessments of different companies compare? The ratings included "High-Outbreak", "Category 3 - Moderate" (later raised to "Category 4 - Severe") and "Medium Risk" (later raised to "High Risk"). Why did some companies change their rating? Were they carefully reassessing the risk in the light of their latest data, or reacting to competitors' announcements?
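As an aside on just how unbelievable those results are, the short sketch below, using only the figures quoted above and nothing else, checks the Tajikistan number:

```
# Back-of-the-envelope check of the Tajikistan figure, using only the numbers
# quoted above (the online-scanner statistics and the CIA World Factbook).
reported_infections = 190_000   # infected computers attributed to Tajikistan
internet_users_2002 = 5_000     # CIA World Factbook estimate for 2002
population = 6_800_000          # population of Tajikistan

print(f"Infections per reported Internet user: {reported_infections / internet_users_2002:.0f}")
print(f"Infections per head of population: {reported_infections / population:.3f}")
```

Thirty-eight infected computers for every Internet user the country was reported to have speaks for itself.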

How did the risk assessments compare with the experience of large organisations? In an unscientific sample of one, I talked over dinner to the information security staff of an organisation with 3,000 users. We compared Mydoom.A with Loveletter. During the Loveletter incident, they suffered an internal outbreak and each email user received between 20 and 27 infected emails. For Mydoom.A, they blocked the messages at their gateway, stopping about 7 infected emails per user. The conclusion for that organisation is that Loveletter was much worse; this time they were better prepared and did not even let Mydoom.A in.

I think the experience was the same in many other companies: Mydoom was just another email worm, and easy to block. The other side of this picture, however, is that there were probably far more infected messages arriving from the Internet. Perhaps this can be put down to a larger number of infected home users and SMEs with fast broadband connections. At least one anti-virus developer recognises this difference between home and corporate users: their "Newly Discovered Threats" webpage has separate columns for "Home Risk" and "Corporate Risk". At present, though, there is no difference between the columns, which reinforces the perception that risk ratings are assigned subjectively, probably by a virus analyst in a hurry to work on the next sample.

The arrival of W32/Bagle.Q is interesting because it presents new difficulties in threat assessment. Bagle.Q is an email worm that avoids carrying an executable attachment: instead, it uses a known vulnerability in common email clients to download and execute a file from a website. The simple policy of blocking all executable attachments, which was so effective against email worms from Loveletter to Mydoom, does not stop Bagle.Q. However, Bagle.Q will have no effect on a site that has kept its patches up to date, or that is not using the most common email client. The threat to a site that has relied on blocking all attachments for so long that it has become complacent may therefore be higher than the threat to a site that is generally unprepared and depends on a panic reaction to new outbreaks. Thus, the threat that a particular variant of malware poses to a particular site depends heavily on that site's security policy and practices.
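To illustrate why the old policy fails here, the sketch below shows the kind of extension-based attachment filter discussed above. It is a hypothetical illustration, not any vendor's actual gateway code, and the extension list is only an example.

```
# Illustrative sketch of an extension-based attachment filter; the extension
# list and message handling are hypothetical, not a real gateway product.
from email.message import EmailMessage

BLOCKED_EXTENSIONS = {".exe", ".scr", ".pif", ".com", ".bat", ".vbs", ".zip"}

def blocks_message(msg: EmailMessage) -> bool:
    """Return True if any attachment carries a blocked executable extension."""
    for part in msg.iter_attachments():
        filename = (part.get_filename() or "").lower()
        if any(filename.endswith(ext) for ext in BLOCKED_EXTENSIONS):
            return True
    return False

# A Mydoom-style message carries an executable attachment, so it is caught...
mydoom_style = EmailMessage()
mydoom_style.set_content("Mail transaction failed. Partial message is available.")
mydoom_style.add_attachment(b"MZ...", maintype="application",
                            subtype="octet-stream", filename="message.scr")
print(blocks_message(mydoom_style))   # True

# ...but a Bagle.Q-style message has no attachment at all: it relies on an
# unpatched email client fetching and running code from a website, so the
# filter passes it straight through.
bagle_style = EmailMessage()
bagle_style.set_content("<html><body>...</body></html>", subtype="html")
print(blocks_message(bagle_style))    # False
```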

These trends are not good news for anyone wanting to make rational management decisions about information security. We know that the problem of malware in general, and malware in email in particular, is constantly developing, but we have no reliable statistics with which to evaluate its growth. What statistics we have are biased, inaccurate or both. Researchers and developers are assigning risk levels to malware but, even where the criteria are described, the process is subjective, and the actual situation at any site will be heavily influenced by local factors.

We need better statistics, with a defined, transparent relationship to policies and implementation. The data collection must seek to eliminate sources of bias, which rules out the security vendors themselves; CERTs are probably the independent bodies best placed to take on the task. However, vendors will need to be involved in order to automate the reporting in a standardised form: the fact that your gateway automatically and silently blocked X thousand copies of the latest worm is a success for your policies and implementation, and it should be recorded. Equally, it indicates a failure of some sort at other sites, but it would be an unnecessary burden on administrators to set up such reporting manually at each site. There will also need to be checks and balances to ensure that the data is accurate - malware might start self-reporting to get itself the "Worst Ever" title.
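To make the proposal concrete, here is a minimal sketch of what such a standardised, automatically generated gateway report might contain. The schema and field names are my own assumptions, not an existing CERT or vendor reporting format, and the figures simply echo the 3,000-user organisation described earlier.

```
# Hypothetical sketch of a standardised, automatically generated gateway
# report; the schema and field names are assumptions, not a real format.
import json
from dataclasses import dataclass, asdict

@dataclass
class GatewayBlockReport:
    reporting_site: str        # anonymised site identifier
    report_date: str           # ISO date for the reporting period
    malware_name: str          # e.g. "W32/Mydoom.A"; common naming would have to be agreed
    messages_blocked: int      # copies stopped at the gateway, automatically and silently
    mailboxes_protected: int   # size of the user population behind the gateway

# Figures echo the 3,000-user organisation described earlier
# (about 7 blocked messages per user).
report = GatewayBlockReport(
    reporting_site="site-0042",
    report_date="2004-01-27",
    malware_name="W32/Mydoom.A",
    messages_blocked=21_000,
    mailboxes_protected=3_000,
)

# The gateway vendor would generate and submit this record automatically;
# here we simply serialise it for a CERT collection point.
print(json.dumps(asdict(report), indent=2))
```

The essential point is that the vendor's software generates and submits the record; the administrator's only involvement is to permit it.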

With co-operation - CERTs defining what and how to report, vendors making the reporting easy, and administrators permitting the reports to be made - we might be able to answer significant questions like "what is a Supervirus, and which year did they own?" We might even be able to make an accurate assessment of how well our information security systems are working.