Full Disclosure

The debate on full disclosure has been heating up recently, with Scott Culp of Microsoft accusing the security community of "Information Anarchy". Full disclosure is the principle that, when a researcher finds a security hole, he or she should promptly publicise it widely. Supporters of full disclosure claim it improves the security of products because vendors are forced to fix the holes, while critics point out that the bad guys have an easy opportunity to attack before a fix is available.

Scott Culp is clearly against full disclosure, claiming, "The relationship between information anarchy and the recent spate of worms is undeniable" in an article published on the Microsoft website. He points to close similarities between recent worms and published exploits as evidence that the worms' authors benefited from the disclosure. However, some in the security community have pointed out that, at least in the case of CodeRed, the worm was not based on the disclosure by eEye, but on an earlier worm that did not become widespread.

Unfortunately, commercial concerns also affect the viewpoints. A small security company that discovers a flaw in a major product can get massive publicity by being first to publicise it in the popular media. Conversely, the vendor of the major product suffers negative publicity; it would be better for them if the flaw remained unknown until the product became obsolete. Failing this, they can minimise the effect by announcing their fix at the same time as the flaw. The reality of the bad publicity effect can be seen in Gartner's advice to dump IIS, although it remains to be seen whether significant numbers of IIS servers will be replaced.

However, the real concern is the threat to the users, and the key here is that the vulnerability existed before it was discovered. Indeed, the company or person who first publicises the flaw might not have been the first to discover it - an unknown number of bad guys might already be using it. A smart bad guy who discovered an unpublicised flaw could maximise his gain from it by quietly using it for his nefarious purposes, and then releasing an automated attack tool as soon as it becomes publicised. It would then appear that the announcement resulted in the creation of the tool, discouraging future disclosures.

A security vulnerability opens a window of opportunity with three distinct phases. The vulnerability exists as soon as the product is released, but the window does not open until it is discovered. If a bad guy discovers it, the first phase begins: the vulnerability is exploited without the possibility of response. The second phase begins when the vulnerability is publicised. At this stage, the defenders can take action, even though a fix is not yet available. The action will depend on the nature of the vulnerability and the defender; for example, if sufficient information is available, the defender might program their intrusion detection software to shut down the vulnerable server if the attack is detected, choosing denial of service in preference to theft of corporate secrets. The third phase starts when the fix is published: defenders can then start to eliminate the vulnerability. The third phase ends, and the window closes, when all vulnerable systems are eliminated; in some cases, this might be when everyone has stopped using the product.

Alongside these phases, but semi-independent of them, is the publication of an automated attack tool exploiting the vulnerability. Before publication of such a tool, only skilled attackers can exploit the vulnerability; after publication, any script-kiddie can attack. The publication of the tool might mark the beginning of phase two (it could be the first public announcement of the vulnerability), or a tool might never be created.
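The phase model above can be sketched as a small timeline: each phase boundary is an event date, and the current phase is whichever boundary was most recently crossed. This is only an illustrative sketch; the class and field names below are my own inventions, not any standard vulnerability-tracking schema.

```python
from dataclasses import dataclass
from datetime import date
from typing import Optional

@dataclass
class VulnerabilityWindow:
    """Illustrative model of the window of opportunity described above.

    Each field records the date an event occurred, or None if it has not
    happened (yet). The attack tool is a semi-independent event that can
    fall anywhere relative to the phase boundaries.
    """
    discovered_by_attacker: Optional[date] = None  # opens phase one
    publicised: Optional[date] = None              # opens phase two
    fix_published: Optional[date] = None           # opens phase three
    all_systems_fixed: Optional[date] = None       # window closes
    attack_tool_released: Optional[date] = None    # semi-independent event

    def phase_on(self, day: date) -> str:
        """Return the phase a given day falls into, checking the latest
        boundary first so the most recent event wins."""
        if self.all_systems_fixed and day >= self.all_systems_fixed:
            return "closed"
        if self.fix_published and day >= self.fix_published:
            return "phase three"
        if self.publicised and day >= self.publicised:
            # Once an automated tool is out, any script-kiddie can attack.
            if self.attack_tool_released and day >= self.attack_tool_released:
                return "phase two (tool available)"
            return "phase two"
        if self.discovered_by_attacker and day >= self.discovered_by_attacker:
            return "phase one"
        return "window not yet open"
```

Note that the model allows `attack_tool_released` to precede `publicised`, capturing the case where the tool itself is the first public announcement of the vulnerability.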

The times when the vulnerability can result in the most damage are in phase one, when the attackers can choose valuable targets with impunity, and in phase two after the publication of an automated attack tool (I will call this phase 2b), when the number of potential attackers skyrockets. Full disclosure advocates seek to minimise phase one, and their opponents seek to minimise phase 2b. The conflict arises because the actions taken to minimise one of these result in lengthening the other. Any description of the vulnerability will give a skilled attacker enough information to investigate it him- or herself, allowing the production of an automated attack tool. However, any delay in the publication of a description extends phase one.

Personally, I am moderately in favour of full disclosure - we cannot wait for security solutions to be handed down from on high, in accordance with vendors' internal timetables and interests. This should be combined with a responsible attitude: certainly, those who can assist in the defence (the vendor, and perhaps others, such as intrusion detection developers) should be informed in confidence immediately, with full details. A public announcement should be scheduled without regard for the vendor's preferences. Some researchers delay as little as one day before a public announcement; CERT/CC's policy is a 45-day delay, with the possibility of variation according to the nature of the vulnerability. I agree that different vulnerabilities will be best dealt with by different delays. The public announcement should contain all information that would help defend against attack, and as little detail of how to launch an attack as possible. However, if the vendor claims the attack is infeasible or unimportant, the researcher should back up the announcement with a demonstration. This could be a non-automated tool, distributed only to trusted researchers and technical journalists.

Thus, I think that, in the case of CodeRed, eEye went too far in publishing the details of forceful heap violation. Some extreme full disclosure advocates even publish the source code of viruses - this is an unacceptable release of an automated attack tool. They are also missing the point - the real vulnerability that viruses exploit is that we are using general-purpose, programmable computers; they are vulnerable because they are useful and adaptable.

On the other side, Scott Culp goes too far in demanding that researchers wait indefinitely for vendor patches. He says that Microsoft will be working to build an industry-wide consensus on this issue; I welcome that, and I hope the consensus will represent the best interests of the users, not the commercial interests of vendors or security companies.

Sentencing the Author of the ‘Anna Kournikova Worm’

On 27 September 2001, Jan de Wit (the author of VBS/VBSWG.J@mm, better known as the ‘Anna Kournikova Worm’) was sentenced to 150 hours of community service or, if he prefers, 75 days in jail. When this worm appeared, it spread rapidly worldwide and quickly reached the top of the prevalence lists of most anti-virus developers. It caused widespread disruption, so many say that the sentence is too lenient:

Graham Cluley of Sophos said, “Considering that Anna K. was one of the top five viruses of all time and was as big as Melissa, the prosecutor's request sends out all the wrong signals to the industry.”

Jason Holloway, U.K. general manager with F-Secure, said he was disappointed with the light sentence; “It may be due to the FBI’s lack of specimen charges against de Wit, but it does not send the right message to the industry.”

Opinion at the Virus Bulletin Conference generally agreed that 150 hours of community service was too lenient, but that the 18-month jail sentence that Christopher Pile was given by British courts in 1995 was too harsh.

The major problem does seem to have been with evidence of damage: the FBI was only able to list 55 incidents of infection, causing just US$166,827 worth of damage. Previously, I talked about the importance of reporting; this case reiterates the lesson of the CIH case: without reports, there is nothing to charge the criminals with if they are caught.

However, there are some positive aspects to the case. The judge rejected de Wit’s plea that he did not understand the consequences of posting the worm to a newsgroup. Additionally, de Wit’s computer and collection of viruses (reported as over 7,000) have been confiscated. Realistically, both can be replaced, but replacing the virus collection in particular will be time-consuming; hopefully, he simply will not bother. Also, the case has been dealt with relatively quickly: the worm started spreading on 12 February 2001, and we have a sentence on 27 September 2001. In contrast, Melissa started spreading on 26 March 1999, and its author, David Smith, has yet to be sentenced in the USA. Justice should be swift and accurate.