First published: 24th December 2013
Allan Dyer
In October, cryptography expert Bruce Schneier and 24 others challenged the anti-virus industry to come clean about their involvement in government surveillance, which makes the recent revelation by Edward Snowden that the NSA paid cryptography company RSA US$10 million to make a weak algorithm the default in its products somewhat ironic. Perhaps the cryptography industry should be asking about its own ethics instead?
The questions that the open letter asked were:
- Have you ever detected the use of software by any government (or state actor) for the purpose of surveillance?
- Have you ever been approached with a request by a government, requesting that the presence of specific software is not detected, or if detected, not notified to the user of your software? And if so, could you provide information on the legal basis of this request, the specific kind of software you were supposed to allow and the period of time which you were supposed to allow this use?
- Have you ever granted such a request? If so, could you provide the same information as in the point mentioned above and the considerations which led to the decision to comply with the request from the government?
- Could you clarify how you would respond to such a request in the future?
I did not respond to the letter in October because it was specifically directed at anti-virus developers, and while Yui Kee sells and supports anti-virus software, we do not develop it in-house, so my answers would not have been useful. Now, however, I would like to discuss the different considerations affecting cryptography and anti-virus developers. I think cryptography development is far more susceptible to government manipulation than anti-virus development.
We have had this discussion before
In late 2001, it was revealed that the FBI had developed keystroke-logging software called Magic Lantern, and anti-virus companies were asked whether they could, or should, detect it. Marc Maiffret of eEye responded, "Our customers are paying us for a service, to protect them from all forms of malicious code. It is not up to us to do law enforcement's job for them so we do not, and will not, make any exceptions for law enforcement malware or other tools."
That response is very similar to RSA's denial of the current allegations on 22 December: "we have never entered into any contract or engaged in any project with the intention of weakening RSA’s products, or introducing potential ‘backdoors’ into our products for anyone’s use". Both essentially say, "we don't do that, we're the Good Guys", which may be true, but do you believe them?
Graham Cluley of Sophos had a much better response: "We have no way of knowing if it was written by the FBI, and even if we did, we wouldn’t know whether it was being used by the FBI or if it had been commandeered by a third party". This gets to the heart of the matter: if the product detects the "Government Approved Malware", it does not know who is controlling it. Criminals would race to subvert it, or to develop something that looks close enough to be ignored.
What has changed since Magic Lantern?
A lot. In 2001, we were near the beginning of malware for criminal gain: there had been the largely unsuccessful AIDS trojan in 1989, and the term "phishing" had been coined in 1995, but the main growth came after 2001. The number of malware types has exploded, from about 50,000 in 2000 to millions today. The volume of samples that developers receive is staggering: hundreds of thousands per day.
This has forced malware analysis to become a highly automated team effort. Many anti-virus developers have established analysis labs in multiple jurisdictions to "follow the sun": Microsoft has a lab in Ireland; Kaspersky has researchers in Romania, Germany and Russia; Sophos has labs in Australia, Hungary, England, and Canada; F-Secure has labs in Finland and Malaysia.
The difference between subverting cryptography and subverting anti-virus
To successfully subvert the market-leading anti-virus product, let's call it X, a government agency simply has to persuade the developers not to detect the government malware. A modern product uses multiple methods to examine software: virus-specific scanning (commonly called signature scanning, though that is a poor name for it), heuristics, a sandbox with behaviour analysis, and perhaps a host intrusion prevention system (HIPS). These are updated multiple times daily. Avoiding the virus-specific scanning is relatively easy: if the malware is unknown, it is not detected. However, any update might bring a new rule or behaviour that picks out the malware as suspicious. To stay undetected, the malware needs to be whitelisted, so the government agency will have to give the anti-virus developer a sample to be stored in the database of "known good" software. Any researcher in the company will then be able to access the government malware, and see the reason (or lie) given for adding it to the database. What are the chances that one of them has a different agenda from the government agency? Benjamin Franklin said, "Three can keep a secret, if two of them are dead".
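To make the whitelisting problem concrete, here is a minimal Python sketch of the layered scanning described above. All the names and the toy heuristic are hypothetical, not any vendor's actual engine; the point is only where the whitelist has to sit in the pipeline, and who can see what is in it.

```python
import hashlib

# Hypothetical whitelist of "known good" software, keyed by file hash.
# In the scenario above, the government sample would have to live here,
# visible to any researcher with access to the database.
KNOWN_GOOD_HASHES = {
    "d2d2a4...": "Vendor-signed installer",  # illustrative entry only
}

def sha256(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

def matches_signatures(data: bytes) -> bool:
    """Virus-specific scanning: only finds malware already in the database."""
    return False  # unknown government malware passes this layer untouched

def looks_suspicious(data: bytes) -> bool:
    """Heuristics / sandbox behaviour analysis: updated many times a day,
    so any new rule might suddenly flag the 'approved' malware."""
    return b"keylog" in data  # toy stand-in for a real heuristic

def scan(data: bytes) -> str:
    # The whitelist check must come first; otherwise tomorrow's heuristic
    # update would expose the government sample anyway.
    if sha256(data) in KNOWN_GOOD_HASHES:
        return "clean (whitelisted)"
    if matches_signatures(data):
        return "malicious (known)"
    if looks_suspicious(data):
        return "suspicious (heuristic)"
    return "clean"
```

The design forces the agency's hand: only a whitelist entry survives the daily update cycle, and a whitelist entry means handing over a sample.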
But it gets worse for the government agency, because anti-virus companies share malware samples. The developers recognise that this improves the protection they can give their customers: the competition is in detection, not collection. If any other developer obtains a sample of the government malware, they will share it like any other sample, and the chances of someone realising that X is misbehaving rise. When someone reverse-engineers X and finds that obviously malicious code has been whitelisted, trust in X plummets: it is no longer the market leader, and conspiracy theorists have a smoking gun with government fingerprints.
On the other hand, cryptography software is developed by a relatively small team, probably in one location, and not updated daily. It can be subverted in quite a subtle way, perhaps by a single key person suggesting a default algorithm that is generally thought to be good, but which the government agency knows has a weakness. That key person might not even realise they have acted to weaken their product. If the weakness is discovered later, the developer has complete deniability, unless there is a whistle-blower inside the government agency. This scenario may sound familiar.
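As a toy illustration of how small such a change is, consider this sketch. The names and API are entirely hypothetical (this is not RSA's actual code), and both generators here are stand-ins; the point is that the whole subversion can be a single plausible-looking default:

```python
import secrets

def ctr_drbg(nbytes: int) -> bytes:
    return secrets.token_bytes(nbytes)  # stand-in for a sound DRBG

def dual_ec(nbytes: int) -> bytes:
    return secrets.token_bytes(nbytes)  # stand-in; imagine a DRBG with a
                                        # weakness known only to the agency

GENERATORS = {"ctr_drbg": ctr_drbg, "dual_ec": dual_ec}

# The entire subversion is this one line: a defensible-sounding default,
# perhaps suggested by one well-placed person.
DEFAULT_GENERATOR = "dual_ec"

def random_bytes(nbytes: int, generator: str = DEFAULT_GENERATOR) -> bytes:
    """Every caller that doesn't explicitly choose inherits the default."""
    return GENERATORS[generator](nbytes)
```

Nothing here would look malicious in a code review; the weakness lives in the mathematics of the chosen algorithm, not in the source, which is exactly why the developer retains deniability.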
The Stuxnet Lesson
There is no such thing as perfect anti-virus. We have known this since Fred Cohen published his mathematical proof that detecting all viruses is undecidable. Knowing this, anti-virus developers strive to provide the best protection in a real-world environment. Stuxnet reconfirmed the non-perfection of anti-virus, but it also provided a blueprint for using malware to penetrate any system: zero-day exploits for breaking in, careful testing, and very specific targeting. Done right, it is possible to escape the notice of anti-virus developers for years. Stuxnet was only noticed when it spread beyond its intended targets.
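The core of Cohen's argument is a diagonalization: assume a perfect detector exists, then construct a program that consults it and does the opposite. A minimal Python sketch of the contradiction (an informal illustration, not Cohen's formal Turing-machine proof; all names are hypothetical):

```python
def spread():
    pass  # placeholder for self-replication; never implemented here

def is_virus(program) -> bool:
    """The assumed perfect detector: True iff 'program' would spread."""
    raise NotImplementedError  # Cohen's proof shows no such oracle can exist

def contrary():
    # A program constructed to contradict the oracle's verdict on itself.
    if is_virus(contrary):
        pass        # judged a virus -> it does nothing harmful
    else:
        spread()    # judged clean -> it spreads

# Whichever answer is_virus(contrary) gives, it is wrong; hence a perfect,
# general virus detector cannot exist, only real-world approximations.
```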
The lesson for the government agency is obvious: don't tell the anti-virus developers, just make the stealthiest malware you can and limit its spread to your particular targets. If it eventually gets detected, deploy the new malware you've been developing.
My Answers
For completeness, even though the questions were intended for anti-virus developers and are not strictly applicable to me, my answers to the four questions in the open letter would be: 1. No; 2. No; 3. No; 4. I would say, "Have you any idea how dumb that is?" and point them to this article.
I was not paid to write this by a government agency, but I would say that, wouldn't I?