Cory Doctorow wrote an excellent piece about the disclosure of software security defects. The post “Telling the Truth About Defects in Technology Should Never, Ever, Ever Be Illegal. EVER.” spells out the current predicament and suggests a way forward.
This topic is contemporary, impactful, and fascinating. It spans domains such as InfoSec, free speech, censorship, and private corporate rights.
Read on as I analyze the article and offer my thoughts about security vulnerability disclosures.
Disclosure – I am a dues-paying member of the EFF. I am also currently employed by a data security company.
Having that information provides some context into what I am going to say. Now that we have that out of the way…
Telling the Truth About Defects
Cory tells us the history of the legal framework that influences defect disclosure. There is no law that says a corporate entity can control how others honestly disclose its security vulnerabilities. Yet companies fabricate a legal defense based on two existing laws:
- Computer Fraud and Abuse Act (CFAA) of 1986 – this law makes it a felony to “exceed authorized access” on someone else’s computer.
- Read about the law here.
- Digital Millennium Copyright Act (DMCA) of 1998 – the key part here is Section 1201, which bans bypassing any “technological measure” that “effectively controls access” to copyrighted works.
As the article puts it, the argument posed is “Our terms of service ban probing our system for security defects. When you login to our server for that purpose, you ‘exceed your authorization,’ and that violates the Computer Fraud and Abuse Act.”
Additionally, another argument is “We designed our products with a lock that you have to get around to discover the defects in our software. Since our software is copyrighted, that lock is an ‘access control for a copyrighted work’ and that means that your research is prohibited, and any publication you make explaining how to replicate your findings is illegal speech, because helping other people get around our locks is ‘trafficking.’”
Effectively, the CFAA and DMCA are used to silence security researchers and suppress defect disclosure. This matters all the more because so many companies could now be labeled technology companies. They do not like it when people publicly report security flaws. Nobody wants to be embarrassed.
Consequently, some companies have taken the strategy of crafting “coordinated disclosure” policies to hamper security researchers who might report a bug. These responses are rooted more in scare tactics than in legal precedent.
What Would a Sane Disclosure Policy Look Like?
Finally, the article suggests a much more reasonable disclosure policy:
We believe that conveying truthful warnings about defects in systems is always legal. Of course, we have a strong preference for you to use our disclosure system [LINK] where we promise to investigate your bugs and fix them in a timely manner. But we don’t believe we have the right to force you to use our system.
Accordingly, we promise to NEVER invoke any statutory right — for example, rights we are granted under trade secret law, anti-hacking law, or anti-circumvention law — against ANYONE who makes a truthful disclosure about a defect in one of our products or services, regardless of the manner of that disclosure.
We really do think that the best way to keep our customers safe and our products bug-free is to enter into a cooperative relationship with security researchers and that’s why our disclosure system exists and we really hope you’ll use it, but we don’t think we should have the right to force you to use it.
Prisoner’s Dilemma of Defect Disclosure
There is some game theory which, when understood, clarifies the muddy waters. This is a sort of Prisoner’s Dilemma between security researchers and software companies. Simply put, it is a logical explanation of why two rational actors refuse to cooperate even when cooperation is in their best interest. The end result is less desirable for all involved actors. There is a mathematical framework backing all of this up.
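The dilemma can be sketched with a toy payoff matrix. The numbers below are purely illustrative assumptions, not from the article: the researcher either cooperates (reports privately) or defects (publishes a zero day), and the vendor either cooperates (fixes and credits the bug) or defects (threatens legal action). Checking each outcome shows that mutual defection is the only Nash equilibrium, even though mutual cooperation pays both sides better:

```python
# Hypothetical payoffs (researcher_payoff, vendor_payoff); values are
# illustrative assumptions chosen to produce the classic dilemma shape.
PAYOFFS = {
    ("cooperate", "cooperate"): (3, 3),  # bug fixed quietly, researcher credited
    ("cooperate", "defect"):    (0, 5),  # researcher silenced, vendor saves face
    ("defect",    "cooperate"): (5, 0),  # researcher gets fame, vendor embarrassed
    ("defect",    "defect"):    (1, 1),  # public zero day plus lawsuits: everyone loses
}

def is_nash_equilibrium(r_move, v_move):
    """A pair of moves is a Nash equilibrium if neither player can
    improve their own payoff by unilaterally switching moves."""
    r_pay, v_pay = PAYOFFS[(r_move, v_move)]
    other = {"cooperate": "defect", "defect": "cooperate"}
    r_alt = PAYOFFS[(other[r_move], v_move)][0]   # researcher switches alone
    v_alt = PAYOFFS[(r_move, other[v_move])][1]   # vendor switches alone
    return r_pay >= r_alt and v_pay >= v_alt

equilibria = [moves for moves in PAYOFFS if is_nash_equilibrium(*moves)]
print(equilibria)  # only ("defect", "defect") survives
```

Both players rationally defect because defecting is the best response to either move by the other side, which is exactly why Cory argues for policies that change the payoffs rather than appeals to goodwill.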
Cyber Offense vs Defense
My opinion is that it is better to practice cyber defense than to focus on cyber offense. Governments and national security agencies may disagree, but I think they too should be held to that standard. To be fair, there is a time and a place for things. Fighting fire with fire is foolish when your house is on fire. However, if a wildfire is devastating the West, then strategic burns are a reasonable answer.
The same principle can be applied to balancing offensive and defensive postures.
Security Through Obscurity
The most primitive form of defending some knowledge is to hide it. Security through obscurity relies on secrecy to protect information instead of concrete measures. If you want to protect the integrity of your software, hiding things is the first approach most people think of. Yet it is largely ineffective.
Keeping flaws secret only hinders long-term security. An analogy: if you have a pistol in a house with young children, thinking you can simply hide the gun will not work. There need to be concrete measures such as a lock box, keeping ammunition separate from the firearm, and keeping the gun unloaded.
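The software equivalent of that analogy can be sketched in a few lines. This is an illustrative example of my own, not from the article: “hiding” a credential by encoding it is obscurity that anyone reading the code can reverse, while a salted hash compared in constant time is a concrete measure:

```python
import base64
import hashlib
import hmac
import os

# Security through obscurity: "hiding" a credential by encoding it.
# Anyone who finds this string can trivially decode it -- no key required.
obscured = base64.b64encode(b"s3cret-admin-password")
recovered = base64.b64decode(obscured)  # obscurity reversed in one call

# A concrete measure: store only a salted, slow hash of the secret.
salt = os.urandom(16)
stored_hash = hashlib.pbkdf2_hmac("sha256", b"s3cret-admin-password", salt, 100_000)

def check_password(attempt: bytes) -> bool:
    """Verify an attempt against the stored hash in constant time."""
    candidate = hashlib.pbkdf2_hmac("sha256", attempt, salt, 100_000)
    return hmac.compare_digest(candidate, stored_hash)
```

Even if an attacker steals `salt` and `stored_hash`, recovering the password requires a brute-force search; the base64 version falls to a single decode.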
Disclosing a Vulnerability Before the Patch
Was it correct for Google to publish a major Microsoft vulnerability before the patch? How much time should pass between notifying the software company and disclosing to the public? Who decides how long? Who decides who can publish? These are all good questions. In this case Google forced Microsoft’s hand, and Microsoft did not like it.
Here is the zero-day publication of a major Windows bug.
Here is Microsoft’s response:
“We believe in coordinated vulnerability disclosure, and today’s disclosure by Google puts customers at potential risk,” a Microsoft spokesperson told VentureBeat. “Windows is the only platform with a customer commitment to investigate reported security issues and proactively update impacted devices as soon as possible. We recommend customers use Windows 10 and the Microsoft Edge browser for the best protection.”
It is stunning that Microsoft recommended people use Windows 10 and the Edge browser while a major zero-day vulnerability had just been brought to light.
Some may think that seven days of advance notice is too short, but consider that when companies suffer a data breach, they have few qualms about waiting months to tell the public.
Whistle-blowers and Censorship
Are security researchers going to be categorized as whistle-blowers? If disclosing software defects is made illegal, that possibility will exist. This form of censorship masquerades as effective security and a right to control information. However, it is in fact security through obscurity, and information wants to be free.
To make the software we trust on a daily basis secure, we need to be able to disclose security vulnerabilities instead of hoarding zero days.
Thanks for reading!
If you liked this post then you might also like: DRM Security Concerns | Who does the IoT Obey?