Tuesday, May 26, 2015

DMCA hearings, security research, opponents

Security research continued, opponents
 
Copyright Office: Jacqueline Charlesworth
Michelle Choe
Regan Smith
Cy Donnelly
Steve Ruhe
John Riley
Stacy Cheney (NTIA)
 
Opponents:
Christian Troncoso, BSA | The Software Alliance: we support good faith security testing. We are surrounded by the good guys, and we have an interest in working with the academic and independent security community. But any possible exemption also has the potential to be exploited by the bad guys. [Has that happened with previous exemptions?] User trust is instrumental, as is collaboration with the research community. We worry about specific authorization for researchers to make disclosures based on the researcher’s sole judgment before the provider has had the opportunity to address the problem. Authorizing zero-day disclosures may enable identity theft, financial fraud, and other serious threats. The objective must be to thwart malefactors. Congress is considering info-sharing legislation, which BSA supports. How best to create incentives? Limit liability w/o unintended consequences. The Administration is also considering policies, such as export controls on hacking tools. The concern is balance: responsibly disseminate tools while guarding against their falling into the hands of those w/bad intent. Congress enacted exemptions w/careful checks and balances to prevent ill use. Proponents argue that ambiguity = chilling effects. If proponents were seeking narrow clarifications, we wouldn’t oppose their efforts. The proposed class does much more—it is broad and lacks important safeguards.
 
Congressional intent: class 25 should be amended to permit circumvention only when the software is lawfully obtained, the researcher has made a good faith effort to obtain permission, circumvention is solely for the purpose of testing, and the info is used primarily to promote security and is maintained in a manner that doesn’t facilitate copyright infringement or violation of the CFAA. Must avoid unintended consequences. Info should first be shared w/the developer, who is in the best position to fix it. Give time to fix before the info is shared more broadly. Otherwise bad actors get a window of opportunity. Not speculative: there’s already a thriving market for security research on zero-day vulnerabilities.
 
The class should be tailored in a manner consistent w/Congressional intent, mindful of the broader cybersecurity debate. It should not inadvertently help bad actors.
 
Charlesworth: How would we do this?
 
A: you’d have to find a big chilling effect, but there’s a lot of research going on. BSA has a big interest in partnering w/the community; many members actively try to incentivize research by providing rewards to those who provide info responsibly—with enough time to issue a patch.
 
Charlesworth: how much time is this?
 
A: no set time. Every vulnerability is different.  Particularly w/enterprise software.  Complex systems.
 
Charlesworth: what percentage of members authorize research?
 
A: don’t know; trend for software companies to do that. Some of them probably work behind the scenes. Many have visible programs advertised on their websites.
 
Q: do your members have specific concerns about trade secrets?
 
A: absolutely.
 
Q: you said you would be ok with a narrow exemption. How to address that?
 
A: build in the standard that it couldn’t involve any other violation of applicable law, including violation of trade secrets.  [Why is 1201 needed if another law is violated?  Why use copyright law to enforce a different regime?]
 
Harry M. Lightsey III, General Motors LLC, with Anna Shaw, counsel for GM at Hogan Lovells (not testifying)
 
Comments are solely directed at the auto industry. Controls range from the engine to safety systems: braking, speed, steering, airbags. ECU software is protected by TPMs. If those are circumvented, it could present real and immediate concerns for the safety of the occupants, as well as for compliance w/regulatory and environmental requirements.
 
Proponents have no evidence of a chilling effect in the auto industry, which has every incentive to encourage responsible security research. We have, as we said in Class 22, relations with various independent researchers/academic institutions/industry fora. We attend Black Hat and Defcon. We engage in efforts w/DARPA. We do our part to encourage responsible security research. Our concern is that a broad exemption would harm our ability to control research and our opportunity to fix vulnerabilities before they’re widely disclosed, creating safety concerns.
 
Charlesworth: you asked about limiting exemptions to vulnerabilities caused by access controls.
 
Troncoso: those are the only past exemptions—limited to access controls that themselves create security vulnerabilities. The proposal here is very broad, applying to any type of software.
 
Charlesworth: but are you asking for that as a limit? Are you ok with a narrow exemption limited to vulnerabilities caused by access controls?
 
Troncoso: we’d be comfortable with that.
 
Charlesworth: that’s a fairly considerable limitation.
 
Troncoso: our motivation is the disclosure issue. If that can be addressed and congressional intent can be integrated, we would be comfortable w/an exemption broader than vulnerabilities specific to the access controls.
 
Charlesworth: hacking into live systems—how should we think about that issue in practical terms? There’s not a huge record of need to look at live nuclear power plants.  How should the Office be thinking about the concern about publishing research where a breach could be catastrophic?
 
Green: two issues. Should you be testing live systems? That can be dangerous. However, there are other directly applicable laws, like the CFAA, specifically designed to deal with that. I have never viewed the DMCA as specifically applicable to that case. Is it something we should be using 1201 for? Does that benefit us as a society? Clearly it does not. We know that there are a number of systems where, whether you’re accessing them in real time or as separate copies, the results can lead to finding major safety issues. The value of fixing them is very high.
 
Charlesworth: saw a news report about someone who allegedly hacked into a live operating airplane system. [Are they being charged with a criminal violation of 1201?] They may be doing it for what they perceive to be good purposes. Security researchers could make a mistake—he exposed a flaw, but isn’t that also scary? Would you be willing to limit this to non-live systems? Maybe that should be debated in Congress.
 
Green: I speak for all the researchers here when I say that story is not something we endorse. No ethical researcher should be working on live systems.
 
Reid: In addition to distinguishing that story, the vast majority of the research we’re talking about is aimed at fixing problems in a safe way.   
 
Charlesworth: but how do we limit the exemption to ethical research? There needs to be line-drawing to notify the public of what they can and can’t do. [Is that copyright law’s job, to reintroduce the entire legal code into 1201 exemptions?] We’re trying to consider potential narrowing so that people feel that the exemption would be consistent w/congressional intent and the goals of the proceeding. So are you willing to exclude live systems? She doesn’t think there’s much of a record on live systems.
 
Reid: Urge you to consider that however this gets treated in this proceeding, as Green mentioned, there are a number of other laws here. Collateral concerns about tampering are illegal under a whole bunch of laws. The question you ought to ask: is the DMCA the last line of defense for airplanes? Are we relying on © to protect airplanes? (A) we’re not, (B) if we were, that would be troubling, and (C) we are so far away from the purpose of the DMCA, which is to protect © works from © infringement. Nothing in the airplane story involves circumvention; the FBI affidavit doesn’t cite 1201. Legal and policy venues exist to address these concerns; the Office need not worry about enabling behavior that’s illegal under other laws b/c it will still be illegal. There are complicated contours to this discussion, and these discussions should happen in other venues. We’re in support of having those venues participate and apply those laws and policies. But © is not the place to do it, and you don’t need to, and 1201 doesn’t require you to. [Applause!]
 
Bellovin: We are here to avoid breaking laws. We don’t want to violate the CFAA or airplane hijacking laws. © infringement is almost never a concern unless you have a copy of the system. The guy who allegedly tried to hack the airplane in flight wasn’t copying Boeing’s software. As a pragmatic matter, if I’m testing a system for security flaws in a way that could possibly involve copying, I have to have the thing in my possession. This is not a CFAA exemption request.
 
Charlesworth: couldn’t you hack in through the internet?
 
Bellovin: you’d have to violate the CFAA first. The larger violation there is the hacking. The more probable case doesn’t involve the DMCA, but stealing source code—that is not protected by TPMs under the DMCA; it’s protected by ordinary enterprise security controls and firewalls. The DMCA was intended to protect against copyright violations, not to serve as a CFAA supplement.
 
Matwyshyn: The airplane incident’s facts are in dispute, but the security community is not rallying around him. Homicide laws are the first line of defense. Whether a TPM was circumvented is irrelevant.
 
Charlesworth: b/c of how the law is written, we have to consider these issues. [Why?  That’s not in the exemption standard—it’s noninfringing use, as Betsy Rosenblatt eloquently said.]
 
Blaze: back to the issue of disclosure—remember that repair is important, but so is warning consumers against defective products.  The Snort toy: if I were a parent, even before it’s fixed, I’d want to know. Disclosure to parents is important even at the price of embarrassment to the vendor. Give the benefit of the process not merely to the developer: users are stakeholders as well.
 
Lightsey: no evidence of chill in the auto industry. Given the dramatic consequences for safety, proponents have not met the burden of showing need for an exemption. Saying there are other laws and regulations is not sufficient in this context. We feel the DMCA is a relevant protection, and we encourage the ability to engage w/security researchers responsibly.
 
Troncoso: Stanislav explained that he reached out first to the mfgr; notwithstanding the bluster, he was ultimately able to work w/them to ensure the vulnerability was fixed. He didn’t disclose until after it was fixed. That gets to the norm that we’re seeing even among the researchers in this room. Consistent w/companies’ interests in protecting consumers. Professor Green’s initial filing: he indicates he always provides notice before disclosing vulnerabilities to the public. It’s a key issue to us, critical to public safety.

Green: I always attempt to provide notice. Sometimes it’s not possible, as when there are 1000s of websites. Sometimes you notify, and they are not able to remediate. They tell you there’s no fix or that it’ll take a year. Then you have an obligation to look at the end users/consumers, and that has to affect your calculation. Android is rarely updated by carriers. Google will make a patch, but 90% of consumers may be vulnerable a year later. You have to decide based on what’s right for consumers and not based on what’s good for software companies.
 
Stanislav: In the case of the Snort and the camera, both were reported through the helpdesk because there was no front door. It took days to convince them that there was an issue to kick upstairs. I had a ticket closed on me and had to reopen it w/the Snort. The only reason this got solved was that my company was going to disclose publicly. At that point a reporter reached out; the vendor said they’d never heard from a researcher before [i.e., it did not tell the truth]; then the CEO reached out to me on the thread we’d already been having. The internet of things comes from innovators, not companies with large legal teams that understand complex legal situations; they will fight back in an attempt to shut you up.
 
Matwyshyn: indeed, car companies like Tesla are state of the art. But unfortunately there’s a large degree of variation across car manufacturers. Some haven’t fully staffed security teams and have many openings—it would be beneficial to engage with the security community. Tesla, for example, is ISO compliant and doesn’t oppose our approach. If every car company were on the level of Tesla, we wouldn’t be concerned, but security researchers are concerned.
 
Bellovin: I’m in favor of notification, but giving the vendor the legal right to block or delay publication interacts in a bad way w/university policies. I may not accept a grant that gives the funding agency the right to block outside publication. The university sees this as a matter of academic freedom. This is mirrored in an odd place in the law: export controls. What is “export”? You can’t teach foreign nationals certain things—the law says fundamental research is ok, but what is that? One criterion: can someone else block publication? If someone else can block publication, then export controls apply, which causes very serious chilling effects of their own.
 
Blaze: we found sweeping vulnerabilities in election software. The research was authorized by customers (state gov’ts), not by the voting machine vendors. We were indemnified under state law, and there was some contractual back and forth w/the vendors that I wasn’t privy to—a grey area. One of the issues we addressed was whether to give the vendors advance notice to fix. We normally do try to give notice, but here we felt that allowing end users to remediate immediately outweighed the benefit of notifying the vendors and allowing them time to repair things that would take longer to fix than the next election. Vendors didn’t see our results until they were made public.
 
Moy: Emphasize again the importance of disclosing not only so the vulnerability can be remedied but so that consumers can make an informed choice. If a vendor can stall publication for 6 months/a year but continue to market the product in the meantime, that’s an enormous problem w/major implications for consumers.
 
Charlesworth: could some of this be addressed by a high-level communication: “there is a security problem”?
 
Moy: maybe for some, not all. There will be cases where the nature of the vulnerability is important. Consider the BMW vulnerability publicized in January—remote unlocking. The details might be important to certain consumers—e.g., that it couldn’t be exploited to unlock other people’s cars, only your own; I don’t know if that’s true, but consumers could make decisions for themselves.
 
Charlesworth: but it’s not step by step instructions. Why would an ordinary consumer need to know that?
 
Moy: ordinary consumers include people who understand how the tech works. I wouldn’t be able to exploit a vulnerability even if you handed me a detailed paper about it.  [Likewise.]
 
Charlesworth: but what about enabling a certain group of people who might not otherwise have known about it—not sophisticated ones. [So, sophisticated enough to understand the disclosure’s details, but not sophisticated enough to do it themselves. Charlesworth is suggesting that researchers publish “step by step instructions” for a hack. But I don’t think that describes most of what they do, or not in that sense. I read descriptions of Heartbleed, but that doesn’t mean they were step by step.] Why would I need to know the way in which someone can exploit the Snort?
 
Moy: Who’s going to translate the nature of the vulnerability?
 
Charlesworth: Stanislav will.  The company refuses to fix it, so he publishes an article saying this toy has a problem.  I wouldn’t then need line by line instructions in order to make a decision about possessing that toy.  Why is that so hard to concede?
 
Moy: that would be enough for some consumers, not for others.
 
Charlesworth: Why?
 
Moy: sufficient for some, but not for other, more sophisticated consumers. I’m having a difficult time imagining a disclosure requirement written so that you could disclose the vulnerability, but not in enough detail to replicate it technically.
 
Charlesworth: 1201(j): solely to promote the security of the owner/operator. Part of the policy was that you weren’t necessarily advising the world how to do this. Doing the research in a way that didn’t enable malicious actors. Congress put the test in here to deal w/the complications—whether you use the research responsibly. [Responsible use with respect to copyright, though, is a very different question than “are you providing a net benefit to the world?”]
 
Moy: Q depends also on how the company deals w/security. Is it something that could be fixed, or does it represent a major flaw?  Security experts should be able to analyze that and explain to us if necessary.
 
Charlesworth: is a high level disclosure better than none?
 
Moy: more information for consumers in the market is generally a good thing, but that doesn’t get to the reasons we want disclosure.

Stanislav: (1) At the time of the webcam research, the CEO said my research was inaccurate and misleading. I’ve presented it publicly now; when a story like this comes out and the vendor says I’m lying, I can prove it. (2) Prevention: if intermediary users (web companies etc.) don’t know the specific details of the vulnerability until the vendor patches it, they can’t mitigate it in the interim.
 
Sayler: the individual disclosure is useful for consumers who may recognize that the problem may be replicated in other devices. Replication is hugely important, and it requires public disclosure for those of us in the community who do this kind of work.
 
Many of the flaws we discover, we’re not the first to find—many are already available on the black market. Allowing disclosure will not increase the number of zero-day exploits.
 
Charlesworth: the concern is you may be educating people about the unknowns.
 
Sayler: it’s a balance: it might happen, but you are also protecting millions of people. Extraordinarily hard to codify what the proper behavior is. Thus we should rely on researchers’ good faith (and other laws). The benefit far outweighs the downsides.
 
Lightsey: to protect the record, on behalf of GM: cybersecurity is something we take very seriously. We have a senior leader for it at GM. The industry is committed to voluntary privacy principles, including a promise to maintain reasonable security, enforceable under §5 of the FTC Act. [Though as Moy says, how will you know if they’re following through?]
 
Troncoso: There’s potential for companies to decide not to fix a problem. But we do have regulators in place to handle those issues. If researchers encounter pushback from software companies unwilling to fix problems, we urge them to go to the FTC. [Right, because they have so many resources.]
 
Charlesworth: what would the FTC do?
 
Troncoso: they’ve been willing to bring enforcement actions against companies not employing sufficient security standards. Building in a disclosure requirement is critical to avoid perverse incentives to keep research hidden so it’s more valuable on black/gray markets. There’s potential for the exemption to be exploited by bad actors.
 
Stanislav: part of the value of exploits trafficked on the black market is secrecy. Publication is a way to make an existing but unknown vulnerability lose its value.
 
Blaze: There is a bright line between legitimate research and the black market: we publish our work, and we’re required to do so by the scientific method. You asked about a compromise disclosure in which we describe the existence of a vulnerability w/o describing how to exploit it. With some examples it might be possible to describe the vulnerability/remediation w/o enough detail to exploit. But for many, many others, describing the existence would make the exploit trivially easy: the difference between disclosing the exploit and disclosing who’s vulnerable is nonexistent. There’s no line to be drawn unless we want “there’s a terrible, life-threatening problem with GM cars” to be the disclosure—“this model has a brake problem” is better.
 
Charlesworth: but saying there’s a brake problem is different than line by line instructions.
 
Blaze: sometimes it is possible, but in other cases it’s not. [Perhaps we should trust the programmer/security researcher and not the person who doesn’t program here?] The vulnerability might be: if you turn the key three times, the brakes stop working. The only way to know is to try it. There is no other way to describe it. This varies across the spectrum. There is no generally applicable line meaningfully separating them.
 
Charlesworth: when you publish, sometimes you refrain from giving detailed information. [Charlesworth has a specific idea of “line by line instructions” that is not consistent w/the programmers’.]
 
Blaze: sometimes. We ask whether it’s necessary to include details. Sometimes it’s in the middle: you can disclose 90%, and a determined person could ferret out the rest. An essential property of the scientific process is to publish reproducible, testable results that others can build upon. Readers of scientific papers need to be able to verify and reproduce.
 
Matwyshyn: There’s a whole array of mitigation measures researchers regularly use—timing, detail, a bundle of best practices.
 
Charlesworth: are those written down?
 
Matwyshyn: they’re contingent on the nature of the vulnerability and its reproducibility. The ISO standards are the closest thing to a written version.
 
On the point of 0-day vulnerability markets: the researcher’s perspective is, I know of a vulnerability. Do I (1) sell it and make a quick buck, or (2) undertake the laborious and personally risky process of contacting vendors, maybe having them threaten me w/the DMCA, and working for months?
 
Charlesworth: so there’s overlap w/bad guys?
 
Matwyshyn: the US gov’t purchases zero-days regularly. But most vulnerabilities are known—a researcher will find that a product hasn’t been patched against a vulnerability that’s been known for ten years. We don’t want the DMCA to deter contacting the company.
 
On the FTC: I served as a privacy advisor there. But it is an agency with limited resources. There isn’t a formal intake mechanism for security researchers to report problems. The FTC can’t mediate DMCA threats from vendors.
 
Charlesworth: you’re suggesting that people might sell research on the black market if they don’t get the exemption.
 
Matwyshyn: The zero-day market is a very small sliver.
 
Charlesworth: how does it play into the exemption process?
 
Matwyshyn: it matters in the absence of a regulatory regime—and we don’t have one.
 
Charlesworth: well, we have 1201. You’re assuming someone has discovered a vulnerability—have they broken the law or not?
 
Matwyshyn: if they may have circumvented, we want them to report it. 
 
Charlesworth: why would they care?
 
Matwyshyn: because the act of disclosure currently exposes them to liability. We want to nudge them towards disclosure.
 
Charlesworth: does that actually happen?
 
Bellovin: an ex-NSA hacker has stated that he sold an exploit to the US gov’t. Here’s someone who’s finding and publishing vulnerabilities and has also sold one to the intelligence community.
 
I served as chief technologist at the FTC for a year. The FTC doesn’t have the resources to act as an intermediary in these cases. It does not resolve individual cases about the kinds of research people can do. As for security research: take auto hacking. One case involved vulnerabilities in the wireless tire pressure monitor. I never would’ve looked there, but once I was pointed in that direction, any competent researcher could replicate the issue within a few weeks. Asking the right question is often the very hardest part of this kind of research. Different remediation measures are indicated depending on the type of issue.
 
Reid: Underscore Bellovin’s point about remedies. It’s not just about understanding and explaining the vulnerability. Sometimes consumers can take an actual remedial action, which sometimes takes some detail. If your car has a software problem, you may want to know how to fix it. Look at how the auto industry handles other types of problems: the airbag recall—we now know every detail, including every factory the airbags came from. That is useful information. We lack that kind of useful information about how to deal with the risks of hackers hacking our cars; disclosure allows consumers to apply pressure.
 
Q: Talk about norms—is there anything in standards that could identify a security researcher v. a black hat?
 
Matwyshyn: someone who discloses flaws for security reasons and works to better systems. The ISO standards are evolving. The leads have stated that they are happy to directly consider any issues the Copyright Office panel feels should be discussed.
 
ISO is an organization that has traditionally been closed, with lots of corporate standards; she will push for openness of these standards because of the tremendous social value of an exemption.
 
Charlesworth: it’s a little hard to draft a law based on something no one can see.  [From your lips to Congress’s ears! [TPP reference]]
 
Reid: we’d be comfortable w/a limitation that makes clear it has to be for noninfringing purposes; the statute is geared for that, and it’s easy to write in.
 
Q: what about not in violation of any other laws?
 
Reid: defers to papers.
 
Matwyshyn: that’s suboptimal framing b/c many of the chilling effects involve people leveraging the DMCA to threaten with the CFAA etc.
 
Charlesworth: we will not grant an exemption that says you can violate other laws.  [I don’t think that’s what’s been asked for; see Betsy Rosenblatt again.  Shall we say “you can’t use the exemption if you’re going to commit murder”?]
 
Bellovin: one reason there’s no consensus on reporting: it’s often very hard to understand how best to disclose; these are judgment calls. More germane: there’s a fear of vendors not acting in good faith. There is a chilling effect. Rightly or wrongly, we’ve seen enough instances where the DMCA has been used as a club, even where no copyright interests were at stake, that researchers don’t want to give someone else the power to suppress them.
