When I started out in the systems administration and hacking worlds a couple of decades ago - and even when I first moved into information security as a profession nearly 15 years ago - the dominant incentive was the ego trip: what can I get away with? Truth be told, that's the original (and to many, myself included, the "real") meaning of hacking: to take something and make it do what I want, rather than necessarily what the creator intended. A hacker is someone who is highly interested in a subject (often technology) and pushes the boundaries of their chosen field.
That culture has nothing to do with malicious use of computers - nay, nothing to do with malice at all. It is all about solving puzzles: "here's an interesting <insert favorite item>; now what can I do with it?" The hacking ethos brought about automotive performance shops and the motorcycle customization industry glamorized by West Coast Choppers, to name two examples. A hacker could be known less controversially as a Maker, or a tinkerer, or a modder - or an engineer.
Hacking in its purest form is perfectly legitimate. If I own a computer, or a phone, or a network router, or a TV, or a printer, or a programmable thermostat, or an Internet-connected toy, or a vehicle, or (the list could go on forever), I have every right to explore its capabilities and flaws. Within reasonable limits (various transportation authorities may have something to say if I add flashing red and blue lights to my car and start driving down the highway), it is mine to do with as I please. Where it becomes ethically and legally questionable is when I stop tinkering with things I own, and begin tinkering with something you own, without your permission.
And that's where we find ourselves today.
I'm not going to speculate on things of which I have no first-hand knowledge. The only thing I know for certain is that a few weeks ago, security researcher Chris Roberts tweeted a presumably sarcastic comment about hacking into the cabin controls of a commercial jet in which he was flying. Poor discretion? Probably. Unlawful? Not at all. But it caught the attention of a security analyst for said airline, who reported the comment to the FBI. In light of Chris' public research and the talks he has given, the FBI took the comment quite seriously.
I will reserve judgment, because I don't know the rest of the facts. It is possible Chris did in fact use the in-flight entertainment network as a springboard into the avionics system and cause an engine to climb, and that he did access the International Space Station. If so, that goes way beyond all acceptable boundaries: there is no room for messing with in-use systems that affect life safety.
But that is far from certain. It is possible that he did no such thing but is trying to make a name for himself. It is possible that he found a way to access flight controls from the passenger cabin, did not feel he was taken seriously by the airlines or aircraft manufacturers, and felt this was the best way to shine a light on a serious concern. It is possible that he did these things in a simulator rather than an aircraft in flight, and that the FBI took his statements out of context. I don't know, and I won't speculate.
Whatever the truth in this particular case, though, there are real ethical concerns in security research and disclosure - concerns that each of us in the industry has to weigh when deciding how to conduct ourselves.
Computers, despite the amazing things they enable, do only what they are programmed to do. They don't have intuition - they can't think "this doesn't seem right" when faced with a condition the developer didn't envision. Since developers can't envision every possible condition, there is a constant cat-and-mouse game between criminal hackers and developers.
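As a toy illustration of that point, consider a few lines of Python. The function and numbers here are invented for this example, not drawn from any real system; the point is only that code follows its instructions with no sense of whether the result is reasonable:

```python
def shipping_cost(weight_kg: float) -> float:
    """Flat-rate shipping: a $5.00 base charge plus $2.00 per kilogram."""
    return 5.00 + 2.00 * weight_kg

# The code does exactly what it was told to do, and nothing more. It has no
# "this doesn't seem right" reflex for conditions the developer never envisioned:
print(shipping_cost(2.5))    # 10.0 -- the case the developer had in mind
print(shipping_cost(-3))     # -1.0 -- a negative charge, accepted without complaint
print(shipping_cost(1e12))   # 2000000000005.0 -- absurd, but also accepted
```

Every real-world flaw a researcher finds is some grown-up version of that gap between what the developer imagined and what the code will happily accept.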
It is in that game that we researchers find ourselves. Many of the rules of the game were written decades ago, before many modern technologies were even dreamed up. The seminal US law governing computer abuse is the Computer Fraud and Abuse Act (CFAA) of 1986. As you can imagine, the CFAA is ambiguous when it comes to modern interconnected technologies, and leaves much to the interpretation of the courts.
Hence the dilemma.
Security practitioners may have entirely benevolent motivations - wanting to make the online world safer for everyone else. We chose a profession that to others seems like a black art (to be fair, it's only a black art because they do not understand it; likewise, to me much of medicine is a black art). We are aware of flaws and risks that a criminal could exploit for financial gain, or even to cause significant physical harm. That knowledge may come honestly (installing a program on my own computer to poke at it; buying surplus equipment and reading manufacturer schematics to put together a simulator environment that mirrors an actual airplane). It may come by more questionable means. Regardless, that knowledge brings with it a responsibility.
I have been fortunate to not experience a truly adversarial response to disclosure. I've not had a company's lawyers threaten me if I publish my findings. I've not had the FBI confiscate my electronic devices and issue an alert for people to be on the lookout for something I've done. But I've had a variety of reactions - and those reactions influenced my own choices and shaped my own opinion on responsible disclosure.
As a researcher, I prefer spending my time on things where the company has a track record of welcoming security input. To wit, I've found and disclosed a number of issues in wireless routers made by Asus, and have written a couple of useful configuration guides. I made a connection with a product manager who responds to me cordially and appreciates my input. That sort of relationship benefits everyone: the company gets outside security testing, flaws in widely-used consumer devices that might be exploited by a criminal are fixed, and I get a little professional exposure through acknowledgment in the release notes.
Other companies take a different, though still positive, approach. I've written about a phishing scam that aimed to take advantage of members of the banking institution USAA, and reported a bug in the mobile banking app that could reveal private information if someone had access to your phone. In both cases the company privately acknowledged my reports, and the mobile app was quietly fixed a few weeks later. Again, the result was a safer online world for the company's ten million members.
Sometimes, though, private discourse with a company or public agency doesn't have the desired result. I have only once made the decision to go the "full disclosure" route and publish something that had not yet been fixed. The payment service for highway tolls in Texas at one time revealed complete credit card numbers, along with mailing address and expiration date, in a hidden web page field; the login process could easily be defeated, enabling a criminal to potentially obtain usable payment card information for a significant portion of the 1.2 million toll users in the state. After privately reporting my findings I chose to disclose publicly, but at the beginning of a weekend when the site was already offline for maintenance. Disclosure had the desired effect: by the time the site came back online the following week, the flaws had been fixed.
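To make concrete why a "hidden" field offers no real protection, here is a minimal sketch in Python. The markup, field names, and card number are invented for illustration (the card number is a standard test value), not taken from the actual toll site; the point is simply that anything embedded in a page is delivered to, and readable by, the client:

```python
from html.parser import HTMLParser

# A contrived payment page with sensitive values placed in "hidden" form fields.
PAGE = """
<form action="/pay" method="post">
  <input type="hidden" name="card_number" value="4111111111111111">
  <input type="hidden" name="card_expiry" value="01/27">
  <input type="text" name="amount">
</form>
"""

class HiddenFieldDumper(HTMLParser):
    """Print every hidden input field found in a page."""
    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "input" and attrs.get("type") == "hidden":
            print(f"{attrs.get('name')} = {attrs.get('value')}")

# "Hidden" only means the browser doesn't render the value on screen; anyone
# who receives the page can pull it straight out of the markup.
HiddenFieldDumper().feed(PAGE)
```

A browser hides these values from the screen, not from the person holding the keyboard - which is why sensitive data belongs on the server, never in the page itself.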
I've read a variety of essays on hacking and disclosure ethics of late. Attack Research's Valsmith talks about "Stunt Hacking" and "Media Whoring." Rafal Los (aka "Wh1t3Rabbit") posits that we must first define what a researcher is, and set rules for appropriate disclosure, if we are to protect legitimate research. The Electronic Frontier Foundation publishes a "Coders' Rights Project." Bugcrowd acts as a sort of clearinghouse for companies running "bug bounties," formal programs that seek outside research in exchange for acclaim or awards, and publishes a "Responsible Disclosure Policy."
I am hopeful that as this aspect of the industry matures, a generally accepted set of guidelines will come together, providing a path for responsible, ethical research without fear of legal reprisal. Until then, one thing remains clear: there is no place whatsoever for hacking in-use systems where lives are at stake - whether aircraft, surgical robots, or hospital drug infusion pumps.