Beyond Fear
Bruce Schneier's Beyond Fear is an excellent introduction to thinking rationally about security. The book is packed with insights, but still very readable. And it uses real-world examples to illustrate the concepts being discussed. This post is a summary of the notes I took while reading the book.
"Fear is the barrier between ignorance and understanding," says Schneier in the first chapter. To think rationally about security, you have to be dispassionate. You have to look at statistics, instead of watching the news and thinking that it will happen to you tomorrow (see availability heuristic). Real risk = threat * likelihood of successful attack. Perfect security is unachievable. (Want to have zero airplane hijackings? Ban all commercial flights.) Schneier proposes a five-step process for evaluating a proposed security measure:
- What assets are you trying to protect?
- What are the risks to those assets? Here you have to understand your attackers -- know their motivations, goals, expertise and resources, level of access (are they insiders?), and risk aversion.
- How well does the security measure mitigate those risks? Here you have to understand the players involved -- know their agenda and power to influence. Are they selling real security or just security theater? Are they just trying to move the blame somewhere else in case something bad happens? To get a player to mitigate a risk, make him or her accountable for it / liable for the damages. (Conflicts of interest abound. Why can you take matches on a plane? Because the tobacco industry lobbied to allow it. This is just one example of a security decision made for business, not security reasons.)
- What other risks does the security measure cause? Does it give a player too much power? Does it move the weakest link somewhere else, leaving the system just as vulnerable? Does it cause so many false alarms that people will simply ignore it, like in The Boy Who Cried Wolf?
- What tradeoffs does the security measure require? Does it cost money? Does it destroy privacy and dignity? Does it make the system so difficult to use that people will turn elsewhere?
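Here is how I'd capture the five steps as a checklist in code. This is my own sketch, not from the book; the field names and the apartment-lock example are invented for illustration.

```python
from dataclasses import dataclass

@dataclass
class SecurityEvaluation:
    """Schneier's five questions, captured as a simple checklist."""
    assets: str            # Step 1: what assets are you trying to protect?
    risks: list[str]       # Step 2: what are the risks to those assets?
    mitigation: str        # Step 3: how well does the measure mitigate them?
    new_risks: list[str]   # Step 4: what other risks does the measure cause?
    tradeoffs: list[str]   # Step 5: what tradeoffs does the measure require?

    def summary(self) -> str:
        return (f"Protecting: {self.assets}\n"
                f"Risks: {', '.join(self.risks)}\n"
                f"Mitigation: {self.mitigation}\n"
                f"New risks: {', '.join(self.new_risks) or 'none identified'}\n"
                f"Tradeoffs: {', '.join(self.tradeoffs)}")

# Toy example: should you lock your door while doing the laundry downstairs?
door_lock = SecurityEvaluation(
    assets="apartment contents",
    risks=["opportunistic theft while the apartment is empty"],
    mitigation="stops casual intruders; does little against a determined burglar",
    new_risks=["locking yourself out"],
    tradeoffs=["carrying keys", "a minute of inconvenience per trip"],
)
print(door_lock.summary())
```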
All security is about tradeoffs. Do you lock your door while you walk downstairs to the laundromat (more secure), or do you leave it unlocked (more convenient)? Ultimately, it is a subjective decision whether a security measure is worth its tradeoffs. The five-step process above is meant to help you consider all the factors involved when making such decisions.
The sole purpose of security systems is to prevent some things from being done. Security is invisible when it works. That's why we have to think about how systems fail. While a system may offer good protection against "frontal" attacks, most security breaches happen at the seams between secure systems (during a shift change, while the money is being carried from the armored car to the vault, etc). Complex, tightly coupled systems with many possible interactions are harder to secure. A security system can fail in two ways: passive failures (the system fails to react to an attack) and active failures (the system raises an alarm when there's no attack).
A system can be brittle (a small failure leads to complete compromise) or resilient (flexible, able to recover from failures). Static systems (cannot adapt), homogeneous systems (vulnerable to class breaks; see below), and systems that rely on secrecy (security through obscurity) tend to be brittle. It is tempting to think that an ideal security system would be fully automated, with no risk of human error or malice. But technology today is static and brittle, while people are adaptable and resilient. (A person can feel suspicious and investigate further; a computer can't.) So secure systems require trusted people, and probably will do so for the foreseeable future. (Side note: People make mistakes, so social engineering will probably always work.)
Computers and the Internet give attackers some leverage. Once you figure out how to break one system, you can break many other systems of the same type. Schneier calls this a class break; others use the term monoculture. (Side note: Nature protects against class breaks through diversity, but human agriculture, industry, and software tend towards homogeneity. Diversity sacrifices the individual's security for the security of the overall population -- an idea that doesn't sit well when the individuals in question are people.) Thanks to data aggregation, attacking a single system can have a huge payoff (steal one million credit cards). Through automation, attacks that have a low payoff (steal one cent from a million people) or a low probability of success (click on a spam link) suddenly become profitable. And the Internet enables attackers to act at a distance, taking advantage of the friction between legal jurisdictions. But it is important to remember that technology is inherently neutral, and that any technology can be used for good or for evil. By making a technology available, we (as a society) decide that its benefits outweigh the potential harm.
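To see why automation changes the economics for attackers, here is a back-of-the-envelope calculation. The numbers are mine, not Schneier's.

```python
# Automation makes tiny per-victim payoffs worthwhile.
# All numbers are illustrative, not from the book.

victims = 1_000_000          # accounts reachable by an automated attack
payoff_per_victim = 0.01     # steal one cent from each
cost_per_attempt = 0.000001  # marginal cost of one automated attempt

expected_gain = victims * payoff_per_victim
total_cost = victims * cost_per_attempt
print(f"Gain: ${expected_gain:,.2f}, cost: ${total_cost:,.2f}")
# Gain: $10,000.00, cost: $1.00 -- an attack nobody would bother with by hand
# becomes profitable once a computer does the work.
```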
There are a few timeless strategies that make for good security. Defense in depth means having multiple barriers, so that an attacker has to penetrate all of them to break the system. (Example: a medieval castle.) Compartmentalization means splitting the system into chunks that are secure by themselves, so that a successful attack on one chunk does not bring down the entire system. (Example: storing money in more than one place when traveling.) A choke point funnels all players into a narrow space, where security measures are concentrated. (Examples: the security checkpoint at an airport; the Strait of Gibraltar.) In general, a tried-and-true security system is better than a new and untested one.
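A rough sketch of defense in depth in code: several independent layers, each of which must hold before a request is allowed. The specific layers (a perimeter IP filter, a token check, a least-privilege rule) are invented for illustration; the castle metaphor supplies the names.

```python
# Defense in depth: an attacker has to get past every barrier, not just one.
BLOCKED_IPS = {"203.0.113.7"}
VALID_TOKENS = {"s3cret-token"}

def outer_wall(request):
    return request.get("source_ip") not in BLOCKED_IPS   # perimeter filter

def inner_wall(request):
    return request.get("token") in VALID_TOKENS          # authentication

def keep(request):
    return request.get("action") in {"read"}             # least privilege

def allow(request):
    # A failure at any single layer denies the request.
    return all(layer(request) for layer in (outer_wall, inner_wall, keep))

print(allow({"source_ip": "198.51.100.4", "token": "s3cret-token", "action": "read"}))  # True
print(allow({"source_ip": "203.0.113.7", "token": "s3cret-token", "action": "read"}))   # False
```

Note that the layers only add security if they fail independently; three checks that share the same bug are really one layer.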
Secure systems have to let the good guys in, while keeping the bad guys out. This is done by the triangle of identification (Who are you?), authentication (Prove it), and authorization (Here is what you are allowed to do). The authentication step can check something you know (a password), something you have (a physical key), something you are (a fingerprint scan), or any combination of the above. Conflating identification and authentication leads to poor security. For example, a social security number in the US is an identification token -- it's not secret, and it can't be changed. Using it for authentication is a bad idea. As another example, biometric signatures are good for authentication (Does the face in front of the camera correspond to person X?), but bad for identification (Which of these one million people does this face correspond to? -- a much harder question). Two final curiosities: Do you know what an FBI badge looks like? An authentication system isn't secure if it's unfamiliar. Did you ever wonder why airlines required photo ID, long before 9/11? It wasn't for security -- they just wanted to prevent people from reselling tickets.
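As a toy illustration of the triangle, and of combining "something you know" with "something you have", here is a sketch with an invented user store. A real system would salt and stretch passwords; this only shows the three separate questions.

```python
import hashlib
import hmac

# Invented user store; real systems would salt and properly hash passwords.
USERS = {
    "alice": {
        "password_hash": hashlib.sha256(b"correct horse battery staple").hexdigest(),
        "hardware_token": "TOKEN-1234",   # something you have
        "permissions": {"read_reports"},
    },
}

def identify(username):                    # Who are you?
    return USERS.get(username)

def authenticate(user, password, token):   # Prove it (two factors).
    knows = hmac.compare_digest(
        hashlib.sha256(password.encode()).hexdigest(), user["password_hash"])
    has = hmac.compare_digest(token, user["hardware_token"])
    return knows and has

def authorize(user, action):               # Here is what you are allowed to do.
    return action in user["permissions"]

user = identify("alice")
if user and authenticate(user, "correct horse battery staple", "TOKEN-1234"):
    print(authorize(user, "read_reports"))    # True
    print(authorize(user, "delete_reports"))  # False
```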
Prevention is often expensive and hard to get right. In many cases, detection during or after an attack is cheaper and more effective. For example, most surveillance cameras are not being watched live. Their role is to generate an audit trail that can be checked later, in case some security incident has occurred. Detection would be useless without an appropriate response, which completes the prevention-detection-response triangle. There are many types of responses. For example, forensics (trying to figure out what happened and how) is at odds with recovery (returning the system to a normal state), because recovery often destroys evidence.
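A small sketch of detection via an audit trail: events are appended while the system runs and only examined after an incident. The event format and file name are invented.

```python
import json
import time

AUDIT_LOG = "audit.log"  # illustrative file name

def record(event, **details):
    # Detection: keep an append-only trail instead of blocking anything.
    entry = {"ts": time.time(), "event": event, **details}
    with open(AUDIT_LOG, "a") as f:
        f.write(json.dumps(entry) + "\n")

def review(since):
    # Forensics/response: read the trail back after something has gone wrong.
    with open(AUDIT_LOG) as f:
        entries = [json.loads(line) for line in f]
    return [e for e in entries if e["ts"] >= since]

record("door_opened", badge="1234")
record("safe_opened", badge="1234")
print(review(since=0))
```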
A mandatory paragraph on terrorism. The main target of terrorists is morale -- they want us to live in fear. So wide publication of terrorist attacks in the media actually furthers the terrorists' goals. The measures put in place by governments to "prevent terrorism" are largely ineffective, because attacks are very rare, the target is everything, and perfect prevention is impossible. On the other hand, these measures cost dearly in terms of dollars and civil liberties. A much saner approach would be to say, "Terrorist attacks are extremely rare, but they will happen from time to time no matter what we do. Let's focus our resources on other, more prevalent harms." Unfortunately, such a statement would be suicidal for any politician. ("You mean you're just gonna sit there and watch? Do something, dammit!") The real solution to terrorism is threefold. First, educate people that perfect security is impossible. Second, focus on intelligence and early detection, instead of trying to achieve 100% prevention. Third and most importantly, work to address the underlying reasons why so much of the world hates the US (or the UK, etc.).
Security is never "done" -- it is an ongoing process of reevaluation. Schneier likes to think of it as a never-ending game (with high stakes) against an intelligent and evolving adversary. But each of us makes security tradeoffs every day, so we should not "leave the security to the experts". We should recognize when security decisions are made based on fear, instead of a rational analysis. And we should be mindful of people's agendas when they tell us that something is "for security purposes". Perfect security is impossible. In Schneier's words, "security is a tax on the honest," but "the price of freedom is the possibility of crime."
Go get the book. It's awesome. Also check out Schneier's blog and monthly newsletter.