Homeland Security Follies

Bruce Schneier
According to the sleeve of his latest book, Beyond Fear: Thinking Sensibly About Security in an Uncertain World, Bruce Schneier is "the go-to security expert for business leaders and policy makers." If only the policy makers would listen, we'd be safer, happier, and still free.

Other books include Applied Cryptography, described by Wired as "the book the NSA wanted never to be published."

Beyond Fear deals with security issues ranging from personal safety to national security and terrorism. Schneier is also a frequent contributor to Wired magazine, the Minneapolis Star Tribune, and many other fine periodicals. He also writes a monthly newsletter, Crypto-Gram.

I interviewed him on The RU Sirius Show.

RU SIRIUS: First of all, why did you become a security expert? Were you a secure child? Did anybody steal your lunchbox at school?

BRUCE SCHNEIER: I don't think I had any defining security episodes in my life, but I think you're right that security is something you're born with. It's a mentality. I remember as a kid walking into stores and figuring out how to shoplift — looking at where the cameras were. You're born with a mindset where you look at security in terms of a system and figure out how to get around it. It's a hacker mentality. So doing security just came naturally to me.



RU: I want to get right into the political area of security against terrorism. You wrote that security works better if it's centrally coordinated but implemented in a distributed manner. Tell us a little bit about that and maybe say a bit about how that might work.

BS: In security — especially something as broad as national security — it's important that there be a lot of central coordination. You can't have people in one area doing one thing, and people in another area doing another thing, and then not have them talking to each other. So sharing information across jurisdictions and up and down the chain of command is important. When things happen, you need a lot of coordination, and you can see coordination failures again and again. In the aftermath of 9/11 and Hurricane Katrina, there was a lot of replication of effort. A lot of things that were obvious… everyone thought someone else was doing.

But the other half of that is distributed implementation. You can't be so rigid that your people in the field can't make decisions. In security today, we see smart people being replaced by rules. A great example is the Transportation Security Administration. They will blindly follow the stupidest rule rather than using common sense. Security works much better when the individuals at the point of security — the guards, the policemen, the customs agents — are well-trained and have the ability to follow their instincts. Think about how September 11 could have been prevented. A field agent in Minnesota was really the first to put her finger on the plot, but she couldn't get her voice heard. And she didn't have the power to do anything herself. When you look at the real successes against terrorism at the borders, it's not customs agents following rules; it's customs agents noticing something suspicious and then following their instincts.

So security works best when it's centrally coordinated, but distributedly implemented. A great example is the Marine Corps. That's their model. There's a lot of coordination, but individual marines in the field have a lot of autonomy. They're trained well, and they're trusted. And because of that, it's a good fighting force.

RU: You're talking about decentralization, basically — the organizations making decisions at the local level.

BS: Right. Another analogy is the human body. There's a lot of coordination, but it's a very distributed organism. The pieces of our body do things autonomously, without waiting for approval. There's a lot of communication back and forth, a lot of coordination, but different pieces have their job, and they're empowered to do it. And it's robust and reliable because of that.

RU: What about radically democratized security, like Open Source kinds of efforts involving citizens?

BS: It's good and bad depending on how it works. I like Open Source intelligence. I like Open Source information gathering and dissemination. There's a lot of value in that. The downside of that is something like Total Information Awareness — TIA — where you have citizens basically spying on each other. And there you get pretty much nothing but false alarms. People will turn each other in because their food smells funny or they don't pray at the right time. Done right, a radically democratized, distributed security model works. Done wrong, you get East Germany where everyone spies on their friends.

RU: They were trying to get the postmen to spy on us for a while.

BS: Right. They were going to have the postmen and the meter readers do it. That will work well if the postmen are properly trained. Where it will fail is if you just tell a bunch of postmen, "Report anything suspicious," because honestly, they don't know what "suspicious" looks like in this context. So the question is: given all the police resources we have, what should they be doing? I don't want the government chasing all the false alarms from the postmen and meter readers when they could be doing something more useful. So that's a bad use. If you train them properly, you'll have something better. But then you don't have a postman any more. You have a security officer.

Think of a customs agent. They're going to watch people, and they're going to look for something suspicious. But they're trained in how to do it. So they're less likely to be overtly racist or to fall for dumb profiles. They're more likely to look for things that are actually suspicious. So it's a matter of training. And that's pretty much true of Open Source security models. Think of Open Source software. Having a bunch of random people look at the code to tell you if it's secure won't work. If you have well-trained people who look at the code, that will work! Open Source just means you can see it; it doesn't guarantee that the right people will see it.

RU: Even with trained security people, it seems like they make an awful lot of errors. It seems like America, over the past few years, really has that "Can't Do" spirit. Is there anything you can tell us about trained security people, and how they could improve their efforts?

BS: Well, they're always going to make errors. Fundamentally, that's a mathematical problem called the base rate fallacy. There are simply so few terrorists out there that even a highly accurate test, whether automated or human, will almost always produce false alarms. That's just the way the math works. The trick is to minimize the false alarms.

You've got to look at the false alarms versus the real alarms versus the real attacks missed — look at all the numbers together. But terrorist attacks are rare. They almost never happen. No matter how good you are, if you stop someone in airport security, it's going to be a false alarm, overwhelmingly. Once every few years, it'll be a real planned attack… maybe not even that frequently.

With training, you're less likely to stop someone for a dumb reason. When airport security stops a grandma with a pocketknife, that's a false alarm. That's not a success. That's a failure. It's, of course, ridiculous. So the trick is to alarm on things that are actually suspicious, so you spend your time wisely. But the fact that almost everybody will still end up being a false alarm — that's just the nature of the problem.
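To make the base rate fallacy concrete, here is a minimal sketch in Python. The 99% screening accuracy and the one-in-a-million base rate are illustrative assumptions, not figures from the interview; the point is only that when attackers are that rare, nearly every alarm is a false one.

```python
# Illustrative numbers only: assume a screening test that is 99% accurate
# in both directions, and that 1 in 1,000,000 screened people is a terrorist.
base_rate = 1 / 1_000_000      # P(terrorist)
true_positive_rate = 0.99      # P(alarm | terrorist)
false_positive_rate = 0.01     # P(alarm | not terrorist)

# Bayes' theorem: P(terrorist | alarm)
p_alarm = (true_positive_rate * base_rate
           + false_positive_rate * (1 - base_rate))
p_terrorist_given_alarm = true_positive_rate * base_rate / p_alarm

print(f"P(terrorist | alarm) = {p_terrorist_given_alarm:.6f}")
# Roughly 0.0001 -- only about 1 alarm in 10,000 involves a real terrorist,
# even with a test far better than any real-world screener.
```

Halving the false positive rate only doubles that tiny posterior; with a base rate this low, the false alarms always dominate.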

RU: Most of us experience the so-called "War on Terror" in one place, and that's at the airport. What are they doing right, and what are they doing wrong at the airports? Are they doing anything right?

BS: (Laughs) Since September 11, exactly two things have made us safer. The first one is reinforcing the cockpit door. That should have been done decades ago. The second one is that passengers are convinced they have to fight back, which happened automatically. You can argue that sky marshals are also effective. I'm not convinced. And if you pretend you have sky marshals, you don't actually have to have them. The benefit of sky marshals is in the belief in them, not in the execution.

Everything else is window dressing — security theater. It's all been a waste of money and time. Heightened airport security at the passenger screening checkpoint has been a waste of time. It's caught exactly nobody; it's just inconvenienced lots of people. The No Fly List has been a complete waste of time. It's caught exactly nobody. The color-coded threat alerts: I see no value there.



RU: A recent BoingBoing headline read "TSA missed 90% of bombs at Denver airport." (Obviously they weren't talking about real bombs, but a test.)

BS: And the real news there is it wasn't even surprising. This is consistent in TSA tests both before and after 9/11. We haven't gotten any better. We're spending a lot more money, we're pissing off a lot more fliers, and we're not doing any better.

There's a game we're playing, right? Think about airport security. We take away guns and bombs, so the terrorists use box cutters. So we take away box cutters and small knives, and they put explosives in their shoes. So we screen shoes and they use liquids. Now we take away liquids; they're going to do something else. This is a game we can't win. I'm sick of playing it. I'd rather play a game we can win.

RU: The reactive thing is terribly absurd. The whole shoe-bomber thing — my ongoing joke is that if he were an ass bomber, taxpayers would now be buying a lot of Vaseline.

What do you think about John Gilmore's court fight — that he shouldn't have to present an ID to fly inside the country? Do you think that's a legitimate goal?

BS: I don't know the legal and constitutional issues. I know they're very complex and he unfortunately lost his case on constitutional grounds. For security purposes, there's absolutely no point in having people show a photo ID. If you think about it, everybody has a photo ID. All the 9/11 terrorists had a photo ID. The Unabomber had one. Timothy McVeigh had one. The D.C. snipers had one; you have one; I have one. We pretend there's this big master list of bad guys and if we look you up against the list, we'll know if you're a bad guy and we won't let you on the plane. It's completely absurd. We have no such list. The no-fly list we have is full of innocent people. It catches nobody who's guilty and everybody who's innocent. Even if your name is Osama bin Laden, you can easily fly under someone else's name. This isn't even hard. So there is absolutely no value to the photo ID check. I applaud Gilmore based on the fact that this is a complete waste of security money.

RU: So if you were in charge of airport security, are there any things that you would implement?

BS: I think we should ratchet passenger screening down to pre-9/11 levels. I like seeing positive bag matching. That's something that was done in Europe for decades. The U.S. airlines screamed and screamed and refused to do it, and now they are.

Really, I would take all the extra money for airport security and have well-trained guards, both uniformed and plainclothes, walking through the airports looking for suspicious people. That's what I would do. And I would just give back the rest of the money. If we secure our airports and the terrorists go bomb shopping malls, we've wasted our money. I dislike security measures that require us to guess the plot correctly, because if you guess wrong, it's a waste of money. And it's not even a fair game. It's not like we pick our security, they pick their plot, and we see who wins. The game is: we pick our security, they look at our security, and then they pick their plot. The way to spend money on security — airport security, and security in general — is on intelligence, investigation, and emergency response. These are the things that will be effective regardless of what the terrorists are planning.

RU: You emphasize intelligence. Is there any truth to the claims made by various agencies that intelligence people couldn't do things that they should have been able to do to protect us because of the Church Committee rules in the mid-1970s?

BS: I think that's overstated. The controls that the Church Committee put in place made a lot of sense. The purpose was to stop very serious abuses by law enforcement — by the police, the NSA, and the CIA. If you look at the failures of 9/11, they weren't caused by the Church Committee restrictions. So I think we're making a mistake by dismantling those protections. In effect, those are also security measures that protect us from government abuses. Unfortunately, those abuses are far more common than terrorist attacks.

RU: Do you ever watch the TV show Numb3rs?

BS: I don't. People tell me I should, and I do see plot summaries occasionally — but no, I'm not a big TV person.

RU: It's pretty amusing. But it makes it look like if you combined data mining with some complexity theory, you could predict everything anybody will do, and exactly when and where they'll show up.

BS: If that was true, there'd be a lot more people making money on Wall Street, wouldn't there?

RU: Right. What are the limitations of data-mining? You find data mining pretty much useless in the case of terrorism, but you find it useful in other areas.

BS: Data mining is a really interesting and valuable area of mathematics and science, and it has phenomenal value. The data mining success story is in credit cards. The credit card companies use data mining to constantly look at the stream of credit card transactions and find credit cards that have been stolen. It works because credit card thieves are relatively numerous. Some percentage of credit cards are stolen every year, and credit card thieves tend to follow standard profiles. When mine was stolen, the fraudster bought gas first (you do that to test that the card is valid), and then went to a large department store — Canadian Tire — and bought a bunch of things that were easily fenceable. So my credit card company caught that immediately. So there are a lot of thieves with a well-defined profile, and the cost of a false alarm isn't that great. Most of the time, the company calls us to check, and we're happy to receive the call. Or in extreme cases, they cut off the card, and you have to call and get it reinstated.
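As a toy illustration of the kind of profile he describes (a small test purchase at a gas station followed quickly by big-ticket, easily fenced goods), here is a minimal Python sketch. The rule, thresholds, and data layout are all invented for illustration; real issuers use far more sophisticated statistical models over much richer data.

```python
from datetime import datetime, timedelta

# Hypothetical transaction record: (timestamp, merchant_category, amount).
# Histories are assumed to be sorted by time.
Transaction = tuple[datetime, str, float]

def looks_like_stolen_card(history: list[Transaction],
                           window: timedelta = timedelta(hours=2),
                           big_purchase: float = 500.0) -> bool:
    """Flag the classic profile: a small gas purchase (testing the card)
    followed shortly by a large purchase of easily fenced goods."""
    for i, (t1, category1, amount1) in enumerate(history):
        if category1 == "gas" and amount1 < 50.0:
            for t2, category2, amount2 in history[i + 1:]:
                if (t2 - t1 <= window
                        and category2 in {"department_store", "electronics"}
                        and amount2 >= big_purchase):
                    return True
    return False

# Example: a $20 gas purchase, then $900 at a department store 40 minutes later.
now = datetime(2007, 1, 1, 12, 0)
history = [(now, "gas", 20.0),
           (now + timedelta(minutes=40), "department_store", 900.0)]
print(looks_like_stolen_card(history))  # True -> worth a phone call to the cardholder
```

The point for the argument that follows is not the rule itself but the economics: a false alarm here costs a phone call, and real fraudsters are common enough that the alarms are worth chasing.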

It doesn't work well for looking for terrorists. The ratio of terrorists to the general population is infinitesimally smaller than the ratio of credit card fraudsters to credit cards. Also, there is no well-defined profile. You know, you hear all sorts of things that are supposed to profile terrorists — people who move suddenly; one-fifth of the population does that. Or who you talk to and communicate with; lots of people have weird friends. In a lot of ways, a surprise birthday party looks like a terrorist attack. The only difference is how it's executed. So you don't have a large database of existing events that you can data-mine for a profile.

The other problem is that false alarms are expensive! For a credit card, they're cheap — a phone call, or you turn off the card and they have to reinstate it. In looking for a terrorist plot, a false alarm costs maybe three weeks' work from a handful of FBI agents. That's an enormous amount of money and an enormous amount of effort. So when you apply the math to looking for terrorist attacks, you have no good profile; there are so many false alarms you'll never find a real attack; and the false alarms are so expensive that they divert resources from what could be actually useful anti-terrorist activities. So I don't think it's ever going to work. The numbers are just not on your side.

It is far more valuable to do traditional police investigative work. Think of what caught the London liquid bombers. It wasn't data mining. It wasn't profiling at the airports. It wasn't any of these new-fangled ideas. It was old-fashioned detective work — following the leads. It was smart investigators investigating. It's not sexy, but it's effective. Before diverting resources from that, you'd better have something really good. And data mining isn't.

RU: I guess the idea of Total Information Awareness would seem sexy to some portion of the geek population. That's where it came from!

BS: We're so desperate to find ways to harness technology to solve the problem. We're used to that working in other areas of society — just apply more computing power, you get better results.

This is fundamentally a human problem. It's not a data problem. It's a problem of human intelligence connecting the dots. If I'm looking for a needle in a haystack, throwing more hay on the pile isn't going to solve my problem.

I need a better way to methodically follow the leads through the haystack to the needle. Another lesson of the liquid plot is that if the plotters had made it to the airport, they would've gotten through. They would've gotten through all the enhanced screening; they would've gotten through all the enhanced profiling. The reason the plot failed had nothing to do with airport security.

RU: Moving on from terrorism, but still thinking about haystacks — you have a bit in "Beyond Fear" about learning about security from insects, which I found really fascinating. What can we learn from insects?

BS: There's a lot to be learned about security from the natural world in general. All species have evolved as security beings — we need to survive long enough to reproduce. We need to be able to protect our offspring so they can survive. We need to protect our food supply. We attack other creatures to kill them and eat them. There's so much security interplay in the natural world. And it's a great source of strategies and anecdotes.

I find insects particularly valuable, because they evolve so quickly. You see so many interesting strategies in the insect world, because of the wacky evolutionary turns they take. Evolution doesn't give you the optimal security measure. Evolution tries security measures at random, and stops at the first one that just barely works. So you tend to get really weird security in the insect world. You do get some really neat examples of distributed security measures. Think of the way ants protect their colony. There are ant species that just wander around randomly, and if they hit a predator or a threat, they run right back to their colony to alert everybody. Individual ants are very cheap and very expendable, so if you have cheap resources, you just sort of do random things.

The lima bean plant is interesting. Effectively, when a certain mite attacks it, it calls in an air strike. It releases a chemical that attracts a predator mite that will eat the mite that's attacking it. Very clever.

RU: We are moving into a society very much like the ones written about in cyberpunk novels of the early '90s. We can imagine people running around with suitcase nukes, or with extraordinary bioterror or nanoterror weapons. This kind of destructive power is moving from the government to the small group to the individual. Does that imply a need for a total surveillance society — basically, that we need to watch everything everybody is doing, all the time?

BS: I don't think it implies that. It does imply we need some kind of different security. I think society is inherently good. Most people are inherently honest. Society would fall apart if that weren't true. In a sense, crime and terrorism are a tax on the honest. I mean, all of security is a tax. It taxes us honest people to protect against the dishonest people. The dishonest people are noise in the smooth running of society. The attacker gets a lot more leverage when the noise becomes greater — so in a complex society, a single person can do a lot more disrupting. But I don't believe that surveilling everybody will solve the problem. We have to start thinking about different ways to cope with these problems. But I sort of discount massive surveillance as ineffective. I don't even need to say: "I don't want to live in a society that has that."



RU: So let's say President Obama asks you to be the Homeland Security director. If you accepted, what would you do with it?

BS: If I were in charge of Homeland Security, I would spend money on intelligence, investigation, and emergency response. That's where I'd get the best value. And I think there is security inherent in civil liberties, in privacy, and in freedom. So I wouldn't be messing with that.

RU: Before I let you go, what are you exploring now?

BS: In the past, I've done a lot of work on the economics of security. Now I'm researching the human side of security — the psychology of security. I'm looking into how people make security decisions and how they react to security. Why is it that we're getting security wrong? Why is it that people fall for security theater instead of doing what makes sense? It turns out there are a lot of very interesting things about how the brain works and how we process security trade-offs. I'm researching that. There's an enormous body of research that hasn't really been applied to the technological community. I'm really new to this research, but there's a lot there to look at.

