
A Q&A with Charlie Miller, Computer Security Researcher at Twitter

Charlie Miller

Eleanor Dallaway: You mentioned in your Hacker Halted keynote that anti-virus on mobiles isn’t worth the investment. What security technology is worthwhile for mobiles?

Charlie Miller: A VPN is certainly a good idea. A lot of the stuff that's already built into the phone is good, but you have to make sure the users are using it. On an iPhone, you can have no PIN, a four-digit PIN, or a passcode. Even a four-digit PIN makes a huge difference compared with no PIN at all: all the data underneath is encrypted with a key derived from that PIN. Enterprises need policies to ensure that everyone has a PIN, and that everyone is using the security features that are built into the phone.
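Miller's point about the data being protected by a key derived from the PIN can be sketched roughly as follows. This is an illustrative use of PBKDF2, not Apple's actual key-derivation scheme; on real iOS devices the derivation is also entangled with a hardware key so guesses cannot be run off-device, and the salt and iteration count here are made up for the example.

```python
import hashlib
import os

# Hypothetical per-device salt (real devices bind derivation to hardware).
SALT = os.urandom(16)
ITERATIONS = 100_000  # deliberately slow: each PIN guess pays this full cost


def derive_key(pin: str) -> bytes:
    """Derive a 32-byte data-encryption key from a short PIN."""
    return hashlib.pbkdf2_hmac("sha256", pin.encode(), SALT, ITERATIONS)


# The same PIN always yields the same key, so the phone can unlock the data...
key = derive_key("4821")
assert key == derive_key("4821")

# ...but an attacker without the PIN must try all 10,000 four-digit codes,
# paying the full derivation cost for every single guess.
assert derive_key("0000") != key
```

Even though 10,000 combinations is a tiny space, a slow derivation (and, on real hardware, escalating retry delays and the hardware-bound key) is what makes the four-digit PIN so much stronger than no PIN at all.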

Does BYOD affect an enterprise's ability to secure its mobile network, and make things a lot harder?

It’s harder in the sense that you don’t have as much control over things. If people are bringing their own devices, they’re going to want to have Angry Birds and they’re going to want to be in control. All you have to do is design everything with that in mind. Make sure that if they’re using their phone on your network you isolate them, and if you need to remove access to your data, you can. Limit what access they have, and when you give them access. Assume that the devices are essentially hostile, that they’re going to jailbreak their phone, download some stupid app, and give it all the permissions in the world. Assume the bad guy is more or less on their device, isolate them, and isolate the data that you give them.

In your presentation you declared mobile threats mostly hype. Is there anything that you don’t do on your iPhone due to security concerns?

I can show you my online banking app on my phone right now. I don’t want restrictions, I want to be able to enjoy my phone, and do what I want, and I’m going to do everything to make that as safe as possible, but I’m not going to limit the things I do. What’s the point of carrying around an iPhone, if I can’t use all the features?
All security nerds like me fall into two camps. One is so paranoid that they don’t do anything. I’m in the other camp. In my house, it’s wide open. If someone wants to get me bad enough, they will, no matter what I do, so I might as well make things as usable as possible, whilst still having some security. Having said that, I wouldn’t do online banking on a random desktop, I wouldn’t even log onto my frequent flyer site.

Whilst you consider the current mobile threat landscape to be mostly hype, do you see this changing any time soon?

No. None of these things are going to change very quickly.

You’ve been working on car hacking research with some interesting results. How real is the threat, and in what situation, with what threat actor, would we see that actually become a real-world problem?

Almost all of my research is clearly theoretical, and no-one has ever seen [a car hack by threat actors] and probably won't for a long, long time. That's kind of why we are doing the research now: we want to get people thinking about it and fixing it before it actually becomes a problem. As far as who would do it, it's hard. Not many people can do it, and it would take a lot of money and time; serious funding and resources. But regardless, if someone gets into my bank account, that's a concern, whereas if someone breaks into my car like this, it's pretty bad.

There’s also very little obvious financial gain in car hacking, which is the main motivation for hacking at the moment, right?

Right, there’s activism, and then there’s the guys who do it for money. [The motivation behind hacking a car] is to hurt someone, and you can do that with a hammer or a gun, and those things are way easier and cheaper. So I don’t know why anyone would do it, but to be honest, I’d rather not have my car vulnerable to it. We sent our findings to the Society of Automotive Engineers, and hopefully they’ll read it.

When you disclose these vulnerabilities, as mentioned above, are they taken seriously?

No, not really. They’re mostly worried about PR, and so their responses are very dismissive. I like to think that inside there are engineers who take it seriously, but they’re just not allowed to talk about it. Publicly they’re very dismissive of our research, but I’m hoping that privately, they’re still taking action against it. One thing that’s hard with a car versus computers and even phones is if it’s vulnerable today, it’s going to be vulnerable in ten years. So that’s sort of scary.

What does your wife say when you go home and tell her you’re going to hack the steering wheel of the car you’re driving?

She’s used to it. I was kind of worried about what she would say when I wrecked my car into the garage. All my other stuff is crazy and wild, but it doesn’t affect her directly. When I crashed the car in our drive, I was like, ‘oh crap, I think I’m going to get in trouble’. But she was pretty nice about it. She just said: “OK, you’re going to get that fixed, right?!”

So, what’s your next research focus?

I’m still looking at cars. Two years ago, academic guys did the same [car] research but didn’t release the data, the tools, or even what car they used. So we released everything, so that other researchers can quite quickly get up to speed, start looking at cars, and make them safer. The biggest bummer about doing car research is that you have to have a car, and those things aren’t cheap. So the next project is trying to make a car in a box: all the electronics and stuff from a car wired together so that they think they’re in a car and working. Let’s see if we can do the research on that little thing. It might cost $2,000 to $3,000 or something, instead of $30,000 or $40,000.

Speaking of money, what are the financial implications of your research?

Well, you don’t make money breaking stuff. I can attest to that, for sure. The people who make money are building and making new things, and the faster they can do it, the better. They don’t think to slow down and make things safe and secure. Those people do a lot for the world today, but I like to see what could go wrong.

Recently, Infosecurity reported on the story that Facebook did not pay out a bug bounty when a researcher demonstrated a vulnerability on Mark Zuckerberg’s wall. What are your thoughts generally on bug bounty programs?

They used to be really controversial, and now they're sort of accepted. All the big guys – Google, Facebook, Microsoft – have a bug bounty program and that's awesome, and all the data shows that they work. If you find a bug, you have to decide what you are going to do with it. The best thing is to give it to the company. If the company is paying you, that's great, but if some bad guy wants to pay you and the company doesn't, it is hard to make the right choice. If the good and bad guys are offering to pay roughly the same, it's easier to make the right choice.

In 2008, Dino Dai Zovi and I had this campaign where we campaigned for ‘no more free bugs’. We said that researchers like us would not report bugs anymore unless we were paid, because there’s all this risk and work involved, not just in finding the bugs, but in reporting them too. It was not just about getting paid; if they’re willing to pay, that shows they’re serious. We like to think that helped [encourage bug bounty programs].

Infosecurity is celebrating its tenth anniversary this year. What one event or evolution in the industry in the past decade do you consider most significant?

The move away from trying to find and fix every vulnerability in software, towards engineering things to make exploitation harder. When I started thirteen years ago, the idea was find bugs, fix bugs. In retrospect, it's such a fool's errand. So instead, now we make exploitation harder with things like sandboxing. We know how to build things, but we don't know how to write perfect code, and so it's silly to keep trying to write perfect code. I remember when Brad Arkin (Adobe) told me they weren't looking for bugs any more, and I thought he was insane. I realized, days or weeks or months later, that he was totally right, and the data has proved that. It's not just Adobe; Microsoft has done the same thing, and mobile operating systems use sandboxed apps. We've moved away from trying to be perfect, and instead concentrate on engineering things to make exploitation hard.

Has your research evolved in correlation with this industry sea-change?

Oh definitely. I used to find bugs, and give talks about fuzzing and how to do it. Then I started to give talks about mobile because that was new, and that started to get hard too. Now, I'm moving onto new technologies like cars. As things get hard, I try to move along to the things that are still easy, because no-one's looked at them, and it's more interesting too.

Charlie, what’s your dream job?

I think about this a lot, and I’m pretty happy where I’m at. My problem is that I have more things I want to research than I have time to look at. Even if I didn’t have my full-time job, and spent all my time messing around with stuff, I would still only be doing one out of twenty things I would like to look at. My perfect job is to have twenty people working for me, and I can tell them what to research…so a director of research somewhere. I think that would be my perfect job.