August 7, 2012

For this fifth edition in a series of advice columns for folks interested in learning more about security as a craft or profession, I interviewed Charlie Miller, a software bug-finder extraordinaire and principal research consultant with Accuvant LABS.

Probably best known for his skills at hacking Apple’s products, Miller spent five years at the National Security Agency as a “global network exploitation analyst.” After leaving the NSA, Miller carved out a niche for himself as an independent security consultant before joining Accuvant in May 2011.

BK: How did your work for the NSA prepare you for a job in the private sector? Did it offer any special skill sets or perspectives that you might otherwise not have gotten in the private sector?

Miller: Basically, it provided on the job training.  I got paid a decent salary to learn information security and practice it at a reasonable pace.  It’s hard to imagine other jobs that would do that, but if you have a lot of free time, you could simulate such an experience.

BK: The U.S. Government, among others, is starting to dedicate some serious coin to cybersecurity. Should would-be cyber warriors be looking to the government as a way to get their foot in the door of this industry? Or does that option tend to make sense mainly for young people?

Miller: For me, it made sense at the beginning, but there are some drawbacks.  The most obvious drawback is that government pay isn’t as competitive as private industry.  This isn’t such a big deal when you’re starting out, but I don’t think I could work for the government anymore for this reason.  Because of this, many people use government jobs as a launching point to higher-paying jobs (like government contracting).  For me, I found it very difficult to leave government and enter a (non-government-contracting) industry.  I had 5 years of experience that showed up as a couple of bullet points on my resume.  I couldn’t talk about what I knew, how I knew it, the experience I had, etc.  I had a lot of trouble getting a good job after leaving NSA.

BK: You’ve been a fairly vocal advocate of the idea that companies should not expect security researchers to report bugs for free. But it seems like there are now a number of companies paying (admittedly sometimes nominal sums) for bugs, and there are several organizations that pay quite well for decent vulnerabilities. And certainly you’ve made a nice chunk of change winning various hacking competitions. Is this a viable way for would-be researchers to make a living? If so, is it a realistic rung to strive for, or is bug-hunting for money a sort of Olympic sport in which only the elite can excel?

Miller: In some parts of the world, it is possible to live off bug hunting with ZDI-level payments.  However, given the cost of living in the US, I don’t think it makes sense.  Even if you mix in occasional government sales, it would be a tough life living off of bug sales.  If I thought it was lucrative, I’d be doing it!  For me, it is hard to imagine making more than I do now as a consultant by selling bugs, and the level of risk I’d have to assume would be much higher.

BK: How useful is fuzzing in helping researchers understand and devise new attack techniques? Would you recommend fuzzing as a learning method, or is this an approach that only the learned and advanced researchers are likely to get mileage from?

Miller: Every researcher should at least have fuzzing in their tool chest.  It doesn’t take much skill to do it and it is usually the quickest way to get started looking for bugs.  I’ve been doing it a long time, and someone just starting would probably already be 80-90% as effective as I am.  Of course in the end, you always have to understand the target, whether it is to look for bugs or to figure out crashes, but fuzzing is a quick and easy way to start and at least can limit the amount of the target you need to understand.  Some fuzzing tips: Start simple, and add protocol knowledge/complexity as needed.  Use multiple (types of) fuzzers for every job.  Use “template reduction” when dumb fuzzing.  And don’t forget to monitor your device for crashes; if you can’t tell when something goes wrong, fuzzing is a waste of time.
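To make those tips concrete, here is a minimal sketch of a dumb-fuzzing loop in the spirit Miller describes. Everything in it is an illustrative assumption (the target command, the template file, and crash detection via POSIX signal exit codes are mine, not his):

```python
import random
import subprocess

def mutate(data: bytes, ratio: float = 0.01) -> bytes:
    """Dumb fuzzing: corrupt a small fraction of bytes in a template input."""
    buf = bytearray(data)
    for _ in range(max(1, int(len(buf) * ratio))):
        buf[random.randrange(len(buf))] = random.randrange(256)
    return bytes(buf)

def fuzz_once(target_cmd: list, template: bytes, case_path: str) -> bool:
    """Run one mutated test case; return True if the target appears to crash."""
    with open(case_path, "wb") as f:
        f.write(mutate(template))
    proc = subprocess.run(target_cmd + [case_path],
                          capture_output=True, timeout=10)
    # The monitoring step matters: on POSIX, a negative return code means
    # the process died on a signal (e.g. SIGSEGV) -- a crash worth triaging.
    return proc.returncode < 0
```

In this sketch, “template reduction” would mean trimming the template down to the bytes that still trigger the crash, and adding protocol knowledge would mean replacing the blind `mutate()` with a generator that understands the file format.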

BK: What has been the single most valuable learning tool for you in your work?

Miller: I don’t know.  I use tools like IDA Pro, gdb, various fuzzers, valgrind and friends, etc.  But I wouldn’t say any of those are learning tools per se; they are tools of the trade you have to know if you want to understand the flaws found in low-level native code (or their Windows equivalents, like WinDbg).  You need to be able to use those tools without thinking about them in order to show off your real skills.

BK: What about programming languages? Do you recommend any specific ones?

Miller: Well, I really do a lot of reverse engineering and binary analysis, which is unusual.  For me, it’s important to know C/C++ because it is a language that allows you very low-level access to memory and most closely equates to what you see in native code.  However, for those starting out, it probably makes more sense to learn some languages more useful for web applications, like PHP or Java or something.  The majority of jobs I come across in application security are web applications, so unless you’re a dinosaur like me, you probably want to become a web app expert.  Web application security is a lot easier to get started in as well.  There are a lot of vulnerable web sites out there, and with very few exceptions, we haven’t seen the effort put into making web application exploits (SQLi, XSS, etc.) harder like we have with memory corruption exploits.
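As a toy illustration of the web-app bug class he names, here is a hypothetical login check against an in-memory SQLite database (the table, credentials, and payload below are invented purely for demonstration): the string-built query is injectable, while the parameterized one is not.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, pw TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 's3cret')")

def login_vulnerable(name: str, pw: str) -> bool:
    # Classic SQL injection: attacker-controlled strings are pasted into SQL.
    q = f"SELECT 1 FROM users WHERE name = '{name}' AND pw = '{pw}'"
    return conn.execute(q).fetchone() is not None

def login_safe(name: str, pw: str) -> bool:
    # Parameterized query: the driver keeps data out of the SQL grammar.
    q = "SELECT 1 FROM users WHERE name = ? AND pw = ?"
    return conn.execute(q, (name, pw)).fetchone() is not None

# The textbook bypass: makes the WHERE clause always true in the
# vulnerable version, but is just a wrong password in the safe one.
payload = "' OR '1'='1"
```

This is the kind of flaw that, as Miller notes, remains easy to find in the wild compared with modern memory-corruption targets.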

BK: In your own experience, did you run into any dead-ends, avenues you wouldn’t have spent so much time going down if you had to do it all over again?

Miller: Luckily, I didn’t waste much time on it, but one thing I’ve learned is that for the types of things I am interested in, certifications aren’t that useful for those looking for a job except to demonstrate very basic understanding of the subject.  I have two certs — a CISSP and a GCFA.  I was required to get the CISSP for a job I had and at the time and, while I did expand my breadth of knowledge (I know how tall fences should be, etc), I don’t think having a CISSP would particularly attract me to a candidate applying to work with me.  I got the GCFA because I was interested in forensics, but even though I earned it, I’d never want me working on a forensics job because I only have a hobbyist’s level of understanding of the field.

Otherwise, everything in this field is a dead end.  You either never find vulnerabilities you’re looking for or you do and they get patched.  Nothing in information security is forever, things change, and you have to be able to roll with that.

BK: Can you talk about the importance of cultivating certain traits as an employee/hacker/researcher in this space? E.g., patience, persistence, resourcefulness, lateral thinking. I realize some of these come more naturally to some than to others, but there seems to be a set of traits common among many in this industry who do well, and those that I mentioned — in addition to perhaps “curiosity” — tend to go a long way. I’d be interested in your perspectives here.

Miller: Information security, as a field, is pretty hard and demanding.  For any field of that kind, you have to be pretty passionate and really love what you’re doing to be effective.  Otherwise, you won’t be able to put in the time and effort necessary to be successful, at least not over the long term.  It is really hard to measure this quality as an employer, but ask yourself if you’d still be looking for vulnerabilities if you were a millionaire.  I still would, although it’d be from a beach somewhere, so I know I’m in the right place.

Speaking of employers, information security is tough to get into because it is hard to evaluate a candidate’s expertise in a few hours.  You can’t just look at where a candidate went to school to know if they’re good.  This is why it is important as a job seeker to have a “portfolio” that highlights your skills: projects you’ve worked on, vulnerabilities you’ve found, talks you’ve given, etc.  This will help separate you from everyone else.

BK: What do you think is the best way to build that portfolio?

Miller: In this field, certificates and diplomas don’t necessarily indicate skill.  Only skill indicates skill, and it’s hard to measure skill.  I think of it as an artist or architect trying to get a job.  It is less important what school an architect went to than the plans and drawings they can show off.

This was the problem I had coming out of NSA.  I had nothing to point to that indicated I knew what I was talking about.  I think the best way to build up one’s portfolio is a combination of CVEs (bugs found) and research (measured in talks given).  If I see a resume with a bunch of impressive CVEs and a bunch of talks given at major conferences, it will definitely catch my attention.


23 thoughts on “How to Break Into Security, Miller Edition”

  1. stvs

    Q: Did you ever hack an iPad only to find that it was actually a Galaxy Tab?

    Q: Ever visit the Genius Bar just to pull their chain?

    Q: What were your unanswered questions at Blackhat?

    1. BrianKrebs Post author

      Heh. How ’bout it Charlie? I’d like to know the answer to the second question there 🙂

  2. Richard Steven Hack

    Bug hunting is one area of IT security which doesn’t interest me much, if at all. Granted, it’s important in pen-testing – if you can’t get in any other way than a 0-day.

    But in general it’s a reflection of the incredibly bad state of the software industry. I mean, buffer overflows are Programming 101 topics – yet we still see them in EVERY piece of software out there. Really? After 20-30 years of programming best practices?

    So clearly there is something wrong with the industry. What is wrong is that it’s not “software engineering” – and won’t be for probably another twenty years (if not more) before AI techniques are applied to enable true “computer aided software engineering.” It’s still a “cottage craft” industry with no automation, no standards, and the usual percentage of incompetents in any industry devoted to getting the product “out the door”, regardless of quality. It’s not even remotely “engineering” in the technical sense.

    So being a vulnerability tester or exploit developer has a long future. But given how many other easy ways there are to break into a system, for an attacker it’s still only useful for the really tough cases. For the defense side, it’s mostly useless – unless you’re a software company.

    1. Nick P

      Ah, old nemesis, hope you’re doing well. Always look forward to reading your dystopian comments. 🙂

      “Granted, it’s important in pen-testing – if you can’t get in any other way than a 0-day.”

      If that’s true, then I’d say the company has mostly done their job, eh? 😉

      “I mean, buffer overflows are Programming 101 topics – yet we still see them in EVERY piece of software out there. Really? After 20-30 years of programming best practices?”

      Before that, back in the 70’s, MULTICS was highly resistant to buffer overflows partly b/c their stack operated in reverse direction. You know, where arbitrary data from unknown sources couldn’t just, say, drop on the stack pointer. 😉 A few academics recently published papers “innovating” this old idea, which is used in one existing OS. Yet, everyone writing OS’s still keeps buffers overflowable and those programmers you mentioned use that “feature” as often as possible. Inspires hope, eh?

      “What is wrong is that it’s not “software engineering”

      Still disagree on that. Software is art + science, unknowns + knowns. It’s a baby field, too, compared to some “engineering” fields. That said, there are methodologies that keep defects to a minimum, goal reaching to a maximum, etc. to a point that’s statistically certifiable and predictable enough to warranty. I’ve mentioned them before. Yet, people repeat ad nauseum that nobody is engineering software & yet a small no. of companies regularly do. Biggest activity currently is in DO-178B industries. Mistakes are too costly at $10,000 per line of code (hearsay quote, but believable). Praxis & Cleanroom kick arse too.

      “But given how many other easy ways there are to break into a system, for an attacker it’s still only useful for the really tough cases. For the defense side, it’s mostly useless – unless you’re a software company.”

      Yep. That’s right over 90% of the time, maybe 99%. The other 1% should probably either hire one of us or pick a different industry.

      1. Richard Steven Hack

        How’s it going, Nick? I still drop in over at Bruce’s blog frequently. I suppose it’s only a matter of time until I’m banned here – I’ve been banned on almost every other site with the sole exception of… 🙂

        Otherwise, I’m doing badly, as usual. 🙂

        As for software engineering, we agree, actually. There have been suggested methods for automating software quality in the computing academic press going back thirty years. The problem is that, outside of a handful of companies of the sort you mention, no one is using them. So my point stands: the INDUSTRY is still out to lunch, regardless of those companies you mention. The proof is in every piece of software we use.

        Google has $43 billion in cash and someone asked recently what should they do with it. I say, “overhaul software development”. You know Microsoft will never do it… Neither will Apple…

        Problem would be that Google’s own methodology sucks rocks, as the numerous bad decisions in Chrome prove. So they probably should just stay out of it and just fund people developing true “software engineering” with cash grants…

        1. Nick P

          “Otherwise, I’m doing badly, as usual”

          Having a near rock bottom month or two myself. A few possibilities for crawling out. Not saying as much as I used to on the Schneier blog b/c it’s getting kind of redundant. The same bad things and same preventative measures over and over. I do try to give info to newly interested people, esp those in the industry. We had some luck there. I’m either about to try to launch some major robust solutions or exit INFOSEC. Miller is right about needing a continuing passion for it. It’s kind of a thankless job, really. 😉

          ” So my point stands: the INDUSTRY is still out to lunch, regardless of those companies you mention. The proof is in every piece of software we use.”

          True. I doubt Google will do anything. However, I did congratulate them on their Native Client architecture that the Chrome sandbox is based on. They took many of the best lessons from INFOSEC research & put them together in a useful product. Then, they got it a large market & people using it rarely have problems that stem from malware. So, gotta give them credit where it is due. I agree they’re better off giving grants to real engineers, though.

          On that end, I encourage you to get access to ACM & IEEE, then look at the various high assurance and “verified” papers published in the past few years. There’s lots of good work going on. The “Coq” (rolls eyes) prover and dependent types, in particular, have made things much better. There are other efforts, like recovery-based architectures, that might work in practice. NICTA automatically producing drivers from VHDL is great too, as we all love hand-rolled drivers. INRIA, who made Ocaml & Coq, have been doing awesome work all around under Xavier Leroy. So have NICTA, TU Dresden, and CMU.

          I mean, in the past, it was something good every few years & now we’ve gotten to a point where it’s accelerating quite a bit. I’m seeing several high quality deliverables a year. A Google-level backing of these groups could really do good across the board.

          Hope you get better & at a rate that’s faster than the ITSEC field is moving. 😉

    2. Ole Bin Login

      Would be nice if you had your own blog! Google has a bit of you here and a bit of you there but no central location for all of your thoughts.

      Where is your blog?


  3. Tark

    This is horrible advice. There is a hell of a lot more to security than vulnerability research. In fact, vulnerability research comprises a tiny fraction of what information security is all about. What about running firewalls? Managing endpoint security? Conducting risk assessments?

    Miller is so typical of the Accuvant people I have met. Arrogant, self-absorbed and largely clueless about the day to day realities of running an information security team at a business.

    I am sure it is lots of fun to sit around all day tinkering with fuzzing tools and conducting interviews about iOS hacks. But that isn’t what the security community needs. We already have plenty of self-appointed gurus who spend all day telling everybody how stupid they are.

    Better advice would be to tell prospective security professionals to learn how to run the tools of security. And that starts with basic network and system tools like firewalls and endpoint security. Because if you cannot manage your antivirus deployment, you’re in no position to be lecturing people about the theories of hacking iOS.

    1. Dan Herrmann

      I’m with you, Tark.

      There’s SO much more in IT security than detecting bugs and/or vulnerability testing.

      We certainly need people with these skills – they provide a needed service to the industry. But we also need people that can decipher their reports – assess the risk and “sell” the results to management.

      We also need people that can speak with attorneys, work with external clients, conduct risk assessments, provide for physical data security, work with law enforcement on incidents, coordinate BCP/DR, etc, etc, etc.

      That’s what I love about IT security – as an ISO, I often work on a fair number of different activities on a daily basis.

      1. dane

        What did you expect, reading an article from Charlie Miller about his experiences in security?

        A recommendation to get a degree in audit and compliance?

        Sure, we need more people in other fields of security, but that’s not Charlie’s expertise, is it?

        Plus, it’s a lot easier to train a bunch of network engineers to learn about firewalls than it is to teach someone how to spot security vulnerabilities in an x86 disassembly of a program they’ve never seen before, so I think vulnerability research needs much more advocacy.

        1. ScottK

          I can see both sides of this, but I do have to agree with Tark’s point (though not so much the tone).
          I’m trying to break into InfoSec right now in a major company, and most of the Cyber Security guys are bug hunters. I don’t have the programming background to dig through the weeds and find a query box that doesn’t do input validation.

          Brian, will you have articles and interviews with people that aren’t digging through code? There seems to be a great deal of interest around that.

          1. Nick P

            “Brian, will you have articles and interviews with people that aren’t digging through code? There seems to be a great deal of interest around that.”

            That’s pretty much every other “break into security” interview he did before. It was the reason for my counter to all this criticism. You can find those interviews by clicking the “Break into security” link on the right. They have some good stuff in them.

    2. Nick P

      I agree with Dane on this one. The interview is with a guy who comes from a certain background and it mostly focuses on that INFOSEC niche.

      Even Krebs’ questioning leads in the direction of his niche, likely on purpose. He asks what languages the guy recommends, although Miller says learning web stuff is a good idea. Krebs asks about dead ends or how *he* might have done it differently. More importantly, Krebs asks about certain traits “in this space,” which he might have interpreted as general INFOSEC or his niche. Miller then talks about “this field,” partly meaning INFOSEC & mostly meaning his part of it, giving advice about what is good for people building a name in bughunting. Much of what he says in that light makes sense, too.

      I think you guys are knocking him too much. He’s made good points w.r.t. his part of things. That’s what HE’s focused on. It MIGHT be what Krebs was getting at. Acting like he’s an idiot for talking about his specialty certainly isn’t helping the kinds of people who want to read this interview.

  4. Richard Steven Hack

    To put another two cents in, here is how I think people learning infosec should proceed:

    1) Get a Usenet feed and download everything on security from alt.binaries.ebooks.technical. Unethical? Yes. Illegal? Yes. Do it anyway. You can’t afford to buy all the good books on infosec – but you need that information. So get it any way you can.

    2) Ditto on video courses. There are some SANS and CEH courses on Bittorrent (although not that many). Get them. Paying a couple grand for one of these week-long courses is a waste of money. Get the courses any way you can and then do the exercises on your systems using VMs.

    3) Download and view as many of the talks from the infosec conferences as you find interesting. There you will get the skinny from the guys in the trenches as to what is wrong with the industry as well as some excellent technical presentations.

    4) As suggested, set up VMs, download all the “exploitable software” tools such as Damn Vulnerable Linux and the numerous exploitable fake Web sites and have at it.

    5) Download every security tool you can find and run it. Make an inventory of all the tools you have in some organized way, with notes on what it does, so you can use it if you need it.

    6) Most importantly, THINK! Think about what “security” actually IS and whether it’s even an achievable concept. From this perspective, you’ll know more than most so-called “security professionals”.

    7) And THINK about what you actually want to accomplish in the field. For instance, only go for certifications if you think you’ll need to pass some HR checklist in applying for a position. It’s usually better to try to make connections in a company you want to work for – and know what companies you want to work for – then play the “resume game”, anyway. Bypass HR by making your own connections with people in the industry, possibly via LinkedIn, and certs will be much less meaningful.

    8) Finally, remember what Robert Ringer said in one of his books. Don’t work your way up the ladder. Leapfrog it and start operating on any level you want. Just be prepared to be knocked back down the ladder if you’re not really able to operate there.

    As an addendum to that last, I see a lot of people, while denouncing certifications as mostly useless, suggest that only people with ten years experience in the field should be hired. If that advice were ever really applied in any industry, the unemployment rate would be 99%…

    “Experience”, while important, is a variable which is hard to measure a priori. What counts is smarts and willingness to learn. People with limited time in field can still know a lot if they concentrate on learning from the experience of others – which is what you do when you read books and take courses, by definition.

    1. JCitizen

      Good post RSH! I have a lot of training certs, and diplomas in my resume for my specialty; but when it came to getting the job I wanted – a simple interview with the staff lead technician got me the job. He just asked me what some of the tools of the trade were to weed out ignorant responses, and BOOM! I had the job. If my health hadn’t failed me, I’d still probably be there.

    1. Nick P

      C’mon, now, there are excellent examples of risk mitigation. Reverse stacks stopping buffer overflows, microkernels making exploits user-mode, safer languages stopping entire classes of attack, and so on. This tactic isn’t risk mitigation: it’s more security theater. I’m glad you posted it, though, b/c this gem gave me my laugh of the day.

      “The anti-ROP mitigations work by wrapping certain important system functions with extra code to verify that they’re not being called by attack code. Based on a brief technical description, the attack ***circumvents this protection by calling the system function directly, thereby bypassing the protective wrapper.***”

      So… calling through the security layer was… optional? (facepalm) LMAO!

      Relevant Homer Simpson quote, “Doh!”

      1. Steven Alexander

        In order to be portable between different patch levels, most malware/shellcode look up the system calls they need to use. This mechanism would (supposedly–I haven’t looked at the code) prevent that. If you know the addresses/offsets you want you can bypass it although your code will likely fail on different patch levels. It’s not theater exactly, but it’s not necessarily strong either.

        1. Nick P

          It also sounds like it wasn’t worth $50,000 either. 😉

      2. Richard Steven Hack

        Yeah, I rather thought that being able to bypass it simply by calling directly kinda made the whole thing a little ridiculous…

        BUT it illustrates how complicated attempts to fix a problem usually lead to both failing to fix the original problem and adding in new problems.

        …a la one of the infosec talks described how one could bypass a lot of these “security appliances” (NGFW, NAC, IDS/IPS, etc.) and indeed compromise them so they could subsequently be used for attack purposes.

        Until a lot of so-called “mitigations” have actually been deployed and have withstood attacks from attackers who are not merely pen testers (or even “Red Teams”), but actual attackers with motivation and resources, I continue to state that they are unproven.

        My meme stands until most attackers actually get totally stopped (or at least invariably detected and unable to do what they intended to do once inside the system) despite their best efforts. I’m not holding my breath.

  5. Al Dorman

    Does the NSA creep have anything to say about his agency’s illegal spying on us?

Comments are closed.