June 25, 2012

At least once a month, sometimes more, readers write in to ask how they can break into the field of computer security. Some of the emails are from people in jobs that have nothing to do with security, but who are fascinated enough by the field to contemplate a career change. Others are already in an information technology position but are itching to segue into security. I always respond with my own set of stock answers, but each time I do this, I can’t help but feel my advice is incomplete, or at least not terribly well-rounded.

I decided to ask some of the brightest minds in the security industry today what advice they’d give. Almost everyone I asked said they, too, frequently get asked the very same question, but each had surprisingly different takes on the subject. Today is the first installment in a series of responses to this question. When the last of the advice columns has run, I’ll create an archive of them all that will be anchored somewhere prominent on the home page. That way, the next time someone asks how they can break into security, I’ll have more to offer than just my admittedly narrow perspectives on the matter.

Last month, I interviewed Thomas Ptacek, founder of Matasano Security, about how companies could beef up password security in the wake of a week full of news about password leaks at LinkedIn and other online businesses. Ptacek’s provocative advice generated such a huge amount of reader interest and further discussion that I thought it made sense to begin this series with his thoughts:

Ptacek: “Information security is one of the most interesting, challenging, and, if you do it carefully, rewarding fields in the technology industry. It’s one of the few technology jobs where the most fun roles are well compensated. If you grew up dreaming of developing games, the laws of supply and demand teach a harsh lesson early in your career: game development jobs are often tedious and usually pay badly. But if you watched “Sneakers” and ideated a life spent breaking or defending software, great news: infosec can be more fun in real life, and it’s fairly lucrative.

I’m a software developer. I try to look at security through the lens of computer science. To me, the most attractive thing about the field is the opportunity it provides to work with lots of different concepts at lots of different levels. Computer security might be the best way to work professionally with compiler theory, or to spend time understanding computer microarchitecture, or to apply advanced mathematics.

Other people get involved in security for different reasons, and those reasons are probably equally valid. Some people really like the “good guys / bad guys” narrative that comes with security. Some people see security as an opportunity to save the world. Other people are drawn to the competitive nature of the field: at the higher levels, it really is a cat-and-mouse game. It also comes closer to meritocracy than most of technology: you can either break in or you can’t; your defenses either work or they don’t.

But for me, it’s just one of the best development jobs you can get, and with that in mind, here’s my advice for people interested in pursuing a career in it.

First: I want you to learn how to program. Clearly, you can get through college and get a good-paying steady job without ever learning to love programming. But no one factor gives you as much control over your career, as much of an ability to write your own ticket, as the ability to solve problems using programming languages. A lot of very smart technology professionals have concluded that they just don’t enjoy writing software. Those people should reconsider. Try different languages, or different application domains, but find a way to make programming stick in your head.

Second: the best jobs in our field are in application security. You can get good steady work designing firewall deployments, setting up desktop agents, or responding to incidents. But all those roles are fundamentally reacting to what’s happening in appsec. The next generation of security products — along with the org charts of security teams at the savviest companies — are being designed now, by accident, by application security practitioners.

Appsec roles are roles that involve attacking software or devising fixes and countermeasures for those attacks. Two good ways to “come up” in appsec: become a penetration tester, or contribute to (or start) a secure software development practice in your company.

A good way to move into penetration testing: grab some industry standard tools and use an Amazon EC2 account to set up a “shooting range” to attack. Some of the best-known tools are available for free: the Nessus scanner, for instance, while not an application security tool, is free and can land you a network penetration testing role that you can use as a springboard to breaking applications.
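To make the “shooting range” idea concrete, here is a minimal sketch using the boto3 AWS SDK for Python. The AMI ID, key pair, and security group below are placeholders for resources you would create yourself; treat it as one illustrative way to stand up a disposable practice target, not a prescribed recipe.

# Minimal sketch: launch a small EC2 instance to use as a private practice target.
# Assumes AWS credentials are already configured (environment variables, config file,
# or an instance role) and that the AMI, key pair, and security group IDs below are
# replaced with your own: they are placeholders, not real resources.
import boto3

ec2 = boto3.resource("ec2", region_name="us-east-1")

instances = ec2.create_instances(
    ImageId="ami-00000000",           # placeholder: any Linux AMI you control
    InstanceType="t2.micro",          # small and cheap is plenty for a practice box
    KeyName="my-practice-keypair",    # placeholder key pair for SSH access
    SecurityGroupIds=["sg-00000000"], # placeholder group; restrict it to your own IP
    MinCount=1,
    MaxCount=1,
)

instance = instances[0]
instance.wait_until_running()
instance.reload()
print("Practice target is up at", instance.public_ip_address)

Terminate the instance when you are done so it neither runs up a bill nor sits on the internet half-forgotten.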

When people reach out to Matasano asking how to get a foot in the door attacking software, we have a few simple steps to offer:

0. Learn to love programming in at least one language. The C Programming Language has the most cachet in application security, but for this step-by-step list, Java or Python or Ruby will do fine.

1. Grab a copy of The Web Application Hacker’s Handbook.

2. Go to the “previous releases” archive at WordPress.org and grab very old versions of WordPress; install them on EC2.

3. Download OWASP WebScarab or Burp Suite Free Edition, both of which are free, and use them to find bugs in ancient WordPress.
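Picking up on steps 2 and 3, here is a minimal Python sketch that confirms which (deliberately ancient) WordPress version your practice instance is serving before you point WebScarab or Burp at it. The hostname is a placeholder for your own throwaway target, and you should only ever probe machines you own.

# Minimal sketch: confirm which (deliberately old) WordPress version your practice
# instance is running before you start poking at it through an intercepting proxy.
# "my-practice-target.example.com" is a placeholder for your own throwaway host.
import re
import urllib.request

TARGET = "http://my-practice-target.example.com"

def wordpress_version(base_url):
    # Old WordPress releases advertise their version in readme.html and in the
    # generator meta tag on the front page; try both.
    for path in ("/readme.html", "/"):
        try:
            with urllib.request.urlopen(base_url + path, timeout=10) as resp:
                body = resp.read().decode("utf-8", errors="replace")
        except OSError:
            continue
        match = (re.search(r"[Vv]ersion\s*([\d.]+)", body)
                 or re.search(r'content="WordPress\s*([\d.]+)"', body))
        if match:
            return match.group(1)
    return None

if __name__ == "__main__":
    print("Detected WordPress version:", wordpress_version(TARGET))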

If you’re already in an IT role, and want to come up on the defensive side of appsec, try to position yourself near custom software development. Most large firms build “line of business” applications. As a rule, building “line of business” applications isn’t particularly fun. But defending those apps can be; sometimes, the most boring applications turn out to be surprisingly sensitive.

And the good news for doing appsec in BigCo’s: most companies have very immature security programs. If you can get a role in QA, or in what the cool kids are calling “DevOps”, you can end up with a lot of influence in security. At Matasano, we’ve watched lots of ops people successfully transition to appsec and senior security management just by instituting basic security testing processes on their company’s software.

To me, the whole field boils down to studying, understanding, and manipulating technology. Like most software security practices, we’ve gotten our hands dirty with a fascinating cross-section of the whole technology industry. We’ve written attack tools that target the control registers of device chipsets, we’ve done custom RF work, we’ve written low-level debuggers for different CPU architectures, and we’ve seen and beaten up on products built in almost every programming language and every platform you can imagine. If crawling through the ventilation ducts of the world’s most important technology is something you think might make you tick like we do… well, appsec! Learn to code. Spend a couple days Starbucks money on an EC2 account and deploy some broken apps to break. Find opportunities to practice in your job.

And, uh, come talk to us? People like you are hard to find!”


58 thoughts on “How to Break Into Security, Ptacek Edition”

  1. Datz

    Some more steps suggested:

    1. Setup your home lab for playing around. Some resources on doing this are

    http://devilslab.wordpress.com/2012/03/02/setup-pentest-practice-lab-resources/

    http://www.irongeek.com/i.php?page=security/building-an-infosec-lab-on-the-cheap

    http://www.metasploit.com/help/test-lab.jsp

    http://www.securityaegis.com/network-pentest-lab/
    http://www.securityaegis.com/pentest-lab-web-application-edition/

    Another good reference on how to break into InfoSec is the result of a survey and research done by Robin Wood at
    http://www.digininja.org/projects/breaking_in_part_1.php

    &

    http://www.digininja.org/projects/breaking_in_part_2.php

    1. BrianKrebs Post author

      Hi Datz, yes, your comment got modded for approval. Comments chock-a-block full of links almost always get held. Sorry for the delay.

  2. Richard Steven Hack

    I like the idea of using Amazon EC2 as a “shooting range”. I really hadn’t thought of that! We all know you need some sort of network so you can practice your skills, but usually people have thought in terms of VMs running on your main machine or a bunch of old cheap machines (or your wife’s, if you’re looking to get rid of her!). But Amazon’s el-cheapo instances are probably a better way.

    1. Graham Sutherland

      I’m not sure I agree that EC2 is a better choice than VMs. If you have a machine that has the resources to run VMs (most modern desktops can) it’s a much better solution for many reasons:

      1) VMs are cheaper (essentially free!)

      2) EC2 is limited in terms of what OSes are directly supported. If you want to run something like DamnSmallLinux, FreeBSD, or even BackTrack, a VM is a better choice.

      3) With a VM you get full control of the network – you don’t have to worry about Amazon’s firewalls doing SPI on your traffic.

      4) EC2 leaves the vulnerable webapp open to the internet.

      5) When a VM is on your network, you never send out “nasty” traffic. This is useful if you’ve got a particularly oppressive / surveillance-heavy ISP.

      6) You can’t drop shells or other nefarious bits of code onto an EC2 instance without violating their T&Cs.

      7) The VM’s network connection can be isolated easily.

      1. Josh

        I’m not necessarily disagreeing, but I do think EC2 can be a better choice for certain types of tests.

        1) It’s true that EC2 isn’t free (unless you’re using it within your first year, if they’re still running that deal) but it isn’t exactly expensive either.

        2) I haven’t used EC2 in a while, but can’t you create your own images to use? Either way, the advice in the article was to use EC2 to quickly spin up VMs running outdated versions of software so you could attack it, presumably from a bootable distro on your home network or a VM.

        3) The focus of the article was on breaking apps, not on network penetration, so I don’t think SPI would be much of a problem. People need to be able to pen test their own software, so I’m assuming you can run things like Nessus without a problem.

        4) EC2 doesn’t leave the vulnerable web app open to the Internet. You can create a firewall rule to only allow your IP to connect to it (a minimal sketch of such a rule follows at the end of this comment). Also, you’ll want to turn off the VM when you’re done to save money.

        5) This may be true in other parts of the world, but I’ve done plenty of pen testing and I’ve never received so much as an email from an ISP in the U.S.

        6) I’m guessing this point is true, it’s been a while since I read the T&C.

        7) An EC2 VM can be isolated quickly as well.

        There are definitely times when VMs are better than EC2, but there are plenty of times where EC2 is going to be better than VMs as well. Using EC2 to quickly spin up VMs running vulnerable versions of WordPress or other commonly used Web apps so you can attack them is a great idea. Sure, a modern desktop with a few GB of RAM and a decent processor could do the same thing, but if you have a few bucks to spend then save yourself the time and let someone else spin it up for you.
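        To make point 4 concrete, here is a minimal sketch using the boto3 Python SDK that limits a practice instance’s security group to a single source address. The group ID and IP address below are placeholders, and this is just one way of doing what’s described above.

        import boto3

        # Placeholder values: substitute your own security group ID and public IP address.
        GROUP_ID = "sg-00000000"
        MY_IP = "203.0.113.5"  # an address from the documentation range, used as an example

        ec2 = boto3.client("ec2")
        ec2.authorize_security_group_ingress(
            GroupId=GROUP_ID,
            IpPermissions=[{
                "IpProtocol": "tcp",
                "FromPort": 80,
                "ToPort": 80,
                "IpRanges": [{"CidrIp": MY_IP + "/32"}],  # only this one address may connect
            }],
        )
        print("Port 80 on", GROUP_ID, "is now reachable only from", MY_IP)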

        1. Graham Sutherland

          1) Free is still better than paid, and you don’t have to wait for support if something is broken.

          2) Sure, if you want to upload a 700MB+ ISO to an EC2 instance.

          3) EC2’s T&Cs basically say “you must not run any form of security attack against EC2 instances”. Whilst I’m sure that the intention is to prevent attacks rather than pentests, I’m not so sure that a low-paid level 1 tech will be amused.

          4) Ok, that’s a good point, but it’s extra hassle compared to a VM that’s isolated from the internet by default.

          5) The point still stands. There are ISPs out there with pretty poor policies around this kind of thing.

          6) Ok.

          7) Nowhere near as quickly as a VM, and you certainly don’t have the luxury of being able to just pull the plug.

          As far as your conclusion goes, I’m not sure I agree with it at all. The amount you’re spending doesn’t justify the few seconds it takes to load up a VM and copy some PHP files across. It’s not dependent on an internet connection, it doesn’t require a 3rd party, and it’s completely free. EC2 just seems like extra hassle to me. The only time I’d come close to recommending it is if you’ve got an exceedingly old machine, but then my first recommendation would be to invest in a new machine.

      2. HurpSecandDerpSec

        EC2 will probably send cops after you if they detect some sort of consistent attacking on their networks; I wouldn’t use it for testing.

        VMs are nice for testing; however, many OpenBSD devs have discovered that a buffer overflow caused by terribad VirtualBox or VMware emulation code doesn’t translate into a buffer overflow on actual hardware.

  3. Datz

    (I don’t know if my last post made it past the moderation so trying here again)

    Some more resources on setting up a lab:

    http://devilslab.wordpress.com/2012/03/02/setup-pentest-practice-lab-resources/

    http://www.irongeek.com/i.php?page=security/building-an-infosec-lab-on-the-cheap

    http://www.metasploit.com/help/test-lab.jsp

    http://www.securityaegis.com/network-pentest-lab/
    http://www.securityaegis.com/pentest-lab-web-application-edition/

    Another resource on breaking into Info Security is the result of a survey conducted by Robin Wood at

    http://www.digininja.org/projects/breaking_in_part_1.php

    &

    http://www.digininja.org/projects/breaking_in_part_2.php

  4. evin

    Does Mr. Ptacek (or anyone reading this article) have any good references on what to use to learn programming? The more I get into infosec, the more I know I’m behind the 8 ball because of a lack of programming ability, even scripting. I’m trying to teach myself scripting on Linux, then move to PowerShell/VB on Windows, but would eventually like to get to a “real language”. Can anyone recommend steps or which books to get the basic foundation?

    1. Graham Sutherland

      There are three languages that I’d recommend to anyone thinking of getting into sys-sec:

      1) PHP
      Most web-apps are written in it, and it’s a great way to learn about security problems. Most tutorials actually teach you to write terrible security holes into your code, so it’s fun to realise how screwed you are and go fix the problems. As part of it, you should learn SQL (a short illustration of the classic mistake follows at the end of this comment).

      2) Python
      This is pretty much the de-facto security language. It’s cross-platform, a large number of security tools are written in it, and it’s a great language (though I find its syntax a little abrasive).

      3) C
      C is the language of systems development. It allows you to get into the real low-level stuff. You can use it to learn about the sort of vulnerabilities and exploits that allow attackers to gain root on systems, e.g. stack smashing, heap overflows, SEH exploits, ROP, etc.

      I’d certainly recommend looking into web languages, especially JavaScript. Web security is a huge industry, so understanding how web exploits work will be greatly beneficial.

      I also recommend learning x86 assembly (or ARM assembly if you’re that-way-inclined). It teaches you how the processor really works, and allows you to really understand how attacks work at the lowest level.

      Other languages that might be interesting, in no particular order: Java, C#, Ruby, Objective-C, Haskell.

      Security is full of steep learning curves, and you need to be extraordinarily passionate about approaching complex challenges. The most important bit of advice I can give you is to read about and try everything you can get your hands on. If your attitude to an interesting article about CSRF or ROP exploits is “that looks cool, I’m gonna try it out”, you’re already in the security mindset.
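      To illustrate the point in item 1 about tutorials teaching security holes, here is a minimal sketch of the classic mistake and its fix. It uses Python’s built-in sqlite3 module rather than PHP, purely so it runs with nothing but the standard library; the table and the attacker input are made up for the example.

      import sqlite3

      conn = sqlite3.connect(":memory:")
      conn.execute("CREATE TABLE users (name TEXT, is_admin INTEGER)")
      conn.execute("INSERT INTO users VALUES ('alice', 0)")

      attacker_input = "alice' OR '1'='1"

      # The pattern many old tutorials teach: building the query by string
      # formatting, so attacker-controlled text changes the query's meaning.
      unsafe = "SELECT * FROM users WHERE name = '%s'" % attacker_input
      print("injected query returns:", conn.execute(unsafe).fetchall())

      # The fix: let the driver bind the value as data rather than as SQL.
      safe = "SELECT * FROM users WHERE name = ?"
      print("parameterized query returns:", conn.execute(safe, (attacker_input,)).fetchall())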

    2. Steven Alexander

      Google for “Learn Python the Hard Way” and/or “Learn C the Hard Way”. They’re both available online for free, but you can purchase a PDF, hard copy, or sign up for a course if you’re so inclined.

  5. Dan

    I think Thomas Ptacek is right about learning a language. I’m very young in the security field and I would say I got lucky to get a job in IT Security. Only a few years out of college, and with an Information Assurance master’s degree almost under my belt, I’ve avoided programming like the plague. I have never enjoyed it, even after taking a couple Java classes in college. But I can see where a programming language like C or Java is useful: malware analysis, digital forensics, and, as he mentioned, appsec pen testing. Really, programming would just be another arrow in the quiver. But I personally think if you want to become a true White Hat you need to be able to create your own functioning programs. I could be wrong though! Maybe someday I’ll pick up a Java or C book and reteach myself the language.

    1. Graham Sutherland

      I was trying to think of a way to put this elegantly, but I think it’s better if I’m just blunt:

      When your job is to break computer programs, you sure as hell ought to know how to write them in the first place.

      If programming is not for you, neither is computer security. The analytical and logical mindset required to write good code is exactly the same mindset required to be a good security engineer / pentester.

      No pentester should enter a job not knowing why this is horribly bad practice:

      int size = 0;
      fscanf(file, "%d", &size);
      memcpy(input, output, size);

      1. Dan

        I guess it depends on which aspect of security you are talking about. If you are referring to pentesting, or other fields I pointed out in my previous comment, you are probably correct. However, Information Assurance deals a lot with process and procedure and securing digital information in a variety of ways. I secure information by making sure NIST recommendations are being followed, vulnerabilities are addressed, and that confidentiality, integrity and availability are being considered in designs. To say that somebody MUST know programming to be in a security related job is a bit exaggerated. Although I do have a basic grasp of Java, JavaScript, PHP, and HTML, I don’t consider myself a programmer; nor are they required for my job, and I would very much consider what I do a security profession. There is a lot more to security than pentesting and malware analysis.

        1. BrianKrebs Post author

          Yes, very many ways to approach this question. Which is why I’m making a series out of it, so that readers who are really interested in the answer can get a multiplicity of responses, from many different disciplines.

        2. Graham Sutherland

          I agree. That’s why I said “when your job is to break computer programs” – if your job is to break into computer programs, you need to be able to code. If that’s not your job, then you don’t need to be able to code. I’m certainly not saying that breaking code is the be-all and end-all of computer security.

          I personally split security into two groups: technical and legislative.

          Technical security staff are malware analysts, pentesters, researchers, etc. They discover the vulnerabilities, manage the hardware/software infrastructure, watch for trending security alerts and generally nerd-out in a pile of code and tech.

          Legislative security staff manage the policy, assurance, legal, compliance, etc. aspects. Their jobs are fundamentally different from technical guys in terms of focus. They manage security policy, ensure adherence to security and privacy standards, maintain compliance with the law, plan for breaches, etc.

          The two are fundamentally different, requiring completely different areas of background knowledge. If you want to work in technical security, you need to love digging into the technical stuff. If you want to work in legislative security, you need to love analysing systems at a high level.

          1. Dan

            Nicely said! I haven’t heard of it in those terms before but I agree. Right now I would say I’m legislative but trying to break into the technical. That is where I really want to be.

      2. anonymous

        of course it’s horribly bad practice – ‘file’ is undeclared!

  6. Christian

    Interesting, I’ll stay tuned for more 🙂
    As a sysop I already deal with a lot of security-related work,
    but currently my job has gotten a bit stale, and security is such a fascinating area; perhaps it’s time to move on.

  7. Whats in a name

    Programming is good to know I guess, but my experience so far is you have to know a heck of a lot more than that…

    The position I’m in is pretty entry level, and you still have to know most of the structure and mechanics of

    Linux,
    Windows 2000, XP, Vista, 7
    Windows Server 2000,2003,2008, etc
    Cisco commands
    HTML
    XML
    PERL/PHP/Python/Powerscript/windows scripting language
    DOS
    Hardware function/design/mechanics/issues
    Email types/routing/function/problems
    Network protocols, stack, types, problems

    What I’m finding is that to even be a decent security professional you have to be almost better at everything than the guys who do it all day, which is somewhat unrealistic in the long term. Few people will ever know programming better than the programmer while also knowing Active Directory better than the admins and knowing the latest issues with SQL, Java, and WordPress.

    The thing that I think makes the difference is tools, which definitely require someone to program but can give you a lot more flexibility in scanning and identifying problems; it also happens to be the way many of these miscreants find the problems. For many smaller companies it seems they don’t really have the money, time, or staff to manually train and keep up in all of these fields, but with the right set of basic toolkits they can at least meet standard best practices.

    1. Graham Sutherland

      The trick is that you don’t have to know all of that stuff. You need to know enough about the underlying systems and general software architecture such that you can very quickly pick up the rest as and when you need it.

      For example, I’ve never touched Active Directory in my life, but I’m completely confident that I could learn the basics and research the major security issues related to it in a day or two.

      If the AD server crashes because I sent it a malformed packet, I can attach a debugger and reproduce the crash to identify a vulnerability (a toy sketch of that replay-and-reproduce step follows at the end of this comment).

      The difference there is that, whilst my Active Directory knowledge is minimal, my general debugging and crash analysis skills are good enough to compensate.

      Moral of the story: focus on the generic, re-usable skills before you focus on the specific stuff, unless it’s explicitly required for your job.
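      As a toy illustration of that replay-and-reproduce step, here is a minimal Python sketch that re-sends a saved malformed request to a test service you control and reports whether the service dropped the connection. The host, port, and capture file are placeholders, and a real workflow would then attach a debugger as described above.

      import socket

      # Placeholders: a test service you own, and a file holding the bytes that
      # appeared to trigger the crash (saved earlier from a proxy or packet capture).
      HOST, PORT = "127.0.0.1", 8080
      CAPTURE = "malformed_request.bin"

      def replay(host, port, payload):
          # Send the captured bytes and see whether the service answers or dies.
          with socket.create_connection((host, port), timeout=5) as sock:
              sock.sendall(payload)
              try:
                  return sock.recv(4096)
              except (ConnectionResetError, socket.timeout):
                  return None

      if __name__ == "__main__":
          payload = open(CAPTURE, "rb").read()
          response = replay(HOST, PORT, payload)
          if response is None:
              print("No response; likely reproduced the crash. Attach a debugger and rerun.")
          else:
              print("Service survived; first bytes of reply:", response[:60])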

      1. Whats in a name

        I find the opposite, that half of the problems with my company or any company are sloppy practices. If you don’t know enough about how they are managing things you can’t catch all the corners they are cutting or years old garbage that is never getting cleaned out.

        Either way, at this point I’m feeling an information overload; there is just too much stuff to try to stay current on all the time and proactively monitor. Your average hacker or script kiddie often just specializes in one area or has tools that do the looking for them, which is why I think defenders’ effectiveness is never quite as good as the attackers’; there is just too much ground they have to constantly patrol.

        1. Graham Sutherland

          Defense will always be one step behind attack. The security business has never been about altering that dynamic – that’s impossible. It’s all about closing the gap.

          If your company has sloppy practices, that’s a legislative failure rather than a technical failure. See my reply to Dan above for a longer description of the difference between legislative and technical security. You need to ensure that strong security practices are adhered to, reward people who go above and beyond the requirements, and punish those who break them.

          If you’re suffering from overload, perhaps you got in the deep end too quickly. Ask your boss for an hour a day to spend researching these topics. If he’s (or she’s) not willing to do that, it’ll at least show that you’re taking the initiative. Regardless, the field of computer security changes so quickly that a requirement of the job is to spend time outside of work researching and keeping up with things. It’s one of the reasons that computer security requires you to be extraordinarily passionate about the subject. If I wasn’t, I wouldn’t be where I am today.

        2. Dan

          I think that is why having a diverse Security Department is key. But this topic can open up another can of worms and a whole other discussion. This could be why AppSec is the upcoming thing: we create programs to do everything for us, thereby creating more and more security holes in our networks. The very tools we use to protect our networks are the ones being used to attack our networks, and the ones with the biggest holes that need patching.

          1. Graham Sutherland

            Diversity is always good in any team. For example, my knowledge is heavily tied to Windows development, network protocols, web-app security, and low-level (assembly / hardware) stuff. I wouldn’t have the first clue about how to pentest a SAP installation, or how to set up a secure MSSQL server, but we have other guys that do that. I focus on what I enjoy and what I’m good at, and occasionally dabble in the stuff that I don’t know much about to get a basic overview of what it is/does.

            Not identifying your staff members’ strengths and weaknesses is unforgivable in any business environment. Not basing your hiring policy on gaps in your knowledge-base is almost as bad. A combination of those two failures is a serious precursor to bankruptcy.

          2. Whats in a name

            I like to think of it more like the way video game programming has changed. Back when I was in high school, if you wanted to come up with a video game you had to do everything from scratch, including switching video modes, driver support, palette, audio, before you could even draw a single pixel. Nowadays there are frameworks built up that give most of that same efficiency but in a consolidated format, so you don’t have to figure out how to do everything in assembly language and rewrite your program in 20 different flavors because of hardware incompatibilities.

            I’d like to see something similar with security, strong tools built on a solid low level framework that is built upon by the entire community to form a powerful and effective engine that increases what a single person or couple of people can do. I agree that companies should hire based on what people are good at, but many smaller companies don’t have the budget for more than a couple people and as a result the monitoring is patchy at best.

            1. Graham Sutherland

              What you described exists: Metasploit.

              However, the aim of security differs strongly from the aim of game development. Security has complex nuances that aren’t understood by most and that can be completely missed. The smallest mistake or oversight can render a system critically broken. Game development, by contrast, is relatively approachable and flexible – bugs can just be patched later.

              Building frameworks for security suffers from this problem. By the time you’ve learned how to use a framework, when and where to apply different attacks, and the side-effects caused by such attacks, you might as well have studied the attacks themselves and learned something much more widely applicable.

              Of course, tools such as BackTrack are useful in that they speed up pentesting significantly, but they shouldn’t replace solid understanding.

              1. Whats in a name

                Agreed, but that is kind of the issue I’m talking about, the level of complexity seems to be way too high for any one person to learn and maintain it all.

                Maybe I just need to focus on a particular area instead.

                1. evin

                  One thing you have to keep in mind is that attackers do not work in frameworks, nor do they work on a schedule. On the defensive side we have multiple projects going on that require different levels of focus and deadlines to meet. Attackers work at their leisure. Look at any of the latest data breaches or APTs, which lasted years before being discovered.

                  Security is only going to get better if security is built into apps and devices from the start and not added on later. And that starts with secure programming and companies taking the responsibility to produce secure products. If any major auto maker had issued as many TSBs and recalls on their products as MS or Oracle/Sun have had to issue Java patches for vulnerabilities in the last few months, there would be class action lawsuits filed out the wazoo. I see this analogy used a lot when it comes to improving software security.

  8. Alex

    Quite interesting that if you type the title of the “hackers handbook” into Google, you will land at a free link that comes before all the commercial links. Have you done this on purpose, Brian?))

  9. Dan

    Great article, and I can’t wait to read more. However, what I found more interesting were the multiple classy comments in this article! To the above posters, thank you for taking the high road when speaking with each other. It shows professionalism, and made me actually want to read all the way to the end of the page. Though I don’t know if I’m ready to take on this field, I’m always interested in what others think about security.

  10. Gary H.

    Thanks for gathering this info and archiving it, Brian. This will be a helpful resource to those of us who are trying to make career transitions into InfoSec.

  11. http

    I also get a similar question from time to time: “How do I become a hacker or an iPhone jailbreak developer?”
    I usually answer them with this link:
    http://www.jbfaq.com/?q=57

    1. _honkmaster

      > In short, forget that idea. But maybe you understand now that _there are only a few people that have these skills_.

      I lol’d at that. It’s not black magic. It’s something anyone with a few years of experience in low level programming should be able to do.

      1. http

        I agree that it’s not black magic. But currently there are only a handful of people with these skills (pod2g, comex, geohot, MuscleNerd, planetbeing, i0n1c, that’s about it, with four of them no longer active; sorry if I forgot someone). Not that there wouldn’t be some more able to do so, but many people think that because they started a command prompt once, ran a jailbreak on their device, and maybe created a VB application with some drag & drop, they are hackers now, without having any idea of all the security mechanisms in the iPhone (code signing, sandbox, ASLR, NX code, hardware bootchain, etc.). Having a few years of low-level programming experience is by far not enough to create an iPhone jailbreak nowadays.

  12. lolor

    tl;dr – how to become a snake oil salesman …

  13. Brian

    Another important point to relay to people looking to enter the field is related to the accurate statement “closer to meritocracy than most of technology”.

    With the benefits of meritocracy come the demands of meritocracy. An intelligent, motivated person might be able to work into the upper tiers of the ITSec profession, but actually staying there will require a true love of the game. ‘Cat-and-mouse’ is by definition an ever-evolving challenge. You will need to always challenge your own proficiency, understand your own limits of knowledge (i.e., when to get outside expertise), and engage in constant (and often expensive) training and exercise. To stay on top, I believe that love has to be in your formula.

    I had some time in the sun, but ultimately I could not maintain the desire to keep up with the frenetic pace. I decided to leverage my skills towards the more mundane business aspects, but I have the greatest respect for the folks who can stay in the major league year after year. Perhaps my best takeaways are that I can tell those who know their way from those who just say they do, and that I have a solid respect for knowing where my own limits of understanding are.

  14. Frore

    There are many different ways of getting into the information security field, as there are so many different aspects – many overlap. It touches so many different parts of an enterprise and there are many different definitions. As mentioned above, it can be how you write or break applications, so you can get in from that side of the house. Networking is a critical part of security – how do you protect your infrastructure? If you look at NIST 800-53, you get 18 different control families. ISO27K has 10. How do you provision or deprovision accounts? Do you sanitize your hard drives (or melt them, etc.) before disposal? Much of security is about doing the right thing given the risk of the environment. If you want to break into security, learn the best ways of doing things, get relevant certifications for what you want to do, and go out and interview. The more you can demonstrate that you have an understanding of risk and an understanding of security, the more valuable an asset you will become. Good luck!

  15. Matt

    “Second: the best jobs in our field are in application security…But all those roles are fundamentally reacting to what’s happening in appsec.”

    If anything, the comments to this article confirm the notion that information security is a ridiculously vast field that offers (requires?) many different types of specialization. Application security IMHO may even be trending down, at least when compared to forensics and incident response. Preventing vulnerabilities is so last year….now the focus is on responding to them:P
    But I digress: if you work for a company where security is not the core business, you may find that netsec/incident response offers more opportunities to interact with business leaders than appsec, which is always a good thing. Both jobs pay similarly, and honestly you can’t excel at either without some programming skills (but you can be excellent at either without needing to be a full-on programmer).

    1. Graham Sutherland

      I unreservedly agree here. We’re starting to realise that there’s almost zero commercially viable way to deal with APT, polymorphic code / packers and the general speed at which malware is released. It’s almost ceased to be a technical problem – the focus is now on economics.

      If a single malware analyst costs $60kpa + benefits in the US, a malware factory in India can hire at least 8 developers for the same price, and that’s paying them a lot by current industry pay-grade standards over there. Then, by the time the analyst produces a detection signature based on a flaw in some polymorphism engine the bad guys are using, they’ve already written two more engines and deployed them to the field. The cost of keeping up is too high.

      We’re starting to change tactics to fit in with this new ecosystem, but it’s slow progress. The focus is shifting towards preventing the obvious / basic attacks (known shellcode, easily identifiable malware, SQL injection, etc.) and recovering after attacks that aren’t viable to prevent. Strong security procedures and education are the focus at the human level. Properly segmented networks, secure backups, logging, forensics, etc. are the focus at the technical level.

      I have a feeling that this dynamic will shift quickly, as the bad guys adapt again (they’re like the damn Borg!), which will mean we have to react just as fast. The one benefit is that once a strategy is in place, it requires little alteration to keep up.

  16. Terry Ritter

    What Would Happen and Who Benefits?

    I did not invest my life in computers so a few could rip off everyone else. The computer security industry has an ethical “elephant in the room” that we need to recognize: What would happen if we actually solved the problem?

    When users have secure computers, the need for many computer security jobs goes away. The need for PC security add-ons disappears, taking that industry, most such businesses, and their employees right along with it. Many of the people commenting here could be out of work.

    Who among us would promote a solution knowing it would put ourselves out of a job, to say nothing of destroying our company and a growth industry? Even if we would, what company could possibly allow that? This is a serious ethical issue of our field, and our times.

    As long as we can argue that computer security is so complex that no fundamental answer exists, there is no ethical conflict. Alas, we do have working examples of secure PC systems in “Live DVD” Linux. Although these are not practical for many people and companies, nor is their security perfect, they do demonstrate that very secure PC systems can exist, and do so without costly complexity and even without cryptography. Appropriate new designs could produce secure PCs having hard drives and functionality much like the insecure PCs of today. That we do not see such designs raises the question: “Who benefits from our insecurity?”

    Of course, the security add-on industry and computer repair shops benefit. Design project managers would be intensely aware of the potential effects on their co-workers and company, and that alone may explain what we have. But it is hard to imagine that government security people do not see some advantage in the ability to infect virtually every PC on the planet. Could they bring themselves to destroy that “advantage”? Of course, government security people around the world all see the same advantage.

    1. Dan

      Like you pointed out, no system is 100% invulnerable to security issues. Until that happens, security will be around. I don’t believe it ever will happen. Security is a huge money making market and maybe that is why most security solutions are defensive based rather than offensive. If the problem goes away the jobs go away. We don’t go looking for trouble though, we defend against it. Even if we did fix all the hardware and software related security issues, you can’t download a patch to fill the gap in knowledge that everyday users have. Social Engineering and user error will ALWAYS be an issue. Companies will continue to adapt to make money the same way security adapts to new trends and technology.

      1. Whats in a name

        Simply from a social engineering standpoint, it will never be truly eliminated…

        Once you can trick someone into installing a program onto their machine with admin rights (or bypass) then you can own that machine… Doesn’t matter what OS, what other features you have, 99% of the time if your CEO is dumb enough to click yes you lost the battle right there.

        But yes, I do think a lot of the back-end money would be better spent making sure the products being developed were more secure out of the gate. I actually had a class on that in school, and what it boils down to is that the company designing the bad product rarely has to pay for the damages, and also faces pushback from the user base if it makes the product too restrictive. Until the developer writing the code has some skin in the game, they are going to keep writing the fastest and most profitable code they can.

        1. Graham Sutherland

          I was about to post your exact response.

          I’m currently in the process of working on a social engineering project that educates users about how it’s done and what they shouldn’t do.

          Essentially, I’m dumping some “encrypted images” with various interesting names on USB sticks, along with a decryption program. Most people are curious, so they run the program to look at the pictures. What the program actually does is open their browser to a page that explains the dangers of what they just did, and how they can avoid becoming targets of social engineering. They also get to keep the flash drive! 🙂
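          A minimal sketch of the benign “decoy” side of such a program might look like the following; the awareness-page URL is a placeholder, and the real tool presumably does a good deal more (the fake decryption prompt, tracking who ran it, and so on).

          import webbrowser

          # Placeholder URL for an internal awareness page explaining what just happened
          # and why running unknown software from a found USB stick is a bad idea.
          AWARENESS_PAGE = "https://intranet.example.com/usb-awareness"

          def main():
              # Instead of "decrypting images", the decoy simply opens the training page.
              print("Decrypting images...")    # what the curious user expects to see
              webbrowser.open(AWARENESS_PAGE)  # what actually happens

          if __name__ == "__main__":
              main()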

        2. Terry Ritter

          @Whats in a name: “Once you can trick someone into installing a program onto their machine with admin rights (or bypass) then you can own that machine… Doesn’t matter what OS, what other features you have, 99% of the time if your CEO is dumb enough to click yes you lost the battle right there.”

          If we want computer security, we cannot wait for CEOs to get smarter. Nor can we depend upon user education to prevent inevitable human errors. Instead, we need systems which minimize the consequences of error. Current PCs are peculiarly susceptible to infection, which is exactly why we need new designs for more secure equipment.

          Any system which re-boots a clean OS immediately before banking is “99%” unlikely to be “owned” during that banking session.

          There is, of course, no *perfect* security. But a DVD-load OS is inherently and vastly *more* secure than the hard-drive-boot alternatives now in use. DVD-load systems, without a hard-drive (or flash drive), are easily cleaned with a simple re-boot.

          In contrast, there is no simple way to un-own a hard-drive-boot system. There is not even a way to guarantee *detecting* when a conventional system is owned essentially forever.

          Is the need to reboot a security hole? Sure; somebody has to manually command a reboot. But manual reboots could be unnecessary in new designs which automatically force a reboot before banking online. With new custom hardware (and corresponding OS changes), such a design could even support hard drives and operate much like a conventional PC. When lesser security guarantees are acceptable, automatically booting or loading a new virtual machine to do banking could be useful.

          The hope of very secure online banking equipment is not a dream. Non-ideal but working DVD-load systems exist now. Which makes it only reasonable to ask why our equipment is still so insecure.

          The business of selling anti-virus and other security add-ons depends on failure. If there were no failure, no “owned” equipment, and no consequences, there would be no continuing business. How can we say that security companies want to solve the malware problem, when their very business depends on security failure?

          Ultimately, we have to decide which side we are on.

          1. Roxberry

            Nothing is ever that simple. BYOD changes the game. Can you lock down an iPad? The idealist says yes, but of course these devices are vulnerable. Maybe Win 8 will be better; probably not. No technical solution can keep up with the user vulnerability.

            I would add, as a baseline: get a certification, which is cheaper than a degree and more up to date. CISSP and CEH require CPEs per year, attempting to keep practitioners relevant, unlike a 4-yr degree. While some think they prove nothing other than motivation and the ability to retain some information, they’re better than a criminal record. 120 hrs/3 years means at least you are paying attention.

            Get involved with OWASP and ISSA. Do community outreach, grade schools, high schools, retirement communities.

            Get the HACME apps for each language, hack away, and learn SEVERAL languages, at least architecturally and syntactically. Code in one or two for your own programs, but I don’t know of any purely homogeneous application.

            Get the OWASP testing guide, love it learn it, feel free to edit or add to it. OWASP needs volunteers to stay up to date.

          2. meh

            Even with a DVD-boot live system you can still have people tricked into giving out info, emailing critical files, or falling victim to memory-based attacks. I agree you can’t wait for CEOs to get smarter, but you can’t realistically lock them out of every single sensitive server, spreadsheet, or webpage either.

        3. HurpSecandDerpSec

          On Open/FreeBSD you can avoid this by easily locking down user /home areas and preventing root escalation, so the CEO can’t screw himself over. However, social engineers get around this by simply impersonating a contractor and needing the root password at a time when nobody else is around; they just convince whoever is on the phone during the night shift to give it to them ‘temporarily, change it afterwards’, then they install whatever they want and leave. 🙂

          Some criminals here got jobs as night cleaners and just installed hardware keyloggers when nobody was around. All security is meaningless when they have every password.

  17. Chris

    “If crawling through the ventilation ducts of the world’s most important technology is something you think might make you tick like we do… well, appsec! Learn to code. ”

    ??? That makes no sense. I don’t know a single “appsec” person doing that.

    1. Graham Sutherland

      I think he was referring to application security in the exploit-finding sense.

      I think “crawling through the ventilation ducts” very accurately sums up the feeling you get when you have WinDbg + Immunity + IDA + a dozen other windows open, digging through stack traces and virtual tomes of assembly.

      1. Roxberry

        Or proxying HTTP calls and replaying response/request interactions. Web service hacking very much feels like crawling through ducts (especially on unpublished APIs).
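        At its simplest, that replay step can be done with a few lines of Python: re-send a captured request with one parameter tampered and compare the answers. The URL and parameters below are placeholders, and this should only ever be pointed at an application you are authorized to test.

        import urllib.parse
        import urllib.request
        from urllib.error import HTTPError

        # Placeholders: an endpoint on an app you are authorized to test, plus the
        # parameters observed in a request captured earlier (e.g. in an intercepting proxy).
        URL = "http://my-test-app.example.com/api/orders"
        captured = {"order_id": "1001", "quantity": "1"}

        def replay(params):
            data = urllib.parse.urlencode(params).encode()
            req = urllib.request.Request(URL, data=data)  # supplying data makes this a POST
            try:
                with urllib.request.urlopen(req, timeout=10) as resp:
                    return resp.status, resp.read()[:200]
            except HTTPError as err:
                # urllib raises on 4xx/5xx; those responses are often the interesting ones.
                return err.code, err.read()[:200]

        # Replay the request as captured, then with a tampered value, to see whether
        # the server actually validates the parameter.
        print("original:", replay(captured))
        print("tampered:", replay(dict(captured, quantity="-5")))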

  18. Jim Lippard

    Other recommended reading:

    Mark Dowd, John McDonald, and Justin Schuh, The Art of Software Security Assessment: Identifying and Preventing Software Vulnerabilities
    Michal Zalewski, The Tangled Web: A Guide to Securing Modern Web Applications

    Bigger picture:

    Ross Anderson, Security Engineering: A Guide to Building Dependable Distributed Systems (2nd edition) — excellent book, deep on technical detail AND on economic and social aspects of security, with many handy references
    Bruce Schneier, Secrets and Lies: Digital Security in a Networked World
    Bruce Schneier, Liars and Outliers: Enabling the Trust That Society Needs to Thrive

    In addition to programming, I recommend learning symbolic logic, critical thinking, social psychology, and epistemology. (I got into security from a background in philosophy, cognitive science, and computer science.)

  19. HurpSecandDerpSec

    A lot of pentesting is deemed ‘illegal’ by companies, primarily side-channel email attacks, which almost always work. They claim this is ‘cheating’ or dangerous, as you’re getting a shell through the browser rather than actually being outside the network punching your way in with some sort of pen testing attack.

    Even government servers end up side-channel attacked now and then, and they sheepishly admit to the press how some low-level employee clicked on a spear phishing link and was browser-hijacked, and now the attacker is running all over the network impersonating a trusted local user. And companies and governments seem to love hooking up absolutely everything to a network.

    Reminds me of when the Cdn gov’t was side-channeled by presumably Chinese state hackers who didn’t have access to the main database of secret info, so they just lay waiting in some unsecured printer for the documents to be printed off and caught them as they went past. Crucial, top secret documents that were encrypted, but it didn’t matter because you could just sniff the printer traffic and get everything you wanted.
