Last month, I published the first in a series of advice columns for people who are interested in learning more about security as a craft or profession. In this second installment, I asked noted cryptographer, author and security rock star Bruce Schneier for his thoughts.
Schneier: I regularly receive e-mail from people who want advice on how to learn more about computer security, either as a course of study in college or as an IT person considering it as a career choice.
First, know that there are many subspecialties in computer security. You can be an expert in keeping systems from being hacked, or in creating unhackable software. You can be an expert in finding security problems in software, or in networks. You can be an expert in viruses, or policies, or cryptography. There are many, many opportunities for many different skill sets. You don’t have to be a coder to be a security expert.
In general, though, I have three pieces of advice to anyone who wants to learn computer security:
- Study: Studying can take many forms. It can be classwork, either at universities or at training conferences like SANS and Offensive Security. (These are good self-starter resources.) It can be reading; there are a lot of excellent books out there — and blogs — that teach different aspects of computer security. Don’t limit yourself to computer science, either. You can learn a lot by studying other areas of security, and soft sciences like economics, psychology, and sociology.
- Do: Computer security is fundamentally a practitioner’s art, and that requires practice. This means using what you’ve learned to configure security systems, design new security systems, and — yes — break existing security systems. This is why many courses have strong hands-on components; you won’t learn much without it.
- Show: It doesn’t matter what you know or what you can do if you can’t demonstrate it to someone who might want to hire you. This doesn’t just mean sounding good in an interview. It means sounding good on mailing lists and in blog comments. You can show your expertise by making podcasts and writing your own blog. You can teach seminars at your local user group meetings. You can write papers for conferences, or books.
I am a fan of security certifications, which can often demonstrate all of these things to a potential employer quickly and easily.
I’ve really said nothing here that isn’t also true for a gazillion other areas of study, but security also requires a particular mindset — one I consider essential for success in this field. I’m not sure it can be taught, but it certainly can be encouraged. “This kind of thinking is not natural for most people. It’s not natural for engineers. Good engineering involves thinking about how things can be made to work; the security mindset involves thinking about how things can be made to fail. It involves thinking like an attacker, an adversary or a criminal. You don’t have to exploit the vulnerabilities you find, but if you don’t see the world that way, you’ll never notice most security problems.” This is especially true if you want to design security systems and not just implement them. Remember Schneier’s Law: “Any person can invent a security system so clever that she or he can’t think of how to break it.” The only way your designs are going to be trusted is if you’ve made a name for yourself breaking other people’s designs.
One final word about cryptography. Modern cryptography is particularly hard to learn. In addition to everything above, it requires graduate-level knowledge in mathematics. And, as in computer security in general, your prowess is demonstrated by what you can break. The field has progressed a lot since I wrote this guide and self-study cryptanalysis course a dozen years ago, but they’re not bad places to start.
Have you seen:
How To Break Into Security: Ptacek Edition…At least once a month, sometimes more, readers write in to ask how they can break into the field of computer security. Some of the emails are from people in jobs that have nothing to do with security, but who are fascinated enough by the field to contemplate a career change. Others are already in an information technology position but are itching to segue into security. I always respond with my own set of stock answers, but each time I do this, I can’t help but feel my advice is incomplete, or at least not terribly well-rounded.
“The only way your designs are going to be trusted is if you’ve made a name for yourself breaking other people’s designs.” Sounds like something Frank Abagnale Jr. would say.
I’ve been a Schneier fan for a while, but lately I’m not a follower (it’s all I can do to keep up with KoS). Lots of interesting links & books in this post. Any in particular you’d recommend (or recommend against) for the casual reader/non-C coder?
Interesting that he says, “You don’t have to be a coder to be a security expert.” I mostly hear the opposite.
Read the entire paragraph. Whether coding is important depends on the way you intend to be involved with security. In some cases it can be necessary to work at the machine-language level, let alone the C level, to craft an exploit.
If you want to do hands-on technical security, you should learn to code even if it’s not a day-to-day job requirement. If you want to do management/policy work, it’s not needed although it may give you some perspective on the issues.
I don’t have strong feelings (or qualified opinions) on the matter of whether or not coding skills are necessary, but I took Bruce’s comments to be more directed toward those who might otherwise see not having programming skills as a disqualification from considering a career change or embarking in this direction.
Another area that doesn’t necessarily require coding skills is security education. Get the word out – teach someone 🙂
Policy work and network security are different than Application Security. If you’re going into AppSec it makes sense to understand how apps are assembled. Frankly, I wouldn’t trust anyone who called themselves an AppSec Specialist who hadn’t spent at least some time in the code trenches.
That said, I believe the personality type of a security specialist is probably more important than any resume fodder. Some people are just born to poke, prod, see what happens, poke some more….
One thing I’m surprised to find missing is, “show honesty”. Security people are in possession of very dangerous information, and they need to show in all areas of their lives, at all times (from a young age), good judgement. I would never feel comfortable working with a “reformed” black hat hacker. Call me conservative or old fashioned or whatever, but even if you say to me “I’m reformed”, I don’t believe it. If you didn’t have the fundamental honesty to stay out of trouble to begin with, why should I believe you now? Go do some other job that I don’t need to trust you so much to do. You’ve burned your credentials as a security guy for me, forever.
Define a “young age”. Heck, define “black hat”. Do you include everyone who ever hacked a computer system without permission or just people who have a criminal background? Do you not hire the guy who admits that he hacked a BBS once back in 1985? There’s a huge range of behavior and motivations here and there’s a big difference between mischievous behavior in childhood/teen years and for-profit criminal enterprise as an adult.
If you want only people who showed good (and never bad) judgment from an early age, you can automatically disqualify almost everybody. If you just want to disqualify people who have criminal records and/or have engaged in malicious activity as adults (or a little younger depending on the seriousness), then I think what you’re saying is a lot more reasonable.
“If you didn’t have the fundamental honesty to stay out of trouble to begin with, why should I believe you now?”
Because people can change. They do. Don’t you ‘believe’ in second chances? What if no one believed you?
Fool me once, shame on you, fool me twice, shame on me.
Can’t say I agree here. I think if you disqualified from the security profession everyone who was at one time or another on the wrong side of the law or who criminally trespassed on someone’s network, you’d have to fire about 80-90 percent of the industry.
The whole black/white/grey hat thing gets a bit old after a while. I know plenty of people who are very gainfully employed in the security industry and do fantastic work and are an inspiration to others who frankly started off on the wrong foot. They initially found it more fun to break into places and take things that didn’t belong to them. Eventually, however, these folks had a personal/psychological/professional shift, where they decided it made more sense (and perhaps more money) to be on the defensive or at least creatively destructive side.
Agreed. I’d also add that law abiding and ethical/trustworthy aren’t the same things. The law is an approximation of our society’s morality with many wrong turns, vestigial appendages, and senselessness. Young intellectuals are prone to rebel against blind obedience to authority, especially if they feel no harm is caused.
A common trend from our hacker groups in high school was getting in, proving it for ego purposes, causing no lasting damage, and then sending them the vulnerability or other security advice. The hackers mainly targeted the damaging attacks at each other for fun and games. The law-abiding people working on defense at that time were comparatively unskilled and careless. They often failed to fix problems even if the fix was free. I remember accomplished British hacker Coldfire pointing out in a documentary that many banks and such were using the “root” username with default passwords like “control.”
The point is that protecting the sheep is easier if the defender can think like wolves (i.e. security mindset). Best way to learn this is acting like one over time. Now, we don’t have to do this illegally or in a damaging fashion. It’s just that some people did at one point. However, it’s hard for me to see an ITSEC guy doing effective risk assessments of new scenarios using only checklists from old ones. People with a little dirt on their hands are usually better at the “what-if” analysis.
Many people also seem unable to wrap their mind around it because the mindset itself is abhorrent to them & anyone commanding it seems criminal. I tested this against many random people in the general public & it was consistent enough that I keep my mouth shut on risks unless someone asks me about them. Otherwise, although I’d cause no harm, the average person is prone to say “watch out for this guy” and trust me less.
Also, I personally have a hard time trusting people who appear totally blameless or preach 100% moral behavior. In my experience, those types of people have caused the most damage to organizations as insiders & society in positions of authority. The reformed hackers rarely act on their dangerous potential in comparison.
I like your style
Would you work with a “recovered” [7+ years, using a Buddhist recovery concept here] alcoholic or substance abuser [a] with no criminal record, [b] with no felony pleas or convictions?
Disagree. Some skills you only get by doing…and the only way to do is to *do*. Also, there’s a reason why the best cops are the ones who fractured a few laws when they were young. Knowing how miscreants think is a huge advantage over someone who “just can’t imagine” doing something less than honorable.
I completely understand your way of thinking here, but the fact of the matter is that those who have successfully broken into systems or networks (irrespective of which hat color they were wearing at the time) bring a wealth of knowledge to any security organization. I have coordinated and analyzed Pen Tests with organizations that are considered some of the best at what they do, and at the end of the day they come up with attack vectors worthy of their reputation. However, I have seen these same consultants blown away by both the number and quality of exploits found by the in-house AppSec resource, who found things because of an ability to think like an attacker…an ability that was cultivated by having experience as a “blackhat”. So although I agree that the principle of honesty is profoundly important, the ability to keep an open mind is vital to any security program’s success.
There’s no such thing as “trust” in the security field (or really in any other field.)
How many CIA guys have gone “rogue” over the years?
How many cops have gone “rogue” over the years?
Why are there more lawyers in jail per capita than any other profession?
Most “security people” CAN NOT think like hackers. There are exceptions. Read Richard Marcinko’s “Red Cell” book on how his SEAL team learned how to penetrate military security ridiculously easy: because they thought like, and worked at acting like, real terrorists. But that was because most of his guys were “rogue military” in the first place. To get anywhere in Spec Ops, you HAVE to be the sort of person who doesn’t fit in with rules and regulations! And Marcinko demonstrated that his entire career in the Navy.
Robert Steele has said for years that America’s hackers are an un-exploited resource. And there are more IT security firms who have former “black hats” on staff than will admit to doing so.
It may make one feel morally superior to not offer an ex-con a legitimate job because of “trust issues” – but all that does is guarantee the ex-con has to go back into crime to make a living. By definition, this is not a solution…
Now would I suggest hiring a former KGB spy to work in classified offices in the Pentagon? Probably not. But that didn’t stop Rumsfeld allowing Israeli military personnel to wander around the Pentagon in the run up to the Iraq war (look it up.) Or allow the FBI to outsource the CALEA software to Israeli companies who then got caught selling the data to organized crime in Los Angeles….
So who do you trust? It’s better to do what Reagan said: Trust, but verify…
Ah, my ever-present archnemesis RSH. I remember both of us wholeheartedly agreeing on this point and the specific guy driving it home. (Two experienced often-rivals agreeing: pay attention, readers, as it might be important! 😉)
Marcinko, aka Rogue Warrior and Sharkman of the Delta, devastated opponents foreign and domestic with unconventional thinking. In Vietnam, he used VC guerilla tactics & equipment in their own territory rather than waiting to die in static ambushes. The result: he terrorized them so much that they put a bounty on him. In the USA, he was tasked to create a terrorist-like group to test Navy security. He proved it was an oxymoron by using the tactics of the bad guys, rather than WW2 armies, which the Navy hadn’t considered (!). Then, they revealed another organizational issue by trying to minimize him rather than fix problems (!!!).
I’d recommend reading Rogue Warrior before Red Cell. YouTube also has video footage of the Red Cell operations they did (watch them!), along with good interviews & analysis. The real takeaway is what Bruce calls the “security mindset” and the unconventional thinking of Marcinko. You must be able to think like the bad guys. People with a bit of bad in them are inherently better at that, & pen testing jobs make a terrific outlet for their darker impulses. (I speak from experience, haha.)
Now, for some dissent from RSH’s all-inclusive claim, corroborating one of my previous comments: many security goals in ITSEC positions can be accomplished with little effort by following best practices & managing key risk areas. However, this only stops script kiddies, black hats casting wide nets, etc. (They are the 80-99%, though.) Stopping sophisticated, targeted and well-funded attackers requires pervasive risk mitigation and management throughout an organization’s operations. Here, the security mindset is one of the best tools, as it can identify the most likely avenues of attack/failure and even save money otherwise wasted on useless security countermeasures ($100k+ buzzword here).
I’m generally a fan of Schneier and his relatively pragmatic views on security, but I do take issue with two of the assertions he makes here, namely that “You don’t have to be a coder” and “a fan of security certifications.”
I think a major failing in information security is the belief that you don’t need to understand code or programming. To be ignorant of the fundamentals of computing means ignorance of how failures can happen. It also makes analysis harder if you’re not able to write code to automate parts of it. Anyone who thinks they can be an expert without understanding coding is seriously handicapping their ability to execute.
I have held a widely regarded security certification before and let it lapse. I saw plenty of people achieve the certification without actually understanding the body of knowledge it covered, and the continuing education requirement really just rewarded attendance and bookkeeping, rather than actual learning and knowledge. Most of the people I know who are actually skilled regarded certifications as money-printing rackets for the certifying bodies.
I partially agree with your statement but the same can be said for some “security experts” who have no certs as well. I have known people who claim they’re “security experts” because they have “lots of experience” but really don’t know the fundamentals.
Bottom line is, you must really enjoy the business whether you decide to pursue a cert or not. I have two certs and both were valuable in learning all about security.
I also want to echo Jeff’s comments, show honesty. Trust is critical in security and sometimes being honest means admitting you don’t know something.
You can also show honesty by admitting you did some wrong things.
There’s a difference between going through a cert COURSE and waving the cert itself around.
I’ve been going through a ton of security conference videos this past week. I can’t count how many times I’ve seen the presenters admit “I have no certifications”. Or how many times I’ve seen presenters WITH CISSP certs proclaim “certs are worthless.”
That said, if you want a JOB, you have to go through HR – and HR wants all the buzzwords checked. So there the certs are worth something.
But if you work for yourself, they’re worthless. The INFO is not worthless, but the cert itself is worthless except as a PR flag.
I’m reminded of some guys during the dot-com boom who dressed funny but ran a major Web development firm. A client asked one of them how he got taken seriously dressing the way he did. The guy said, “I open my mouth.”
If you can “open your mouth” and demonstrate knowledge, you don’t need a cert. Especially since ninety percent of the management you talk to haven’t a clue what you’re talking about anyway….
The same thing about code was said by another poster. It’s incorrect, though. Bruce pointed out that INFOSEC has many different specialist and generalist areas. Many require no knowledge of application coding. I contend that you rarely need that even when dealing with application security, unless you’re a developer or perhaps product manager. Application security for most people in business usually comes down to installing/configuring correctly, privilege minimization, applying patches promptly, & proper procedures. That stops & minimizes most threats to companies entering via applications. Where do you see coding requirements?
There are many other jobs in the field requiring no coding experience. Most network security jobs don’t require coding experience (shell scripts make a coder? ;). Jobs in database & transaction systems that focus on integrity/availability require little coding experience (unless you count SQL). Many companies can be penetrated using off-the-shelf black hat tools, social engineering or tailgating. That’s little to no coding. Finally, even software QA might require little knowledge of code if the position’s focus is on acceptance tests & tools/people are in place to keep the test writer away from internal details.
Note: I’ve given all of you pro-coding people an out by not forcing you to define it. Do you mean assembler coding for binary analysis of critical proprietary software? C/C++ skills? .NET or Java? Web programming? Shell scripting? SQL or 4GLs? These are all quite different skills with different areas of applicability. Except for admin scripting or SQL, which barely count as coding skills, most of them still aren’t necessary for the positions in the above paragraph. And that wasn’t an inclusive list of positions where coders aren’t essential.
So, Schneier is correct in saying not all security jobs benefit from coding skills/knowledge. I personally proved this by defending boxes from skilled hackers & black hat coders for a long time in the past. All before knowing how to code. (Well, I did a bit of Visual Basic and Perl at the time. “Real” coders always laughed when I said I was a programmer.)
I’d say that any aspiring security person should pick a specialty to start with, preferably one that pays, and learn the skills CRITICAL to THAT specialty. Build on top of and beside that. I’ve spent years trying to develop a guru-like knowledge of security in general. My main niche is medium to high assurance system design & implementation. I can personally tell you specialist skills are WAY easier to develop and pay more. Additionally, 80-99% of the job is often accomplished with 1-20% of your knowledge/skills. So, focus on the critical stuff.
“You don’t have to be a coder to be a security expert.”
True. However, if you intend on being a more secure developer, then I strongly recommend that you learn assembly language. Learning assembler forces you to learn how the hardware operates, and that will give you the best lead in knowing how to properly code, whether high-level or low.
Or if you want to be a reverse engineer of any kind, for developing exploits, decoding malware, or simply understanding IDS alerts/signatures. Passing knowledge of assembly is highly desirable for more technical security jobs.
If we are talking computer security, I think some knowledge of coding is essential, if for no other reason than to effectively communicate ideas/findings etc.
I happen to have a manager who I don’t think has written a line of code in his whole life, yet he is telling people what to do. This breaks down swiftly because 1) he doesn’t understand the problems (since he doesn’t know squat about coding), so he doesn’t know what to do, and 2) he sets unreasonable deadlines, since he has no clue about the reality of writing software.
Here’s a massive starting point:
HUGE Security Resource – version 5000
I look forward to the next part of this series. It’s extremely useful, and I’m sure others feel the same.
So there is clearly a distinction drawn between “security engineers” and cryptographers (ideally with a Ph.D., perhaps), where cryptography is largely a “way of thinking.”
The “this guide” and “self-study cryptanalysis course” links mentioned by Bruce were quite instructive, but for those of us with NSA backgrounds in Operations, there is tremendous merit to the old-fashioned “one-time code,” created essentially by a set of non-repeating letters, characters and numbers. To break that, one needs to steal a copy of the code book being used, and knowing where one is in the many-page codebook is EXTREMELY helpful.
Further, Operations needs control over the analysts; otherwise one gets disasters like Khost, Afghanistan.
“To be or not to be [a coder], that is the question [of this thread] ” 🙂 .
But first – thanks Brian for opening up this new line of interviews with ‘celebrity’ people of IT, not easily reachable to common guys/gals like us.
The severe problem of under-staffing in the IT security field, because there are no qualified folks to fill it, is here to stay, and it may well be that your articles will spark interest and bring more neophytes to the ranks.
As for being a coder or not: the bad news is that you can’t be above average in IT security if you don’t code. Modern IT (and especially security) is all about information/data, and you are expected to process a lot of data, which is impossible without some coding. Be it analyzing a forensic image many GBs in size, a tcpdump network capture of 20-30 GB, or a month’s worth of firewall/IPS logs — all of this can be done quickly and effectively with custom code.
And while you can double-click through your career as Windows sysadmin, in Security you cannot.
All of the above, of course, doesn’t apply to AppSec, where all the work is about coding.
The good news is that it doesn’t matter at all which programming language you use, as long as it brings the needed results. I do a lot of log processing in awk, just because I know it and it is swift at text processing. If I need some non-text processing (binary files, network sockets), I go with Perl. But it could be any language, of course — if you know how to program in BASIC (using GOTO), just great, as long as you get the results you desire.
So stop procrastinating with useless “to-be-or-not-to-be” questions and start learning some coding.
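To make the log-processing point above concrete, here is a minimal sketch — in Python rather than awk, purely for illustration, and with an invented, simplified firewall-log line format — that pulls out the top source IPs from a batch of deny/allow lines:

```python
import re
from collections import Counter

def top_talkers(log_lines, n=5):
    """Count source IPs in simplified firewall-log lines shaped like
    'DENY TCP 203.0.113.9:4432 -> 10.0.0.5:22' (format is made up for the demo)."""
    ip_re = re.compile(r'(\d{1,3}(?:\.\d{1,3}){3}):\d+ ->')
    counts = Counter()
    for line in log_lines:
        m = ip_re.search(line)
        if m:
            counts[m.group(1)] += 1
    return counts.most_common(n)

lines = [
    "DENY TCP 203.0.113.9:4432 -> 10.0.0.5:22",
    "DENY TCP 203.0.113.9:4433 -> 10.0.0.5:22",
    "ALLOW TCP 198.51.100.7:80 -> 10.0.0.5:443",
]
print(top_talkers(lines))
```

The same few lines translate directly to awk or Perl one-liners; as the commenter says, the language doesn’t matter, only the results.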
I have read through all the comments on this article and the breaking-in article from last week. It is overwhelmingly apparent that a lot of people think “security expert” and immediately conclude “you have to know programming; you’re just nothing if you don’t program.” I’ll say right up front, then, that I must be a huge fraud and have made a living for the last 10 years as a charlatan. I know basic programming and scripting. Nasty, horrible, ugly scripting that gets the job done and makes repetitive manual tasks go away. “Security expert” is a very broad field and has plenty of room for non-programmers in technical roles. Someone said in a posting, “I might not know AD but if I crash it with a malformed packet…” If you don’t know what AD is and what it does, you’re a programmer, not a security expert.
A security expert does more than code solutions; they design solutions, implement solutions, maintain solutions and constantly look for ways to improve overall security. I know Windows servers and workstations inside and out, I am perfectly at home in 3 major versions of Linux, and I am proficient in AIX and Solaris 10. I am also a network guy: I completely understand configuring and building networks, including VoIP, VLANs, routers, switches, NAC, ACLs, IDSes, IPSes, and firewalls, especially ASA 5510s. I’ve implemented more Websense and Bluecoat solutions than anyone should have to. I review IDS, IPS, and HIPS alerts, virus scan results, and Nessus/Retina/Qualys scans; read all the patch release notes published; can run Metasploit like a champ; and I have personally worked forensics cases using FTK and SIFT. I worked a case today and was able to show that yes, a machine was compromised, because I understand operating system logs and how operating systems work. I can recognize that when a new process is created and it is cmd.exe, and the creator process ID belonged to iexplore, that is a bad thing. It doesn’t require a background in assembly to reach the obvious conclusion. You do not need a background in programming to understand that when you do a netstat -aonb and you see a connection to a remote machine on some wonky port, opened by a process that should never be accepting network connections, something is wrong. Yet roughly 90 percent of the comments on the two breaking-in articles would say I’m not a security “expert,” despite the companies I have worked for, what I do now, the degree I have, and the sickening collection of letters after my name I have earned: GCIH, CISSP, MCSE:Security, MCSA:Security, CEH, Security+, Network+, A+, and yes, even more.
If you don’t know the difference between an IDS and an IPS, or HIPS and HIDS; the difference between encryption and hashing, or asymmetric vs. symmetric encryption; how VLANs work and why you use them; the importance of centralized logging, configuration management, defense in depth, and secure configuration; dangerous default accounts, default passwords, and default configurations of popular network equipment; process listings and how to get them; how authentication works; the value of egress filtering; how to read a packet sniff; etc. — but you know that a C program has more than an include statement at the top… well, ride that NOP sled to the corner over there; my enterprise environment needs folks with a broader skill set.
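The process-parentage red flag described above (cmd.exe created by iexplore) lends itself to a tiny sketch. This is an illustrative toy, not a real detection tool: it assumes the process-creation log has already been parsed into (parent, child) name pairs, and the suspicious-pair list is invented for the example.

```python
# Known-bad parent -> child process pairs: a browser or document viewer
# should essentially never be the creator of a command shell.
SUSPICIOUS = {("iexplore.exe", "cmd.exe"), ("winword.exe", "powershell.exe")}

def flag_events(events):
    """events: list of (parent_name, child_name) tuples from process-creation logs.
    Returns only the pairs that match the known-bad list (case-insensitive)."""
    return [e for e in events if (e[0].lower(), e[1].lower()) in SUSPICIOUS]

events = [("explorer.exe", "notepad.exe"), ("iexplore.exe", "cmd.exe")]
print(flag_events(events))
```

The value here is the analyst’s judgment encoded in that list, which is exactly the commenter’s point: reaching the conclusion requires understanding how operating systems behave, not a background in assembly.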
Speaking of hiring past hackers, I’m just watching Jayson Street doing his DefCon 17 talk about “CyberWar”. At one point he mentions the history of the formation of several major Chinese hacker groups.
And then he mentions that each of the founders of those hacker groups…is now a computer security researcher, including an IBM researcher…
Street says, “Well, you know, you have to make a living and sometimes crime doesn’t pay as well as IBM…”
Anybody able or willing to make some observations about either the Master of Engineering courses or the Graduate Certificate courses being offered by the Office of Advanced Engineering Education at the University of Maryland, College Park?
I am surprised nobody has mentioned competitions as a way to get involved. As people learn, games make a very easy medium to get people to focus their time and energy better. Capture the Flag (CTF) events are a fun way to get involved, and in preparation you learn tons. As most colleges fail to have good practical education programs, competitions have filled those gaps well.
My site (http://ctf.forgottensec.com) is dedicated to providing resources to those looking to get started in those competitions. I started by detailing all the English virtual/conference competitions I could find. I am now working on writing some training materials on skills necessary to succeed in these competitions. This project is still very much in its beginning stages, as you can see by the many pages that I have yet to create, but for being 4 months old, with me writing 95% of the content, it is going well.
Many competitions are offense-based, which isn’t applicable to 90% of the security industry, but the skills to understand how the attacks work will help people better understand the systems they are defending. And if you try those same attacks on a system you build, you can see how defenses/logs react, which gives huge value.
Finally, many competitions have recruiters watching them quite closely, looking for knowledgeable candidates.
I’m looking forward to further “How to Break Into Security” installments. I hope there’s one by Didier Stevens and Adam Boileau.
“You don’t need to be a coder to get into computer security”
Although this statement is somewhat true (UGNazi and other hacking groups simply use social engineering 9/10 times to open up holes in major systems), it’s also like saying you don’t have to know how to use a sword to be a ninja.
Sure, as a non-sword-wielding ninja you could throw shurikens to distract, punch your opponents and run away up a wall if needed, but you’re missing one huge element that is pretty crucial to being a ninja.
Same goes for computer security. If you’re in charge of securing the root server, then you also want to be able to at least check applications for properly escaped SQL, buffer overflow vulns, and other common errors. Yeah, they should hire a pro security dev to do this, or buy $10,000 auditing software, but often companies don’t. They have a deadline and just expect all the devs working for them to already know the security of whatever they are making.
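As a small illustration of the “properly escaped SQL” point, the difference between string-building a query and parameterizing it can be shown in a few lines of Python with the standard library’s sqlite3 module. The table, rows, and attacker input here are invented for the demo:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT)")
conn.executemany("INSERT INTO users VALUES (?)", [("alice",), ("bob",)])

# A classic injection payload supplied as "user input".
user_input = "alice' OR '1'='1"

# Vulnerable: string concatenation lets the input rewrite the WHERE clause,
# so the query matches every row in the table.
vulnerable = f"SELECT count(*) FROM users WHERE name = '{user_input}'"
unsafe_count = conn.execute(vulnerable).fetchone()[0]

# Safer: a parameterized query treats the whole input as a literal value,
# so nothing matches.
safe_count = conn.execute(
    "SELECT count(*) FROM users WHERE name = ?", (user_input,)
).fetchone()[0]

print(unsafe_count, safe_count)
```

The concatenated version returns a count of 2 (every row) while the parameterized one returns 0; being able to spot the first pattern in a code review is exactly the kind of check the comment is talking about.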
It’s not difficult to learn to code. Look at the MIT OpenCourseWare site. Go take the first couple of classes; you can at least learn to code in Python, and it costs $0.00.
I’ve never met any BSD/Unix Sr. Admins (or even Jr admins) who weren’t at least somewhat proficient in C/python. The certs are meaningless, but some companies require you to have them for insurance/warranty reasons especially if working with new hardware. Companies like to be able to blame somebody and sue them when something goes wrong.
Moxie Marlinspike has no university training; neither did Max Vision, but they can both code. It came in handy for Max when he was writing his own exploits to break into Capital One and other banks.
Since a ninja only has 4 attacks/abilities, yes, not knowing how to use a sword would be a massive liability. Perhaps that ninja could have learned to use a gun, blowgun, hatchet, garrote, or something else. You must be a programmer.
I’ve never met any admins in typical jobs who had to audit application source code for security vulnerabilities, either. Many systems have been made resistant to attack without the administrator knowing that stuff at all. Networks and organizations, too. I wonder why your analogy doesn’t make sense…
…oh wait, the time I spent doing Ninjutsu & studying Ninja made me remember something. The sword was a major tool in their trade, esp. ceremonial swords. (I’ll leave it to you to figure out why on the latter.) Anyway, a practitioner of a trade not having one of his most important tools would suck. Now, what are the most important skills/tools of sys/net admins, firewall maintainers, patchers, SIEM monitors and other security roles where code is rare? Did you say “a whole bunch of stuff other than coding”? Absolutely correct!
Oops, submitted too soon.
As for cryptography: to properly understand it, you need graduate-level math (learn it free on the MIT site). However, every modern instance of crypto being broken that I’ve heard of was usually due to some sort of side-channel or other indirect attack. It was never an attack on the algorithms themselves, as they are pretty much sound. But how is the crypto being implemented? The guys who broke all the modern crypto USB keys (except for the Crypto Stick and IronKey) simply found ways around the implementation; they didn’t strip down the algorithms and spend hours in front of a chalkboard finding holes in the formulas.
There have been direct attacks on cryptosystems; DES-40 & MD5 collision attacks are good examples. They barrel through the algorithm rather than going around it. However, as Bruce has stated, crypto is usually the strongest link. I speculate that’s because it’s small enough for rigorous analysis, and because the other links are incredibly weak.
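Since most breaks go around the algorithm via the implementation, one tiny implementation detail is worth showing: comparing secrets with an ordinary `==` can leak, through timing, roughly how many leading bytes match, which is exactly the kind of side channel being discussed. A minimal sketch (the token values are invented) using the Python standard library’s constant-time comparison:

```python
import hmac

def check_token(supplied: str, expected: str) -> bool:
    # A naive `supplied == expected` may return earlier the sooner a byte
    # differs, leaking match length via timing. hmac.compare_digest takes
    # time independent of where the strings differ.
    return hmac.compare_digest(supplied.encode(), expected.encode())

print(check_token("s3cret", "s3cret"))  # True
print(check_token("guess!", "s3cret"))  # False
```

The algorithm (byte equality) is trivially “sound” either way; the side channel lives entirely in how the comparison is implemented, which mirrors the point about the broken USB keys.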