Tuesday, October 27, 2009

Item 57: Security is a process, not a product














"If you think technology can solve your security problems, then you don't understand the problems and you don't understand the technology" [Schneier01, xii].



To the casual eye cruising the Internet, the various trade press publications, or the various security vendor Web sites, security would seem a kind of postdevelopment add-in that we can sprinkle all over an application to render it suddenly "secure," immune to attack and safe from harm. Just buy a product, make a few API calls, and voilà! Instant secure application, and it took only a few minutes to put it in. What could be better?



Developers look at cryptography in much the same way. All we have to do to make the application secure is encrypt the data somehow, relying on the mathematics of the cryptographic algorithm to prevent the data from being viewed by unfriendly eyes. Bruce Schneier himself even subscribed to this view when he wrote, "It is insufficient to protect ourselves with laws; we need to protect ourselves with mathematics" [Schneier95, xx].



Unfortunately, this attitude is exactly the wrong way to think about security. The various vendor products across the Internet cannot make your application secure. No single security technology will protect your application from all harm, not even Transport Layer Security (TLS), the successor to the Secure Sockets Layer (SSL).



The problem is simple: developers wrap themselves in the belief that "cryptography equals security" and that if the crypto key is strong enough, the system will be secure. Unfortunately, it's a horrible fallacy and one that Schneier himself admits to in the Preface to Secrets and Lies:





The error of Applied Cryptography is that I didn't talk at all about the context. I talked about cryptography as if it were The Answer. I was pretty naïve.



The result wasn't pretty. Readers believed that cryptography was a kind of magic security dust that they could sprinkle over their software and make it secure. That they could invoke magic spells like "128-bit key" and "public-key infrastructure." A colleague once told me that the world was full of bad security systems designed by people who read Applied Cryptography [Schneier01, xii].



This is not a confidence-inspiring editorial. If the one man who arguably knows the most about cryptography in the Internet era feels that cryptography isn't the solution, then how, exactly, are enterprise developers, who haven't the time to learn cryptography to the depth that Schneier knows it, supposed to make their systems secure?



The problem isn't in the use of cryptography itself; the problem is in the belief most developers hold that cryptography is the solution to all of our security needs. Consider the canonical e-commerce Internet application: a new company, seeking to peddle its wares across the Internet, creates the online shopping site e-socks.com, the World's Premier Internet Retailer of Soft Fluffy Footwear. As developers, we build the site to provide all the classic e-commerce functions: shopping cart, customer checkout, and so on. And, in typical fashion, to allay the fears of customers worried about sending their credit card numbers over the Internet,[1] we take their credit card information over an HTTPS connection. So we're secure, right?

[1] Ironically, these same customers have no qualms about giving their numbers over the phone to unknown customer service representatives or handing their credit cards, on which the numbers are prominently displayed to any who look, to servers at restaurants to pay for dinner.



Unfortunately, no. While the system may be sending the credit card number in secure format to render it inaccessible to prying eyes, the wily hacker is far from stymied. Any number of ways into the system are possible, some of which are highlighted here.





  • Social engineering attack:

    "Social engineering" is the euphemistic name we give to that form of attack traditionally practiced by those whom we used to call "con men": in short, a charming, swift-talking, charismatic individual convinces someone within the system to surrender information. Kevin Mitnick, in The Art of Deception [Mitnick, 45–46], describes a story in which a son was able to win a $50 bet with his father; the challenge was to obtain Dad's credit card number from a video store. It took three phone calls and about ten minutes to do so. How hard would it be to convince a customer service rep to hand out a particular consumer's credit card number? Ask Mitnick; he made a living off the idea (and continues to do so today, although from the other direction).



  • Database attacks:

    Many systems store consumer information as part of the users' profile on the company site, as a feature to prevent users from having to enter their credit card numbers on every purchase. Most companies don't bother to encrypt these numbers, and in fact many companies aren't quite as tight on security procedures on the database as they are elsewhere. (Most companies don't assume insecurity, as explained in Item 60, for example.)



  • Man-in-the-middle attacks within the corporate firewall:

    Once any part of the corporate network behind the firewall is compromised, the attacker has free rein within it. SSL typically terminates at the proxy server or firewall of the corporate network, since load balancers and routers need access to the underlying data if they're to do their jobs. So the hacker gets into the demilitarized zone (DMZ), sets up a network sniffer, and watches the packets after they've been decrypted by the proxy.
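
The database attack above is cheaper to pull off when stored card numbers sit in the clear. A minimal sketch of one mitigation, encrypting the number at rest so a database dump alone reveals nothing, might look like the following. This class and its names are hypothetical, and the genuinely hard part of the problem, managing the key so that a compromised application server doesn't surrender it too, is deliberately not shown:

```java
import java.nio.charset.StandardCharsets;
import java.security.SecureRandom;
import java.util.Arrays;
import java.util.Base64;
import javax.crypto.Cipher;
import javax.crypto.KeyGenerator;
import javax.crypto.SecretKey;
import javax.crypto.spec.GCMParameterSpec;

// Hypothetical sketch: encrypt a card number before it reaches the
// profile database, so a database-level compromise alone doesn't expose it.
public class CardVault {
    private final SecretKey key;
    private final SecureRandom random = new SecureRandom();

    public CardVault(SecretKey key) { this.key = key; }

    // Convenience factory for demonstration; real keys come from a key store.
    public static CardVault withRandomKey() throws Exception {
        KeyGenerator kg = KeyGenerator.getInstance("AES");
        kg.init(128);
        return new CardVault(kg.generateKey());
    }

    public String encrypt(String cardNumber) throws Exception {
        byte[] iv = new byte[12];                 // fresh IV for every record
        random.nextBytes(iv);
        Cipher c = Cipher.getInstance("AES/GCM/NoPadding");
        c.init(Cipher.ENCRYPT_MODE, key, new GCMParameterSpec(128, iv));
        byte[] ct = c.doFinal(cardNumber.getBytes(StandardCharsets.UTF_8));
        byte[] out = new byte[iv.length + ct.length];
        System.arraycopy(iv, 0, out, 0, iv.length);
        System.arraycopy(ct, 0, out, iv.length, ct.length);
        return Base64.getEncoder().encodeToString(out);  // store this column
    }

    public String decrypt(String stored) throws Exception {
        byte[] in = Base64.getDecoder().decode(stored);
        Cipher c = Cipher.getInstance("AES/GCM/NoPadding");
        c.init(Cipher.DECRYPT_MODE, key,
               new GCMParameterSpec(128, Arrays.copyOfRange(in, 0, 12)));
        byte[] pt = c.doFinal(in, 12, in.length - 12);
        return new String(pt, StandardCharsets.UTF_8);
    }
}
```

Note that this only narrows one attack vector; the social engineer and the sniffer in the DMZ are entirely unimpressed by it.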



Other attacks are certainly possible, and I'm sure we've not even scratched the surface of possible attack vectors. Note that none of them involved trying to go directly against the SSL layer itself; instead, they attack other parts of the overall security of the system. Why bother going against SSL and its key exchange protocol when it's far easier for the attacker to engage in one of the dozens of other forms of attack, all of which end with the same result (i.e., your credit card number in his highly immoral fist)?



Security is not something that we can simply "turn on" as a feature at some point in the system's implementation lifecycle. Unfortunately, this is exactly the attitude that most development teams and managers take: "Well, sure, the system needs to be secure, but we'll get to that after we get it up and running." While this particular approach and attitude might work for optimizing a system (even then, it's debatable), this will never work when discussing the security of a system as a whole. Concerns about security have to be factored into the analysis, design, implementation, and test phases of every iteration of the system's development, or gaping security holes will result.



For example, consider the e-socks.com e-commerce application again. Assuming this is a classic Model-View-Controller application, where do we need to worry about security? What are the security concerns? A partial list includes the following issues.



  • Assuming the site makes use of some kind of per-user session state (e.g., HttpSession), we need to ensure somehow that an attacker cannot "guess" a valid in-use JSESSIONID value and thus gain access to another user's session state. For e-socks.com, the concern would be that an attacker could use my credit card to ship silk stockings to his shipping address. For a site dealing with financial or medical data, the implications could be far, far worse. Never use a servlet container that doesn't generate a cryptographically secure random value for the JSESSIONID.

  • Each servlet processing input on the page must make sure the input falls within valid ranges. For example, when verifying login credentials for the user against the database (SELECT * FROM user WHERE...), make sure the username and password aren't hiding a SQL injection attack.

  • Each servlet and JSP page must be carefully examined to make sure that out-of-order page requests don't bypass critical data-entry information. For example, it shouldn't be possible to bypass the "select method of payment" page to go directly to the "confirm this order!" page. Ideally, of course, the rest of the back-end processing would verify that the order had, in fact, been paid for before sending it out, but how often do you as a programmer double-check something that "I know has already been processed" further up the chain?

  • How do the pages calculate the current running price total of the user's shopping cart? If the value is cached in a hidden field, an attacker can always bypass the browser entirely and hand-submit (via Telnet) an HTTP request that contains thousands of items in the shopping cart and that hidden field containing a value of $0.01. Even if the value is calculated on each request, where does the shopping cart get the prices of the items put into the cart? Again, if it's from an HTML form field, this data can be mocked up pretty easily.
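
The last concern in the list above, price data round-tripped through the browser, has a straightforward cure: treat everything the browser sends as hostile, and recompute the total from server-side data on every request. A minimal sketch, with a made-up in-memory catalog standing in for whatever pricing store the real site would use:

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch: the cart stores item IDs and quantities only; prices
// always come from the server-side catalog, never from a form field.
public class Cart {
    private static final Map<String, Integer> CATALOG_CENTS = new HashMap<>();
    static {
        // Illustrative catalog entries, not real e-socks.com data.
        CATALOG_CENTS.put("fluffy-sock", 499);
        CATALOG_CENTS.put("silk-stocking", 1299);
    }

    private final Map<String, Integer> quantities = new HashMap<>();

    public void add(String itemId, int qty) {
        if (!CATALOG_CENTS.containsKey(itemId))
            throw new IllegalArgumentException("unknown item: " + itemId);
        if (qty < 1 || qty > 100)        // range-check the quantity, too
            throw new IllegalArgumentException("bad quantity: " + qty);
        quantities.merge(itemId, qty, Integer::sum);
    }

    // Recomputed from the catalog on every call; a tampered hidden field
    // claiming a $0.01 total has nothing here to influence.
    public int totalCents() {
        return quantities.entrySet().stream()
            .mapToInt(e -> CATALOG_CENTS.get(e.getKey()) * e.getValue())
            .sum();
    }
}
```

The same principle generalizes: anything security-relevant that leaves the server and comes back (prices, user IDs, workflow state) must be either revalidated or never trusted in the first place.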



Subsequent items in this chapter cover many of these concerns in further detail, but all of these issues are meant to serve as background material supporting the main title of this item, a phrase Schneier uses over and over again: "Security is a process, not a product." You have to consciously think about security at every stage of the system's development if you're to have any hope of building a system that's remotely attacker-resistant. This requires a shift in your mental model: it requires you to briefly put on the attacker's black hat and think about how you might attack the system and then, putting the white hat back on, what you might do to prevent that attack. And it's not only the architect or technical lead who needs to think about this: everybody, at all levels of the system's implementation, needs to have security in mind. "Write secure code" should be a driving principle of every programmer, just as "write good code" and "write elegant code" are.
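
To make the SQL injection concern from the list above concrete, here is a hedged sketch of the login check, hypothetical class and table names included. The vulnerable version splices user input straight into the SQL; the safer version binds it as a parameter, so a payload like "admin' --" arrives at the database as data rather than as SQL:

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.util.regex.Pattern;

// Hypothetical sketch of the credential check against a "user" table.
public class LoginDao {

    // VULNERABLE: a username of "admin' --" comments out the password
    // clause entirely, logging the attacker in as admin.
    public static String unsafeQuery(String user, String pass) {
        return "SELECT * FROM user WHERE name = '" + user
             + "' AND password = '" + pass + "'";
    }

    // SAFER: the JDBC driver handles escaping of the bound values.
    public static boolean checkCredentials(Connection conn,
                                           String user, String pass)
            throws Exception {
        try (PreparedStatement ps = conn.prepareStatement(
                 "SELECT 1 FROM user WHERE name = ? AND password = ?")) {
            ps.setString(1, user);
            ps.setString(2, pass);
            try (ResultSet rs = ps.executeQuery()) {
                return rs.next();
            }
        }
    }

    // Belt and suspenders: range-check the input itself, as the item
    // recommends for every servlet processing user data.
    private static final Pattern USERNAME =
        Pattern.compile("[A-Za-z0-9_]{1,32}");
    public static boolean isValidUsername(String user) {
        return user != null && USERNAME.matcher(user).matches();
    }
}
```

Parameter binding and input validation are not alternatives; the process view of security argues for doing both, at every entry point, on every iteration.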















