Archive for the 'Security' Category

Apple, Security, Usability

Password Safe on the Mac

I’ve been using a Mac at work for a short while now and am much, much, much happier for it. As my coworker Mr. Ladwig says, I swear a lot less at the computer now. But there are a few Windows apps I’ve missed. Small things that aren’t quite worth firing up Parallels for, or that it wouldn’t make sense to anyway. TortoiseSVN is one, although I can work around that with the command line and Eclipse (and wait for Versions to be released). I miss TrueCrypt, which I used for anything that mattered, but FileVault and OS X encrypted disk images meet my needs, though I do look forward to an OS X version of TrueCrypt. If I had ever been more willing to dive deeply into the Windows world instead of just tolerating it, no doubt I would sorely miss PowerShell. But I don’t.

I can cope without all of that. What I really, truly miss is a good password manager. Namely Password Safe. With Password Safe, I never need to know any of my passwords. And I don’t. Password Safe can generate and store strong passwords and never display them to me. (Under the same principle, for some web sites I use a modded version of a password generator bookmarklet that you might find useful. It’s not perfect but for many things it’s good enough.) Passwords are stored in a believably cryptographically strong manner. After I copy a password to the clipboard to paste elsewhere, the password can be cleared from the clipboard by minimizing or closing Password Safe. Yes, keeping sensitive data in a shared clipboard makes me nervous. It minimizes and locks itself after a configurable period of time.

It works well and I trust it.

OS X has Keychain, a password store with strong crypto. It’s nicely integrated into the OS and made available to applications. Subversion finally uses Keychain to store passwords on OS X (instead of leaving them in cleartext, as it does on Unix systems. Grrrr…). I can use Keychain to manage my passwords, but it badly needs some user interface work. Yes, it can generate passwords using several different algorithms, but I rarely succeed in creating a new password. There’s no clean way to copy the password to the clipboard, and when I do, it visibly exposes the password in cleartext. Then I can’t clear it from the clipboard.

Keychain just needs a little UI love.

Last night on Twitter I was bemoaning the situation. Stephen Collins immediately responded, pointing out that there’s a Java version.

What? I didn’t see that in the list of related projects! Oh, that’s because it’s not there. It’s down under news from 16 January 2007. Of course.

But it’s there, and it works. Not surprisingly for something that’s at version 0.6, it’s not as polished as the native Win32 version. And maybe it needs a little Filthy Rich Clients love. But so far it’s a far sight better for what I want than Keychain is.

I should probably try Password Gorilla, too, which I’d conveniently overlooked. It reads and writes Password Safe 3 databases.

Thanks, @trib.

Security

Missed MinnSec

For those who inquired and for those who sent their greetings, yes, I missed MinnSec on Thursday due to family obligations. It sounded good; I hope there will be more, and I will try to make it to them.

For those who don’t know what the heck I missed, follow the link. Or don’t, since Gunnar’s summary is succinct: “unmediated, unvendorized, peer to peer security meetup.”

Open Source, Security

Two Top Tens

I spoke about the latest OWASP Top Ten at my local OWASP chapter yesterday. To be frank, I wasn’t entirely sure why. This is by no means the first time I’ve given a talk about or framed by the Top Ten — indeed, when I was supposed to be giving this talk at the April chapter meeting, I was instead doing so elsewhere. But I figured that if any group of people is going to keep on top of the OWASP Top Ten, you’d think it would be people who go out of their way to attend a chapter meeting. Sure enough, everyone was familiar with the basic document but not necessarily the 2007 update. So for better or worse, especially since I had only a half hour, I just did a quick diff and highlighted the important changes. It was perhaps too casual an approach, but that’s the mood I was in. If you want more detailed discussion, I can certainly provide that at great length.

Then Gunnar Peterson gave a rapid-fire version of the talk he gave in Helsinki on his top ten list for Web services security issues. Amusingly, in Helsinki he was also preceded by someone talking about the OWASP Top Ten. Gunnar possesses an impressive ability to make the much-maligned WS-* security standards seem reasonable. More than reasonable: self-evident. Always a pleasure, Gunnar, thank you.

Almost inevitably, discussion turned to the question of what can be done to “make” developers write more secure software. This always sets me on edge, largely because of the subtext that software security is just a developer problem. There’s no question that developers need to learn more about writing secure software, but it is also true that security is too infrequently considered as part of the requirements or design phase. This has been much on my mind lately, since in the absence of security requirements I’m being forced to write them myself, so expect more from me on this soon.

Programming, Security

Web frameworks should do more

In his Web Framework Manifesto, David Pollak identifies a number of features that he believes web frameworks should have, and I think he’s right on the money. He starts out with what you’d expect to find from one of the newer frameworks.

  • A quick and easy way to map between a relational database and the target application.
  • Easy, “right by default,” HTTP request mapping.
  • Automatic “view” selection and composition.

So far, so good. In Rails, Django, and other similar frameworks that emerged about the same time with the same values, this is largely accomplished by staying DRY, by favoring convention over configuration, and by being focused and limited in scope. DHH makes much of his having no interest in Rails as a one-size-fits-all framework, and this is a Good Thing. Yes, it means that Rails isn’t a good fit for every app. Get over it. It also means that although Rails et al changed a lot in their emphasis on radical simplicity, they didn’t change enough. I don’t see Rails as the future of web development so much as I see it as representative of a final stage of MVC web frameworks, wrapping up a lot of good ideas and giving us something to work with as we get over the hump into the next stage. Whatever that will be.

I’ll put my money on a few ideas: component-based architectures, addressing how Ajax shifts how and where state is managed, and better state/scope/workflow management with continuations.

Pollak gets into some of that in his next set of criteria, which is quite a list. He starts with a discussion of components. (I won’t quote everything here. I really suggest you go read the full piece.)

  • Pages must be composed of arbitrary components that manage their own state.
  • The rendering of components must be asynchronous based on user-based and external event-based state change.
  • Components should be live (or seamlessly persisted) at all times, ready to respond to events.

Right away we’re getting into an area where older-style MVC frameworks like Struts, which is still alive and kicking thankyouverymuch, completely fall down. I’ve been dancing around this for years without getting into it too deeply here, but I just don’t think that MVC is a perfect match for web applications. It is useful as a separation of concerns, but the action-based controllers that we’ve been used to in MVC web programming are becoming inadequate with the rising use of Ajax. We are hitting a wall and should be able to do so much more.

Both the Java and the .NET worlds seem to be moving toward components, albeit with slower adoption than either Sun or Microsoft would like. (I don’t know what I’m basing that on, by the way: I freely admit that it’s just a gut feeling. I also admit that I don’t deeply understand the shift to components and what it entails. I just haven’t taken the time.)

  • The browser should be honored and feared. That means the back button should “do the right thing”
  • There should exist a simple, unified way to describe modal user behavior (e.g., filling out a multi-page form.)
  • Sessions should be tied to a browser window/tab, not to a browser session.

I am interested in Seam and RIFE in part because of how they manage state — especially using continuations in RIFE. Seaside has demonstrated what that can be like, done well. Not only do we have continued usability problems with managing state across multiple HTTP requests, but considering that it’s still in the OWASP Top 10, it’s fair to say it’s a significant security problem as well.

  • Mapping between object fields and HTML (or whatever the presentation layer is) should be “right by default”

We spend far, far too much time messing around with this at a low level, and it offers far too many opportunities for niggling little bugs to introduce themselves. I won’t trouble you with details about how I just spent two days doing something that should have taken twenty minutes. I’m still too angry about it.

It should come as little surprise that I am delighted by Pollak’s emphasis on security. Web frameworks do not do enough for security, period. I believe that developers can and must understand the range of attacks to which their software is likely to be subjected, even if they don’t become security experts, but frameworks should also make it easier to do the right thing than not to. Input validation should be centralized, simple, and a default. Output encoding should be handled properly by default. Access control should be more transparent and pluggable. And so on. Pollak has good suggestions.

  • input from the browser should never be trusted, but should always be tested, validated, and destroyed if it is unexpected
  • There should be a single way of describing input validation.
  • There should exist an orthogonal security layer such that objects that are not accessible to a user should never be returned in a query for the user and fields on an object that are not accessible should not be visible.
  • Code should be impervious to a replay attack. That means that fields in forms should have random names that change for each request.
  • The framework and runtime should correctly and gracefully deal with non-ASCII characters.

The last one is not just a security issue, but I lumped it in there because it does fit.
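Pollak’s “single way of describing input validation” could be as small as a declarative table of rules applied to every request before any handler runs. A hypothetical sketch (the rule names and shape are mine, not from any framework):

```javascript
// Declarative validation rules: one place to describe what input may look like.
var rules = {
  subj:     /^[a-z]{2,8}$/,   // subject codes: short, lowercase letters only
  courseid: /^\d{1,6}$/       // course ids: digits only
};

// Validate a request's parameters against the rules; reject anything unexpected.
function validate(params) {
  for (var name in params) {
    if (!rules[name] || !rules[name].test(params[name])) {
      return false;  // unknown parameter or bad value: reject, don't repair
    }
  }
  return true;
}
```

The point is centralization: the rules live in one place, and a parameter nobody declared is rejected by default rather than silently passed through.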

I love the idea of random names for form fields. So simple, yet we’re so caught up in the annoying low-level mapping of field names to back-end objects, that it gets overlooked. And as valuable as the Commons Validator framework can be, calcifying field names in even more config files is tiresome, error-prone, and constricting.

Finally, deployment. I’ve written before how as much as I like Rails, Capistrano, and Mongrel, and as confident as I am that Rails apps can be deployed so that they perform, scale, and can be well managed, I’d still feel sheepish suggesting deployment scenarios that change as often as Rails’s “proven practices” seem to. Maybe I should just get over it. Regardless, it’s clearly something to take into account when considering a development platform.

  • Deploying the web application should be as simple as putting a file in a known location (e.g., a WAR file on a J2EE server) or by executing a single command (e.g., Capistrano.)
  • Deployments should contain all dependencies.
  • The production environment should support modern technology including executing multiple threads in a single process and allowing for many “live” objects to be corresident
  • The production environment should support hot code replacement
  • The development environment should support hot code replacement such that once a file is saved, it becomes live at the next HTTP request.

The last is so very, very important. Shortening the feedback loop is vital, one of the much-touted features of Rails and other dynamic language frameworks. It can be done with Java and a decent IDE, but not always. And although I think it’s obvious, this is far from universally understood or believed. Not long ago, because of a really slow database load and a design error, a developer I know had to wait 15 minutes for a Java web app to deploy and load. Unacceptable. I don’t know how she or her supervisor tolerated it, because it meant that she could make and test at most four changes an hour. Or, more likely, make more changes than were really safe, which is what a long feedback cycle encourages.

The benefits of frequent and automated testing should be well known by now, or at least I think they should be, so I won’t write any more about that now.

Where does all this lead? Shortly after he wrote this manifesto, David Pollak followed it up by releasing Scala with Sails, now known as the lift web framework. You can get a feel for some of it in the recent announcement on his blog. Lift is written in Scala, a functional/object-oriented language (yes, both) that compiles to Java bytecode, so it runs on the JVM and has access to the Java API. Scala is my next language, if only so I can work with (and on?) lift. Much as I have high hopes for JRuby on Rails, I think that lift is something to watch, especially if the development team can do half of what they set out to do. As is clear from Pollak’s essay, we need to push web frameworks more than we have done.

JavaScript, Security

JavaScript malware

I had lunch with Gary and Matt the other day. After politely reminding me that I hadn’t blogged at all lately (it seems del.icio.us doesn’t count), they listened to me blather on about what’s been occupying my thoughts and time lately, especially 1) JavaScript malware, and 2) dynamic languages in the JVM and CLR. Thanks, guys. Once I get started on a topic I can be hard to shut up, so I appreciate your patience. Here’s that blog post you asked for.

So. JavaScript malware? Three presentations at Black Hat caught my attention.

  1. Jeremiah Grossman and T.C. Niedzialkowski on Intranet hacking with JavaScript malware.
  2. Billy Hoffman’s “Analysis of Web Application Worms and Viruses” (PDF slides). Shortly before Black Hat, SPI Dynamics (where Hoffman works) released a paper and proof of concept code on “Detecting, Analyzing, and Exploiting Intranet Applications using JavaScript Malware.”
  3. Tom Ptacek and Dave Goldsmith, “Do Enterprise Management Applications Dream of Electric Sheep?” If enterprise agents don’t make you nervous yet, they will.

The first two talks explore different aspects of what Grossman is calling JavaScript malware. The upshot is that cross-site scripting is much, much worse than we had ever thought — “the new buffer overflow” — and opens the door to internal network scanning, JavaScript worms and viruses, and all sorts of other excitement.

This is bad enough, but taken as a backdrop to the Matasano presentation on attacks behind the firewall — ridiculously insecure enterprise management agents — it’s terrifying enough to send me whimpering into a corner.

Subsequent work has made it even worse. JavaScript is everywhere, and its environmental restrictions vary. PDF, QuickTime, MP3 (!), Flash, RSS feeds… dang. The outlook is not good. From a recent email exchange in which I responded to an assertion that PDFs don’t yet have the ability to transmit worms/viruses:

Because PDFs can run JavaScript, whether they can themselves transmit worms/viruses isn’t terribly important. PDFs can make web services calls over HTTP & HTTPS, they can connect to databases, they can retrieve and play backdoored media files like Quicktime and Flash (QT can run JavaScript, btw), they can cause a web browser to launch and make arbitrary HTTP requests. With JavaScript (in the browser, at least), I can scan an internal network, probing and fingerprinting network devices (or intranet sites), and use them as a launching pad for a more devious attack. Is that printer vulnerable? Quite possibly. Does that router have a web interface? Ooh, that’s interesting. Does that intranet portal have XSS vulnerabilities that can help me transmit a JavaScript worm? Quite probably.

The usual network admin concern with perimeter security is insufficient. The likelihood of running across cross-site scripting over the course of a day of surfing is pretty high; cross-site request forgeries are likely everywhere. They can blast undetected right through your network perimeter and tackle all the fun stuff on the inside. Even trusted web sites are not safe, and the consequences are getting worse every day. Remember: script kiddies are not the danger anymore. The real threat is well-trained and funded crime syndicates motivated by scads of cash.

I’m barely scratching the surface but wanted to give you at least some idea of what’s been banging around in my head. Read Jeremiah Grossman, RSnake, pdp (architect) to start if you’re interested in studying up.

JavaScript, Security

Ajax Security, Part II: Attack Surface

Today I pick up from part 1 and discuss one of the challenges in Ajax security: attack surface.

Attack surface describes the points of entry that an attacker can abuse to compromise our application. When writing software with security in mind, we try to minimize attack surface to reduce the likelihood and the impact of successful attacks.

One straightforward way to reduce attack surface is to remove features that are not used or that pose too great a risk. If the code isn’t there, it can’t run and so can’t be exploited. Another technique is to reduce the number and complexity of exposed entry points. In web applications, the most obvious entry point is the HTTP interface. This is where Ajax comes in.

Ajax & Attack Surface

Almost by definition, Ajax applications have a larger attack surface than their non-Ajax counterparts. The reason is simple: there are more server-side services exposed — i.e. more URLs. URLs are web applications’ exposed APIs, and Ajax apps make more of them available to end users.

Why? Ajax code needs to talk to something on the server, and usually that means exposing an additional functional URL. Take Google Suggest as an example. As you type in the search box, your browser is making HTTP requests behind the scenes to http://www.google.com/complete/search (I’ve left off query parameters). When you submit the form, it goes to http://www.google.com/search as usual. So to introduce an Ajax feature, Google has exposed an additional entry point to their application(s).

Let’s use a more detailed example from an app that I’m working on, a course search. It’s simple enough, really. There are three basic pages: a form where students enter search criteria, a search results page, and a course detail page for each of the results. The URLs (minus query parameters) might look something like this:

/search/
/search/results
/search/detail

To be clear, I should add some reasonable query parameters:

/search/
/search/results?subj=english
/search/detail?courseid=34512

There are lots of ways we can introduce Ajax to these three pages that make it a little bit easier for the student to use. One is to remove the extra step of looking at course details on a new page. When a student clicks on a course title to get detail, instead of following the link to /search/detail, fire off an Ajax request to get the details and display them in situ, right on the results page. That way the student doesn’t have to keep going back and forth between search results and course detail, and can more quickly find a course that interests her.

Here’s the question we face: what URL should the Ajax code’s HTTP request use? A few possibilities:

  1. /search/detail?courseid=34512. The problem here is that normally the response to this request will be an entire HTML page, complete with page headers, footers, navigation, etc. We don’t want all that stuff, we just want the course detail. This will not work.
  2. Same URL as the first option, but with an additional HTTP header that indicates that this is an Ajax request. The server-side code looks for this header and responds accordingly. Prototype adds a custom header, X-Requested-With: XMLHttpRequest, which is great, but if you ever move away from Prototype you’ll either need to adjust your JavaScript to add the same header or change your server-side code to look for something new. Not necessarily an onerous task, but it’s worth considering.
  3. /search/detail?courseid=34512&output=json. Here we’ve added a query parameter to tell the app to respond with JSON instead of a complete HTML page. Yahoo! does this for their REST web services. We could just as well have added output=xml or output=html to get XML or just the HTML to add directly to the DOM. This isn’t too bad a way to go, but we’ll want to find a way to avoid repeating the code that looks for the output parameter.
  4. /search/detail-ajax?courseid=34512. Use a different page name entirely. This way it’s easy to spot what’s going on, but it seems a little clumsy, especially as you add more Ajax-responding URLs. If you’re using a framework like Struts & Tiles, this is probably the easiest way to go, because you can point to the same Action but use a different set of Tiles for the output. On the other hand, your config file can quickly get pretty large.

If you choose option 3 or 4 (especially 4), you’re introducing new URLs to your app, thereby creating new points of entry and increasing your attack surface. Heck, even for option 2 you’re now relying on a new HTTP header. Never forget that HTTP headers are inputs to your application that need to be examined and carefully validated.
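For what it’s worth, here is a sketch of how a server might dispatch on the signals from options 2 and 3 (the function and its return values are my own invention; the header and parameter names come from the options above):

```javascript
// Decide how to render the course detail response: full page for a normal
// request, bare fragment or JSON for an Ajax one.
function responseFormat(headers, params) {
  if (params.output === 'json' || params.output === 'xml' ||
      params.output === 'html') {
    return params.output;            // option 3: explicit output parameter
  }
  if (headers['x-requested-with'] === 'XMLHttpRequest') {
    return 'html';                   // option 2: Prototype's custom header
  }
  return 'page';                     // default: the full HTML page
}
```

Note that both the header and the output parameter are themselves untrusted input, which is why the parameter is checked against a fixed list rather than echoed into the response logic.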

That’s just one example on a single page: I didn’t even mention all the possibilities for Ajax on the search form itself, dynamically creating or updating form fields based on selections made in the form. With the addition of each new Ajax feature, I’ve expanded the possible ways (permissible ways, really) of interacting with my application, increasing the amount of work my server-side code needs to do and the vigilance I need to bring to developing and reviewing the design and code.

Toolkits & Frameworks

I’ll write more about toolkits in a future entry, but I do want to mention now that server-side integration toolkits like DWR are good examples of increasing attack surface. With DWR, you can configure specific classes and methods that you want to expose via JavaScript, and DWR generates the JavaScript as a remote interface to your Java classes. I believe that CFAjax works in much the same way (heck, if memory serves, it’s unapologetically a DWR port), as do many PHP Ajax frameworks. Every one of these interfaces is an extra API to your application that you are exposing, making your attack surface that much larger.

I haven’t researched this carefully, but some PHP Ajax frameworks have historically had bugs that permitted arbitrary code execution. Needless to say, this expands the attack surface a bit. :)

Client-Side Business Logic

The Ajax example that I gave earlier is a simple enhancement, but many Ajax apps, as opposed to Ajax-enhanced web apps, are moving more and more business logic to the web browser. This has subtle and not-so-subtle effects on state management, as I’ll explore in an upcoming entry. It also means that there’s a lot more code available for an attacker to play with.

I have encountered a strange, unstated expectation among some developers that once JavaScript code is sent out to a client, it’s untouchable. This seems to be especially the case with code generated by a framework. I’m not sure where this idea comes from, and I hope it crumbles with even the slightest examination, but it’s out there. And it’s flat-out wrong.

You don’t, for example, want to do things like send SQL queries in your Ajax requests. I swear, I actually saw this in a web app a couple weeks ago. I happened to have LiveHTTPHeaders open while I was surfing and saw a SQL statement scroll past. I investigated and sure enough, the server was executing whatever arbitrary SQL was sent to it from the browser.

The developer didn’t need Ajax, of course; it could just as easily have been a submitted HTML form. Still. Do not do this.
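The right shape is the opposite: the browser sends only an identifier, and the server validates it and binds it into a query that the server alone controls. A hypothetical sketch (function name, table, and columns are mine, for illustration):

```javascript
// The browser sends only a course id; the server owns the SQL.
// Validate the untrusted id, then bind it as a parameter -- never
// concatenate it into the statement, and never accept SQL from the client.
function courseDetailQuery(courseid) {
  if (!/^\d{1,6}$/.test(String(courseid))) {
    throw new Error('invalid course id');
  }
  return {
    sql: 'SELECT title, descr FROM courses WHERE courseid = ?',
    params: [Number(courseid)]
  };
}
```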

Another effect of moving more Ajax code onto the client is that there are new trust boundary concerns. As you know, input validation is essential to web application security. Software security in general, really. Any time data passes across a trust boundary, where you don’t entirely trust its origin or integrity — data submitted from a browser to a web server, for instance, or from a remote web service or a database to your application code — it is important to perform rigorous input validation. We are (hopefully) used to this. It ought to be second nature.

Let’s make this clear: we’re used to this on the server side, working in a trusted environment accepting untrusted input.

In Ajax apps, though, lots of data is being passed back and forth between client-side and server-side code. If there’s significant business logic running in the browser, it stands to reason that the browser should validate data that crosses a trust boundary on its way from the server. Here’s the thing, though: while on the server side our code runs in a trusted environment (or what we hope can be trusted), in a browser our code (or what we hope is our code) runs in an untrustworthy environment accepting untrusted input. In the past, we’ve more or less assumed that we could trust what came from the server. We learn from Amit Klein that we should not: the communication channel between the client and server is vulnerable and suspect.

The conundrum is of course that the code that does the validation is completely exposed on the client.

One thing to watch out for is blindly eval()ing JavaScript. As I wrote in part 1, the server response to an Ajax HTTP request is typically either XML, JavaScript, or HTML. Nowadays the JavaScript is usually JSON, which looks like this:

{
  "birthdays": [
    { "name": "Abe Lincoln",    "bday": "12 February 1809" },
    { "name": "James Buchanan", "bday": "23 April 1791" }
  ]
}

Here’s a common way of using that response.

var obj = eval( '(' + resp.responseText + ')' );

The risk here is that the server response may contain malicious code. If you blindly eval() the response, you’ll immediately execute that code. You can mitigate the risk by using a JSON parser to at least ensure that it is indeed JSON.

var obj = resp.responseText.parseJSON();

This is of course far from a perfect solution. I confess that I am not sure how best to handle this. I need to do more thinking and more research. Pointers are welcome.
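One shape a more defensive parse can take is to insist that the response is strict JSON and matches the structure you expect before using any of it. A sketch, assuming a strict parser such as JSON.parse (provided by the json.org library, and native in later browsers); the function and the expected structure are mine:

```javascript
// Parse an Ajax response defensively: require valid JSON (a strict parser
// rejects arbitrary code where eval() would execute it), then check shape.
function parseBirthdays(responseText) {
  var obj = JSON.parse(responseText);   // throws on anything that isn't JSON
  if (!obj || !(obj.birthdays instanceof Array)) {
    throw new Error('unexpected response structure');
  }
  return obj.birthdays;
}
```

This still doesn’t solve the deeper problem that the validating code itself runs in an untrusted environment, but it at least stops a script injected into the response channel from executing as a side effect of parsing.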

Web 2.0 Meets SOA

As more and more business logic pushes out to the browser, it is tempting to have that client code connect directly to web services in a service-oriented architecture (not that you need web services for SOA, but bear with me). JavaScript can do SOAP, so why not? The problem is that often those services are thin wrappers around 20-year-old COBOL code. You don’t want to be exposing this to the world. Andrew van der Stock discusses this in an interview for SearchAppSecurity.com.

I don’t have a problem with JavaScript making web services calls using SOAP, REST, XML-RPC, whatever. Keep in mind, though, that the browser environment cannot be trusted, so any validation or access control you’ve put in place there doesn’t do any good. If the value of the transaction is high — e.g. if it needs authentication at all — or if the code behind the service was written with the expectation of running in a completely trusted environment, then you’re far better off minimizing exposure by providing carefully controlled and monitored access to that code. Meaning, don’t allow anyone in the world to connect to it: limit access to code running on your server. If you want to use JavaScript to access the service, have it contact another service that’s tied into your access control mechanisms.
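To sketch what that server-side gate might decide before forwarding a request to the internal service (the session shape, role name, and function are hypothetical, not any real API):

```javascript
// Gate access to an internal service: only authenticated sessions with the
// required role may have their request forwarded; everyone else is refused.
function authorizeServiceCall(session, service) {
  if (!session || !session.authenticated) {
    return { forward: false, status: 401 };   // not logged in
  }
  if (service.requiresRole &&
      session.roles.indexOf(service.requiresRole) === -1) {
    return { forward: false, status: 403 };   // logged in, wrong role
  }
  return { forward: true, status: 200 };      // forward to the internal service
}
```

The key design point: the decision runs on the server, where it can be trusted, no matter what the JavaScript in the browser claims.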

Resources & Further Reading

I’m not pulling all this out of thin air. Much has been written on the topic of Ajax security. A lot of it is FUD, but there is some very worthwhile reading out there that I’ve used to inform my own research and writing. Here’s some of it:

Mitigate Security Risks by Minimizing the Code You Expose to Untrusted Users
An article about attack surface reduction by Michael Howard
Ajax Security
A presentation at AppSec by Andrew van der Stock.
OWASP Guide 3.0
The new version of the Guide to Building Secure Web Applications has a draft chapter on Ajax security.
Ajax security resources
At cgisecurity.com.
del.icio.us/afongen/ajax+security
Rather than just duplicate more links, I’ll send you to my del.icio.us links.

Coming Up…

Next time, we’ll look at another challenge in Ajax security: state management and access control.

JavaScript, Security

Ajax Security, Part I

“Did you see 1 Raindrop today?” a coworker asked, referring to something Gunnar Peterson had written about Ajax security. I went off on a little rant about how yes XMLHttpRequest exploits are interesting and Amit Klein’s work is marvellous, but XHR can be used in an exploit regardless of whether a web app actually uses Ajax, that from a development perspective the approach is the same: we’re accepting requests from an untrustworthy client, so we need careful consideration of security throughout the development lifecycle, solid input validation & output escaping, session management, access control, and so on.

He waited patiently and smiled knowingly. “Yes, Sam, but you are already thinking about these things.”

Ah. As I also frequently complain, thinking like this is still unusual. That’s why software security is such a problem.

So I started looking around to see what was happening in the Ajax security space. It isn’t pretty, but the Ajax angle to the security problems is as overhyped as Ajax itself. There’s been a fair amount of discussion/coverage worth reading, including a new chapter in the OWASP Guide 3.0 draft. Gunnar’s post points to work by Amit Klein that I highly recommend.

On Tuesday I gave a short and overcaffeinated talk about Ajax security to my local OWASP chapter. To follow up on that, I’m starting a series of posts here addressing the topic in greater detail.

Executive summary: in the rush to add Ajax functionality to web applications, security is being disregarded or included as an afterthought (which often amounts to the same thing). Ajax itself isn’t insecure, but it sure can be unthinkingly misused and made insecure. Ajax can make other attack vectors like cross-site scripting and cross-site request forgeries worse, whether or not you actually use it in your application. As has recently been made clear, XSS is far worse than we’ve thought, so it is important to be careful.

My position is actually a little more nuanced than that, and may well change as I write here. We shall see.

Quick Intro to Ajax

You may already be familiar with Ajax, especially if you’ve read this far without being bored out of your mind. But just so we’re all on the same page, let me take this opportunity to show off a little Flash demo that my colleague Dave Kruse whipped up over breakfast just before an Ajax presentation that we were about to give.

You know how web applications have traditionally worked. Your browser makes a request to a web server, and the server responds by returning a whole new web page.

(For now, until I add some JavaScript to dynamically show/hide the animation, here’s a link to the first demo Flash movie. Thanks to Dave for his permission to use this here.)

With Ajax, your browser makes a request and the server responds, but this time with a small snippet of data that is used to update just a section of the page. No need for a full page reload.

(Link to the second Flash movie.)

The two key elements here are that we use JavaScript to make HTTP requests and to update the HTML page. We’ve actually been able to do this for some time, but with Ajax we can more easily manage the HTTP request and response using an object called XMLHttpRequest. The response from the server is usually one of the following:

  • XML, which is parsed by JavaScript to generate the (X)HTML added to the web page;
  • more commonly, JavaScript, which is eval’d and run (JSON is a popular way to do this);
  • the actual HTML that will be added to the page.

Ajax use in web applications falls along a spectrum, ranging from small usability enhancements such as auto-populating a drop-down without reloading the page (dubbed HTML++ by Harry Fuecks) to more full-blown client-side apps, where much of the business logic lives in JavaScript in the browser. How you choose to use Ajax will affect your security posture, as we will explore in future posts.

Ajax Security

Fundamentally, Ajax is no different when it comes to web application security. We have the same concerns:

  • We’re dealing with completely untrustworthy clients.
  • Input validation is essential. From the perspective of server-side code, requests are coming in from anywhere — untrusted clients, remember? — and we need to be sure that the data in the requests is what we expect.
  • Escaping output, nothing new there.
  • We have to pay careful attention to session management and access control.
  • Code injection is a problem. JavaScript, XML, LDAP, DOM…
  • The list goes on. See resources at OWASP. There’s even a new Ajax chapter in version 3 of the OWASP Guide.

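To make the output-escaping point concrete, here’s a minimal Java sketch. The class and method names are my own, and in real code you’d reach for a vetted library (such as the OWASP encoders) rather than rolling your own; this just illustrates the idea.

```java
// Minimal HTML-escaping sketch: encode the five characters that matter
// most before echoing untrusted data into an HTML page. Illustrative
// only; prefer a vetted encoding library in production code.
public class HtmlEscape {
    public static String escapeHtml(String input) {
        StringBuilder out = new StringBuilder(input.length());
        for (char c : input.toCharArray()) {
            switch (c) {
                case '&':  out.append("&amp;");  break;
                case '<':  out.append("&lt;");   break;
                case '>':  out.append("&gt;");   break;
                case '"':  out.append("&quot;"); break;
                case '\'': out.append("&#39;");  break;
                default:   out.append(c);
            }
        }
        return out.toString();
    }
}
```

Escaping at output time, with the encoding matched to the context (HTML body, attribute, JavaScript, URL), is the defense no matter how the response reaches the page.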
That said, developers (and architects too; don’t let them off the hook!) are making security mistakes when they introduce Ajax to their applications. I don’t blame Ajax for this: the real danger is JavaScript that operates in a browser environment, allowed to make HTTP requests and modify the DOM, and web apps that make this easy to abuse. Actually, the real danger is that software security is too often an afterthought. For some reason, the rush to use Ajax has led developers to place too much trust in client-side code. What worries me most is that in separate presentations at last week’s Black Hat, Jeremiah Grossman and Billy Hoffman described a new spate of JavaScript malware that takes advantage of XSS and CSRF vulnerabilities to do some seriously scary stuff. Grossman describes XSS as the new buffer overflow. Ajax isn’t necessary for some of the techniques described, but it can make an XSS attack worse, whether or not you use it in your web apps.

So what’s different about Ajax from a security perspective? I’ll take that up in the next entry.

Security

Cross-Site Scripting Tutorial Podcast

Dan Kuykendall has posted a cross-site scripting tutorial over at the MightySeek podcast. If you don’t understand cross-site scripting or have a shaky understanding, I recommend it. Dan suggests that while listening you follow along in the show notes and actually try the attacks on a sandbox he’s set up at hackme.mightyseek.com. I didn’t do this, but if you’re new to XSS then it’s probably a good idea to learn by doing.

Java, Security

Generate new session ID in Java EE?

Is there a Java EE equivalent to PHP’s session_regenerate_id()? I’d expect to find it in the neighborhood of HttpSession but don’t.

I like to change the session token whenever there’s a change in a user’s privileges. For example, let’s say that Suzy is surfing a site anonymously for a while before she logs in. As an anonymous user, she has pretty low privileges. Then when she logs in, she has greater privileges on the system. Maybe she can view her home address, update billing info, or heck: maybe she has administrative access. There has been a change in access level.

A problem arises if, while Suzy is surfing anonymously with low-level access, her session ID is stolen by an attacker using cross-site scripting or session fixation. After Suzy logs in, the attacker now has the same increased privileges on the system that Suzy does, because the attacker has Suzy’s session ID.

One countermeasure to session fixation attacks is to change Suzy’s session ID whenever her privileges change. When she logs in, assign her a new session ID. When she logs out, assign her a new session ID. Each time Suzy gets a new session ID, the old session is invalidated and the attacker — who has the old session ID — is left with a useless (nonexistent) session.

PHP’s session_regenerate_id() does this transparently, copying session data over to the new session each time (although the implementation is not without its problems). The truly crazy can even do this on each request. If I’m not mistaken, ASP.NET 2.0 also offers a means to generate a new session ID, though maybe only with cookie-less sessions. Java EE does not appear to offer one at all. Maybe some servlet containers do, but it’s not in the spec so far as I can see.

Yes, I can always create a new session and copy over all the attributes to the new session before invalidating the old one. No, it isn’t that hard. I’m just wondering if there isn’t a way for the container to do this instead. Anyone know?
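In the meantime, the copy-the-attributes workaround looks roughly like this. It’s a sketch: the Session class below is a toy stand-in for HttpSession, used only so the example is self-contained. In a real servlet you’d get the fresh session from request.getSession(true) after calling invalidate() on the old one, and walk getAttributeNames() to copy the data.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.UUID;

// Sketch of regenerating a session ID by hand: make a fresh session,
// copy the attributes over, and invalidate the old session so that a
// stolen session ID no longer maps to anything.
public class RegenerateSession {

    // Toy stand-in for javax.servlet.http.HttpSession.
    static class Session {
        final String id = UUID.randomUUID().toString();
        final Map<String, Object> attributes = new HashMap<>();
        boolean valid = true;

        void invalidate() {
            valid = false;
            attributes.clear();
        }
    }

    static Session regenerate(Session old) {
        Session fresh = new Session();            // new ID is issued here
        fresh.attributes.putAll(old.attributes);  // carry the state over
        old.invalidate();                         // attacker's copy of the old ID is now dead
        return fresh;
    }
}
```

Call this at every privilege change — login and logout at minimum — and the attacker’s captured ID goes stale exactly when it would have become valuable.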

Security

OWASP meeting August 8

Next Tuesday, August 8, the Minneapolis / Saint Paul OWASP chapter is meeting at Metro State Management Education Center in downtown Minneapolis, right near MCTC. Directions are on the chapter page.

I’ll be presenting a short talk on Ajax security. Then Pete Palmer from Wells Fargo will talk about “Making the Most of Apache Web Server Security”. I don’t know Pete or anything more about him than his brief bio, but I’ve been told that he’s an engaging speaker.

There will be pizza.

If you’re in town, I hope you can make it. The group is really starting to come together.
