One of the problems in writing secure software is that security is too infrequently considered during the requirements or design phase, even though it needs to be planned throughout the software development process. I have never worked on a project with clearly documented security requirements. Security ends up being part of the process, but only because I’m a bit of a freak about these things. The danger is just what you always hear: security is considered so late in the process that addressing it entails re-architecting and missed deadlines. If deadlines cannot slip and security is an afterthought … well, the problem with that should be obvious.

Off the top of my head, a few ways that identifying security requirements can help:

  • Identify vulnerabilities in standard security mechanisms (e.g. an authorized user can still do Bad Things).
  • With clear requirements, developers can implement consistent, centralized approaches to security. If developers just have to make their best guess as to what it means to be secure, you’ll end up with 1) widely varied practices because everyone’s off doing their own thing, and 2) inconsistent quality because, frankly, some developers don’t know how to think through security issues. Even if they do, they may be too close to the code to consider its assumptions objectively.
  • Staff development: as mentioned, too many developers are ignorant of secure programming techniques and possible attack vectors. Addressing security early may force the issue.
  • Raise awareness of security concerns among users / customers.
  • Ensure that requirements for a given project are consistent with the policies, requirements, and implementations already in use.
  • Notice how I slid the dreadful word “policy” in there? I believe that security design and implementation should flow from — or at least be traceable to — up-to-date security policy that sets a clear and reasonable direction. By identifying where requirements may be out of sync with policy, we can ensure that policy remains current and meaningful (assuming that policy is flexible and responsive :-).
  • And of course, ensure that security is addressed throughout development.

Lately I’ve been knocking around the idea of misuse cases as a way to elicit security requirements. I was introduced to the concept by a series of articles by Gunnar Peterson outlining a secure development process (PDF: parts one, two, three). You may already be familiar with use cases, a technique for identifying and describing the functional requirements of a system: what the software should do. Misuse cases describe what a system should not do. For each feature or use case, a development team explores how that feature could be deliberately abused or misused, and from these explorations develops misuse cases and security requirements.

Here’s a basic use case diagram; a misuse case is identified with inverted colors:

basic use case diagram for 'Add Comment' with an 'Add Comment Spam' misuse case and two mitigating use cases

In this basic diagram, I started with a use case, “Add Comment.” An obvious (and frustrating) abuse of the system is comment spam. This prompted the creation of two new use cases, “Moderate Comments” and “Run IP Blacklist,” to prevent the Add Comment Spam misuse case. Already, just by identifying potential misuses of a system, we’ve built out the requirements to make the system more sound.

Granted, this is neither the best example of a use case nor of a security concern that I could have come up with, but you get the idea.
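To make the diagram a bit more concrete, here’s a minimal sketch of how those two mitigating use cases might show up as code-level requirements. Everything here — the function names, the blacklist set, the moderation queue — is hypothetical and only illustrates the shape of the mitigations, not any particular blogging system:

```python
# Hypothetical sketch: the 'Add Comment' use case with the two
# mitigations from the diagram applied.

BLACKLISTED_IPS = {"203.0.113.7"}  # "Run IP Blacklist": known spam sources

pending_moderation = []  # "Moderate Comments": submissions awaiting review
published = []           # comments visible on the site

def add_comment(ip, text):
    """Handle the 'Add Comment' use case with both mitigations in place."""
    if ip in BLACKLISTED_IPS:
        return "rejected"              # blacklist blocks the misuse case outright
    pending_moderation.append((ip, text))
    return "held"                      # nothing publishes without moderation

def moderate(approve):
    """A moderator reviews the oldest pending comment."""
    ip, text = pending_moderation.pop(0)
    if approve:
        published.append(text)

print(add_comment("203.0.113.7", "Buy pills!"))   # → rejected
print(add_comment("198.51.100.4", "Nice post"))   # → held
moderate(approve=True)
print(published)                                  # → ['Nice post']
```

The point isn’t the code itself but the traceability: each branch exists because a misuse case demanded it, so the security behavior is a stated requirement rather than an accident of implementation.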

The heart of a use case is not the diagram, but the textual description. Guttorm Sindre and Andreas L. Opdahl, among the first to formally describe misuse cases, suggest a template (PDF) adapted from popular use case formats. It’s worth reviewing.
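Loosely adapting that style of template to the comment-spam example (this is my own rough paraphrase, not Sindre and Opdahl’s exact field list), a textual description might look something like:

```
Misuse case:   Add Comment Spam
Summary:       A misuser posts unsolicited advertising as blog comments.
Basic path:    1. The misuser locates the comment form.
               2. The misuser submits comments containing spam links,
                  typically via an automated script.
Mitigation:    Run IP Blacklist (reject submissions from known spam
               sources); Moderate Comments (hold all submissions for
               human review before publication).
```

Even a sketch this thin forces the team to name the attacker, the path, and the countermeasures in one place.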

Here’s the problem, though: I am wary of creating excessive documentation, and I worry that misuse cases could be taken too far without actually improving security. Of course, any documentation can be carried to an extreme and prevent actual development from getting done. But if a project calls for use cases, then I think misuse cases can help identify security requirements early in the development process and keep them in view throughout. I think I’ll hang onto the idea.