How Involved Should a Developer Be in Designing a Security Policy?

I'm also a developer, and my passion for security often makes the dynamic a little unusual when dealing with other non-security-focused developers.

There are three main areas of constraint when it comes to security policy:

  • Usability: Making sure that the user can actually use the software, and that they don't get overly irritated or delayed by security measures.
  • Technical: Making sure that security mechanisms are properly implemented, and that they don't adversely affect uptime, maintainability, or other critical technical factors.
  • Financial: Making sure that the security measures don't cost too much, whether it be through direct financial costs (e.g. buying black boxes with blinky lights) or additional man-hours.

Your job should involve advising on the technical factors, and suggesting ways that you might mitigate any usability impacts. However, you should almost always defer to the technical judgement of a competent security consultant (competent being the operative word) if they disagree with you on a technical security matter. It's your job to integrate those necessities into the product in an optimal fashion, whilst overcoming the difficult hurdles.

In terms of direct involvement in security policy for software development, it largely depends on:

  • Whether you have a dedicated security person or team.
  • What sector you work in (which might mean you need to deal with PII or HIPAA-covered data).
  • Where you are in the organisation, and whether you're in a position to effect change.
  • How security-focused the company is.

The interesting part of this question comes when you consider that most software organisations don't have a dedicated security person, let alone department. Regardless of whether or not your company does, you have a duty to learn about the types of security issues and mechanisms that apply to your work. If your company doesn't have a "security guy", that responsibility is even greater. Part of your job is to implement working and reliable software, which includes security, but another part involves relaying technical information to management in a way that they can understand. You can't do that effectively unless you understand the issues at hand.

In short, there's no simple answer. You're required to make some security decisions and implement certain mechanisms as part of your job, because that's what a developer does. As for how far you go, it entirely depends on you as a person and the organisation you work in. In order to make reasonable judgements when no overriding directive is provided, you need to put in the work to understand the issues.


I am all in favor of pushing people to better security standards; however, I like to control my own destiny. My preference would be for you to develop options that give administrators the flexibility to make their own choices, but to set the defaults to a high security level. That way, if someone takes an interest they can make things conform to their own internal security policy, and if they use it out of the box then it has high standards.
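
To make the "secure defaults, admin overrides" idea concrete, here's a minimal Python sketch of what that pattern could look like. The setting names and values are hypothetical, purely for illustration: the point is shipping strict defaults that an administrator can consciously relax or tighten.

    from dataclasses import dataclass

    @dataclass
    class SecurityConfig:
        # Hypothetical settings, for illustration only: ship with strict values,
        # but let an administrator override them to match local policy.
        min_tls_version: str = "1.3"        # secure default
        password_min_length: int = 14       # secure default
        session_timeout_minutes: int = 15   # secure default
        allow_basic_auth: bool = False      # secure default

        @classmethod
        def from_admin_overrides(cls, overrides: dict) -> "SecurityConfig":
            """Apply only recognised override keys; everything else keeps its default."""
            known = {k: v for k, v in overrides.items() if k in cls.__dataclass_fields__}
            return cls(**known)

    # Out of the box: high-security defaults, no configuration needed.
    default_cfg = SecurityConfig()

    # A site with its own internal security policy can consciously adjust settings.
    site_cfg = SecurityConfig.from_admin_overrides({"session_timeout_minutes": 60})
    print(default_cfg.session_timeout_minutes, site_cfg.session_timeout_minutes)  # 15 60

The design choice is that doing nothing leaves you secure; weakening anything requires a deliberate, visible decision by whoever owns the local policy.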


IMO, even in a "perfect world" a programmer should be involved. The programmer should be knowledgeable enough to translate business requirements and security lingo into developer-speak, and vice versa.

Put simply, only a developer understands exactly how the software works, and it's possible that something got "lost in translation". At the very minimum, a developer (or code reviewer - someone familiar with the actual code) should sit in and ensure that the people making the policies aren't doing so out of a misunderstanding of how something is implemented under the hood.

Not only does it help to make sure the others on the team understand and avoid mistakes; instituting such a policy/strategy also helps to ensure that the programmer is fully vested in the security of their system. Secure coding practices just aren't taught the way they should be. Schools and online tutorials all start out by showing you how to do things, and there's so much to learn that they seldom bother with how not to do things.
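
As one concrete illustration of that "how not to do things" gap (my own example, not tied to any particular course): tutorials tend to show query building by string concatenation, when the parameterised form below is what secure-coding guidance actually asks for.

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
    conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

    user_input = "alice' OR '1'='1"  # attacker-controlled value

    # How it's usually taught: build the query by string concatenation.
    # That form is vulnerable to SQL injection -- the input above would match every row:
    #   query = "SELECT role FROM users WHERE name = '" + user_input + "'"
    #   conn.execute(query)

    # How not to get burned: a parameterised query treats the input purely as data.
    rows = conn.execute(
        "SELECT role FROM users WHERE name = ?", (user_input,)
    ).fetchall()
    print(rows)  # [] -- the injection attempt matches nothing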

Every single security flaw in the IT world is the result of software somewhere: at the operating system level, in the APIs you're calling, or in your own code, somewhere there's a flaw that's allowing the bad behavior to happen. It could be a bug in the code or a foolish requirement, but even poor requirements end up as flaws in the code that's built to their specification.

So it makes sense to make sure that developers are involved - to get them thinking about security, and to ensure there are no misunderstandings that would lead to problems.