In this 2-part blog series, we take a look at security from a developer’s point of view.

Nobody wants to write insecure software

Unless you’re paid by a competitor or a hostile foreign government to compromise the security of a product, development, for most of us, means fixing things: making them better, safer, and more scalable. But if everyone is already working to improve quality, why isn’t the number of security issues caused by software bugs decreasing?

The key problem is complexity. The human brain hasn’t changed in the past hundred years, whereas technology grows exponentially in complexity: more transistors per unit area, more software libraries, more data abstractions stacked on top of each other, all somehow orchestrated to work together in an ever-changing technological ecosystem.

This growth in complexity means that a programmer today must keep in mind a great deal of information about the systems they use, and how they interact to deliver the expected results.

“Developers with a clean, expressive set of specific security requirements can build a very tight machine. They don’t have to be security gurus, but they do have to understand what they’re trying to build and how it should work.” [Rick Smith]

Good security is a habit

Having practical experience is paramount to managing this cognitive overload. Just like a professional chess player doesn’t have to research how each piece moves every time, but can rely on this knowledge to be at their fingertips, a career programmer should not have to think about the theory of security whenever they write a line of code.

A good security posture should be almost like a reflex; an innate habit.

For those reasons, knowing how to write code securely should be a practical skill, not a theoretical burden, and our Secure Coding Workshop aims to deliver exactly this: practical security.

In fact, those who attend our workshops are led not only to hack applications written in a language and frameworks they are familiar with, but also to fix the security issues they find. We engage the group by discussing the benefits and drawbacks of the possible approaches and by reviewing the most recent best practices.

“Investment in software quality will reduce the incidence of computer security problems, regardless of whether security was a target of the quality program or not; and that the most effective quality measure from the security point of view is the code walk-through.”[PJF Fillery, AN Chandler]

Most security issues are found in legacy software or ‘forgotten’ systems

The good news is that software, like any technology, tends to get better rather than worse. Recent libraries and frameworks are often designed with a ‘secure by default’ mindset.

Examples include modern web frameworks and back-end libraries that provide efficient defences, enabled by default, against attacks such as SQL injection, cross-site request forgery (CSRF) and cross-site scripting (XSS). To use such libraries insecurely, a developer must make an explicit effort; in other words, the path of least resistance naturally leads to a secure solution.
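Our workshops target languages such as .NET, Java and PHP, but the secure-by-default principle is easy to sketch in any language. The snippet below (Python with the standard-library sqlite3 module, chosen purely to keep the illustration self-contained) contrasts SQL built by string concatenation with a parameterised query, the default style encouraged by modern data-access libraries:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

# A classic injection payload supplied as "user input"
user_input = "' OR '1'='1"

# Vulnerable: concatenation lets the payload rewrite the query logic
vulnerable = conn.execute(
    "SELECT name FROM users WHERE name = '" + user_input + "'"
).fetchall()

# Safe: the placeholder treats the payload as a literal string value
safe = conn.execute(
    "SELECT name FROM users WHERE name = ?", (user_input,)
).fetchall()

print(vulnerable)  # [('alice',)] -- the injected OR clause matched every row
print(safe)        # [] -- no user is literally named "' OR '1'='1"
```

Note that the parameterised form is also the shorter and more natural one to write: the path of least resistance is the secure one.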

The bad news is that starting a brand-new project is a luxury that doesn’t come along very often in a programmer’s career. Typically, a programmer’s day job is spent maintaining someone else’s work. This involves reading, understanding and often fixing code written by other people with different skill sets, under different pressures, perhaps at a time when security was an afterthought (if a concern at all). This is when security becomes hard to implement without breaking existing features or disrupting a business-critical system.

In this context, a significant asset is a developer’s ability to spot a security bug at a glance in someone else’s work. Rather than waiting for a compromise to happen or commissioning an expensive security audit, why not leverage the eyeballs of someone who is already familiar with the product? If “given enough eyeballs, all bugs are shallow”, then having more eyes trained to find security bugs is nothing but a good ROI, and an excellent asset for every team.

Our Secure Coding Workshop gives precisely this edge, by giving practical examples of the most common security issues in the programming language or framework of choice – such as .NET, Java or PHP.

The participants are then asked to find and fix security issues in an existing codebase, and to prove the validity of their fixes. Each solution is presented using the bare-bones language itself (e.g. core Java or .NET) as well as some of the most commonly used frameworks (e.g. Spring), to stay as close as possible to situations arising in their day-to-day work.

In part 2 of our mini-series we’ll take a look at the following points:

  • Fixing security bugs might be simpler than expected
  • Don’t blame the developer
  • Even basic, practical security training can yield immediate benefit
  • Good process creates good software
  • Legacy does not necessarily mean it needs replacing
