If you think you’re hearing about a company getting hacked almost every day, that’s because you’re correct: there were over 1,300 significantly damaging breaches of large businesses last year. That’s more than 3 per day on average, and that’s only counting the ones that were reported publicly. Unfortunately, hacks are occurring at an ever-increasing rate.
Whether you work in the technology sector or not, you may be rightly wondering when the software profession will fix this problem and finally start to secure your personal data. It’s an understandable expectation, and fixing it will first require a hard look at the root causes of this worsening situation.
Computing degrees offer students the chance to learn many of the skills necessary to be a professional software engineer. Topics like algorithms, data structures, and discrete mathematics are found in almost every software-focused college course.
However, almost all students in these kinds of courses will graduate without any knowledge of how to write secure software. It’s simply not taught in most third-level schools. If it is taught, it’s only at an introductory level.
Shortly after a computer science student leaves school, they’ll likely get a job in a firm where they could be responsible for building software that handles your personal information. Making matters worse, very few companies invest in the training required for their employees to practice secure software development.
Colleges teaching software development should make security a core part of their syllabi. CS professors could even perform basic security checks on student programming assignments. If students saw their grades drop if they submitted work that was vulnerable to injection attacks or buffer overflow, we might see good security habits formed before graduation, not after.
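To make the injection example concrete, here is a minimal sketch (in Python, with an in-memory SQLite database and made-up data) of the kind of mistake an automated check could flag in a student assignment, alongside the safe alternative:

```python
import sqlite3

# Illustrative in-memory database with one table of fake data.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, secret TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 's3cret')")

def find_user_vulnerable(name):
    # BAD: attacker-controlled input is spliced directly into the SQL string.
    query = f"SELECT name FROM users WHERE name = '{name}'"
    return conn.execute(query).fetchall()

def find_user_safe(name):
    # GOOD: a parameterized query treats the input as data, never as SQL.
    return conn.execute("SELECT name FROM users WHERE name = ?", (name,)).fetchall()

payload = "' OR '1'='1"               # classic injection payload
print(find_user_vulnerable(payload))  # returns every row: the WHERE clause always matches
print(find_user_safe(payload))        # returns []: no user has that literal name
```

A grader (or a linter) that feeds hostile inputs like this into submissions would catch the vulnerable version immediately, turning an abstract warning into a lost grade.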
Risk mitigation is a tough sell
Almost all technology companies are trying to accelerate their growth, all the time. An early-stage startup will spend its time building new features in its search for product/market fit before it runs out of capital. Larger publicly traded companies have to meet their quarterly revenue goals or watch their share price dip.
When companies are so focused on earnings and market success, it can be tough to convince them to invest in things that don’t directly contribute to increased revenue. It can be a hard sell to convince companies to deal with basic operational inefficiencies, let alone to make significant investments in preventative security.
The potential impact of a hypothetical breach is always debatable ahead of time: will it be just some hard-to-quantify damage to the company’s reputation, or will it be real dollars stolen from their accounts? Or will it be something trivial that doesn’t even have to be disclosed to the public? Overconfidence and other human biases can cause best-case outcomes to be favored and worst-case outcomes to be ignored.
This is an economic dilemma, and so it has an actuarial solution (based on the risk of potential breaches, the negative impact such breaches would have, and the cost of preventative measures). For example, if a company is insecurely storing lots of Personally Identifiable Information (PII) and could quickly go out of business if that PII was stolen, it should be easy to justify investing in security to prevent a breach.
More companies should start analyzing their risk around the economics of potential breaches, and plan accordingly. It can’t always be the loudest customer or prospect driving a product’s feature set.
If someone is successful in penetrating a system, they stand to make much more money through extortion or theft than they spent on the hack itself. If they can’t hack into a system, they stand to make nothing. The difference is stark: a great reward for a successful hack, a net loss for a failure. Incentives are very strong for a hacker to make even small investments in their attempts.
If a company invests no money or effort in preventing hacks of their systems and nobody ends up hacking them, they will continue to make profits off their regular business endeavors. That’s not much of an incentive for them to change course.
With these incentive mismatches, it’s no surprise that hackers are far more motivated to penetrate systems than companies are to protect themselves. A solid and objective risk analysis will help any company choose the security controls appropriate for its situation.
Technology and techniques change rapidly
For as long as locks have existed, lock pickers and lock inventors have been in an arms race.
Reading any popular security blog (such as Krebs on Security, or the CyberArk Threat Research blog) should give you a sense of how difficult it is for software engineers to keep up with the changes in their toolkits, or defend against attacks on the innumerable technologies their applications rely on.
The solution to this problem is, counterintuitively, not to stop adopting new technology. Rather, a careful and judicious adoption of new technology can help defend against threats. Hackers always prefer going after old technology with well-established vulnerabilities available for them to exploit.
Software engineers are wrongly expected to be security experts
Speaking of the woes of software engineers, the software industry is moving in a worrying direction by expecting application engineers to be security experts too.
Programmers and engineers already have more than enough difficult technical subjects to master just to be able to create applications that are usable, accessible, responsive, and resilient. Expecting them to become experts in yet another domain is a recipe for failure.
This isn’t meant to absolve engineers of their responsibilities in security: they should be aware of common security risks (such as the OWASP Top 10) and how to avoid them, but expecting them to be leaders in the field of security is unfair. True security experts train for years in their field and have a depth of knowledge in that area that few programmers do.
Engineers and security experts should instead work together to build tools that allow applications to be built more safely, with less chance of things being done the wrong way. For example, no reasonably skilled engineer will try to create their own version of the TLS security protocols, which are used for secure communications across the internet. Those protocols are difficult enough to understand, let alone interact with using code. That’s why almost all engineers will use an off-the-shelf, trusted implementation of those protocols rather than try to build their own.
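As an illustration of what "off-the-shelf" looks like in practice, the Python standard library's `ssl` module gives an engineer a vetted TLS implementation in a few lines; `create_default_context()` turns on certificate verification and hostname checking by default:

```python
import socket
import ssl

def fetch_tls_version(host: str, port: int = 443) -> str:
    """Open a TLS connection using the standard library's vetted
    implementation and report the negotiated protocol version."""
    # Sensible, secure defaults: CERT_REQUIRED and hostname checking.
    context = ssl.create_default_context()
    with socket.create_connection((host, port), timeout=10) as raw_sock:
        with context.wrap_socket(raw_sock, server_hostname=host) as tls_sock:
            return tls_sock.version()  # e.g. "TLSv1.3"
```

The engineer never touches handshakes, cipher suites, or certificate chains directly; the hard cryptographic decisions are delegated to maintainers who specialize in them.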
The software industry just needs to figure out a similar approach for engineers to use when storing passwords and their users’ private information. Many solutions exist, but there are no standards. The best approach will be one that takes this task off the engineer’s plate entirely, allowing them to build application features without worrying about common security issues.
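For password storage specifically, the building blocks already exist in standard libraries. The sketch below uses Python's `hashlib.scrypt` (a memory-hard key-derivation function) with a random per-user salt; a production system would more likely reach for a maintained library such as argon2 or bcrypt, and would also store the cost parameters alongside the hash:

```python
import hashlib
import hmac
import os

def hash_password(password: str) -> tuple[bytes, bytes]:
    # A fresh random salt per password defeats precomputed rainbow tables.
    salt = os.urandom(16)
    digest = hashlib.scrypt(password.encode(), salt=salt, n=2**14, r=8, p=1)
    return salt, digest

def verify_password(password: str, salt: bytes, digest: bytes) -> bool:
    candidate = hashlib.scrypt(password.encode(), salt=salt, n=2**14, r=8, p=1)
    # Constant-time comparison avoids leaking information via timing.
    return hmac.compare_digest(candidate, digest)

salt, digest = hash_password("correct horse battery staple")
print(verify_password("correct horse battery staple", salt, digest))  # True
print(verify_password("wrong guess", salt, digest))                   # False
```

Note that the plaintext password is never stored anywhere; only the salt and the derived digest are, so a database leak does not directly expose credentials.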
If a system can be used, it can be hacked
The only safe computer system is one that is disconnected from the network, unplugged, thrown in a dumpster, put through an industrial shredder, loaded into an incinerator, and turned into ash.
As glib as that statement may seem, it is sadly quite true. If data exists on a hard drive, there’s a chance that it can be accessed by an attacker. That chance increases greatly if the system is powered on and is reachable on a network. Even if a computer system is completely powered off and physically disconnected, a human could still be bribed or blackmailed into accessing it by a determined hacker.
There is no magic solution to this problem, though. As a society we have accepted this risk because of the benefits afforded to us by turning on all of our computers and making them work for us. All we can do is work to keep them secure, and train our fellow humans how not to get duped or tricked into helping hackers get access.
So let’s get our software engineering students well trained in the basics of security, improve the technology that makes application development safer, and teach companies to better analyze their risks. The safety of our data depends on it.