A DECADE AGO someone realised that the locks keeping swathes of the internet secure were not working. OpenSSL, a tool used to encrypt anything from social-media passwords to e-commerce purchases, had a fatal flaw that made the information it was supposed to protect visible to potential hackers. The discovery was unsurprising to anyone who knew about the team behind OpenSSL. The software, used by almost 20% of websites—including tech companies making billions of dollars in annual profits—was largely run by two men named Steve, who worked on it in their spare time. Comments on the code contained admissions of potential weaknesses, such as “EEK! Experimental code starts.”

After the flaw, which came to be known as “Heartbleed”, was discovered, tech companies pledged millions of dollars to expand OpenSSL’s team. The hobbyists would become paid staff, better able to secure the web. But last month another hole in the internet’s infrastructure was discovered, in XZ Utils, a piece of software used to compress and decompress data on Linux, an operating system that underpins key parts of the internet’s infrastructure. A volunteer who had helped maintain the project for two years had smuggled a backdoor into the code, which would have let attackers run commands that would otherwise have been blocked. Once again a volunteer-run project had been breached, this time deliberately. In 2021 Log4j, a tool that records errors and other events in software, faced a similar vulnerability. Given how often such breaches occur, why is so much critical software maintained by hobbyists?
