Worms and Wildly Insecure Software: The Untold Story of Microsoft Cybersecurity in the Early 2000s

Microsoft has published an oral history of the factors that pushed it into radically overhauling its coding approach

The heady days of the early 2000s are the focus of a new Microsoft publication detailing how the company developed its now-entrenched and now-mandatory Security Development Lifecycle (SDL) process for writing more secure software. To capture the times accurately, the company has published an oral history of the factors that pushed it into radically overhauling its coding approach.

It reads like something out of a spy thriller: “It was 2 a.m. on Saturday, July 13, 2001, when Microsoft’s then head of security response, Steve Lipner, awoke to a call from cybersecurity specialist Russ Cooper. Lipner was told a nasty piece of malware called ‘Code Red’ was spreading at an astonishing rate. Code Red was a worm – a malicious computer program that spreads quickly by copying itself to other computers across the Internet. And it was vicious.”

At the time, ABC News reported that, in just two weeks, more than 300,000 computers around the world were infected with Code Red – including some at the US Department of Defense and Department of Justice.

Clearly, something had to be done. The company embarked on a journey that included busing engineers to the customer support call center to handle the high volume of calls coming in as a result of security incidents, and, in February 2002, a shutdown of the entire Windows division that diverted all 9,000 developers to focus on security.

There were highs and lows along the way: In 2003, the SQL Slammer worm was launched, infecting tens of thousands of machines around the world within minutes – even though a patch that prevented the infection had been available for months.

Eventually, Microsoft created a team capable of finding new classes of vulnerabilities and building tools to help eradicate them – the beginnings of the company’s “security science” capabilities that continue to this day. To measure progress, the security audit function was created, a group independent of the product teams to review and assess security. By late 2003, early versions of Microsoft’s SDL began to take shape. In 2004, the security team took the official proposal for the SDL to senior leadership, and got approval to integrate the approach on a mandatory basis.

And now, 10 years later, the company is taking stock of the history and the challenges ahead. At the site, anecdotes and interviews covering the work of early security teams are augmented with video footage and photos from many of the SDL’s key players, including: Lipner; Matt Thomlinson, vice president of security; Glenn Pittaway, director of software security; Michael Howard, principal cybersecurity consultant; Tim Rains, director, Trustworthy Computing; and David LeBlanc, principal software development engineer, Windows.

“For Microsoft, through pain came progress,” a company spokesperson said in an email. “Companies can learn from its experience and avoid some of that pain themselves. The threat landscape has never been more contentious. The time is now to stop treating security as an afterthought.”

Given that, for the first time last year, more people shopped online on Cyber Monday than shopped in retail stores on Black Friday, Thomlinson is under no illusions that the work is done. The steps Microsoft took 10 years ago, he argues, must continue to evolve into ever-expanding vigilance.

“While there is progress, it’s far from universal,” he said. “Collectively we need to do more – especially in today’s landscape. Security needs to be a staple, from what we teach in schools to the culture we’re fostering in business.”
