Despite current security concerns, speculative execution has powered the computing revolution

The recently discovered Spectre and Meltdown vulnerabilities, which affect microprocessors in the majority of the world’s computers, have dominated tech news over the last two weeks. Though there is no evidence to date that hackers have successfully exploited these vulnerabilities, Spectre and Meltdown make it possible for bad actors to gain access to information stored in a computer’s memory that should be off-limits to them. Security researchers around the world have been working on fixes.

Yet there is more to the story: speculative execution, the hardware feature that has led to these security vulnerabilities, also provides significant performance benefits and has been instrumental in the continued increase in microprocessor performance over the past couple of decades.

Because of this performance advantage, speculative execution techniques developed by researchers at the University of Wisconsin-Madison have been in use in billions of microprocessors worldwide for the past couple of decades.

These techniques, together with enabling approaches like branch prediction, allow a computer chip to make “educated guesses” about the commands it will need to carry out in the near future, so that it can get a head start on that work. This leads to significantly increased overlap, or parallelism, in carrying out those commands. The resulting performance gains have made possible countless things that consumers and businesses now take for granted: fast video streaming, online payment systems, cloud computing and much more.
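To make the idea concrete, the short C program below is a hypothetical benchmark (the names sum_large and cmp_int, the array size, and the repetition count are invented for illustration) that shows branch prediction at work. The loop executes the same instructions either way, but when the data are sorted the processor’s “educated guess” about the if-statement is almost always right, so it can speculate far ahead and the loop typically finishes noticeably faster.

```c
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define N    1000000
#define REPS 100

/* Sum only the "large" values. The if-statement is a branch the processor
 * must guess at before it knows the data; good guesses let it race ahead. */
static long sum_large(const int *data, int n) {
    long sum = 0;
    for (int i = 0; i < n; i++)
        if (data[i] >= 128)      /* the predicted branch */
            sum += data[i];
    return sum;
}

static int cmp_int(const void *a, const void *b) {
    return *(const int *)a - *(const int *)b;
}

static double time_sum(const int *data, int n) {
    clock_t t0 = clock();
    volatile long s = 0;                 /* keep the work from being optimized away */
    for (int r = 0; r < REPS; r++)
        s += sum_large(data, n);
    return (double)(clock() - t0) / CLOCKS_PER_SEC;
}

int main(void) {
    int *data = malloc(N * sizeof *data);
    for (int i = 0; i < N; i++)
        data[i] = rand() % 256;

    printf("unsorted data: %.2f s\n", time_sum(data, N)); /* branch hard to predict */

    qsort(data, N, sizeof *data, cmp_int);                /* same values, now sorted */
    printf("sorted data:   %.2f s\n", time_sum(data, N)); /* branch easy to predict */

    free(data);
    return 0;
}
```

The speedup comes entirely from the hardware; the program does nothing special to request it, which is exactly why these gains have been so broadly useful.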

Building on pioneering work on branch prediction by Jim Smith in the early 1980s, and on further work by Smith and Andy Pleszkun in the mid-1980s, Guri Sohi proposed a model for a speculative execution microprocessor in the mid- to late 1980s. This was years before the proliferation of the Internet, the early Internet worms, and the first Web browser.

It is critical to note that it is not the concept of speculative execution that creates security vulnerabilities, but rather how the approach is implemented by microprocessor designers.

“Different implementations of the speculative execution model carry different risks,” says Sohi, chair of the Computer Sciences Department. Sohi is also Vilas Research Professor, John P. Morgridge Professor and E. David Cronon Professor of Computer Sciences.

Because speculative execution makes guesses about what work a program needs to do, it brings in information that may ultimately not be needed. “There’s information kept around as a result of speculative execution that would not normally be accessible under legitimate circumstances,” says Sohi. The problem is that hackers with sophisticated knowledge of how speculative execution works can, in very clever and indirect ways, deduce that discarded information for malicious purposes. In the attacks described by researchers, for example, the guessed-at work leaves subtle traces in the processor’s cache, and an attacker can recover protected data by carefully timing later memory accesses.
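For readers curious what such a vulnerable pattern looks like, the sketch below is a simplified, hypothetical rendering of the “bounds check bypass” example described in the public Spectre research (the names array1, array2 and victim_function follow that write-up; the sizes are arbitrary). It is not a working exploit; it only shows the shape of code whose speculative side effects an attacker could later measure.

```c
#include <stdint.h>
#include <stddef.h>

#define ARRAY1_SIZE 16

uint8_t array1[ARRAY1_SIZE];   /* small array the code is allowed to read */
uint8_t array2[256 * 4096];    /* large "probe" array an attacker can also access */

uint8_t victim_function(size_t x) {
    /* The processor may guess that this check passes and run the next line
     * speculatively, even when x is out of bounds.  The out-of-bounds byte
     * array1[x] then determines WHICH part of array2 gets pulled into the
     * cache.  The architectural result is discarded, but the cache state is
     * not, and an attacker who later times accesses to array2 can infer the
     * value of that byte. */
    if (x < ARRAY1_SIZE) {
        return array2[array1[x] * 4096];
    }
    return 0;
}

int main(void) {
    /* A legitimate call; the danger arises only when an attacker controls x
     * and has trained the branch predictor to expect in-bounds values. */
    return victim_function(3);
}
```

Fixes for this class of problem generally involve either keeping the processor from speculating past such checks or ensuring the speculative work leaves no measurable trace, which is why they must be tailored to each chip design.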

Lost in the current news cycle, says Sohi, is the fact that “techniques like speculative execution have resulted in significant performance increases that have enabled so many other advances in computing, which we take for granted today.”

And because those advances take place “under the hood,” at the level of the processing chip, software developers do not need to do anything extra to take advantage of the gains in speed. That has made advances in software applications easier to achieve than they would otherwise have been.