By Joseph Blumberg, Dartmouth News

Dartmouth computer science graduate students are applying their research techniques to fundamental security flaws recently found in nearly every computer chip manufactured in the last 20 years. Until new designs are implemented, an interim solution devised at Dartmouth can fill the breach.

The security defects they investigated are known by ominous names such as Spectre, Meltdown, and Foreshadow. They are buried deep in the design of the chips now running in almost all digital devices, including laptops, tablets, cell phones, and servers in the cloud. These vulnerabilities were inadvertently introduced by chipmakers while pursuing designs that enabled greater computing speeds.

The research team includes two PhD students, Prashant Anantharaman and Ira Ray Jenkins, and Rebecca Shapiro, Guarini ’18, who received her PhD this past spring. Professor of Computer Science Sean Smith and Research Associate Professor Sergey Bratus advised the team.

“Everyone should feel vulnerable to Spectre,” says Anantharaman, a member of the team that focused on one specific variant of Spectre. “Microprocessors from Intel, AMD, and ARM that are used in devices worldwide are vulnerable, and a hacker exploiting this vulnerability could do anything from stealing personal passwords to getting control of critical infrastructure, like a power system.”

As the processor runs the instructions it has been given, it leaves behind a trail of “bread crumbs”: virtual footprints of where it has been and what it has been doing. “If you know how to look for these clues to what the processor has done, it is possible to back-trace information that passed through the processor,” says Jenkins. The industry’s working assumption had been that no one would see those clues, that no one had the ability to find them. But in the last decade, the researchers say, new techniques have been developed that give attackers exactly that ability.
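In Spectre’s case, the most telling of those bread crumbs is the state of the processor’s cache: a speculatively executed instruction is rolled back, but the cache lines it touched stay warm, and anyone who can time memory accesses can tell which ones. As a rough sketch of that measurement step, and not of the Dartmouth team’s own tooling, the following C program compares a cached and an uncached read; it assumes an x86-64 machine with GCC or Clang, and the exact cycle counts will vary by hardware:

```c
/* Rough sketch of the timing side channel behind Spectre-style leaks:
 * compare how long a cached and an uncached memory read take. This is a
 * generic FLUSH+RELOAD-style probe, not the researchers' code.
 * Build: gcc -O1 -o probe probe.c */
#include <stdint.h>
#include <stdio.h>
#include <x86intrin.h>  /* _mm_clflush, _mm_mfence, __rdtscp */

static uint8_t probe_line[64];  /* one cache line's worth of data */

/* Time a single read of *addr in timestamp-counter cycles. */
static uint64_t timed_read(volatile uint8_t *addr)
{
    unsigned int aux;
    _mm_mfence();                      /* settle earlier memory traffic */
    uint64_t start = __rdtscp(&aux);
    (void)*addr;                       /* the access being measured */
    uint64_t end = __rdtscp(&aux);
    return end - start;
}

int main(void)
{
    uint8_t *line = probe_line;

    *line = 1;                         /* warm the line into the cache */
    uint64_t hit = timed_read(line);

    _mm_clflush(line);                 /* evict it */
    _mm_mfence();
    uint64_t miss = timed_read(line);

    printf("cached read:  ~%llu cycles\n", (unsigned long long)hit);
    printf("flushed read: ~%llu cycles\n", (unsigned long long)miss);
    /* The gap is the side channel: flush a line, let victim code run,
     * then time a reload; a fast reload means the victim touched it. */
    return 0;
}
```

A full Spectre proof of concept combines a probe like this with a mistrained branch, so that speculative execution, rather than the program’s intended path, decides which cache line gets warmed.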

“We know that it is now possible to ‘come behind’ the processor and look for these clues,” says Jenkins. “I don’t think we have seen evidence of it happening yet, but that doesn’t mean that [attackers] haven’t done it. It just means that we don’t know that they have.”

“Our techniques use existing mechanisms—hardware and software that is out there—and use the tools at hand to fix the problem,” says Jenkins. “We basically set up permissions for how a program can run, how a program can organize itself.”

The team used ELFbac, software developed in 2012 by the Department of Computer Science’s TrustLab, a group that investigates how to build systems that are secure in the real world. ELFbac’s mission is to prevent large classes of cyberattacks that could expose sensitive information. It effectively inserts barriers into a program that isolate secret information and make it available only to authorized software. “Without ELFbac, this kind of separation within a single program is very difficult to achieve,” says Jenkins.
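ELFbac works at the level of the ELF binary format’s named sections, which is where the permissions Jenkins describes get their footing. As a hypothetical illustration of the programmer-facing side only, with invented section names and without the ELFbac policy itself, a secret and the one routine allowed to read it can each be placed in a dedicated section that a policy could then wall off from the rest of the program:

```c
/* Hedged sketch: isolating a secret inside one program by giving it, and
 * the only code allowed to read it, dedicated ELF sections. The section
 * names and helper are invented for illustration; the ELFbac policy that
 * would grant .secret_text access to .secret_data (and deny everything
 * else) is supplied separately and is not shown.
 * Build: gcc -o demo demo.c */
#include <stdio.h>
#include <string.h>

/* Keep the secret out of the ordinary .data section. */
__attribute__((section(".secret_data")))
static char api_key[] = "example-key-not-a-real-secret";

/* The one routine authorized to touch the key lives in its own text
 * section, so a per-section policy can tell it apart from other code. */
__attribute__((section(".secret_text")))
static int check_key(const char *candidate)
{
    return strcmp(candidate, api_key) == 0;
}

int main(void)
{
    /* General-purpose code only calls the authorized routine; under an
     * ELFbac-style policy, dereferencing api_key here would fault. */
    printf("match: %d\n", check_key("guess"));
    return 0;
}
```

Running `objdump -h demo` confirms the layout, listing the two custom sections alongside the standard ones.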

“Our approach to the problem was to give the power back to the programmer by designing policies [rules] in ELFbac that would enforce the programmers’ intent with respect to the code and data,” says Anantharaman. “Although we demonstrate how ELFbac is effective against one Spectre variant, this is a generic technique that is effective against a larger class of intra-process memory attacks.”

Jenkins says their software fixes, while effective, are only stopgaps. “What we are suggesting is, if you start at level zero, where you design your software using our techniques as we suggest, then your software is just not vulnerable to these sorts of attacks, even if new versions come along in the future.”

“The success we’ve had using ELFbac to secure data is exciting and demonstrates the wide potential of this policy mechanism,” says Smith. “Spectre is just one high-profile example of what you can do with ELFbac.”
