By Marlene Cimons, National Science Foundation
Throughout the 50-plus years of the modern computing era, virtually all machines, from laptops to smartphones, have had one feature in common: a fixed, conventional "one size fits all" processor. This means that software developers must craft applications to match the inflexible design of the processor, rather than the other way around.
Within the next 10 to 20 years, however, scientists hope to have many computers moving in a new direction, one that will enable machines to adapt easily to a wide range of software applications. The researchers like to call it "morphing." It is an advance that will make computers run faster while consuming less energy.
“It’s a new and innovative way to build and use computers,” says Alan George, professor of electrical and computer engineering at the University of Florida and director of the Center for High-Performance Reconfigurable Computing, or CHREC (pronounced “shreck”), a research center based at four major universities with more than two dozen industry and government partners.
"With reconfigurable computing, the architecture of the processor is adaptive, and thus can be customized to match the unique needs of each application," he adds. "By changing the mindset of computing, from processor-centric to application-centric, reconfigurable computing can complete tasks in a fraction of the time, and at a fraction of the cost, of traditional servers or supercomputers."
The new design, for example, ultimately could transform computational biology, providing computers that could analyze DNA in a matter of seconds or minutes, rather than hours or days, or sequence genes in order to detect disease, or identify the best treatments for individual patients.
"It won't replace conventional computing, but it will speed up certain aspects of computing, with much less energy consumption," says Herman Lam, associate professor of electrical and computer engineering at the University of Florida.
Lam, who helped develop the center’s Novo-G supercomputer, believed to be the world’s most powerful reconfigurable computer, says the impact likely will be greatest in data analysis.
“Five or ten years ago, the big bottleneck in many science domains was the inability to collect or generate enough data,” he says. “Today, machines are efficient at producing data, and the bottleneck now is that there is too much data, and computers can’t analyze it quickly. That’s what’s going to change.”
The National Science Foundation currently provides the University of Florida, the lead institution for CHREC, with $116,000 annually over five years, and $66,000 to each of its three research partners, which include Brigham Young University, George Washington University and Virginia Tech. Most of the center’s funding, however, comes from industry support, a consortium that includes such companies as processor giant Intel, The Boeing Co., Monsanto, Honeywell Aerospace and Lockheed Martin, among many others.
Traditional computers with fixed processors can perform many tasks, but they carry a substantial amount of overhead in space and energy. On the other hand, special-purpose, or "domain-specific," computers can perform certain tasks very well, but cannot adapt to new ones.
Reconfigurable computers offer the best of both worlds. Using devices such as field-programmable gate arrays, or FPGAs, they allow the processor to "morph," that is, rearrange its internal circuitry, sort of like children's Legos, to create the right architecture for each job that comes along.
“Rather than coming up with one architecture that’s good for many applications, we’ve come up with one that’s adaptable,” George says. “We configure it one way for one application, then reconfigure it for another. It’s like Lego blocks. Instead of giving a child a toy truck or a cowboy figure, give him a bunch of Lego blocks in different sizes and shapes and he can make whatever he wants, and then five minutes later, he can make something else. One minute it’s this, and then five minutes later, it’s something else.”
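The "Lego block" idea can be illustrated in a few lines of code. The sketch below is purely conceptual, not part of any FPGA vendor's toolchain: it models the basic building block of an FPGA, a lookup table (LUT), whose stored bits determine which logic function it computes. "Reconfiguring" the chip amounts to loading new table contents, so the same hardware implements a different circuit.

```python
class LUT2:
    """Conceptual model of a 2-input FPGA lookup table (illustrative only).

    A real FPGA contains thousands of such LUTs wired together; loading a
    new configuration bitstream rewrites their contents, changing what
    circuit the chip implements without changing the silicon itself.
    """

    def __init__(self, truth_table):
        # Four bits, one per input combination: f(0,0), f(0,1), f(1,0), f(1,1)
        self.truth_table = list(truth_table)

    def reconfigure(self, truth_table):
        # "Morphing": load new contents; the hardware element stays the same.
        self.truth_table = list(truth_table)

    def __call__(self, a, b):
        # Evaluate the stored function by indexing into the table.
        return self.truth_table[(a << 1) | b]


lut = LUT2([0, 0, 0, 1])       # configured as an AND gate
print(lut(1, 1))               # prints 1
lut.reconfigure([0, 1, 1, 0])  # same LUT, now an XOR gate
print(lut(1, 1))               # prints 0
```

The same object computes AND one moment and XOR the next, just as George's Lego analogy describes: the building blocks stay fixed, and only their configuration changes.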