Researchers are developing computers capable of “approximate computing”: performing calculations that are good enough for tasks that don’t require perfect accuracy, potentially doubling efficiency and reducing energy consumption.
“The need for approximate computing is driven by two factors: a fundamental shift in the nature of computing workloads, and the need for new sources of efficiency,” said Anand Raghunathan, a Purdue Professor of Electrical and Computer Engineering, who has been working in the field for about five years. “Computers were first designed to be precise calculators that solved problems where they were expected to produce an exact numerical value. However, the demand for computing today is driven by very different applications. Mobile and embedded devices need to process richer media, and are getting smarter – understanding us, being more context-aware and having more natural user interfaces. On the other hand, there is an explosion in digital data searched, interpreted, and mined by data centers.”
A growing number of applications are designed to tolerate “noisy” real-world inputs and use statistical or probabilistic types of computations.
“The nature of these computations is different from the traditional computations where you need a precise answer,” said Srimat Chakradhar, department head for Computing Systems Architecture at NEC Laboratories America, who collaborated with the Purdue team. “Here, you are looking for the best match since there is no golden answer, or you are trying to provide results that are of acceptable quality, but you are not trying to be perfect.”
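Chakradhar’s “best match” point can be made concrete with a short sketch. This is an illustration, not code from the NEC or Purdue work, and the function name and tolerance value are assumptions: when there is no golden answer, a search can stop as soon as it finds a candidate of acceptable quality, skipping the remaining comparisons.

```python
def good_enough_match(query, candidates, tolerance):
    """Return a candidate within `tolerance` of `query` if one is seen,
    otherwise the closest candidate examined.

    Unlike an exact search, this stops early at the first acceptable
    match, trading a guaranteed-optimal answer for less work.
    """
    best, best_dist = None, float("inf")
    for c in candidates:
        d = abs(c - query)
        if d < best_dist:
            best, best_dist = c, d
        if best_dist <= tolerance:
            break  # acceptable quality reached; skip the remaining candidates
    return best

print(good_enough_match(10, [3, 9, 50, 10], 2))  # 9: close enough, found early
```

Here the exact best match (10) is never examined, yet the result is within the stated tolerance, which is the whole point of a “no golden answer” workload.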
However, today’s computers are designed to compute precise results even when it is not necessary. Approximate computing could endow computers with a capability similar to the human brain’s ability to scale the degree of accuracy needed for a given task. New findings were detailed in research presented during the IEEE/ACM International Symposium on Microarchitecture, Dec. 7-11 at the University of California, Davis.
The inability to scale computation to the level of accuracy a task actually requires is inherently inefficient and saps energy.
“If I asked you to divide 500 by 21 and I asked you whether the answer is greater than one, you would say yes right away,” Raghunathan said. “You are doing division but not to the full accuracy. If I asked you whether it is greater than 30, you would probably take a little longer, but if I ask you if it’s greater than 23, you might have to think even harder. The application context dictates different levels of effort, and humans are capable of this scalable approach, but computer software and hardware are not like that. They often compute to the same level of accuracy all the time.”
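Raghunathan’s division example can be paraphrased in code. This is a minimal sketch of effort scaling with the question asked, not the researchers’ hardware: testing whether a/b exceeds a threshold t never requires the full quotient, because for positive b, a/b > t is equivalent to a > t*b.

```python
def quotient_exceeds(a: int, b: int, t: int) -> bool:
    """Return True if a / b > t (for positive b and t),
    without ever computing a / b exactly."""
    if t == 1:
        return a > b      # the coarsest question: a single comparison
    return a > t * b      # tighter questions need a multiply, but still no divide

print(quotient_exceeds(500, 21, 1))   # True:  500/21 is about 23.8, clearly > 1
print(quotient_exceeds(500, 21, 30))  # False: 23.8 < 30
print(quotient_exceeds(500, 21, 23))  # True:  23.8 > 23, the hardest of the three
```

The three calls mirror the three questions in the quote: the answer to the coarse question falls out almost for free, while the borderline question forces more of the arithmetic to actually be carried out.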
Purdue researchers have developed a range of hardware techniques to demonstrate approximate computing, showing a potential for improvements in energy efficiency.
The research paper presented during the IEEE/ACM International Symposium on Microarchitecture was authored by doctoral student Swagath Venkataramani; former Purdue doctoral student Vinay K. Chippa; Chakradhar; Kaushik Roy, Purdue’s Edward G. Tiedemann Jr. Distinguished Professor of Electrical and Computer Engineering; and Raghunathan.
Recently, the researchers have shown how to apply approximate computing to programmable processors, which are ubiquitous in computers, servers and consumer electronics.
“In order to have a broad impact we need to be able to apply this technology to programmable processors,” Roy said. “And now we have shown how to design a programmable processor to perform approximate computing.”
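The paper’s processor design is not reproduced here, but one widely used approximate-computing technique in the same spirit, operating on truncated low-precision operands, can be sketched in a few lines (the function name and bit counts are illustrative assumptions, not the Purdue design):

```python
def truncated_add(x: int, y: int, drop_bits: int) -> int:
    """Add x and y after zeroing their drop_bits low-order bits.

    Dropping low-order bits models a narrower, cheaper adder: the
    result can differ from the exact sum, but the error is bounded
    below 2 * 2**drop_bits, which many media and mining workloads
    tolerate in exchange for energy savings.
    """
    mask = ~((1 << drop_bits) - 1)
    return (x & mask) + (y & mask)

print(truncated_add(13, 11, 0))  # 24: exact sum when no bits are dropped
print(truncated_add(13, 11, 2))  # 20: approximate sum, off by 4 (bound is 8)
```

The `drop_bits` parameter plays the role of a software-visible accuracy knob: a program could dial it up for error-tolerant phases of a computation and back down to zero where exact results matter.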