By shifting where computation physically happens, Princeton researchers have built a new type of computer chip that boosts the performance and slashes the energy demands of systems used for artificial intelligence.
The chip, which works with standard programming languages, could be particularly useful on phones, watches or other devices that rely on high-performance computing and have limited battery life.
The chip, based on a technique called in-memory computing, is designed to clear a primary computational bottleneck that forces computer processors to expend time and energy fetching data from stored memory. In-memory computing performs computation directly in the storage, allowing for greater speed and efficiency.
The announcement of the new chip, along with a system to program it, follows closely on an earlier report that the researchers in collaboration with Analog Devices Inc. had fabricated circuitry for in-memory computing. Lab tests of the circuitry demonstrated that the chip would perform tens to hundreds of times faster than comparable chips. However, the initial chip did not include all the components of the most recent version, so its capability was limited.
In the new announcement, researchers in the lab of Naveen Verma, an associate professor of electrical engineering, report that they have integrated the in-memory circuitry into a programmable processor architecture. The chip now works with common computer languages such as C.
“The previous chip was a strong and powerful engine,” said Hongyang Jia, a graduate student in Verma’s group and one of the chip designers. “This chip is the whole car.”
Although it could operate with a broad range of systems, the Princeton chip is intended to support systems designed for deep-learning inference — algorithms that allow computers to make decisions and perform complex tasks by learning from data sets. Deep learning systems direct such things as self-driving cars, facial recognition systems and medical diagnostic software.
Verma said that for many applications, the chip’s energy savings would be as critical as the performance boost. That is because many AI applications are expected to operate on battery-powered devices such as mobile phones or wearable medical sensors. The Apple iPhone X, for example, already has an AI chip as part of its circuitry. But both the energy savings and the performance boost are useful only if they can be accessed by the broad base of applications that need them — that is where programmability comes in.
“The classic computer architecture separates the central processor, which crunches the data, from the memory, which stores the data,” Verma said. “A lot of the computer’s energy is used in moving data back and forth.”
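To make that bottleneck concrete, here is a toy Python sketch — my own illustration, not the chip's actual instruction stream — of the multiply-accumulate loop at the heart of deep-learning inference. On a classic von Neumann machine, every operand in this loop must be fetched across the memory bus into the processor before it can be used; in-memory computing eliminates those fetches by computing where the weights are stored:

```python
def dot_product(weights, activations):
    """One multiply-accumulate chain, the core operation of deep-learning
    inference. On a classic architecture, each operand below is fetched
    from memory into the processor before it can be multiplied."""
    acc = 0
    fetches = 0
    for w, a in zip(weights, activations):
        fetches += 2        # one weight fetch + one activation fetch
        acc += w * a
    return acc, fetches

# A tiny example: the result needs only 3 multiplies, but costs 6 memory
# fetches -- and real networks run millions of these loops per inference.
acc, fetches = dot_product([2, -1, 3], [1, 4, 2])
```

The fetch counter is the point of the sketch: the arithmetic itself is cheap, while the data movement it triggers dominates the energy budget that in-memory computing is designed to recover.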
In part, the new chip is a response to the slowing promise of Moore’s Law. In 1965, Gordon Moore, who later co-founded Intel, observed that the number of transistors on integrated circuits doubled about every year, and the industry also noted that those transistors became faster and more energy efficient in the process. For decades, these observations, which became known as Moore’s Law, underpinned a transformation in which computers became ever more powerful. But in recent years, transistors have stopped improving as they once did, running into fundamental limitations of their physics.
Verma, who specializes in circuit and system design, thought about ways around this squeeze at the architectural level rather than the transistor level. The computation needed by AI would be much more efficient if it could be done at the same location as the computer’s memory, because that would eliminate the time and energy used to fetch data stored far away. That would make the computer faster without upgrading the transistors. But creating such a system posed a challenge. Memory circuits are designed as densely as possible in order to pack in large amounts of data. Computation, on the other hand, requires that space be devoted to additional transistors.
One option was to substitute electrical components called capacitors for the transistors. Transistors are essentially switches that use voltage changes to stand for the 1s and 0s that make up binary computer signals. They can do all sorts of calculations using arrays of 1 and 0 digits, which is why the systems are called digital. Capacitors store and release electrical charge, so they can represent any number, not just 1s and 0s. Verma realized that with capacitors he could perform calculations in a much denser space than he could with transistors.
Capacitors also can be made very precisely on a chip, much more so than transistors. The new design pairs capacitors with conventional cells of static random access memory (SRAM) on a chip. The combination of capacitors and SRAM performs computations on the data in the analog (not digital) domain, yet in ways that are reliable and amenable to programmability. Now, the memory circuits can perform calculations as directed by the chip’s central processing unit.
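As a rough illustration of the charge-sharing idea — a hypothetical Python model of the principle, not Verma's actual circuit — each bit cell can be thought of as depositing charge on a local capacitor when its stored weight bit and the input bit are both 1. Shorting all the capacitors together then averages their voltages, so the shared rail directly reads out the sum of products, with no digit-by-digit arithmetic in the processor:

```python
import random

VDD = 1.0  # supply voltage; a capacitor charged to VDD encodes a product of 1

def in_memory_mac(inputs, weights, noise=0.0):
    """Model one charge-domain multiply-accumulate over a column of bit cells.

    Each SRAM bit cell multiplies its stored weight bit by the input bit
    (a 1-bit AND) and charges its local capacitor to VDD for a product of 1,
    leaving it at 0 V otherwise. Charge sharing among equal capacitors then
    averages the voltages, so the rail settles at (sum of products)/N * VDD.
    The optional noise term stands in for analog non-idealities.
    """
    assert len(inputs) == len(weights)
    cap_voltages = [
        (VDD if (x and w) else 0.0) + random.gauss(0, noise)
        for x, w in zip(inputs, weights)
    ]
    # Shorting N equal capacitors together yields their mean voltage.
    return sum(cap_voltages) / len(cap_voltages)

inputs  = [1, 0, 1, 1, 0, 1, 0, 1]
weights = [1, 1, 0, 1, 0, 1, 1, 0]
v = in_memory_mac(inputs, weights)      # 3 of 8 cells match, so v = 3/8 * VDD
dot = round(v / VDD * len(inputs))      # digitize back to an integer sum of 3
</antml>```

The design choice this sketch reflects is the one described above: because a voltage can represent any value, a single analog read replaces a long chain of digital multiply-add steps, at the cost of analog precision — which is why pairing the capacitors with reliable SRAM cells matters.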
“In-memory computing has been showing a lot of promise in recent years, in really addressing the energy and speed of computing systems,” said Verma. “But the big question has been whether that promise would scale and be usable by system designers towards all of the AI applications we really care about. That makes programmability necessary.”