DARPA LTLT program developing cryogenic microelectronics for future supercomputers and data centres

Over the last few decades, we have seen tremendous improvements in the performance and energy efficiency of computing and memory systems. With the rise of cloud computing, mobile devices and data volumes, demand for large data centres and supercomputers continues to grow.

 

In 1965, Gordon Moore, then R&D Director at Fairchild Semiconductor and later co-founder of Intel, predicted continued systematic declines in cost and increases in the performance of integrated circuits in his paper “Cramming more components onto integrated circuits.” Moore’s Law, which stated that the number of transistors on a chip would double approximately every two years, has been the driver of the semiconductor industry, boosting complexity, computational performance and energy efficiency while reducing cost. It has led to substantial improvements in economic productivity and overall quality of life through the proliferation of computers, communications, and other industrial and consumer electronics. Microelectronics and solid-state components have also been the backbone of military systems and major contributors to advances in radar, communications and electronic warfare systems.

 

However, sustaining Moore’s Law is becoming more and more difficult. Transistors smaller than about 7 nm experience significant quantum tunnelling through their gates, and because of the development costs involved, the 5 nm node was predicted to take longer to reach the market than the two years estimated by Moore’s Law. The vision of “Beyond CMOS” technologies is to continue the exponential reduction in the size of electronic devices by migrating from charge-based to non-charge-based devices, i.e. devices based on spin, molecular state, photons, phonons, nanostructures, mechanical state, resistance, quantum state (including phase) or magnetic flux.

 

Researchers are now developing many technologies to extend Moore’s Law. Possible candidates include vortex lasers, MOSFET-BJT dual-mode transistors, 3D packaging, microfluidic cooling, PCMOS, vacuum transistors, T-rays, extreme ultraviolet lithography, carbon nanotube transistors, silicon photonics, graphene, phosphorene, organic semiconductors, gallium arsenide, indium gallium arsenide, nano-patterning, and reconfigurable chaos-based microchips.

 

Cold Computing

When it comes to computing, hardware manufacturers are always looking for new ways to keep their chips running at lower temperatures. This lets them extract more performance from the same chips while using less energy and producing less heat. Generally speaking, cold computing is the idea of decreasing the operating temperature of a computing system to increase its computational efficiency, energy efficiency or density. The most significant impact comes from running computing systems at cryogenic temperatures. To give an idea of what this looks like: conventional processor- and memory-based data centres operate somewhat above room temperature, at around 295 K (about 22 °C), whereas cold computing looks at operating memory systems in liquid nitrogen at 77 K (−196 °C).
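
For reference, the temperatures quoted throughout this article follow from the standard Kelvin-to-Celsius conversion; a trivial check in Python:

```python
# Quick sanity check of the temperatures quoted in this article.
# Kelvin-to-Celsius conversion: T[degC] = T[K] - 273.15

def kelvin_to_celsius(t_kelvin: float) -> float:
    return t_kelvin - 273.15

for label, t_k in [("room-temperature data centre", 295.0),
                   ("liquid nitrogen", 77.0),
                   ("liquid helium / quantum regime", 4.0)]:
    print(f"{label}: {t_k} K = {kelvin_to_celsius(t_k):.0f} degC")
# -> roughly 22 degC, -196 degC and -269 degC respectively
```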

 

However, as outlined by the Intelligence Advanced Research Projects Activity (IARPA), power and cooling for large-scale computing systems and data centres are becoming increasingly unmanageable. Conventional computing systems, which are based on complementary metal-oxide-semiconductor (CMOS) switching devices and normal-metal interconnects, are struggling to keep up with demands for faster and denser computational power delivered in an energy-efficient way.

 

The other obstacle to packing more transistors onto a chip relates to Dennard scaling: as transistors get smaller, their power density stays constant, so that a chip’s power use stays in proportion to its area. Historically, Dennard scaling enabled denser and more energy-efficient memory and computational systems, but it has slowed dramatically, because supply and threshold voltages can no longer shrink in step with device dimensions. This breakdown has created a “Power Wall”, a barrier to clock speed, that has limited microprocessor frequencies to around 4 GHz since 2005. This is where cold computing can have a significant impact, enabling organisations to build higher-performance computers that use less power at lower cost, simply by reducing the temperature of the system.
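
To make the constant-power-density argument concrete, here is a minimal sketch of the classic constant-field (Dennard) scaling rules; all quantities are normalized, and the scaling factor k = 1.4 per generation is purely illustrative rather than tied to any particular process node.

```python
# Minimal sketch of classic (constant-field) Dennard scaling.
# All quantities are normalized to 1.0 at the starting node; "k" is the
# linear scaling factor between nodes (illustrative value: k = 1.4).

def dennard_scale(k: float):
    """Return normalized per-transistor power, area and power density
    after one constant-field scaling step by factor k."""
    capacitance = 1.0 / k      # gate capacitance scales with device dimensions
    voltage = 1.0 / k          # supply and threshold voltages scale down together
    frequency = k              # gate delay shrinks by 1/k, so clock speed rises by k
    area = 1.0 / k**2          # transistor footprint shrinks quadratically

    power = capacitance * voltage**2 * frequency   # P = C * V^2 * f -> 1/k^2
    power_density = power / area                   # stays ~constant
    return power, area, power_density

if __name__ == "__main__":
    p, a, d = dennard_scale(1.4)
    print(f"power/transistor: {p:.2f}x, area: {a:.2f}x, power density: {d:.2f}x")
```

Once voltages stopped scaling, the V² term stopped shrinking while density kept rising, which is exactly the power wall described above.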

 

There has been interest in cold computing for several decades. One early example is experimentation at IBM in the 1990s by a group that included Gary Bronner, before he joined Rambus. While this work showed significant potential, it became evident that traditional CMOS scaling could still keep pace with industry requirements. In that research, Bronner and his colleagues found that low-temperature DRAMs operated three times faster than conventional DRAM.

 

Since then, cold computing research has continued to develop, and there has been a lot of discussion around quantum computers, which sit at the extreme end of cold computing. However, most quantum machines need a conventional error-correction processor near them, so machines operating between 77 K (−196 °C) and 4 K (−269 °C) will likely be necessary before quantum computers come into widespread use.

 

Getting circuits to perform at these temperatures requires more engineering work before the technology can become practical. At 77 K, digital functions translate well, but problems arise with analog functions, which no longer behave as they do at room temperature. So analog and mixed-signal parts of circuits may need to be redesigned to operate at cryogenic temperatures. The development of logic functions using superconducting switches is currently in its early stages; while significant research is being conducted, a lot of work remains. But here is the good news: once a standard set of logic functions is defined, translating processor architectures and the software that runs on them should be fairly straightforward, writes Craig Hampel, Chief Scientist at Rambus.
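
One reason digital CMOS carries over well to 77 K is that a transistor’s subthreshold swing, i.e. how sharply it turns off, scales with the thermal voltage kT/q. The sketch below is a first-order, textbook-style estimate (the body factor n is an assumed ideal value, not a measured parameter of any cryogenic device), comparing 300 K with 77 K.

```python
# First-order estimate of MOSFET subthreshold swing vs. temperature.
# S = n * (kT/q) * ln(10)   [volts per decade of drain current]
# The body factor n (>= 1) is an assumption; n = 1 is the ideal limit.
import math

K_BOLTZMANN = 1.380649e-23   # J/K
Q_ELECTRON  = 1.602177e-19   # C

def subthreshold_swing_mv_per_dec(temp_kelvin: float, n: float = 1.0) -> float:
    thermal_voltage = K_BOLTZMANN * temp_kelvin / Q_ELECTRON  # kT/q in volts
    return n * thermal_voltage * math.log(10) * 1e3           # mV per decade

for t in (300.0, 77.0):
    print(f"{t:5.0f} K: ~{subthreshold_swing_mv_per_dec(t):.1f} mV/decade (ideal)")
# ~59.6 mV/dec at 300 K vs ~15.3 mV/dec at 77 K: a much sharper on/off
# transition, which is what allows threshold and supply voltages to be
# lowered without an explosion in leakage current.
```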

 

Currently, there are multiple research projects around cold computing and cold memory, as well as quantum computing. These studies are showing promising progress towards high-speed systems capable of processing and analysing large volumes of data with substantially better energy efficiency. In the US, IARPA is running the Cryogenic Computing Complexity (C3) initiative, which seeks to establish superconducting computing as a long-term solution to the power problem and a successor to room-temperature CMOS for high-performance computing (HPC).

 

At 77 K we believe we can get DRAM operating voltages down to between 0.4 and 0.6 V, meaning substantially less power consumption; and at this temperature and voltage, leakage essentially goes away, so we hope to gain perhaps four to ten additional years of scaling in memory performance and power. Cooling systems, however, will become more expensive and require more power to maintain temperatures and remove heat than conventional air-cooled systems. It is a classic engineering trade-off to optimize this system and achieve a net power saving. We believe that after optimizing such a cold computing system, power savings on the order of two orders of magnitude may be possible, writes Craig Hampel in TechRadar.
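
As a rough illustration of the trade-off Hampel describes, the sketch below weighs the dynamic-energy saving from lowering the DRAM supply voltage (switching energy scales roughly as C·V²) against the wall-plug overhead of refrigeration at 77 K. The 1.1 V nominal supply and the assumption that a practical cryocooler reaches roughly 10% of ideal Carnot efficiency are illustrative figures of mine, not numbers from Rambus, IARPA or DARPA.

```python
# Back-of-envelope look at the cold-DRAM trade-off described above.
# Assumptions (illustrative only):
#   - dynamic switching energy scales as C * Vdd^2, with C unchanged by cooling
#   - nominal DRAM supply ~1.1 V, cold-DRAM target 0.4-0.6 V
#   - cryocooler achieves ~10% of the ideal (Carnot) coefficient of performance

T_HOT, T_COLD = 300.0, 77.0            # ambient and liquid-nitrogen temperatures, K

def dynamic_energy_saving(v_nominal: float, v_low: float) -> float:
    """Factor by which C*V^2 switching energy drops when Vdd is lowered."""
    return (v_nominal / v_low) ** 2

def cooling_overhead_w_per_w(carnot_fraction: float = 0.10) -> float:
    """Watts of wall-plug power needed to remove 1 W of heat at 77 K."""
    carnot_cop = T_COLD / (T_HOT - T_COLD)        # ~0.345 for 300 K -> 77 K
    return 1.0 / (carnot_fraction * carnot_cop)   # ~29 W/W at 10% of Carnot

for v_low in (0.6, 0.5, 0.4):
    print(f"Vdd {v_low} V: ~{dynamic_energy_saving(1.1, v_low):.1f}x less switching energy")
print(f"Cooling overhead: ~{cooling_overhead_w_per_w():.0f} W at the wall per W removed at 77 K")
# Voltage reduction alone gives only ~3-8x, so most of the claimed
# two-orders-of-magnitude saving would have to come from eliminating leakage
# and from system-level optimization, with the cooling overhead amortized
# across a dense, low-power cold system.
```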

 

A current practical example of cold operation is Microsoft’s Project Natick, in which a portion of a data centre was sunk off the coast of Scotland’s Orkney Islands. It is likely to be the first of many projects seeking to advance processing power while powering data centres efficiently, sustainably and at lower cost.

 

As Moore’s Law slows and conventional room-temperature data centres reach their practical limits, cold computing could expand computing capacity dramatically, making superconducting and quantum computing the future of supercomputers and HPC.

 

Low Temperature Logic Technology (LTLT) program

To overcome the barriers to thermal and power-density scaling in HPC systems, the Defense Advanced Research Projects Agency (DARPA) launched a program in April 2021 that focuses on expanding the computational capacity of high-performance computing systems while meeting energy-efficiency requirements. DARPA said the Low-Temperature Logic Technology (LTLT) program is focused on developing device and circuit capabilities that can achieve a 25-times performance/power improvement over room-temperature central processing units.

 

Many important semiconductor parameters are functions of temperature, and there is evidence that such a high-performance, low-power technology is possible. For example, both the absolute delays and the delay variations of nanoscale CMOS logic circuits are heavily dependent on the thermal environment; a logic circuit operating at a high temperature, say 90 °C, can see its speed drop to half of that at 0 °C. However, this potential has not yet been realized, due to the challenges that the program seeks to address.

 

“Today, we’re aggressively reaching the end of Moore’s Law scaling and are faced with the inability to scale power density much further in order to improve computing performance,” said Jason Woo, a program manager in DARPA’s Microsystems Technology Office (MTO). “A viable solution is cold computing. While microelectronics is typically designed to operate at room temperature, we know that device characteristics improve significantly at reduced temperatures. Very low temperature devices – those operating at 77K or below – have the potential to overcome the power scaling limit, but challenges exist when you apply them to very large scale integration.”

 

Specifically, DARPA seeks to develop low-temperature, CMOS-based fin field-effect transistors (FinFETs) to support very-large-scale integration (VLSI) functionality.

 

To achieve the program’s objectives, LTLT aims to exploit the unique device and material characteristics of today’s advanced-node FinFETs operating at very low temperatures to develop transistors and memory cells with better performance/power than is realizable by simply cooling current state-of-the-art VLSI technologies.

 

The program is broken into two separate research areas. The first will focus on researching, developing, and delivering a fabrication technology for highly integrated, advanced-node CMOS operating at 77 K, with low supply voltage and high performance. The target technology will integrate low-temperature transistors, static random-access memory (SRAM) cells with 25X lower switching energy at 77 K, and a supporting circuit/system design.
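
For a sense of scale: under the usual E ≈ C·Vdd² switching-energy model, and assuming cell capacitance is roughly unchanged by cooling, a 25X energy reduction corresponds to about a 5X reduction in operating voltage. The 0.75 V starting point below is an assumed, illustrative value, not an LTLT design target.

```python
# Illustrative check of what "25X lower switching energy" implies under the
# simple E = C * Vdd^2 model, assuming capacitance is unchanged by cooling.

def voltage_for_energy_reduction(v_nominal: float, energy_factor: float) -> float:
    """Supply voltage needed to cut C*V^2 switching energy by energy_factor."""
    return v_nominal / energy_factor ** 0.5

v_nom = 0.75                     # assumed nominal SRAM supply in volts (illustrative)
v_cold = voltage_for_energy_reduction(v_nom, 25.0)
print(f"{v_nom} V -> {v_cold:.2f} V for a 25X energy reduction")  # ~0.15 V
# Operating logic and SRAM reliably at such low voltages is where the much
# steeper subthreshold swing at 77 K is expected to help.
```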

 

The second research area will explore advanced concepts focused on high-risk/high-payoff, FinFET VLSI-compatible solutions to individual technical challenges at 77 K. Three specific challenges will be explored: ultra-low-power, high-speed scaled transistors with new switching or transport mechanisms; compact, high-speed, low-energy SRAM cells; and new circuit techniques that use novel LTLT transistors and memory cells to achieve a 45X performance/power improvement.

 

The LTLT program will also utilize the benefits of DARPA’s recently unveiled Toolbox Initiative. The DARPA Toolbox provides open licensing opportunities with commercial technology vendors to the researchers behind the Agency’s programs. Through this initiative, DARPA researchers – or performers – are provided easy, low-cost, scalable access to state-of-the-art tools and intellectual property (IP) under predictable legal terms and streamlined acquisition procedures.

 

LTLT will develop key low-temperature (LT) optimized components operating at low power-supply voltage and will culminate in compelling integrated-circuit demonstrations at 77 K. Proposed research should investigate innovative approaches to enable revolutionary advances in science, devices, or systems. Specifically excluded is research that primarily results in evolutionary improvements to the existing state of practice. As the LTLT program is expected to be realizable on advanced-node VLSI manufacturing platforms, technologies such as superconducting electronics are not within the scope of this program.

 

The agency also intends to produce and test an SRAM cell with a compact footprint, capable of supporting the foundational circuit elements of HPC engines.

 


 

References and Resources also include:

https://www.techradar.com/news/taking-compute-performance-to-the-next-level-with-cold-computing
