
Datacenter-on-chip technology that compacts a huge data center on single chip will play big role in Big Data analytics for US Army

The technology industry is entering a new era of computing that requires IT systems and cloud computing services to process and analyze huge volumes of Big Data in real time. Today's data centers (DCs) and high-performance computing clusters are dominated by power, thermal, and area constraints.

 

Data centers consumed about 91 billion kilowatt-hours of electricity in the U.S. in 2013, equivalent to the output of 34 large coal-fired power plants, according to the Natural Resources Defense Council. They occupy large spaces and require sophisticated cooling mechanisms to sustain the required performance levels. Sustainable computing has therefore become of increasing interest to researchers, industry leaders, and the public.

 

Datacenter-on-chip technology puts the equivalent of a huge data center, which uses enormous amounts of energy in large facilities to crunch data for companies like Google or Amazon, on a single computer chip. The chip includes thousands of processors, or cores.

 

Diana Marculescu and Radu Marculescu have been awarded an NSF grant to develop a new Datacenter-on-a-Chip (DoC) design consisting of thousands of cores that can run compute- and data-intensive applications more efficiently than existing platforms. “Our research will impact numerous areas,” says Diana Marculescu. “Big data applications like social computing, life sciences, networking, and entertainment will benefit immensely from this new design paradigm that aims at achieving server-scale performance from hand-held devices.”

 

The U.S. Army is interested in such many-core platforms for large real-time battle simulations and for battle information management software. Many military applications, such as electronics for warfighters and unmanned aerial vehicles, also require both high computing performance and low power consumption.

 

The U.S. Army Research Office has awarded a grant to a Washington State University and Carnegie Mellon University team to develop a novel computing platform for emerging big data applications. The researchers, including Partha Pande and Jana Doppa, professors in WSU's School of Electrical Engineering and Computer Science, and professors Radu Marculescu and Diana Marculescu from Carnegie Mellon, are designing datacenter-on-chip (DoC) technology for faster, more energy-efficient data processing and better performance for big data applications.

 

The U.S. Army expects Big Data analytics to play a key role in maintaining information dominance, according to the Army Research Laboratory’s Technical Implementation Plan for 2015-2019. Termed “Very Large Scale Computational Analytics” in the plan, the initiative “will aid in the U.S. Army’s information supremacy by pursuing concepts that enable analysis of big data in realistic timeframes, limit tactical surprise, improve situational awareness, and facilitate intelligence for autonomy,” the plan reads.

 

Datacenter-on-chip: Researchers target a new paradigm for big data computing

Diana Marculescu and Radu Marculescu have been awarded an NSF grant to develop a new paradigm for Big Data computing. Specifically, this project focuses on a new Datacenter-on-a-Chip (DoC) design consisting of thousands of cores that can run compute- and data-intensive applications more efficiently than existing platforms.

 

This project relies on emerging many-core processors that can run Big Data applications while provisioning the system resources for the necessary power/performance/thermal trade-offs. Consequently, various big data applications like social computing, life sciences, networking, or entertainment can benefit immensely from this new design paradigm, which aims at achieving server-scale performance from hand-held devices.

 

The proposed work introduces a new direction in networked system design enabled by the emerging wireless on-chip interconnect paradigm. Indeed, achieving DoC level of massive integration requires significant innovation at multiple levels of abstraction, ranging from the design of the on-chip network and associated physical layer, all the way to mapping and runtime management of various applications.

 

To this end, the research goals include:

– Design of a small-world wireless network architecture as a communication backbone for a many-core-enabled wireless DoC (WiDoC)

– Design methods at the physical layer for a highly integrated 3D wireless DoC suitable for low-latency data communication

– Evaluation of various latency-power-thermal (LPT) trade-offs for the proposed WiDoC platform with relevant big data applications.

 

While most prior work considers 2D many-core platforms where communication happens through wired links, this proposal redefines the very foundations of on-chip communication via a 3D small-world network architecture with sub-THz wireless links in the planar layers and inductive-coupling-based wireless interfaces in the vertical direction. This allows for full flexibility and reconfiguration, making such platforms suitable for processing big data applications at unprecedented levels of energy efficiency.

 

The proposed research is unique in that it brings together novel, interdisciplinary concepts from network-on-chip (NoC) design, wireless and complex networks, communication circuits, and optimization techniques aimed at single-chip solutions for achieving data-center-scale performance. The educational contribution of this work will help establish an interdisciplinary, research-based curriculum for high-performance many-core system design, meant to increase the number of students attracted to this area of engineering.

 

WSU developing big data technology for U.S. Army


The researchers are working on datacenter-on-chip technology that is specifically optimized for the big data application known as deep learning. Deep learning techniques have been used increasingly in recent years in areas such as natural language processing, computer vision and self-driving cars.

“Our proposed computing designs are aimed at improving the speed of processing big data while reducing the overall power consumption,” said Pande.

Since deep learning techniques are very data intensive, the researchers will use graphics processing units (GPUs) on the chip to speed up processing in addition to central processing units (CPUs), creating what is known as a heterogeneous platform.

 

CPU+GPU on single chip

“The main challenge is to integrate CPUs and GPUs into a single chip with an efficient communication network between them, since their characteristics vary a lot,” Pande said. Pande and his team are considering hybrid communication networks consisting of both wired and wireless connections to address this important challenge.
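The placement decision behind such a heterogeneous platform can be illustrated with a toy cost model: put a task on the GPU only when its data-parallel speedup outweighs the cost of shipping data across the CPU-GPU communication network. Everything below is a minimal sketch with invented numbers — the throughput rates, offload overhead, and task names are assumptions for illustration, not measurements from the team's design.

```python
from dataclasses import dataclass

@dataclass
class Task:
    name: str
    ops: float                # total operations in the task
    parallel_fraction: float  # share of the work that is data-parallel

# Hypothetical throughputs (ops/sec) and offload cost, for illustration only.
CPU_RATE = 1e9
GPU_RATE = 8e9
GPU_OFFLOAD_OVERHEAD = 0.05   # seconds to move data over the CPU-GPU network

def best_device(task: Task) -> str:
    """Place a task on the device with the lower estimated runtime."""
    t_cpu = task.ops / CPU_RATE
    t_gpu = (task.parallel_fraction * task.ops / GPU_RATE          # parallel part on GPU
             + (1 - task.parallel_fraction) * task.ops / CPU_RATE  # serial part stays on CPU
             + GPU_OFFLOAD_OVERHEAD)                               # communication cost
    return "GPU" if t_gpu < t_cpu else "CPU"

print(best_device(Task("matrix-multiply", 1e9, 0.95)))  # highly parallel -> GPU
print(best_device(Task("parser", 1e8, 0.10)))           # mostly serial  -> CPU
```

The offload-overhead term is the crux: the better the on-chip network between CPUs and GPUs, the smaller it is, and the more tasks it pays to accelerate.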

The team also is working to optimize the overall design to achieve the required trade-off between performance, power consumption and thermal properties for big data applications.

“Traditional optimization algorithms scale very poorly on these problems. We are developing novel machine learning-based search algorithms to significantly improve the overall design time,” said Doppa.
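The design-space search Doppa describes can be sketched in miniature. As a stand-in for the team's machine-learning-guided algorithms (whose details are not public), the code below runs a plain random search over a toy latency-power-thermal cost model; every coefficient, parameter range, and the cost function itself are invented for illustration.

```python
import random

def lpt_cost(freq_ghz, vdd, wireless_links):
    """Toy latency-power-thermal cost; all coefficients are invented."""
    latency = 10.0 / freq_ghz + 5.0 / (1 + wireless_links)  # faster clock and shortcuts cut latency
    power = vdd ** 2 * freq_ghz + 0.2 * wireless_links      # dynamic power ~ V^2 * f, plus link overhead
    thermal = 0.5 * power                                   # chip runs hotter as power rises
    return latency + power + thermal

def random_search(n_samples=500, seed=0):
    """Sample random design points and keep the cheapest one."""
    rng = random.Random(seed)
    best_cost, best_cfg = float("inf"), None
    for _ in range(n_samples):
        cfg = (rng.uniform(0.5, 3.0),   # core frequency (GHz)
               rng.uniform(0.7, 1.2),   # supply voltage (V)
               rng.randrange(0, 17))    # number of wireless shortcut links
        cost = lpt_cost(*cfg)
        if cost < best_cost:
            best_cost, best_cfg = cost, cfg
    return best_cost, best_cfg

cost, cfg = random_search()
print(f"best cost {cost:.2f} at freq={cfg[0]:.2f} GHz, vdd={cfg[1]:.2f} V, links={cfg[2]}")
```

A learning-based search would replace the blind sampling loop with a surrogate model fitted to the points evaluated so far, sampling next where the model predicts low cost — which is what lets it scale where exhaustive or traditional optimizers do not.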

“There were many technical challenges in front of us to create a better power management strategy,” Pande said. “We are confident that our proposed chip designs will prove to be useful to advance the research and development of deep learning and other emerging big data applications around the world.”

 

3D chip three times more efficient

Unlike portable devices that have gone wireless, the data farms that provide instant access to text messages, video downloads and more still use conventional metal wires on computer chips, which are wasteful for relatively long-range data exchange.

Many-core chips are made up of numerous processing cores, and one of their major performance limitations stems from the multi-hop nature of data exchange: data has to move through several intermediate cores over wires, slowing down the processor and wasting energy.

Pande’s group in recent years designed a wireless network on a computer chip. Similar to the way a cell phone works, the system includes a tiny, low-power transceiver, on-chip antennas and communication protocols that enable wireless shortcuts.
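The effect of such wireless shortcuts on multi-hop communication can be reproduced with a small mesh model. This is a schematic sketch, not the team's actual topology: it compares the average shortest-path hop count of a plain 8x8 wired mesh against the same mesh augmented with a few randomly placed long-range "wireless" links.

```python
import random
from collections import deque

def avg_hops(n, extra_links=()):
    """Average shortest-path hop count on an n x n mesh NoC,
    optionally augmented with long-range wireless shortcut links."""
    nodes = [(r, c) for r in range(n) for c in range(n)]
    adj = {v: set() for v in nodes}
    for r, c in nodes:                      # wired mesh links to right/down neighbors
        for dr, dc in ((0, 1), (1, 0)):
            u = (r + dr, c + dc)
            if u in adj:
                adj[(r, c)].add(u)
                adj[u].add((r, c))
    for a, b in extra_links:                # wireless shortcuts
        adj[a].add(b)
        adj[b].add(a)
    total = pairs = 0
    for src in nodes:                       # BFS from every node
        dist = {src: 0}
        q = deque([src])
        while q:
            v = q.popleft()
            for u in adj[v]:
                if u not in dist:
                    dist[u] = dist[v] + 1
                    q.append(u)
        total += sum(dist.values())
        pairs += len(dist) - 1
    return total / pairs

rng = random.Random(1)
n = 8
shortcuts = [((rng.randrange(n), rng.randrange(n)),
              (rng.randrange(n), rng.randrange(n))) for _ in range(6)]
base = avg_hops(n)
small_world = avg_hops(n, shortcuts)
print(f"mesh: {base:.2f} hops, with shortcuts: {small_world:.2f} hops")
```

Adding even a handful of long-range links cannot lengthen any path and typically cuts the average hop count noticeably — the small-world effect that motivates wireless shortcuts on chip.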

The new work expands these capabilities for a wireless data-center-on-a-chip. In particular, the researchers are moving from two-dimensional chips to a highly integrated, three-dimensional, wireless chip at the nano- and microscales that can move data more quickly and efficiently.

The researchers expect to be able to run big data applications on their wireless system three times more efficiently than the best existing data center servers.

 


References and Resources also include:

https://www.nsf.gov/awardsearch/showAward?AWD_ID=1564022

https://news.wsu.edu/2017/10/18/big-data-technology-u-s-army/
