
Emerging Data Center Trends, Innovations, and New Technologies

All data centers are essentially buildings that provide space, power, and cooling for network infrastructure. A data center, also known as a datacenter or data centre, is a repository that houses computing facilities like servers, routers, switches, and firewalls, as well as supporting components like backup equipment, fire suppression facilities, and air conditioning.

 

Data centers are simply centralized locations where computing and networking equipment is concentrated for the purpose of collecting, storing, processing, distributing or allowing access to large amounts of data. These centers can store and serve up Web sites, run e-mail and instant messaging (IM) services, provide cloud storage and applications, enable e-commerce transactions, power online gaming communities and do a host of other things that require the wholesale crunching of zeroes and ones.

 

Data center components often make up the core of an organization’s information system (IS). Thus, these critical facilities usually require a significant investment in supporting systems, including air conditioning/climate control, fire suppression/smoke detection, secure entry and identification, and raised floors for easy cabling and water damage prevention.

 

Data centers are becoming even more important due to massive data demands that will spike due to the explosion of new Internet of Things devices and edge computing needs.

 

In the days of the room-sized behemoths that were our early computers, a data center might have had one supercomputer. As equipment got smaller and cheaper, and data processing needs began to increase — and they have increased exponentially — we started networking multiple servers together to increase processing power. We connect them to communication networks so that people can access them, or the information on them, remotely. Large numbers of these clustered servers and related equipment can be housed in a room, an entire building or groups of buildings. Today’s data center is likely to have thousands of very powerful and very small servers running 24/7.

 

Data centers are server farms that facilitate communication between users and web services, and are some of the most energy-consuming facilities in the world. In them, thousands of power-hungry servers store user data, and separate servers run app services that access that data. Other servers sometimes facilitate the computation between those two server clusters.

 

Data Center Types

With different data centers come very different needs and network architecture types. When data centers are shared, virtual data center access often makes more sense than granting total physical access to various organizations and personnel. Shared data centers are usually owned and maintained by one organization that leases center partitions (virtual or physical) to other client organizations. Often, client/leasing organizations are small companies without the financial and technical resources required for dedicated data center maintenance. The leasing option allows smaller organizations to obtain professional data center advantages without heavy capital expenditure.

Hyperscale

A Hyperscale (or Enterprise Hyperscale) data center is a facility owned and operated by the company it supports. This includes companies such as AWS, Microsoft, Google, and Apple.
They offer a robust, scalable portfolio of application and storage services to individuals and businesses. Hyperscale computing is necessary for cloud and big data storage.

Hyperscale data centers are significantly larger than enterprise data centers, and because of the advantages of economies of scale and custom engineering, they significantly outperform them, too. While there is no official definition, a hyperscale data center is generally expected to exceed 5,000 servers and 10,000 square feet. What further distinguishes hyperscale data centers is the volume of data, compute, and storage services they process. In one survey, 93% of hyperscale companies expected to have network connections of 40 gigabits per second (Gbps) or faster.
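
As a rough illustration of those thresholds, the short Python sketch below checks a facility against the figures quoted above (5,000 servers, 10,000 square feet, 40 Gbps); the Facility class and its field names are hypothetical, introduced only for this example.

    from dataclasses import dataclass

    @dataclass
    class Facility:
        servers: int           # number of servers deployed
        floor_area_sqft: int   # white-space floor area in square feet
        network_gbps: int      # network connectivity in gigabits per second

    def looks_hyperscale(f: Facility) -> bool:
        """Rule of thumb from the text: more than 5,000 servers,
        more than 10,000 sq ft, and 40 Gbps or faster connectivity."""
        return (f.servers > 5_000
                and f.floor_area_sqft > 10_000
                and f.network_gbps >= 40)

    print(looks_hyperscale(Facility(80_000, 250_000, 100)))  # True
    print(looks_hyperscale(Facility(1_200, 8_000, 10)))      # False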

Colocation Data Center

Colocation data centers consist of one data center owner selling space, power, and cooling to multiple enterprise and hyperscale customers in a specific location. Interconnection is a large driver for businesses: colocation data centers offer interconnection to Software as a Service (SaaS) providers such as Salesforce, or Platform as a Service (PaaS) providers like Azure. This enables businesses to scale and grow with minimal complexity and at low cost.

Depending on the size of your network requirement, you can rent anywhere from one cabinet to 100 cabinets; in some cases, a quarter or half cabinet is available. A colocation data center can house hundreds, if not thousands, of individual customers.

Enterprise Data Center

An enterprise data center is a facility owned and operated by the company it supports. It is often built on site but can also be located off site in certain cases. Sections of the data center may be caged off to separate different parts of the business. Maintenance of the mechanical and electrical (M&E) plant is commonly outsourced, while the IT team runs the white space itself.

Telecom Data Center

A telecom data center is a facility owned and operated by a telecommunications or service provider company such as BT, AT&T, or Verizon. These types of data centers require very high connectivity and are mainly responsible for driving content delivery, mobile services, and cloud services. Typically, a telecom data center uses two-post or four-post racks to house IT infrastructure, although enclosed cabinets are becoming more prevalent.

Soon there will be another classification of data center: the edge data center. Early indications show that edge data centers will support IoT and autonomous vehicles and move content closer to users, with 5G networks supporting much higher data transport requirements. Hyperscale and telecom companies are expected to largely push for, or compete over, this emerging business. It is too early to predict the detailed shape and scale of edge computing, but we do know that some form of it will evolve and that it will involve a great deal of fiber.


Data Center Trends

Data-center owners and operators face increasing complexity and operational challenges as they look to improve IT resiliency, build out capacity at the edge, and retain skilled staff in a tight labor market.

 

In its annual survey, Uptime looks at the number and seriousness of outages over a three-year period. In terms of overall outage numbers, 69% of owners and operators surveyed in 2021 had some sort of outage in the past three years, a fall from 78% in 2020.

 

“The recent improvement may be partially attributed to the impact of COVID-19, which, despite expectations, led to fewer major data-center outages in 2020. This was likely due to reduced enterprise data-center activity, fewer people on-site, fewer upgrades, and reduced workload/traffic levels in many organizations—coupled with an increase in cloud/public internet-based application use,” Uptime reports.

 

In terms of seriousness, roughly half of all data center outages cause significant revenue, time, and reputational damage, according to Uptime. In this year’s report, 20% of outages were deemed severe or serious by the organizations that reported them, and roughly six in 10 major outages in the 2021 survey cost more than $100,000. Power remains the leading cause of major outages, responsible for 43% of outages in 2021, followed by network issues (14%), cooling failures (14%), and software/IT systems errors (14%).

 

While mega data centers—on the scale of Google, Microsoft, Apple, and Facebook—get the majority of attention, smaller data centers are being built in regional locations to bring services and compute closer to the customer. With this relocation, IoT devices collecting data have shorter backhaul and lower latency. And as the IoT continues to permeate our lives, collecting massive amounts of data at the edge, data centers will continue to scale, but they will scale out, not up.

 

While high-performance computing (HPC) has become available as a public cloud service, the increase in artificial intelligence and machine learning-based applications means HPC availability will become critical for businesses looking to maintain a cutting-edge advantage. While prototyping and trials may be done using a public cloud infrastructure, large enterprises are likely to want complete, end-to-end control as AI and ML applications become a significant business differentiator. This can be most easily provided in a corporate data center.

 

Security will continue to be a major issue with data centers specifically and IT as a whole. Data centers must address many issues around physical security as well as security of their IT workload. 2019 will continue to bring a tightening of security standards and a higher profile for the rapid adoption of leading-edge security techniques, tools, and software across the data center industry.

 

Worldwide, data centers consume an estimated 200 terawatt hours per year. So far, sustainable data centers have redesigned their facilities to greatly reduce water consumption using new technologies for water-cooling. New battery technologies enable them to use fewer, and longer-lasting, batteries. They also work closely with local power utilities to explore ways to increase the mix of green power they run on.

 

Sustainable data centers also work to minimize the amount of energy it takes to diffuse heat created by hardware. Anyone who knows servers knows that heat is the enemy, and it has historically taken a great deal of power to cool them to the point where they function properly. More efficient cooling technology enables data centers to slash their energy consumption – reducing costs as well as becoming more sustainable.
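
As a back-of-the-envelope illustration of why cooling efficiency matters so much, the sketch below assumes cooling accounts for about 40 percent of a facility’s total draw and ignores other overhead; both figures are assumptions for illustration, not numbers from this article.

    # Hypothetical arithmetic: effect of halving cooling energy on total facility power.
    # The 40% cooling share and 10 MW IT load are assumed figures, not from the article.
    it_load_mw = 10.0        # assumed IT equipment load
    cooling_share = 0.40     # assumed fraction of total power spent on cooling

    total_before = it_load_mw / (1 - cooling_share)   # total power (IT + cooling only)
    cooling_before = total_before * cooling_share
    total_after = it_load_mw + cooling_before / 2     # suppose new cooling halves that energy

    saving = 1 - total_after / total_before
    print(f"Before: {total_before:.1f} MW, after: {total_after:.1f} MW "
          f"({saving:.0%} lower total consumption)")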

 

Data centers are also increasingly replacing wasteful, traditional water-evaporation cooling systems with innovative closed-loop systems. These systems use recycled rather than fresh water in order to reduce the burden on local water supplies.

 

Industry

The global data center systems market will reach $237 billion in 2021, representing an increase of more than 7 percent year over year, according to IT research firm Gartner’s most recent IT spending forecast.

The number of large data centers operated by hyperscale providers like AWS, Microsoft and Google increased to nearly 600 by the end of 2020, twice as many as there were in 2015. The COVID-19 pandemic has spurred record-breaking data center spending levels led by AWS, Microsoft and Google, reaching $37 billion in the third quarter of 2020 alone. In fact, Amazon, Microsoft and Google now collectively account for more than 50 percent of the world’s largest data centers.

 

In one of the boldest data center plans in history, Microsoft unveiled its bullish plan to build 50 to 100 new data centers each year, including a $1 billion investment to build several hyperscale data center regions in Malaysia.

 

Intel, the longtime leader in data center server CPUs, is now facing stiff competition from AMD on a global scale. AMD recently saw its largest server-market share gain yet against Intel, driven by its EPYC processors, in the first quarter of 2021, according to the latest x86 CPU market share report from Mercury Research.

 

Earlier this year, Intel launched its third-generation Intel Xeon Scalable CPUs, code-named Ice Lake. Likewise, AMD launched its third-generation EPYC Milan processors this year, dubbing it the industry’s highest-performance server processor.

 

In April 2021, Nvidia unveiled an Arm-based data center CPU for AI and high-performance computing that it says will provide 10 times the AI performance of one of AMD’s fastest EPYC CPUs, a move that will give Nvidia control over compute, acceleration, and networking components in servers. The new data center CPU, named Grace, will create new competition for x86 CPU rivals Intel and AMD when it arrives in early 2023.

 

The global pandemic has accelerated the need to make data center operations less reliant on human intervention, aided by an influx of innovation around software automation and artificial intelligence. The data center industry is seeing the benefit of leveraging more intelligent, autonomous systems for simple tasks and distributed environments designed to increase capabilities. According to AFCOM’s recent 2021 State of the Data Center Industry study, more than 40 percent of respondents said they’d be deploying robotics and automation for data center monitoring and maintenance over the next three years.

Technique Halves Energy, Space Required To Store and Manage User Data

Most storage servers today use solid-state drives (SSDs), which use flash storage — electronically programmable and erasable memory microchips with no moving parts — to handle high-throughput data requests at high speeds.

 

A major efficiency issue with today’s data centers is that the architecture hasn’t changed to accommodate flash storage. Years ago, data-storage servers consisted of relatively slow hard disks, along with lots of dynamic random-access memory (DRAM) circuits and central processing units (CPUs) that helped quickly process all the data pouring in from the app servers. Today, however, hard disks have mostly been replaced with much faster flash drives. “People just plugged flash into where the hard disks used to be, without changing anything else,” Chung says. “If you can just connect flash drives directly to a network, you won’t need these expensive storage servers at all.”

 

In a paper being presented at the ACM International Conference on Architectural Support for Programming Languages and Operating Systems, the researchers describe a new system called LightStore that modifies SSDs to connect directly to a data center’s network — without needing any other components — and to support computationally simpler and more efficient data-storage operations. Further software and hardware innovations seamlessly integrate the system into existing data center infrastructure.

 

For LightStore, the researchers first modified SSDs to be accessed in terms of “key-value pairs,” a very simple and efficient protocol for retrieving data. Basically, user requests appear as keys, like a string of numbers. Keys are sent to a server, which releases the data (value) associated with that key.
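
To make the key-value pattern concrete, here is a minimal, self-contained sketch of the GET/PUT access model the paragraph describes. It is an in-memory stand-in written for illustration, not LightStore’s actual on-flash interface.

    from __future__ import annotations

    # Minimal key-value node illustrating the access pattern described above:
    # the client presents a key, the node returns the bytes stored under it.
    class KeyValueNode:
        def __init__(self) -> None:
            self._store: dict[bytes, bytes] = {}

        def put(self, key: bytes, value: bytes) -> None:
            self._store[key] = value

        def get(self, key: bytes) -> bytes | None:
            return self._store.get(key)

    node = KeyValueNode()
    node.put(b"user:42:profile", b'{"name": "Alice"}')
    print(node.get(b"user:42:profile"))   # b'{"name": "Alice"}'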

 

The concept is simple, but keys can be extremely large, so computing (searching and inserting) them solely on the SSD requires a lot of computation power, much of which is consumed by the traditional “flash translation layer.” This fairly complex software runs on a separate module on a flash drive to manage and move around data. The researchers used certain data-structuring techniques to run this flash-management software using only a fraction of the computing power. In doing so, they offloaded the software entirely onto a tiny circuit in the flash drive that runs far more efficiently.
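
The article does not spell out which data-structuring techniques were used, so the sketch below shows only a common flash-friendly pattern: append every write sequentially to a log and keep a small index from keys to log offsets, avoiding the in-place updates that flash handles poorly. It is a generic illustration, not LightStore’s actual flash-management design.

    from __future__ import annotations

    # Generic append-only log with a key -> (offset, length) index.
    # Writes are sequential and nothing is overwritten in place, which suits flash.
    class AppendOnlyKV:
        def __init__(self) -> None:
            self._log = bytearray()   # stand-in for sequentially written flash pages
            self._index: dict[bytes, tuple[int, int]] = {}

        def put(self, key: bytes, value: bytes) -> None:
            offset = len(self._log)
            self._log.extend(value)           # always append, never rewrite in place
            self._index[key] = (offset, len(value))

        def get(self, key: bytes) -> bytes | None:
            if key not in self._index:
                return None
            offset, length = self._index[key]
            return bytes(self._log[offset:offset + length])

    kv = AppendOnlyKV()
    kv.put(b"sensor:7", b"23.4C")
    kv.put(b"sensor:7", b"23.9C")   # an update appends a new record; the index moves on
    print(kv.get(b"sensor:7"))      # b'23.9C'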

 

That offloading frees up separate CPUs already on the drive — which are designed to simplify and more quickly execute computation — to run custom LightStore software. This software uses data-structuring techniques to efficiently process key-value pair requests. Essentially, without changing the architecture, the researchers converted a traditional flash drive into a key-value drive. “So, we are adding this new feature for flash — but we are really adding nothing at all,” Arvind says.

 

In experiments, the researchers found a cluster of four LightStore units, called storage nodes, ran twice as efficiently as traditional storage servers, measured by the power consumption needed to field data requests. The cluster also required less than half the physical space occupied by existing servers.

 

The researchers broke down energy savings by individual data storage operations, as a way to better capture the system’s full energy savings. In “random writing” data, for instance, which is the most computationally intensive operation in flash memory, LightStore operated nearly eight times more efficiently than traditional servers.

 

Adapting and scaling

The challenge was then ensuring app servers could access data in LightStore nodes. In data centers, apps access data through a variety of structural protocols, such as file systems, databases, and other formats. Traditional storage servers run sophisticated software that provides the app servers access via all of these protocols. But this uses a good amount of computation energy and isn’t suitable to run on LightStore, which relies on limited computational resources.

The researchers designed very computationally light software, called an “adapter,” which translates all user requests from app services into key-value pairs. The adapters use mathematical functions to convert information about the requested data — such as commands from the specific protocols and identification numbers of the app server — into a key. It then sends that key to the appropriate LightStore node, which finds and releases the paired data. Because this software is computationally simpler, it can be installed directly onto app servers.
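
A minimal sketch of such an adapter is given below, assuming the key is derived by hashing request metadata and a node is chosen by simple modulo routing; the hashing scheme, node list, and function names are assumptions for illustration, not the adapter the researchers built.

    import hashlib

    # Hypothetical adapter: collapse protocol-specific request metadata into a flat
    # key and route it to a LightStore node. Key scheme and routing are illustrative.
    NODES = ["node-0", "node-1", "node-2", "node-3"]

    def make_key(protocol: str, app_server_id: str, object_id: str) -> bytes:
        """Derive a fixed-size key from the request's identifying fields."""
        material = f"{protocol}|{app_server_id}|{object_id}".encode()
        return hashlib.sha256(material).digest()

    def pick_node(key: bytes) -> str:
        """Choose a node for this key (simple modulo over the node list)."""
        return NODES[int.from_bytes(key[:8], "big") % len(NODES)]

    key = make_key("nfs", "app-17", "/videos/cat.mp4")
    print(key.hex()[:16], "->", pick_node(key))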

 

“Whatever data you access, we do some translation that tells me the key and the value associated with it. In doing so, I’m also taking some complexity away from the storage servers,” Arvind says.

 

One final innovation is that data throughput — the rate at which data can be processed — scales linearly as LightStore nodes are added to a cluster. Traditionally, people stack SSDs in data centers to tackle higher throughput, but while storage capacity may grow, the throughput plateaus after only a few additional drives. In experiments, the researchers found that four LightStore nodes surpassed the throughput levels achieved by the same number of SSDs.

 

 

The hope is that, one day, LightStore nodes could replace power-hungry servers in data centers. “We are replacing this architecture with a simpler, cheaper storage solution … that’s going to take half as much space and half the power, yet provide the same throughput capacity performance,” says co-author Arvind, the Johnson Professor in Computer Science Engineering and a researcher in the Computer Science and Artificial Intelligence Laboratory. “That will help you in operational expenditure, as it consumes less power, and capital expenditure, because energy savings in data centers translate directly to money savings.”

References and Resources also include:

https://www.techopedia.com/definition/349/data-center

https://scienceblog.com/507074/technique-halves-energy-space-required-to-store-and-manage-user-

https://www.hpe.com/us/en/insights/articles/top-data-center-trends-to-watch-in-2019-1812.html

https://www.networkworld.com/article/3635138/6-data-center-trends-to-watch.html

https://www.crn.com/slide-shows/data-center/10-hot-data-center-technologies-and-trends-to-watch-in-2021?itc=refresh
