In an era of relentless growth in data traffic, organizations worldwide are challenged to store, manage, and retrieve ever-larger volumes of data. To meet this escalating business need, companies are increasingly turning to hyperscale computing.
Administrators are looking for ways to expand their data centers to accommodate growing workloads. The naive approach of adding expensive equipment and enlarging the data center footprint is neither sustainable nor scalable, and it quickly erodes a company's ability to compete and remain profitable. Smarter, faster networking is the key to running today's applications, providing the ability to analyze and utilize large data sets efficiently and effectively without increasing the data center footprint.
Hyperscale computing refers to the facilities and provisioning required in distributed computing environments to efficiently scale from a few servers to thousands of servers. Hyperscale computing architectures expand and contract based on the needs of an organization and involve hundreds of thousands of individual servers that work together via a high-speed network. Hyperscale computing is usually used in environments such as big data and cloud computing.
The largest cloud providers, such as AWS and Microsoft Azure, are key enablers of hyperscale computing, and their continued growth is fueling the expansion of hyperscale data centers. Hyperscale computing is also commonly associated with big data platforms such as Apache Hadoop. These efforts are critical to lowering the capex and opex of hyperscale companies, for whom every watt of power, dollar of infrastructure, and square foot of space is multiplied a million-fold across mega-datacenters.
Cisco estimates that traffic within hyperscale data centers will quadruple by 2021, at which point they will account for 55% of all data center traffic. This growth is set to continue: the hyperscale data center market is expected to reach $80.65 billion by 2022, according to MarketsandMarkets.
Recent announcements illustrate the trend:
- Facebook is building a $750M data center in Huntsville, Alabama, where the tech-job market is growing while the cost of living and energy rates are low.
- Google is switching to a self-driving data center management system: machine-learning algorithms now adjust cooling-plant settings automatically, in real time, on a continuous basis.
- Oracle is launching 12 cloud data centers around the world. The biggest expansion is planned for Asia, but Europe, the Middle East, and North America are also on the list.
Commercial initiatives are driving SWaP-C (size, weight, power, and cost) innovation that then becomes available for military adoption. In July 2019, a federal judge ruled against Oracle's motion to block the Pentagon's 10-year, $10 billion cloud computing contract, siding with the Defense Department and Amazon Web Services in a contentious eight-month legal battle. The decision in the Court of Federal Claims should clear the way for the Pentagon to award the long-awaited contract to either Amazon or Microsoft, the two companies it says are eligible.
The contract, called the Joint Enterprise Defense Infrastructure, or “JEDI” for short, would create a departmentwide cloud computing infrastructure for military agencies. It is meant to serve as a springboard for artificial intelligence applications and make it easier for military agencies to share classified information with deployed soldiers, sailors and Marines.
“DOD has an urgent need to get these critical capabilities in place to support the warfighter and we have multiple military services and Combatant Commands waiting on the availability of JEDI.”
“We look forward to working with the Department of Defense, the Intelligence Community, and other public sector agencies to deploy modern, secure hyperscale cloud solutions that meet their needs,” an Oracle spokeswoman said.
Architecture and Structural Design
Hyperscale architecture includes key features such as horizontal scalability, for improved performance and high throughput, and redundancy, for fault tolerance and high availability. Well-designed applications coupled with an efficient hyperscale architecture offer enterprises a potent tool for running an agile business, allowing them to gain an edge over their competitors.
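The horizontal-scalability idea can be sketched in a few lines: requests are sharded across a pool of interchangeable nodes, and capacity grows by adding nodes rather than by upgrading one machine. This is an illustrative sketch, not from the article; the node names and sharding scheme are assumptions.

```python
# Illustrative sketch: horizontal scaling by sharding request keys
# across a pool of interchangeable commodity nodes.
import hashlib

class ShardedPool:
    def __init__(self, nodes):
        self.nodes = list(nodes)        # e.g. ["node-0", "node-1", ...]

    def route(self, key):
        """Deterministically map a request key to one node in the pool."""
        digest = hashlib.sha256(key.encode()).hexdigest()
        return self.nodes[int(digest, 16) % len(self.nodes)]

    def scale_out(self, node):
        """Horizontal scaling: add another commodity node to the pool."""
        self.nodes.append(node)

pool = ShardedPool([f"node-{i}" for i in range(4)])
print(pool.route("user:42"))   # the same key always routes to the same node
pool.scale_out("node-4")       # capacity grows by adding a node
```

Production systems typically use consistent hashing instead of simple modulo so that adding a node remaps only a fraction of keys, but the scaling principle is the same.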
The structural design of hyperscale computing is often different from conventional computing. In the hyperscale design, high-grade computing constructs, such as those usually found in blade systems, are typically abandoned. Hyperscale favors stripped-down product design that is extremely cost effective. This minimal level of investment in hardware makes it easier to fund the system’s software requirements.
High-Density Servers: Hyperscale data centers have a range of computing applications to support high volumes of data, together with high-density server configurations. Such facilities have a robust architecture, which knits together thousands of singular servers or nodes (also known as “vanity-free servers”), offering storage and computing resources. These nodes are further connected by resilient and high-speed networks. The intent is to design a powerful infrastructure capable of optimizing performance, while cutting down on software and other operating costs.
Portable Applications: Running cluster-aware applications that can easily distribute workloads across a grid of cluster nodes is essential for hyperscale environments. For this reason, modern data centers deploy highly portable cloud applications, so that if one server fails, parallel nodes can easily absorb its workload. In contrast, in a traditional data center, the server supporting a critical application must be repaired before the application can run again.
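The failover behavior described above can be sketched as a dispatcher that simply moves work to a surviving node when one fails, rather than waiting for a repair. This is a minimal illustrative sketch; the node names and the `Cluster` API are assumptions, not from the article.

```python
# Illustrative sketch: cluster-aware dispatch with failover.
# When a node fails, parallel nodes absorb the work.
class Cluster:
    def __init__(self, nodes):
        self.healthy = set(nodes)

    def mark_failed(self, node):
        self.healthy.discard(node)

    def run(self, task):
        """Try healthy nodes in order until one succeeds."""
        for node in sorted(self.healthy):
            try:
                return self._execute(node, task)
            except RuntimeError:
                self.mark_failed(node)     # drop the node, keep going
        raise RuntimeError("no healthy nodes left")

    def _execute(self, node, task):
        if node not in self.healthy:
            raise RuntimeError(f"{node} is down")
        return f"{task} completed on {node}"

cluster = Cluster(["node-a", "node-b", "node-c"])
cluster.mark_failed("node-a")          # simulate a hardware failure
print(cluster.run("render-report"))    # work shifts to a surviving node
```

Real schedulers (e.g. Kubernetes or Mesos) add health checks, retries, and state replication on top of this basic pattern, but the contrast with a traditional fixed server-to-application binding is the same.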
High-Density Cooling: Compared to traditional models, hyperscale data centers are located in colder zones to save on cooling costs. The facilities constantly handle increasing data traffic, which expands IP connections and increases storage demand, so it is imperative for them to deploy high-density cooling elements. These include customized air handlers, liquid cooling, and large water-chilled enclosures equipped with blowers that enable servers to run at ambient temperatures.
Renewable Power: In a hyperscale design, lithium-ion batteries power the cabinets, packing considerable energy into a smaller footprint to guard against disruptions. This provides a secure, stable power supply that is easy to implement. Moreover, future hyperscale data centers are expected to fuel their facilities with renewable energy sources, reducing operational expenses while minimizing environmental impact through lower carbon emissions. Traditional facilities, by contrast, power their infrastructure from UPS systems or valve-regulated lead-acid batteries.
24×7 Support: Since hyperscalers deploy vanity-free servers, their manpower ratios can vary drastically, compared to an average data center facility. To help businesses stay up and working, hyperscale operators have a team of dedicated employees working round-the-clock to maintain servers and provide support at granular levels.
The following design elements are discontinued in hyperscale computing:
- High-end storage arrays and storage networks are replaced with direct-attached and network-attached storage.
- Dedicated computing, management and storage networks are replaced with virtual LANs.
- Network switching is replaced with commodity network elements.
- Blade systems are replaced with commodity computing components.
- Hardware devices meant for tracking and supervision are replaced with software programs and carefully designed applications.
- Hot-swappable devices intended for high availability are dropped in favor of efficient hardware configuration.
- Obsolete power supplies are removed.
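The replacement of dedicated tracking and supervision hardware with software (fourth item above) can be sketched as a plain heartbeat monitor: each node reports a timestamp, and a simple program flags stale ones. This is an illustrative sketch; the node names and timeout value are assumptions.

```python
# Illustrative sketch: software-based health monitoring replacing
# dedicated supervision hardware. Nodes report heartbeat timestamps;
# a plain program flags the stale ones.
import time

HEARTBEAT_TIMEOUT = 15.0   # seconds; an assumed threshold

def stale_nodes(heartbeats, now=None):
    """Return nodes whose last heartbeat is older than the timeout."""
    now = time.time() if now is None else now
    return [node for node, last_seen in heartbeats.items()
            if now - last_seen > HEARTBEAT_TIMEOUT]

# Simulated heartbeat table: node -> time of last report.
now = 1000.0
heartbeats = {"node-0": now - 2.0, "node-1": now - 60.0, "node-2": now - 5.0}
print(stale_nodes(heartbeats, now=now))    # flags only the silent node
```

Because the check is just software, it can be redeployed, tuned, and extended (alerting, automatic workload draining) without touching any hardware, which is the point of the design choice.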