How an Exascale Supercomputer Was Built Using Old Tech

A behind-the-scenes look at the world’s fastest computer.


Introduction

In this article, we look at how an exascale supercomputer was built using old technology. It is an interesting story about how existing hardware can be recycled and put to an entirely new use.

What is an exascale computer?

An exascale computer is a supercomputer capable of reaching computational speeds in the exaflops range, meaning at least 10^18 floating-point operations per second. That is a thousand times faster than today’s fastest supercomputers, which top out in the petaflops range.

Exascale computers are not just faster than today’s supercomputers; they are also much more energy efficient per operation. They use less power and generate less heat for the same amount of work, which means the hardware can be packed into a smaller space.

The first exascale computer was announced in October 2019 by China, which plans to have it operational by 2021. The United States is also working on an exascale computer, which is scheduled to be operational by 2023.

Why is an exascale computer needed?

The term “exascale” refers to a computer that is capable of calculations at 10^18, or one quintillion, floating-point operations per second. This is a significant increase from the petaflop machines, which are currently the fastest in the world and can reach speeds of 10^15, or one quadrillion, operations per second.
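
To make those prefixes concrete, the jump from peta to exa is a factor of one thousand; the short Python sketch below simply spells out that arithmetic.

```python
# Back-of-envelope comparison of petascale and exascale throughput.
PETAFLOP = 10**15   # floating-point operations per second
EXAFLOP = 10**18

print(f"An exaflop is {EXAFLOP // PETAFLOP:,}x a petaflop")   # 1,000x

# Time to chew through one quintillion operations at each scale:
ops = 10**18
print(f"Petascale machine: {ops / PETAFLOP:,.0f} seconds")    # 1,000 s (~17 minutes)
print(f"Exascale machine:  {ops / EXAFLOP:,.0f} second")      # 1 s
```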

The need for an exascale computer is driven by the ever-increasing demand for computing power. As data sets continue to grow in size and complexity, current supercomputers are quickly becoming unable to handle the workload. An exascale computer would be able to process huge data sets and enable scientists to solve problems that are currently unsolvable.

In addition to raw computing power, exascale computers will also need to be much more energy efficient than their predecessors. This is a daunting task, as current supercomputers consume enormous amounts of energy – the world’s most powerful machine, China’s Sunway TaihuLight, uses as much electricity as a small city.

Thus, building an exascale computer is an immense challenge that requires significant advances in both hardware and software technology.

The Components

It would take too long to go into the history of exascale computing and all of the research and development that has gone into making it a reality, so I’ll just give you the quick version. To achieve exascale performance, a system must be able to perform a billion billion (10^18) calculations per second. That is a thousand times the rate of a petaflop machine, which in turn is a thousand times faster than a teraflop machine.

The Processor

In an exascale computer, the processors are the components that carry out the actual calculations. No single processor comes anywhere near exascale speed on its own; the system as a whole must sustain 10^18 operations per second, which means combining many thousands of processors, each contributing trillions of operations per second.

There are two main types of processors used in computers: central processing units (CPUs) and graphics processing units (GPUs). CPUs are typically used for more general-purpose computing, while GPUs are used for more specific tasks such as graphics rendering or artificial intelligence (AI).

To build an exascale computer, ESNET chose to use GPUs because they are more energy-efficient than CPUs and can provide better performance for certain workloads. A GPU contains thousands of simple cores that operate on many data elements in parallel, whereas a CPU has a much smaller number of more complex cores optimized for serial work. For highly parallel numerical workloads, that design gives GPUs far higher throughput.
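
As a loose analogy only, and not ESNET’s actual code, the gap between handling one value at a time and operating on a whole array at once mirrors that serial-versus-parallel trade-off. The Python sketch below assumes NumPy is available.

```python
import time
import numpy as np

# Serial, one-value-at-a-time loop: roughly how a single CPU core works through data.
def scale_serial(values, factor):
    out = []
    for v in values:            # one multiply per loop iteration
        out.append(v * factor)
    return out

# Whole-array operation: closer in spirit to a GPU applying the same
# multiply across many elements in parallel.
def scale_vectorized(values, factor):
    return values * factor

data = np.random.rand(5_000_000)

t0 = time.perf_counter()
scale_serial(data, 2.0)
t1 = time.perf_counter()
scale_vectorized(data, 2.0)
t2 = time.perf_counter()

print(f"serial loop:      {t1 - t0:.2f} s")
print(f"whole-array call: {t2 - t1:.4f} s")
```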

ESNET’s exascale computer uses NVIDIA Corporation’s Volta GV100 GPU, which was released in 2017. The Volta GV100 is capable of executing up to 15 teraflops (trillions of operations per second), making it well-suited for high-performance computing tasks.

In addition to the Volta GV100 GPU, ESNET’s exascale computer also uses Intel Corporation’s Xeon Scalable processors. These processors are designed for high-performance computing and offer up to 48 CPU cores per socket.
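
Taking the figure of roughly 15 teraflops per GV100 at face value, a back-of-envelope calculation (a sketch, not the system’s actual inventory) shows how many GPUs an exaflop of peak throughput implies.

```python
import math

# Figures taken from the text above; a real machine also loses efficiency to
# communication and memory bottlenecks, so treat this as a lower bound on GPU count.
EXAFLOP = 10**18               # target operations per second
GV100_PEAK = 15 * 10**12       # ~15 teraflops per Volta GV100, per the article

gpus_needed = math.ceil(EXAFLOP / GV100_PEAK)
print(f"~{gpus_needed:,} GPUs for one exaflop of peak throughput")
# -> roughly 66,667 GPUs, before accounting for real-world efficiency
```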

The Memory

Memory is one of the most important components of a supercomputer, and Exascale is no different. The system uses a variety of old and new technologies to achieve its massive memory capacity.

One of the key components is the use of 3D XPoint Memory, which is a non-volatile memory technology that offers higher speeds and lower power consumption than traditional flash memory. This type of memory is often used in high-performance applications such as gaming, artificial intelligence, and data center computing.

Another important component is the use of optical interconnects. These are used to connect the various parts of the system together and allow for data to be transferred at extremely high speeds. The use of optical interconnects allows for much higher data transfer rates than traditional copper cable interconnects.

Finally, the system also uses a reconfigurable type of computer chip called a field-programmable gate array (FPGA). This type of chip can be reprogrammed by the user after manufacturing, which allows for greater flexibility and customization. FPGAs are often used in applications where speed and flexibility are essential, such as cloud computing and big data applications.

The Storage

To create an exascale supercomputer, it is not only the compute hardware that needs to be powerful; the storage system does too. The storage system used for the exascale supercomputer is called BeeGFS, and it is made up of over 6,000 commodity servers. Each server has 12 TB of flash storage and 36 TB of hard disk drives. The servers are connected together with a fast Ethernet network.

The total amount of storage in the BeeGFS system is over 200 petabytes, enough to hold over 20 million high-definition movies. The system also delivers the high performance an exascale supercomputer needs: it can sustain read speeds of over 1 GB/s and write speeds of over 500 MB/s.
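
Taking the figures above at face value, a short calculation (a sketch based only on the numbers quoted in this section, with the per-movie size as an assumption) reproduces the headline capacity.

```python
# Aggregate capacity of the BeeGFS storage tier, using the figures quoted above.
SERVERS = 6_000
FLASH_TB_PER_SERVER = 12
HDD_TB_PER_SERVER = 36

total_tb = SERVERS * (FLASH_TB_PER_SERVER + HDD_TB_PER_SERVER)
print(f"Total capacity: {total_tb / 1_000:,.0f} PB")     # 288 PB, i.e. "over 200 petabytes"

# Sanity check on the movie comparison, assuming ~10 GB per high-definition movie.
HD_MOVIE_GB = 10
movies = total_tb * 1_000 / HD_MOVIE_GB
print(f"Roughly {movies / 1e6:.0f} million HD movies")   # ~29 million, i.e. "over 20 million"
```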

The System

Supercomputers are the largest and most powerful computers in the world, and they are used for high-performance computing tasks such as weather forecasting, climate research, and movie making. A team of engineers from the Lawrence Livermore National Laboratory (LLNL) and IBM recently built an exascale supercomputer using only off-the-shelf hardware and software components.

The Design

The system is designed around a node architecture composed of dual-socket servers interconnected with an InfiniBand network. Each server houses two Intel Xeon E5-2600 v3 CPUs and up to 256 GB of DDR4 memory. In total, the system contains 10,368 nodes, for a total of 205,376 CPU cores and 2.6 PB of main memory.
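
As a quick cross-check, the main-memory total follows directly from the node count and per-node maximum quoted above (a sketch using only those figures).

```python
# Main-memory cross-check from the node specification above, assuming every
# node carries the full 256 GB maximum.
NODES = 10_368
MEMORY_GB_PER_NODE = 256

total_gb = NODES * MEMORY_GB_PER_NODE
print(f"Total main memory: {total_gb / 1_000_000:.2f} PB")   # ~2.65 PB, consistent with the quoted 2.6 PB
```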

The node design is based on a custom blade server chassis, which was designed to be as compact as possible while still providing good airflow and ease of maintenance. Each chassis holds eight of these dual-socket servers, for a total of eight nodes per chassis. The chassis are arranged in rows of eight and are connected to each other and to the network using an InfiniBand backplane.

To minimize costs, the system uses off-the-shelf components wherever possible. For example, the servers are based on a standard 1U server form factor and use standard parts such as SATA hard drives and LR-DIMM memory modules. The only custom component in the system is the blade server chassis, which was designed specifically for this project.

The system is cooled by a water-cooled cooling tower located in the center of the data center. Water from the cooling tower is circulated through pipes laid underneath the floor to cold plates mounted on each node. The cold plates remove heat from the CPUs and memory modules, and the warmed water is returned to the cooling tower to be cooled again.

The system uses a standard rack-mounted PDU for power distribution. Each PDU provides power to eight blade server chassis, for a total of 64 nodes per PDU. The PDUs are arranged in pairs so that each row of eight chassis has two PDUs providing power. In total, there are 42 PDUs in the data center supplying power to the system.

The Fabric

While the world’s fastest supercomputer is built using old tech, the system that ties it all together is brand new. The system, called The Fabric, was designed by a team of engineers at Los Alamos National Laboratory and is the key to how an exascale computer can be built using old tech.

The Fabric is a network of nodes that are interconnected with each other. Each node has its own processor and memory. The Fabric also has its own operating system that manages the nodes and coordinates the work that they do.

The Fabric is designed to be highly scalable. That means that it can easily be expanded to accommodate more nodes and more processors. It can also be reconfigured to change the way that the nodes are interconnected with each other.
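
To make the idea of a scalable, reconfigurable interconnect concrete, here is a purely illustrative toy model in Python. It is not the actual Fabric software, and the class and method names are invented for this sketch.

```python
# Toy model of a reconfigurable fabric: nodes can be added and the links
# between them rewired without rebuilding the whole structure.
class Fabric:
    def __init__(self):
        self.links = {}                      # node name -> set of connected node names

    def add_node(self, node):
        self.links.setdefault(node, set())

    def connect(self, a, b):
        self.add_node(a)
        self.add_node(b)
        self.links[a].add(b)
        self.links[b].add(a)

    def disconnect(self, a, b):
        self.links[a].discard(b)
        self.links[b].discard(a)

fabric = Fabric()
fabric.connect("node0", "node1")
fabric.connect("node1", "node2")
fabric.connect("node2", "node0")             # a small ring of three nodes
fabric.connect("node3", "node0")             # scale out by attaching a new node
fabric.disconnect("node1", "node2")          # reconfigure existing links
print(fabric.links)
```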

The Fabric is also designed to be energy efficient. That’s important because an exascale computer will consume a lot of power. The Fabric will allow the supercomputer to run at peak efficiency by using only the amount of power that is needed for the task at hand.

The Fabric is a critical part of the exascale supercomputer project because it will allow us to use old tech to build a machine that is capable of running at unprecedented levels of speed and efficiency.

The Network

The networks that will connect Exascale systems together will have to be incredibly fast and able to handle a massive amount of data. The team behind Exascale has been working on a new type of computer network that they believe will be up to the task.

Called the Ultrafast Visualization and Analytics Network, or UVAN, this new network is based on commodity Ethernet technology. Ethernet is the standard way that computers are connected together, but it is not typically fast enough for supercomputing.

UVAN uses a new type of Ethernet switch that can handle much higher speeds than traditional switches. These switches are able to send data at speeds of up to 100 gigabits per second, fast enough to move a trillion bits of data in about ten seconds.
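
A quick calculation using the quoted link speed (a sketch that ignores protocol overhead) shows where that figure comes from.

```python
# Transfer time for one trillion bits over a 100 Gb/s link, ignoring protocol overhead.
LINK_GBPS = 100                   # gigabits per second, per the article
payload_bits = 10**12             # one trillion bits

seconds = payload_bits / (LINK_GBPS * 10**9)
print(f"Transfer time at line rate: {seconds:.0f} seconds")   # 10 seconds
```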

The team has also been working on new software that will allow UVAN to move data around even faster. This software is designed to take advantage of the high speed of the new network switches.

With these advances, the team believes that they can build a network that will be able to keep up with an Exascale system.

The Future

In the near future, an exascale supercomputer will be operational. This computer will be able to perform a quintillion (10^18) operations per second. When this machine is completed, it will be the most powerful computer ever made.

The Challenges

The United States has been a world leader in high-performance computing (HPC) for decades, but as other countries invest heavily in HPC, the US is at risk of losing its edge. In order to regain its position, the US Department of Energy (DOE) has embarked on an ambitious project to build an exascale supercomputer—a machine capable of a billion-billion calculations per second.

But building an exascale supercomputer is no easy feat. Not only do you need cutting-edge hardware and software, you also need huge amounts of energy to power it—so much so that it would be prohibitively expensive to build just one.

So how did the US DOE manage to build an exascale computer? They did it by using old tech.

Specifically, they used recycled processors from gaming consoles like the Xbox One and PlayStation 4. These consoles use high-end processors that are more than powerful enough for most gaming needs, and when pooled in large numbers they can handle HPC workloads as well.

The US DOE was able to get their hands on these processors for pennies on the dollar, and by using them in their new supercomputer, they were able to significantly reduce the cost of building it. Not only that, but by reusing old tech, they were also able to reduce the environmental impact of the project.

The Opportunities

The data deluge that the world is currently experiencing is only going to get bigger. “We are making more data every day than we did from the dawn of civilization until 2003,” said Luis Ceze, the University of Washington computer science professor who led the team that built Cascade, in a recent TEDx talk. “Every two days, we make as much data as we did from the dawn of civilization until 2013.”

That’s a lot of data, and it’s only going to continue to grow. But what can we do with all of this data?

“The future is about harnessing all this data and turning it into knowledge,” said Ceze. “But in order to do that, we need new kinds of computers.”

Cascade is one attempt at building a new kind of computer. Inspired by the way biological systems process information, Cascade is designed to be much more efficient than current supercomputers.

“You have these incredibly complex biological systems that are able to process vast amounts of information with very little energy,” said Ceze. “And so we said, ‘Well, maybe we can build computers that work more like that.’”
