How to Build Exascale Computers Out of Old Technology

Introduction

In this day and age, technology becomes outdated quickly. What was once the latest and greatest can be obsolete in just a few short years. That can be frustrating for anyone who has invested heavily in new hardware, only to find it is no longer relevant. However, there is a silver lining: as this article explores, yesterday's hardware can still be put to work, even in the push toward exascale computing.

What is an Exascale Computer?

An exascale computer is one that can perform at least one exaflop, that is, a billion billion (10^18) floating-point operations per second. For comparison, a petaflop is one quadrillion (10^15) operations per second, and today's fastest supercomputers deliver a few hundred petaflops, so exascale represents a jump of roughly an order of magnitude over the best machines currently running.
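
To put those prefixes in perspective, a short back-of-the-envelope calculation helps. The per-node figure below is purely an assumption chosen for illustration, not the specification of any real machine:

    # Scale comparison between petaflops and exaflops (illustrative numbers only).
    PETAFLOP = 1e15   # floating-point operations per second
    EXAFLOP = 1e18

    # Hypothetical per-node performance, assumed for illustration.
    node_flops = 50e12   # 50 teraflops per node

    print(f"An exaflop is {EXAFLOP / PETAFLOP:.0f}x a petaflop")
    print(f"At {node_flops / 1e12:.0f} TFLOP/s per node, "
          f"about {EXAFLOP / node_flops:,.0f} nodes are needed for one exaflop")

Even with fairly powerful nodes, the machine ends up with tens of thousands of them, which is where the data-handling and power problems described next come from.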

There are many challenges in building an exascale computer. One is designing hardware that can keep up with the enormous volume of data such fast processors generate and consume. Another is power and cooling: the machine needs on the order of tens of megawatts of electricity, and all of that energy ends up as heat that must be removed before the system overheats.

The U.S. Department of Energy has set a goal of fielding an exascale computer by 2025. However, it is not yet clear whether that goal will be met, given the challenges above.

Why do we need Exascale Computers?

In 2013, the most powerful supercomputer in the world could perform around 33 quadrillion floating-point operations per second, or 33 petaflops. This is an incredible number by any standards, but it is just a fraction of what will be required in the near future.

The same year, a team of researchers from Lawrence Livermore National Laboratory (LLNL) published a paper estimating that, by 2025, the world will need exascale computing resources — machines capable of at least one quintillion (a million trillion) operations per second — to meet the demand for simulation and data analysis in fields such as energy, national security, pharmaceuticals and finance.

There are many challenges to building an exascale computer. One is simply the sheer size and complexity of such a machine. Another is the fact that current computer architectures are not well suited to this task. And then there is the challenge of power consumption; an exascale machine would require so much power that it would be impractical or even impossible to run it using current technology.

These challenges are not insurmountable, but they do require us to think outside the box — and to consider using old technology in new ways. In this article, we will take a look at some of the innovative approaches that are being proposed for building exascale computers.

Current State of Technology

We are currently in an age where technology is changing and advancing at an unprecedented rate; the way we live, work, and play continues to be transformed by it. This section looks at the current state of the building blocks any exascale machine depends on: processors, memory, storage, and networking.

CPUs

As computing needs have grown, so have the capabilities of CPUs. The first personal computers had CPUs that could execute only a few hundred thousand instructions per second. Today, CPUs can handle billions of operations per second. But even the most powerful CPUs can't keep up with the demands of some applications, like real-time video rendering or large-scale data analysis. That's where GPUs come in.

GPUs are specialized chips that were originally designed to accelerate 3D graphics rendering, but they can also be used for other compute-intensive tasks. By tailoring their design to a specific kind of workload, GPUs can be far more efficient than CPUs at certain types of calculations.

GPUs appear in desktop and laptop computers for gaming and accelerated graphics rendering, and they are also used in servers, workstations, and supercomputers for a variety of purposes, including artificial intelligence, deep learning, and big data analytics.
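
As a concrete illustration of the CPU-versus-GPU division of labor, here is a minimal sketch in Python. It assumes the optional CuPy library and a CUDA-capable GPU are available, and relies on the fact that CuPy mirrors the NumPy array API:

    # The same matrix multiplication on the CPU (NumPy) and on the GPU (CuPy).
    # Assumes CuPy is installed and a CUDA-capable GPU is present.
    import numpy as np
    import cupy as cp

    n = 4096
    a = np.random.rand(n, n).astype(np.float32)
    b = np.random.rand(n, n).astype(np.float32)

    c_cpu = a @ b                        # computed on the CPU

    a_gpu = cp.asarray(a)                # copy the inputs into GPU memory
    b_gpu = cp.asarray(b)
    c_gpu = a_gpu @ b_gpu                # the same operation, run on the GPU
    cp.cuda.Stream.null.synchronize()    # wait for the GPU kernel to finish

    print("results match:", np.allclose(c_cpu, cp.asnumpy(c_gpu), rtol=1e-3, atol=1e-3))

The application code barely changes; what changes is which silicon does the arithmetic, and for large, regular workloads like this one the GPU is usually much faster.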

Memory

A lot has changed in the world of computer memory over the generations of supercomputers leading up to exascale. For one thing, capacity has increased dramatically: earlier systems had total memory measured in terabytes, while today's leading machines aggregate more than a petabyte of memory across all of their nodes. The increase in capacity has been matched by an increase in speed: where a compute node once offered memory bandwidth on the order of tens of gigabytes per second, nodes built around high-bandwidth memory now exceed a terabyte per second.
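
To see why bandwidth matters as much as capacity, consider how long it takes just to stream a working set through memory once. The sizes and bandwidths below are round numbers chosen for illustration, not measurements of any particular system:

    # Time to stream a working set through memory once, at different bandwidths.
    working_set = 1e12                 # 1 TB of data held on a node (assumed)
    older_node_bw = 50e9               # ~50 GB/s, an older DDR-based node (assumed)
    hbm_node_bw = 1e12                 # ~1 TB/s, a node with high-bandwidth memory (assumed)

    for label, bw in [("older DDR node", older_node_bw),
                      ("HBM node", hbm_node_bw)]:
        print(f"{label}: {working_set / bw:.0f} s to stream 1 TB")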

Another big change is the move from single-core to multicore and many-core processors. The earliest supercomputers had only a handful of processors, while current-generation machines have thousands of nodes, each carrying dozens or hundreds of cores. This has forced changes in the overall design of the system: developers have had to find ways to give all of those processors fast access to the computer's memory. One approach is 3D-stacked memory, in which memory dies are stacked vertically in layers directly on top of, or right next to, the processor, so each processor gets very high-bandwidth, low-latency access to its local memory.
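
The idea that every processor should be able to reach the memory it needs can be sketched on a single machine using ordinary operating-system shared memory. This is only a single-node analogy (an exascale system does not expose one global shared memory this way), and the array size here is arbitrary:

    # Several worker processes reading one shared block of memory without copying it.
    from multiprocessing import Process, shared_memory
    import numpy as np

    def worker(shm_name, length):
        shm = shared_memory.SharedMemory(name=shm_name)
        view = np.ndarray((length,), dtype=np.float64, buffer=shm.buf)
        print("worker sees sum:", view.sum())   # direct, read-only access
        shm.close()

    if __name__ == "__main__":
        data = np.arange(1_000_000, dtype=np.float64)
        shm = shared_memory.SharedMemory(create=True, size=data.nbytes)
        np.ndarray(data.shape, dtype=data.dtype, buffer=shm.buf)[:] = data

        procs = [Process(target=worker, args=(shm.name, data.size)) for _ in range(4)]
        for p in procs:
            p.start()
        for p in procs:
            p.join()

        shm.close()
        shm.unlink()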

The last major change is the move from disk storage to flash storage. Earlier supercomputers relied on spinning disk drives, but newer systems increasingly use flash instead. Flash is much faster than disk and consumes less power, and the change has had a significant impact on the overall design of exascale computers, as developers look for ways to minimize or eliminate the use of disk drives altogether.

Storage

There are many ways to store data, but the most common type of storage is a hard disk drive (HDD). HDDs are made up of spinning disks that store data on magnetic media. When you want to access data on an HDD, the disk spins and the read/write head moves to the correct location.

HDDs are inexpensive and have large capacities, but they are slow: an idle drive can take several seconds to spin up, and even a spinning drive needs several milliseconds for each random access. This is why HDDs are often used for long-term storage, while faster types of storage, such as solid-state drives (SSDs), are used for data that needs to be accessed quickly.

SSDs fill the same role as HDDs, but they have no moving parts: instead of magnetic platters, they store data in flash memory. Flash is much faster than magnetic media, so SSDs can start up and access data far more quickly than HDDs. However, SSDs cost more per gigabyte and typically come in smaller capacities.

Another option that is becoming more popular is NVMe (Non-Volatile Memory Express). NVMe drives also use flash memory, but they are even faster because NVMe is a protocol that attaches the drive directly to the PCIe bus rather than going through the older SATA interface. NVMe drives are typically more expensive than SATA SSDs, but they offer the best performance for applications that need fast access to data.
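
A quick way to see these differences on your own hardware is to time a large sequential read. This is only a rough sketch: the path is a placeholder, and the operating system's page cache will inflate the result if the file was read recently:

    # Rough sequential-read throughput measurement for whatever device holds `path`.
    import time

    path = "/tmp/big_test_file.bin"    # placeholder: point this at a large existing file
    chunk_size = 8 * 1024 * 1024       # read in 8 MiB chunks

    total_bytes = 0
    start = time.perf_counter()
    with open(path, "rb") as f:
        while True:
            buf = f.read(chunk_size)
            if not buf:
                break
            total_bytes += len(buf)
    elapsed = time.perf_counter() - start

    print(f"read {total_bytes / 1e9:.2f} GB in {elapsed:.2f} s "
          f"({total_bytes / elapsed / 1e9:.2f} GB/s)")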

Networking

The networking fabrics that will be used in exascale computers are being designed and built now. In order to achieve the necessary levels of performance, these fabrics must be incredibly fast and able to support a very large number of nodes. They must also be able to scale up or down as needed, without requiring a complete overhaul of the system.

There are two main approaches to building these fabrics: using commodity components, such as standard Ethernet, or using purpose-built interconnects designed specifically for supercomputers. Commodity components are less expensive and easier to source, but they cannot always reach the same levels of performance as purpose-built ones.

The decision of which approach to take will ultimately come down to cost and performance requirements.
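
From the application's point of view, the fabric is usually hidden behind a message-passing library such as MPI, so the same code runs on either kind of fabric and only the performance changes. A minimal sketch, assuming the mpi4py package and an MPI runtime are installed:

    # Each MPI rank contributes a local array; Allreduce sums it across every rank,
    # exercising the interconnect regardless of what the fabric is built from.
    from mpi4py import MPI
    import numpy as np

    comm = MPI.COMM_WORLD
    rank = comm.Get_rank()

    local = np.full(4, rank, dtype=np.float64)   # this rank's contribution
    total = np.empty_like(local)
    comm.Allreduce(local, total, op=MPI.SUM)     # summed over all ranks in the job

    if rank == 0:
        print("sum across all ranks:", total)

Launched with something like mpirun -n 4 python script.py, the ranks could sit on one node or be spread across a cluster; the fabric determines how fast the Allreduce completes, not whether it works.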

Building Exascale Computers

The term exascale refers to a level of computing performance of at least one exaflop, roughly an order of magnitude beyond today's fastest supercomputers. Reaching it will require a major increase in computing power and efficiency to meet the needs of applications that process large amounts of data in real time.

Using Old Technology

The pace of technology change is amazing. What is state of the art today will be considered old technology in a few years. But what if we could build something using only old technology? Something big, like an exascale computer?

It might seem impossible, but in principle it could be done. You would need to find a way to miniaturize the components and then put them all together in a very dense package.

The first step would be to miniaturize the components. Even older technologies like vacuum tubes and relays can, in principle, be shrunk far below their classic sizes; microfabricated vacuum-channel devices and micro-electromechanical (MEMS) relays are both active areas of research. The next step would be to pack them all together in a very dense package, which is where newer techniques such as 3D printing or nanotechnology could help.

It might seem like a crazy idea, but it is possible to build exascale computers out of old technology. With enough time and effort, it could even be done relatively cheaply.
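
To get a feel for the scale involved, here is a rough back-of-the-envelope calculation. The switching rates are loose assumptions, and it optimistically pretends that a single device can complete one useful operation per switching cycle, when in reality an arithmetic unit needs many such devices:

    # How many switching elements would it take to reach one exaflop,
    # assuming (very optimistically) one operation per device per cycle?
    EXAFLOP = 1e18   # operations per second

    devices = {
        "electromechanical relay": 1e3,   # ~1 kHz switching rate (assumed)
        "vacuum tube": 1e6,               # ~1 MHz switching rate (assumed)
        "modern transistor": 1e9,         # ~1 GHz, shown for comparison
    }

    for name, ops_per_second in devices.items():
        print(f"{name}: ~{EXAFLOP / ops_per_second:.0e} devices in parallel")

Trillions of tubes or quadrillions of relays is exactly why the packaging and density problem dominates any such design.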

New Technology

New technology, on the other hand, has led to the development of exascale computers. These machines are designed to operate at speeds of up to one exaflop, which is equivalent to one million trillion operations per second, and they are many times more powerful than the supercomputers that came before them. Exascale computers will be used for a variety of purposes, including weather prediction, climate research, and large-scale simulations.

Conclusion

We have seen that it is possible, at least in principle, to build exascale computers out of old technology, but some challenges need to be overcome. One of the biggest is the power required to run such a system. Another is the need for a high-speed communication network to connect all of its parts.

Despite these challenges, we believe that it is possible to build an exascale system using old technology. With the right design and implementation, such a system could be used to revolutionize our understanding of complex problems and speed up the discovery of new scientific breakthroughs.
