Exascale system expected to be world’s most powerful computer for science and innovation.
OAK RIDGE, TN – The U.S. Department of Energy today announced a contract with Cray Inc. to build the Frontier supercomputer at Oak Ridge National Laboratory, which is anticipated to debut in 2021 as the world’s most powerful computer with a performance of greater than 1.5 exaflops.

Frontier supercomputer. Photo provided by Penske Media Corporation
Scheduled for delivery in 2021, Frontier will accelerate innovation in science and technology and maintain U.S. leadership in high-performance computing and artificial intelligence. The total contract award is valued at more than $600 million for the system and technology development. The system will be based on Cray’s new Shasta architecture and Slingshot interconnect and will feature high-performance AMD EPYC CPU and AMD Radeon Instinct GPU technology.
“Frontier’s record-breaking performance will ensure our country’s ability to lead the world in science that improves the lives and economic prosperity of all Americans and the entire world,” said U.S. Secretary of Energy Rick Perry. “Frontier will accelerate innovation in AI by giving American researchers world-class data and computing resources to ensure the next great inventions are made in the United States.”
By solving calculations up to 50 times faster than today’s top supercomputers, exceeding a quintillion, or 10^18, calculations per second, Frontier will enable researchers to deliver breakthroughs in scientific discovery, energy assurance, economic competitiveness, and national security. As a second-generation AI system, following the world-leading Summit system deployed at ORNL in 2018, Frontier will provide new capabilities for deep learning, machine learning and data analytics for applications ranging from manufacturing to human health.
Since 2005, Oak Ridge National Laboratory has deployed Jaguar, Titan, and Summit, each the world’s fastest computer in its time. The combination of traditional processors with graphics processing units to accelerate the performance of leadership-class scientific supercomputers is an approach pioneered by ORNL and its partners and successfully demonstrated through ORNL’s No.1 ranked Titan and Summit supercomputers.
“ORNL’s vision is to sustain the nation’s preeminence in science and technology by developing and deploying leadership computing for research and innovation at an unprecedented scale,” said ORNL Director Thomas Zacharia. “Frontier follows the well-established computing path charted by ORNL and its partners that will provide the research community with an exascale system ready for science on day one.”
Researchers with DOE’s Exascale Computing Project are developing exascale scientific applications today on ORNL’s 200-petaflop Summit system and will seamlessly transition their scientific applications to Frontier in 2021.
Frontier will offer best-in-class traditional scientific modeling and simulation capabilities while also leading the world in artificial intelligence and data analytics. Closely integrating artificial intelligence with data analytics and modeling and simulation will drastically reduce the time to discovery by automatically recognizing patterns in data and guiding simulations beyond the limits of traditional approaches.
“We are honored to be part of this historic moment as we embark on supporting extreme-scale scientific endeavors to deliver the next U.S. exascale supercomputer to the Department of Energy and ORNL,” said Peter Ungaro, president and CEO of Cray. “Frontier will incorporate foundational new technologies from Cray and AMD that will enable the new exascale era — characterized by data-intensive workloads and the convergence of modeling, simulation, analytics, and AI for scientific discovery, engineering and digital transformation.”
Frontier will incorporate several novel technologies co-designed specifically to deliver a balanced scientific capability for the user community. The system will be composed of more than 100 Cray Shasta cabinets with high-density compute blades powered by HPC- and AI-optimized AMD EPYC processors and Radeon Instinct GPU accelerators purpose-built for the needs of exascale computing. The new accelerator-centric compute blades will support a 4:1 GPU-to-CPU ratio with high-speed AMD Infinity Fabric links and coherent memory between them within the node. Each node will have one Cray Slingshot interconnect network port for every GPU, with streamlined communication between the GPUs and the network to enable optimal performance for high-performance computing and AI workloads at exascale.
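For readers who want to see how the headline number relates to the node design, here is a rough back-of-the-envelope sketch in Python. Only the 1.5-exaflop target, the "more than 100 cabinets," and the 4:1 GPU-to-CPU ratio come from the announcement; the nodes-per-cabinet and CPUs-per-node figures are purely hypothetical placeholders, not published specifications.

```python
# Back-of-the-envelope sketch: what per-GPU throughput would be needed to
# reach the announced 1.5 exaflops, under ASSUMED (not announced) packaging.
target_flops = 1.5e18               # announced system target, FLOP/s
cabinets = 100                      # "more than 100 cabinets" (lower bound)
nodes_per_cabinet = 64              # HYPOTHETICAL placeholder, not a published spec
cpus_per_node = 1                   # HYPOTHETICAL placeholder
gpus_per_node = 4 * cpus_per_node   # announced 4:1 GPU-to-CPU ratio

total_gpus = cabinets * nodes_per_cabinet * gpus_per_node
per_gpu_tflops = target_flops / total_gpus / 1e12
print(f"{total_gpus} GPUs -> ~{per_gpu_tflops:.0f} TFLOP/s per GPU needed")
```

The point is only to show how the pieces of the announcement fit together arithmetically; the actual node counts and per-GPU specifications were not disclosed at announcement time.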
To make this performance seamless for developers to harness, Cray and AMD are co-designing and developing enhanced GPU programming tools optimized for performance, productivity and portability. This will include new capabilities in the Cray Programming Environment and AMD’s ROCm open compute platform, integrated together into the Cray Shasta software stack for Frontier.
“AMD is proud to be working with Cray, Oak Ridge National Laboratory and the Department of Energy to push the boundaries of high performance computing with Frontier,” said Lisa Su, AMD president and CEO. “Today’s announcement represents the power of collaboration between private industry and public research institutions to deliver groundbreaking innovations that scientists can use to solve some of the world’s biggest problems.”
ORNL’s Center for Accelerated Application Readiness is now accepting proposals from scientists to prepare their codes to run on Frontier. Visit the Frontier website to learn more about what researchers plan to accomplish across a wide range of scientific fields.
Frontier will be part of the Oak Ridge Leadership Computing Facility, a DOE Office of Science User Facility. For more information, please visit https://science.energy.gov/.
###
FYI:
Exascale computing refers to computing systems capable of at least one exaFLOPS, or a billion billion (i.e. a quintillion) calculations per second. Such capacity represents a thousandfold increase over the first petascale computer that came into operation in 2008.
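As a minimal sanity check of the definitions in the note above, a few lines of Python confirm the prefixes and the "thousandfold over petascale" claim:

```python
# Unit check for the FYI note above: exa = 10**18, peta = 10**15.
exaflop  = 10**18   # one quintillion FLOP/s
petaflop = 10**15   # scale of the first petascale machine (2008)
print(exaflop / petaflop)   # 1000.0 -> a thousandfold increase
print(1.5 * exaflop)        # Frontier's announced 1.5 exaflops, in FLOP/s
```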
Judging by the accuracy of the computer climate modelling so far, I hope someone is working frenetically on developing extra-high-speed toilet paper to keep up with their ever-faster output.
Wow! That’s a lot of flops! I bet all of the cows in America can’t produce that many flops in a month!
This computing power could have enormous benefits, but only if well-designed strategies are applied. Climate models are clearly inadequate, mainly because they are constructed on politically convenient assumptions rather than with respect for real observations and trends. Using this computing ability along with AI data-analytics capabilities would be a great way to discover what is wrong with current models: whether bottom-up or top-down model construction is the better approach, or whether extending observable trends and knowable cycles into the future is a better way to predict coming climate trends. Even just taking what we have measured and inputting it, one could search for the useful patterns that should have been incorporated in climate models in the first place but were ignored. Regardless of all that, a computer, the models programmed into it, and the outputs are not reality, and they can only be judged by real-world validation of how well they reflect reality.
Maybe now researchers will be able to figure out what makes a Supernova explode (those 3D simulations take a lot of flops)! /sarc
Wake me up when they reach yottaflops powered by a series of fusion reactors.
When the models can get even one year right, give me a call…
Careful. Unless you have an accurate model with good data, all that is going to happen is that you will get [pruned] forecasts even faster.
[Avoid such language here. It is not needed. .mod]
A 50x increase in speed would allow grid resolution to be about 4x better in each dimension. This is a good thing, but it is still not enough to allow modeling of what regulates climate (local albedo).
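A quick check of the arithmetic in the comment above, assuming compute cost scales with the cube of linear grid refinement (three spatial dimensions), and more pessimistically with the fourth power once the time step must shrink along with the grid:

```python
# How much finer can the grid get for a 50x compute speedup?
speedup = 50
print(speedup ** (1 / 3))   # ~3.7x finer per dimension if only the 3 spatial dims refine
print(speedup ** (1 / 4))   # ~2.7x if the time step must also shrink with the grid (CFL)
```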
Wow now they can be wrong more often.
It’s got AI! Which means it will be able to “learn” what results the programmer wants and deliver them every time.
Somehow the term, “exaflops”, seems quite appropriately applied to climate-modeling calculations.
No matter what computer capacity we might be talking about, climate models seem to be extreme flops.
It’s sort of poetic, then, when models that are extreme flops potentially have access to computers of exaflops computing speed.
Now climate alarmists can make themselves look even more convincing by making such claims as, … using the world’s fastest computers ever, … as if this has anything to do with the usefulness of the models.
For the sake of the rest of the scientific community, I think that I would have chosen a different word than “exaflops” to describe computing power. It’s like naming your child “Dolt” or “Binge” or “Crud” — good names for a lawyers’ firm maybe — Dolt, Binge & Crud.
CMIP5 typical model resolution was 2.5 degrees, or 280 km at the equator. The finest resolution was 110 km. The resolution needed to resolve crucial convection cells (thunderstorms) is at or below 4 km. The rule of thumb at NCAR is that doubling resolution by halving grid size requires 10x the computing power (a function of the CFL constraint on PDE solutions). So it is either a 6-order-of-magnitude or a 5-order-of-magnitude computationally intractable problem, forcing parameterization that drags in the attribution problem.
This new superduper supercomputer is about 1.5 orders of magnitude faster, leaving a 4- or 5-order-of-magnitude computationally intractable problem. Climate modelers still cannot model convection cells. They still have to parameterize.
See previous guest posts ‘The trouble with models’ and ‘Why models run hot’ for details.
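For anyone who wants to reproduce the orders-of-magnitude estimate in the comment above, here is a small Python sketch that takes its stated inputs at face value (280 km or 110 km starting resolution, a 4 km target, and the quoted NCAR rule of thumb of roughly 10x compute per halving of grid size):

```python
import math

# Orders-of-magnitude estimate from the comment above, using its own inputs.
def compute_factor(start_km, target_km, cost_per_halving=10):
    halvings = math.log2(start_km / target_km)   # how many times the grid must halve
    return cost_per_halving ** halvings          # total increase in compute cost

for start_km in (280, 110):
    factor = compute_factor(start_km, 4)
    print(f"{start_km} km -> 4 km: ~10^{math.log10(factor):.1f}x more compute")
# 280 km -> 4 km: ~10^6.1x more compute; 110 km -> 4 km: ~10^4.8x more compute
```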
This reality is bad for funding, Rud I. If the funding agencies don’t know this reality, then the climate “scientists” can keep their gravy train secured.
It is more likely it will be utilized for Nuclear Fusion research.
This computer is so fast, it can now predict global warming AND global cooling catastrophes simultaneously, covering all options. The 12 year deadline has now been refined to within a specific day and hour….
Go to minute 9 of this talk on Quantum Computing.
It explains why many problems are, and will remain, intractable using classical computers.
(For instance, calculating the energy levels of the molecule FeS).
In any case, it does not matter what computer you have if the model you are using does not model reality.
I suppose this new super gizmo could be used to mine Bitcoin, but I suspect the cost of operating the beast would be greater than the value of the Bitcoin. You know, someone, somewhere, crunched that number as a potential ROI on $600 million.
Personally, I’m not sure how the increase in processing power would actually improve the accuracy or quality of the results in any calculation. Get an answer faster? Sure. Any more correct? Who knows? If you’re talking about climate, you have to wait a while (decades or longer) before you have an answer key. And if you’re talking about calculating the potential yield of an aging mushroom-cloud generator, you hope never to find out whether the answer was correct.
As an aside on the subject of forecasting future climate: if we ever reach the capability to accurately forecast the future climate of the Earth, it will be because we also have the capability to control the climate of the Earth. So “never” seems a good bet here.
Cheers
Max
But can it run Metro Exodus on extreme?
OK, you have just been given a quantum-process-protocol “experience”: Metro Exodus just ran on extreme, in full, right in front of your eyes and senses. Did you see or detect anything? Nope, zero. It just happened, or maybe it didn’t, but it was never rendered as visuals, sound, 2D or 3D for you to detect in real time and space, since no space-time addressing was applied for you to detect it at all. Yet on the most basic quantum view it may have just happened, regardless of your space-time experience.
Weird, isn’t it?
You will never be able to say that it did not happen, in terms of quantum processes and protocols, merely because there was no detection or confirmation from the senses within the space-time metric.
That is just how it may be. Too weird. Within the quantum principle: quantum.
Still, most of your brain, or mine, or anyone else’s, is inactive,
or so some clever and highly learned people say!
Most probably I should not have done this.
cheers
[?? .mod]
Fifty times faster is derisory. The really big problem with climate models is that they can’t deal with convection, the main heat-transport mechanism in the atmosphere, in a realistic manner. Convection cells vary in size from tens of meters to tens of kilometers. Climate models typically use 100×100 km cells.
To handle convection realistically from basic physics probably requires cells as small as 100x100x100 meters. This would require 1,000 x 1,000 x 1,000 x 1,000 = 10^12 times more calculating capacity, to handle the 1,000 times finer cell size in three dimensions and the similarly shortened time steps (which must change in proportion to cell size).
By the way, you will also need data on the thermal characteristics of the ground/sea at the same resolution.
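A quick check of the 10^12 figure above, taking the commenter’s assumptions as given (roughly 100 km cells today, 100 m cells needed, and a time step that shrinks in proportion to cell size):

```python
# Scaling check for the convection comment above.
refinement = 100_000 / 100   # 100 km cells -> 100 m cells = 1,000x finer per dimension
spatial = refinement ** 3    # three spatial dimensions
temporal = refinement        # time step shrinks in proportion to cell size
print(spatial * temporal)    # 1e12 -> a trillion times more computing capacity
```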
Is a tornado a great example of convection?
Back in the early 1970s when I programmed for NOAA in Boulder, Colorado, my boss told me that to have accurate weather forecasts two weeks into the future would require monitoring all convection down to the scale of a small dust devil everywhere around the world — and the required ubiquitous monitoring equipment itself would necessarily change the phenomena being measured.
Making a faster computer…
megaflops => gigaflops => teraflops => petaflops => exaflops
The classic “make it go faster” excuse, where technicians and engineers design and build faster computers, advancing technology faster and faster down a one-lane highway.
That leaves the code designers dreaming of faster compilations and program runtimes, with database inputs/outputs just as slow as always.
The programs are not designed from the ground up with intuitive or comprehensive thinking logic. They are simple mathematical calculations using arrays.
Which describes moving the same old program code to a machine that runs the kernel and code instructions faster, while being choked ever more by inputs/outputs.
Think of the “Harry Read Me” programs moved from slower machines to faster ones.
Greater and faster memory access allows processing of larger arrays, which theoretically allows finer resolution. It does not improve the logic or the results.
I am reminded of hormonal young teens (“Researchers with DOE’s Exascale Computing Project”) oohing and wowing over drag-race cars that go fast, with quite a few drag racers attaining failure faster, and sometimes catastrophically.
This is cool!
Why the “hate” by so many commenters?
This advancement has nothing to do with our task at hand: fighting back against the Jacobins using a trace gas against us.
Let yourself marvel at the fruits of our human-driven achievements!!!
In other words: “Lighten up, Fransis”….(Francis’s?)
My guess is that there aren’t many people here with experience in high-performance computing; those that do have little to say, since this is just another (big) step along the curve. The rest only know of past efforts with new systems at other weather and climate forecasting sites.
I mostly left the HPC arena in the 1990s, but I like to keep up with the Top500 list and other tech, like M.2 SSDs. While the supercomputer arena is focused on computes, we’ve always had to worry about I/O speeds keeping up.
I remember testing NFS file transfer on quad CPU systems that were often used as building blocks for big systems used at PSC and national labs.
Last month I moved to a town with 1800 residents. 1000 Mbps internet, a free upgrade from 300 Mbps, their minimal offering. The cable guy demonstrated it with a speed test server from his laptop (he had to boot into Linux, Windows couldn’t run that fast). I had to transfer a file system from a 7 year old failing system to a 2 year old system that night – 800 Mbps, with no real tuning other than buying one of the cheaper M.2 SSDs.
All pretty amazing. Been a great career.
The big advantages will be spent on compute-hungry tasks like protein folding, drug and enzyme design, and all sorts of stuff that has nothing to do with climate or things that go Boom!
A great example of the waste and opportunity cost relating to climate alarmism.
Just imagine if that computing power could be used to solve real problems affecting the real world.
If it incorporates AI, do they not run the risk of their own computer telling them that they are wrong? I would love to be a fly on the wall when that happens.
Here am I, brain the size of a planet, and you ask me to run climate models?
Why is the thing so big? It looks like a computer from the 1980’s
Just string a bunch of computer phones together and it would fit on your side table…or coffee table…
It’s so big because quantum computing has been an abject failure. In fact, the failure is so bad that they are still in denial and call some computers quantum computers 😀
The only way to make computers really powerful today is to make them bigger and bigger and bigger, which is an advance in finance, not computing.
Building a bigger machine to make more calculations is not a technical advancement by any meaning of the words.
They increased capacity, something we’ve been doing in IT for a long long time.
I wonder how much processing got done by the SETI project which distributed processing to anyone who would install the client around the world. 🙂
Per https://en.wikipedia.org/wiki/SETI@home#Statistics :
1 Exaflop is 1,000,000 Teraflops. So considerably more.
Note that projects like SETI are suitable for distributed computing because you can ship out subsets and don’t need the results back quickly. Other problems, e.g. coarse grain 3D modeling, need to be able to get intermediate results to CPUs as fast as possible. While it’s not quite so bad with fine grain modeling, the overall bandwidth required to keep the CPUs busy is phenomenal and hence also unsuited for distributed computation.
Note that for things like weather, while you can pour a lot of computes in to a fairly small volume when air movement is the only concern to satisfy that part of the problem, once you include radiative transfer issues that are moving energy at the speed of light, all of a sudden your CPUs have got to get to some “distant” memory ASAP.
Bottom line: SETI is no help for the problems big supercomputers are used for. Sorta like getting the Miata fan club involved with moving enough coal and oil to power an industrial state.
People really, really want to keep supercomputers small. The speed of light imposes massive limits on the performance of plain old wires. However, you need a lot of processor cores to deal with the challenges of making chips faster (and hence hotter). The most recent supercomputer leader, at ORNL, https://www.top500.org/system/179397, has 2.4 million cores; that’s some 150,000 CPUs, and about that many memory boards.
They literally made it as small as they could.
And as low-power as they could: 10 MW is pretty good for a machine that size!
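To put a rough number on the speed-of-light point above: at multi-GHz clock rates, a signal traveling at an assumed two-thirds of c (a typical ballpark for copper or fiber, not a measured figure for any particular machine) covers only centimeters per clock cycle, which is why designers fight to keep these machines physically compact.

```python
# Rough distance a signal can travel per clock cycle (speed-of-light argument above).
c = 3.0e8                 # speed of light in vacuum, m/s
signal_speed = 0.66 * c   # assumed propagation speed in copper/fiber (ballpark only)
for clock_ghz in (1, 2, 4):
    cycle_s = 1 / (clock_ghz * 1e9)
    print(f"{clock_ghz} GHz: ~{signal_speed * cycle_s * 100:.0f} cm per cycle")
# ~20 cm at 1 GHz, ~10 cm at 2 GHz, ~5 cm at 4 GHz
```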
The more things change…
Way back in the 1960’s, following a disastrous attempt to computerise Avro’s ‘Weights’ department, I wrote a folk song (yes, that’s me in beard, flares, and carrying a guitar). The last verse (referring to the top-of-the line British mainframe computer, the ICT 1900) goes:
I’m a number nineteen hundred, and I’m only three years old
But now I’m getting past it, as I’ve recently been told
The nineteen-oh-four’s better, all the programmers agree
The bloody thing can make mistakes ten times as fast as me!
Just to be pedantic (and correct), the term “incorrect” is not really relevant to predictive models. Models are tested by cross-validation using data (in the case of climate models, data from the past). If they can retro-cast the data with good accuracy (not a good criterion for very-low-probability events!), then they can be used to predict the outcome of the hypothesis they are designed to detect (e.g., that temperatures will increase at a certain rate given future inputs such as CO2 levels). In other words, _assuming_ the model captures the hypothesized causality, inputs of X will yield results Y with probability p.
In summary, “correct” and “incorrect” don’t really apply to probabilistic statements.
But that ability to predict the future correctly is inversely proportional to the amount of “tuning” required to accurately model the past.
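For readers unfamiliar with the hindcast idea described in the two comments above, here is a minimal, purely illustrative Python sketch: it uses synthetic data and a deliberately simple linear fit, trains on an early window, and then scores how well the fit "retro-casts" a held-out later window. It is not a climate model, just the validation pattern.

```python
import numpy as np

# Minimal illustration of hold-out (hindcast) validation with SYNTHETIC data.
rng = np.random.default_rng(0)
years = np.arange(1950, 2020)
series = 0.01 * (years - 1950) + rng.normal(0, 0.1, years.size)  # toy series, not real data

train = years < 2000                                  # fit on 1950-1999 only
slope, intercept = np.polyfit(years[train], series[train], 1)
pred = slope * years + intercept                      # "predict" the held-out 2000-2019 window

holdout_rmse = np.sqrt(np.mean((pred[~train] - series[~train]) ** 2))
print(f"hold-out RMSE on 2000-2019: {holdout_rmse:.3f}")
```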
Wow, 1.5 exaflops is really, really huge. Too huge from the point of view, or world view, of a little guy like me. Really, really huge capacity there. A wonder in its own right, as put or claimed!
cheers