- Aurora, the supercomputer from Argonne National Laboratory, achieves a groundbreaking HPL score of 1.012 exaflops, surpassing its previous performance by a significant margin.
- Despite being only 87% operational, Aurora secures the top spot on the HPL-MxP benchmark, demonstrating unparalleled AI performance.
- The distinction between HPL and HPL-MxP comes down to precision: HPL-MxP trades some numerical precision for speed, matching the demands of AI and machine learning workloads.
- Aurora’s AI capabilities position it as a formidable force in addressing scientific computing challenges and advancing AI-driven research, particularly in computational drug discovery and mapping neurons in the brain.
- The supercomputer’s potential extends to cosmological simulations, enabling scientists to gain deeper insights into the universe’s dynamics and structure.
Main AI News:
In the latest edition of the TOP500 list, Aurora, the supercomputer at Argonne National Laboratory, has crossed the exascale threshold with an HPL score of 1.012 exaflops. The result far surpasses the machine’s previous score of 585.34 petaflops and stands as a major triumph for the Argonne Leadership Computing Facility (ALCF) team.
Aurora’s journey is far from over; this milestone was attained with only 87% of the system operational. Boasting 9,264,128 total cores, Aurora operates on the HPE Cray EX – Intel Exascale Computer Blade architecture, utilizing Intel Xeon CPU Max series processors, Intel Data Center GPU Max Series accelerators, and a Slingshot-11 interconnect.
Although it placed second overall on HPL, Aurora took the lead on the HPL-MxP mixed-precision benchmark with 10.6 exaflops of AI performance, edging out Frontier’s 10.2 exaflops.
As AI becomes ever more central to conversations about High-Performance Computing (HPC) and computing in general, the HPL-MxP score is growing in importance. Understanding what sets this benchmark apart from the traditional HPL, however, requires a look at how supercomputer performance is measured.
Differentiating HPL and HPL-MxP in Business Terms
At its core, all computing is mathematics. Whether you are browsing cat pictures on social media or hunting for new pharmaceuticals, the computer underneath is solving math problems. Benchmarking these machines therefore means handing them a math problem and measuring how quickly and accurately they can solve it.
This forms the foundation of the HPL benchmark, which stands for High-Performance Linpack. Part of the LINPACK Benchmarks family, HPL evaluates how swiftly a computer can solve a dense n by n system of linear equations.
HPL is a software package that generates and solves a large, random, dense linear system using 64-bit floating-point numbers, yielding highly accurate solutions.
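To make that concrete, here is a minimal single-machine sketch of what HPL measures. It is an illustration, not the benchmark itself: the real HPL distributes the factorization across thousands of nodes, and the problem size and use of NumPy here are arbitrary choices for the example.

```python
import time
import numpy as np

# Toy illustration of the HPL idea: solve a dense random system A x = b
# entirely in 64-bit precision and report a FLOP rate using the standard
# HPL operation count of 2/3*n^3 + 2*n^2.
n = 4096
rng = np.random.default_rng(0)
A = rng.standard_normal((n, n))          # dense random matrix, float64
b = rng.standard_normal(n)

start = time.perf_counter()
x = np.linalg.solve(A, b)                # LU factorization plus triangular solves
elapsed = time.perf_counter() - start

flops = (2.0 / 3.0) * n**3 + 2.0 * n**2
print(f"~{flops / elapsed / 1e9:.1f} GFLOP/s for n = {n}")
print("relative residual:", np.linalg.norm(A @ x - b) / np.linalg.norm(b))
```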
In contrast, HPL-MxP asks the machine to solve the same large problem as HPL, but with a crucial difference. Instead of using 64-bit numbers throughout, systems running HPL-MxP perform most of the calculation with smaller 16-bit or 32-bit numbers, which is much faster but less precise. HPL-MxP then applies iterative refinement to bring the machine’s answer back up to full 64-bit accuracy.
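The sketch below shows that mixed-precision pattern on a single machine, assuming NumPy and SciPy are available. It uses a float32 factorization to stand in for the 16-/32-bit arithmetic of real HPL-MxP runs, and is meant only to illustrate the refinement idea, not to reproduce the benchmark.

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve

def mixed_precision_solve(A, b, refinement_steps=5):
    """Solve A x = b by factoring in float32, then refining toward float64 accuracy."""
    # Expensive O(n^3) factorization done once, in reduced precision.
    lu, piv = lu_factor(A.astype(np.float32))
    x = lu_solve((lu, piv), b.astype(np.float32)).astype(np.float64)
    for _ in range(refinement_steps):
        r = b - A @ x                                    # residual in full 64-bit precision
        dx = lu_solve((lu, piv), r.astype(np.float32))   # cheap correction from float32 factors
        x = x + dx.astype(np.float64)
    return x

n = 2048
rng = np.random.default_rng(0)
A = rng.standard_normal((n, n))
b = rng.standard_normal(n)
x = mixed_precision_solve(A, b)
print("relative residual:", np.linalg.norm(A @ x - b) / np.linalg.norm(b))
```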
The rationale behind this approach is to test AI-style workloads: AI and machine learning typically favor speed over absolute precision, and the GPUs driving the AI boom are especially fast at arithmetic on smaller, less precise numbers. As AI projects multiply and full 64-bit precision proves unnecessary in many real-world scenarios, HPL-MxP’s relevance is poised to grow.
Envisioning Aurora’s AI Utilization in the Corporate Realm
Aurora was conceptualized as an AI-centric system from its inception, and its triumph on the HPL-MxP benchmark attests to its prowess as an AI powerhouse. Indeed, with 63,744 GPUs, Aurora stands as the world’s largest GPU-powered system, as affirmed by the ALCF.
However, formidable hardware is only part of the story; Aurora’s true value emerges when it is put to work on real-world problems. Fortunately, the ALCF has ambitious plans for the system.
“Aurora’s hardware excels in addressing both conventional scientific computing challenges and AI-driven research,” articulated Rick Stevens, Argonne’s associate lab director for Computing, Environment, and Life Sciences, in an article featured by the ALCF. “As AI continues to reshape the scientific landscape, Aurora provides us with a platform to develop novel tools and methodologies that will substantially accelerate the pace of research.”
Against the backdrop of the COVID-19 pandemic, computational drug discovery has become a top priority for the HPC community. Aurora’s AI capabilities make it well suited to drug discovery, and the ALCF team is already applying the system to that end. Researchers are building AI workflows that use Aurora to sift through vast libraries of chemical compounds in search of potential medicines for some of the most devastating diseases.
The team achieved a screening rate of 11 billion drug molecules per hour on 128 Aurora nodes, then doubled the node count to 256 and hit 22 billion molecules per hour, demonstrating linear scalability. Work is ongoing, and the ALCF aims to screen 1 trillion candidates per hour once Aurora reaches full operational capacity.
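As a rough sanity check on that goal, the back-of-envelope sketch below extrapolates the reported 128-node rate to the full machine, assuming the linear scaling holds. The full-system node count used here (roughly 10,600 nodes) is an assumption for illustration and does not come from the article.

```python
# Back-of-envelope extrapolation, assuming linear scaling holds.
measured_rate = 11e9          # molecules per hour, measured on 128 nodes
measured_nodes = 128
per_node_rate = measured_rate / measured_nodes   # ~86 million molecules/hour/node

full_system_nodes = 10_600    # assumed approximate Aurora node count
projected = per_node_rate * full_system_nodes
print(f"projected full-system rate: ~{projected / 1e12:.2f} trillion molecules/hour")
```

Under those assumptions the projection lands near one trillion molecules per hour, broadly consistent with the ALCF’s stated target.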
In a similar vein of computational biology, ALCF scientists are using Aurora to develop deep learning models for mapping neurons in the brain and their myriad connections. Early iterations of the project have yielded promising results, and the team envisions reconstructing brain segments from datasets orders of magnitude larger than those used in its initial computations. The researchers’ computational methods lay the groundwork for moving from today’s mapping of cubic millimeters of brain tissue to mapping a full cubic centimeter of a mouse brain on Aurora and other supercomputing platforms in the future.
Beyond the microscopic, researchers are using Aurora to model some of the largest cosmological systems known. With Aurora, scientists can add greater detail and complexity to their cosmological models, potentially yielding fresh insights into the dynamics and structure of the universe.
An early science run using around 2,000 Aurora nodes has produced simulations and images of the large-scale structure of the universe, demonstrating excellent single-GPU performance and near-perfect scalability across the system. The exascale simulations the researchers generate are poised to play a pivotal role in validating and refining our understanding of cosmic evolution.
Even in its current state, Aurora stands as a testament to human ingenuity and technological advancement. As the HPC community eagerly anticipates the realization of Aurora’s full potential, its significance within AI applications remains unparalleled.
Conclusion:
Aurora’s remarkable performance and AI-centric design underscore its pivotal role in driving innovation within the supercomputing market. As AI continues to reshape various industries, Aurora’s capabilities position it as a catalyst for accelerating scientific research and unlocking new frontiers in computational exploration. Businesses operating in fields such as pharmaceuticals, neuroscience, and cosmology stand to benefit significantly from Aurora’s unparalleled AI power and computational prowess.