In the ever-evolving field of high-performance computing (HPC), American computer scientist and mathematician Jack Dongarra has made many pioneering contributions, from numerical algorithms and parallel computing architectures to performance benchmarking. His most recognized achievement is the TOP500 list, co-created in 1993, which has become the gold standard for evaluating supercomputers globally, while his LINPACK benchmark remains the cornerstone for measuring computational performance.
The making of a computing pioneer
Dongarra's journey began with a strong foundation in mathematics. After completing his bachelor's degree at Chicago State University and a master's at the Illinois Institute of Technology, he pursued a Ph.D. in applied mathematics at the University of New Mexico, graduating in 1980. His doctoral research focused on numerical algorithms in linear algebra — work that would later form the bedrock of his contributions to HPC. Today Dongarra holds appointments at the University of Tennessee, Oak Ridge National Laboratory, and the University of Manchester.
"My early fascination with numerical methods stemmed from their ability to solve complex, real-world problems," Dongarra said. This passion led him to Argonne National Laboratory, where he contributed to the development of EISPACK and LINPACK, two revolutionary software libraries that transformed numerical linear algebra computations. These experiences cemented his belief in the transformative power of HPC.
Architecting the foundations of modern HPC
As computing hardware advanced exponentially under Moore's Law throughout the 1980s and 1990s, Dongarra recognized a critical challenge: software development wasn't keeping pace. His response was to create fundamental tools that would bridge this gap and standardize HPC.
The Basic Linear Algebra Subprograms (BLAS), developed by Dongarra and his colleagues, provided a standardized interface for basic vector and matrix operations, enabling software portability across different computer architectures. This was followed by the Linear Algebra PACKage (LAPACK), which implemented more sophisticated algorithms for solving systems of linear equations and eigenvalue problems.
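A rough sketch of what that standardization means in practice, using the modern C bindings (CBLAS and LAPACKE) rather than the original Fortran interfaces; any conforming BLAS/LAPACK implementation can be linked in without changing this code:

    /* A minimal illustration of the standard interfaces: a BLAS
       vector operation followed by a LAPACK linear-system solve. */
    #include <stdio.h>
    #include <cblas.h>    /* C interface to the BLAS */
    #include <lapacke.h>  /* C interface to LAPACK */

    int main(void) {
        /* BLAS level 1 (daxpy): y = 2.0*x + y */
        double x[3] = {1.0, 2.0, 3.0};
        double y[3] = {4.0, 5.0, 6.0};
        cblas_daxpy(3, 2.0, x, 1, y, 1);
        printf("y = [%g, %g, %g]\n", y[0], y[1], y[2]);

        /* LAPACK (dgesv): solve the 2x2 system A*sol = rhs in place */
        double a[4]   = {4.0, 1.0,
                         1.0, 3.0};    /* A, stored row-major */
        double rhs[2] = {1.0, 2.0};    /* overwritten with the solution */
        lapack_int ipiv[2];
        lapack_int info = LAPACKE_dgesv(LAPACK_ROW_MAJOR, 2, 1,
                                        a, 2, ipiv, rhs, 1);
        printf("info = %d, solution = [%g, %g]\n",
               (int)info, rhs[0], rhs[1]);
        return 0;
    }

The point of the standard is that a vendor can supply a highly tuned library for its own hardware, while application code like the above stays the same everywhere.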
Perhaps his most impactful standardization effort was the Message Passing Interface (MPI), which became the de facto standard for parallel programming. "Before MPI, every supercomputer required its own communication protocol," Dongarra explained. "MPI allowed researchers to write portable parallel code that could run efficiently on any system."
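To see why this mattered, consider the canonical first MPI program, sketched here in C: the same handful of standard calls compile and run unchanged on a laptop or a supercomputer, with the MPI library handling the machine-specific communication underneath.

    #include <stdio.h>
    #include <mpi.h>

    int main(int argc, char **argv) {
        MPI_Init(&argc, &argv);            /* start the MPI runtime */

        int rank, size;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);  /* this process's id */
        MPI_Comm_size(MPI_COMM_WORLD, &size);  /* total process count */

        printf("Hello from rank %d of %d\n", rank, size);

        MPI_Finalize();                    /* shut the runtime down */
        return 0;
    }

Built with the standard wrapper compiler (mpicc) and launched with mpirun, the program prints one line per process, whether those processes share one machine or are spread across thousands of nodes.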
The TOP500 revolution
In 1993, Dongarra partnered with Hans Meuer and Erich Strohmaier to launch the TOP500 project, which ranks the world's most powerful supercomputers based on their LINPACK benchmark performance. What began as a modest effort to track supercomputing trends has evolved into the most authoritative global benchmark for computational power.
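In essence, the LINPACK benchmark times the solution of a dense system of linear equations and converts that time into a floating-point rate. A toy sketch of the idea in C (an illustration only, not the official HPL code used for TOP500 submissions):

    /* Time a dense n-by-n solve and report Gflop/s using the
       standard LINPACK operation count (2/3)n^3 + 2n^2. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <time.h>
    #include <lapacke.h>

    int main(void) {
        const lapack_int n = 2000;
        double *a = malloc((size_t)n * n * sizeof *a);
        double *b = malloc((size_t)n * sizeof *b);
        lapack_int *ipiv = malloc((size_t)n * sizeof *ipiv);

        srand(1);  /* fill A and b with random values */
        for (lapack_int i = 0; i < (lapack_int)n * n; i++)
            a[i] = rand() / (double)RAND_MAX;
        for (lapack_int i = 0; i < n; i++)
            b[i] = rand() / (double)RAND_MAX;

        struct timespec t0, t1;
        clock_gettime(CLOCK_MONOTONIC, &t0);
        LAPACKE_dgesv(LAPACK_ROW_MAJOR, n, 1, a, n, ipiv, b, 1);
        clock_gettime(CLOCK_MONOTONIC, &t1);

        double secs  = (t1.tv_sec - t0.tv_sec)
                     + (t1.tv_nsec - t0.tv_nsec) / 1e9;
        double flops = (2.0 / 3.0) * n * n * (double)n
                     + 2.0 * n * (double)n;
        printf("n=%d: %.3f s, %.2f Gflop/s\n",
               (int)n, secs, flops / secs / 1e9);
        free(a); free(b); free(ipiv);
        return 0;
    }

Actual TOP500 submissions run the distributed-memory HPL implementation, carefully tuned for each machine, but the reported figure rests on the same operation count and timing principle.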
"The TOP500 serves multiple purposes," Dongarra noted. "It drives healthy competition among nations and institutions, provides a historical record of technological progress and helps identify emerging trends in supercomputer architecture."
Dongarra's contributions were recognized with the 2021 ACM Turing Award, often described as the "Nobel Prize of Computing." The award was given for his pioneering contributions to numerical algorithms and libraries that enabled high-performance computational software to keep pace with exponential hardware improvements for over four decades.
"This honor isn't just about my work," Dongarra emphasized. "It's recognition of how fundamental software infrastructure enables scientific discovery."
Transforming science and industry
The impact of Dongarra's work extends far beyond computer science laboratories. In biomedicine, HPC enables genome sequencing, protein folding simulations and drug discovery. During the COVID-19 pandemic, supercomputers using Dongarra's tools helped researchers model the SARS-CoV-2 virus's structure and simulate potential drug interactions at unprecedented speed.
Climate science has similarly benefited. Modern weather prediction and climate modeling rely on HPC to process enormous datasets from satellites and sensors. "The climate models running on today's supercomputers are orders of magnitude more detailed than what we could achieve just twenty years ago," Dongarra said.
Future challenges and opportunities
Looking ahead, Dongarra identifies several critical challenges for HPC. Energy efficiency has become paramount as supercomputers grow more powerful.
He's particularly excited about neuromorphic computing, which mimics the brain's neural structure to create ultra-efficient AI systems. "The human brain operates on about 20 watts; imagine achieving similar computational density in silicon! As we push performance boundaries, we must consider the environmental impact. This means rethinking everything from chip design to data center cooling," he said.
Dongarra has watched with interest as China emerged as a supercomputing powerhouse. "China's systematic investment in HPC infrastructure over the past 15 years has been remarkable. They've not just caught up; they're leading in certain areas." This global competition, he argues, benefits the entire field by driving innovation.
As for the future of HPC, Dongarra remains characteristically optimistic. "We're just scratching the surface of what's possible. The next decade will see computing become even more powerful, efficient and integrated into every aspect of science and society."
This article was adapted from the original Chinese version written by ZHANG Xinxin of Cover News, based in Chengdu, Sichuan province.
Source: Science and Technology Daily