
Future Computing Technologies
Meeting challenges with new computational capabilities
Our computing research is enabled, in part, by the Constance computing cluster. The system is a workhorse for parallel applications such as molecular dynamics, climate, and fluid flow calculations, as well as for high-throughput workloads in high-energy physics and machine learning. (Photo: Andrea Starr | Pacific Northwest National Laboratory)
At Pacific Northwest National Laboratory (PNNL), future computing technologies encompass fundamental computer science research areas across the hardware and software stack, impacting the future of high-performance computing, edge and distributed computing, and emerging computing paradigms such as quantum, analog, and neuromorphic computing.
PNNL provides science, technologies, and leadership for creating and enabling new computational capabilities to solve challenges using extreme-scale simulation, data analytics, and machine learning. We deliver the computer science, mathematics, computer architecture, and algorithmic advances that enable integration of extreme-scale modeling and simulation with knowledge discovery and model inference from petabytes of data.
Our research covers a multitude of areas, including advanced computer architectures, system software and runtime systems, performance modeling and analysis, quantum computing, high-performance data analytics, and machine learning techniques at scale. Our integrated computational tools enable domain science researchers to analyze, model, simulate, and predict complex phenomena in areas ranging from molecular, biological, subsurface, and atmospheric sciences to complex networked systems. For example, the Scalable High-Performance Algorithms and Data-Structures library (SHAD) provides scalable, distributed data structures that support application domains including graph processing, machine learning, and data mining.
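To make that concrete, the sketch below shows the kind of graph kernel such a library targets: counting vertex out-degrees over an edge list in parallel. It is a minimal illustration using only standard C++ threads in shared memory; none of the identifiers here are SHAD API calls. SHAD's actual containers extend this owner-partitioned pattern across distributed-memory nodes.

    // Sketch: parallel out-degree counting over an edge list, the kind of
    // graph-analytics kernel that SHAD-style distributed containers
    // generalize to clusters. Standard C++ only; illustrative, not SHAD's API.
    #include <algorithm>
    #include <cstdint>
    #include <thread>
    #include <unordered_map>
    #include <utility>
    #include <vector>

    using Edge = std::pair<std::uint64_t, std::uint64_t>;
    using DegreeMap = std::unordered_map<std::uint64_t, std::uint64_t>;

    DegreeMap degree_count(const std::vector<Edge>& edges, unsigned num_threads) {
      // Each thread accumulates into a private map over its slice of edges.
      std::vector<DegreeMap> partials(num_threads);
      std::vector<std::thread> workers;
      const std::size_t chunk = (edges.size() + num_threads - 1) / num_threads;
      for (unsigned t = 0; t < num_threads; ++t) {
        workers.emplace_back([&, t] {
          const std::size_t begin = t * chunk;
          const std::size_t end = std::min(edges.size(), begin + chunk);
          for (std::size_t i = begin; i < end; ++i)
            ++partials[t][edges[i].first];  // count outgoing edges per source
        });
      }
      for (auto& w : workers) w.join();
      // Merge per-thread maps; a distributed library would instead keep the
      // map partitioned across nodes and route each update to its owner.
      DegreeMap degrees;
      for (const auto& p : partials)
        for (const auto& [vertex, count] : p) degrees[vertex] += count;
      return degrees;
    }

The merge step is the part a distributed data-structure library hides: instead of combining partial maps at the end, updates are routed to the node that owns each key, which is what lets the same kernel scale past a single machine.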
We have recognized expertise in evaluating and predicting the capabilities of both current and emerging large-scale system architectures. The Center for Advanced Technology Evaluation (CENATE) project evaluated emerging hardware technologies for use in future systems, exploring both the performance and security ramifications of novel architectural designs and features. Continuing the success of CENATE, the End-to-end Co-design for Performance, Energy Efficiency, and Security in AI-enabled Computational Science (ENCODE) project designs, deploys, and operates advanced-architecture testbeds for assessing novel hardware designs.
Our researchers also lead efforts to prepare the Department of Energy for future eras of advanced computing. We are developing software tools, such as the Global Arrays Toolkit, the Lamellar HPC Runtime, and the COMET compiler framework, which provide high-level, easy-to-use programming models with abstractions suitable across science domains. We are also innovating in data-model convergence, charting a new path toward integrating elements of high-performance computing with data analytics to enable new scientific discoveries and computational capabilities.
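For a flavor of the programming model the Global Arrays Toolkit provides, the minimal sketch below creates a distributed 2-D array, has each process fill its own patch with a one-sided put, and then reads remote data with a get. The calls shown (GA_Initialize, NGA_Create, NGA_Distribution, NGA_Put, NGA_Get, GA_Sync) follow the toolkit's published C API, which is callable from C++; the array dimensions and MA_init limits are arbitrary choices for illustration, and build flags depend on the local GA and MPI installation.

    // Sketch: one-sided access to a distributed 2-D array with the
    // Global Arrays C API. Illustrative sizes; link against GA and MPI.
    #include <mpi.h>
    #include <vector>
    #include "ga.h"
    #include "macdecls.h"

    int main(int argc, char** argv) {
      MPI_Init(&argc, &argv);
      GA_Initialize();
      MA_init(C_DBL, 1000000, 1000000);  // memory allocator used by GA

      int dims[2] = {1024, 1024};
      int g_a = NGA_Create(C_DBL, 2, dims, (char*)"A", nullptr);  // default blocking
      GA_Zero(g_a);

      // Each process writes the patch GA assigned to it (owner computes).
      int lo[2], hi[2];
      NGA_Distribution(g_a, GA_Nodeid(), lo, hi);
      if (lo[0] >= 0) {  // this process owns a nonempty patch
        int rows = hi[0] - lo[0] + 1, cols = hi[1] - lo[1] + 1;
        std::vector<double> buf(static_cast<std::size_t>(rows) * cols,
                                1.0 + GA_Nodeid());
        int ld = cols;  // leading dimension of the local buffer
        NGA_Put(g_a, lo, hi, buf.data(), &ld);
      }
      GA_Sync();  // make all one-sided puts globally visible

      // Any process can read any patch without the owner's participation.
      int rlo[2] = {0, 0}, rhi[2] = {0, 3};
      double row[4];
      int rld = 4;
      NGA_Get(g_a, rlo, rhi, row, &rld);

      GA_Destroy(g_a);
      GA_Terminate();
      MPI_Finalize();
      return 0;
    }

The one-sided put/get style is the key abstraction: a process reads or writes any section of the global array without the owning process posting a matching receive, which keeps application code close to the shared-memory algorithms scientists write on a single node.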