Nicholas Malaya, PhD
AMD Fellow in HPC and Technical Lead for Exascale Application Performance
AMD
Nicholas Malaya is an AMD Fellow in High Performance Computing and AMD’s technical lead for exascale application performance. Nick’s research interests include HPC, computational fluid dynamics, Bayesian inference, and ML/AI. He received his PhD from the University of Texas. Before that, he double majored in Physics and Mathematics at Georgetown University, where he received the Treado medal. In his copious spare time, he enjoys long distance running, wine, and spending time with his wife and children.
Presentation Title:
The Unreasonable Effectiveness of FP64 Precision Arithmetic
Presentation Abstract:
The double-precision datatype, also known as FP64, has been a mainstay of high-performance computing (HPC) for decades. Recent advances in AI have extensively leveraged reduced precision, such as FP16 or, more recently, the FP8 used to train DeepSeek. Many HPC teams are now exploring mixed and reduced precision to see whether significant speed-ups are possible in traditional scientific applications, including methods such as the Ozaki scheme for emulating FP64 matrix multiplications with INT8 datatypes. In this talk, we will discuss the opportunities, and the significant challenges, in migrating from double precision to reduced precision. Ultimately, AMD believes a spectrum of precisions is necessary to support the full range of computational motifs in HPC, and that native FP64 remains necessary for the foreseeable future.
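To illustrate the idea behind the Ozaki scheme mentioned above, the following is a toy Python/NumPy sketch, not the actual scheme as presented in the talk: each FP64 matrix is scaled and split into small integer "slices," the slice-pair products are computed exactly with integer matrix multiplies (here plain int64 NumPy matmuls stand in for real INT8 hardware kernels), and the results are rescaled and summed back in FP64. The function names and slice parameters are illustrative choices, not AMD's implementation.

```python
import numpy as np

def split_slices(M, num_slices, bits):
    """Split M into integer slices so that
    M ~= scale * sum_k slices[k] * 2**(-bits*(k+1))."""
    amax = np.max(np.abs(M))
    # Power-of-two scale so all entries of M/scale lie in (-1, 1).
    scale = 2.0 ** (np.floor(np.log2(amax)) + 1) if amax > 0 else 1.0
    R = M / scale
    slices = []
    for _ in range(num_slices):
        S = np.floor(R * 2.0**bits)       # peel off the top `bits` bits
        slices.append(S.astype(np.int64))  # small-magnitude integer slice
        R = R * 2.0**bits - S              # remainder in [0, 1) for next pass
    return slices, scale

def sliced_matmul(A, B, num_slices=7, bits=8):
    """Emulate an FP64 matmul via exact integer slice products."""
    As, sa = split_slices(A, num_slices, bits)
    Bs, sb = split_slices(B, num_slices, bits)
    C = np.zeros((A.shape[0], B.shape[1]))
    for i, Ai in enumerate(As):
        for j, Bj in enumerate(Bs):
            # Each slice-pair product is an exact integer matmul;
            # only the final accumulation happens in FP64.
            C += (Ai @ Bj).astype(np.float64) * 2.0 ** (-bits * (i + j + 2))
    return C * sa * sb
```

With 7 slices of 8 bits, 56 bits of each operand are captured, which exceeds FP64's 53-bit significand, so the emulated product closely tracks a native FP64 matmul; fewer slices trade accuracy for fewer integer matmuls, which is precisely the cost/accuracy tension the talk examines.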
