Dear ComputingEdge reader:
New Ideas in Supercomputing: High-performance computing (HPC) typically involves high-precision arithmetic and can achieve speeds of several petaflops. But emerging hardware technologies, such as tensor cores and resistive memory, could soon open the floodgates to new possibilities in HPC. This issue of ComputingEdge highlights the work of computational scientists and researchers who are using these cutting-edge technologies to bring HPC to the next level.
Speed and Search
Speed and search are two concerns that have always been part of computer science. Before the ENIAC engineers had even completed their computing machine, John von Neumann was asking, “How can we make this device faster?” and Thomas J. Watson Jr. of IBM was asking, “How can we use this technology to search through company data?” These two topics are at the core of the November issue of ComputingEdge.
As we’ve developed more and more machine-learning applications, we have renewed our interest in floating-point accelerators (ancillary processors that perform calculations). Jonathan Hines, in his paper “Mixed Precision: A Strategy for New Science Opportunities,” looks at a new trend in this field: low-precision arithmetic. You can make the processor faster if you don’t have to manipulate as many bits. Apparently, machine learning can function fairly well with low-precision arithmetic. “Low-precision calculations have the potential to benefit not only deep-learning and data-science applications,” writes Hines, “but also modeling and simulation.”
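The trade-off Hines describes can be seen in a few lines of NumPy (a sketch of the general idea, not code from the article): halving the bit width quarters the memory traffic, and the price is precision.

```python
import numpy as np

# Generate a vector at full (64-bit) precision, then store it at
# half (16-bit) precision -- the storage format tensor cores favor.
rng = np.random.default_rng(0)
x = rng.random(10_000)        # float64: 8 bytes per element
x16 = x.astype(np.float16)    # float16: 2 bytes per element

# A quarter of the bytes to move per element.
print(x.nbytes // x16.nbytes)  # 4

# The cost is accuracy: float16 keeps only about 3 decimal digits,
# so even summing back at high precision shows the rounding error.
err = abs(x16.astype(np.float64).sum() - x.sum())
print(err > 0)  # True
```

Whether that error matters depends on the application, which is why machine learning tolerates low precision more readily than, say, long-running physical simulations.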
In our search for accelerators, we’re returning to another old idea—analog computing—though we are approaching it with new technologies and new ideas. In “Memristive Accelerators for Dense and Sparse Linear Algebra: From Machine Learning to High-Performance Scientific Computing,” author Engin Ipek describes a process that exploits “the analog properties of a resistive memory array” built from computing elements called memristors. It is an interesting approach because it allows the machine to address one of the big challenges of computing: the expense of moving data from one place to another.
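The appeal of a memristive crossbar is that a matrix-vector product happens in place, where the matrix is stored: each cell’s conductance multiplies the applied voltage (Ohm’s law), and the resulting currents sum on each row wire (Kirchhoff’s current law). A toy numerical sketch of that idea, with made-up conductance and voltage values:

```python
import numpy as np

# Conductances programmed into a 2x2 crossbar array.
G = np.array([[1.0, 0.5],
              [0.2, 0.8]])

# Voltages applied to the columns.
v = np.array([0.3, 0.6])

# What the row wires would read out: every multiply and add
# occurs in the analog array itself, in a single step, with no
# data shuttled between memory and a separate processor.
currents = G @ v
print(currents)  # 0.6 and 0.54
```

In a digital machine this product costs a round trip to memory for every matrix element; in the crossbar the matrix never moves, which is exactly the data-movement expense the article targets.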
Finally, on the subject of speed, Rachel Harken talks about how she and her team created a “realistic solution of a nuclear 100-body problem” with the high-performance computers at Oak Ridge National Laboratory (“Uncovering Magic Isotopes with the Power of HPC”).
Search—the second main topic of the issue—sometimes involves speed, but it also involves being clever, as the current issue shows. If you’re looking for clever solutions, you should turn to the article by Guo and Bigham (“Making Everyday Interfaces Accessible: Tactile Overlays by and for Blind People”), which describes a system that turns photographs of everyday appliance interfaces into tactile overlays for blind users. A second solution, equally clever, comes from Jacqueline Cole. In “Data-Driven Molecular Engineering of Solar-Powered Windows,” she shows how to gather data and search that data to find the right molecules to build windows (windows for buildings) that generate electricity.
There are other subjects, of course, treated in the current issue of ComputingEdge. Many have to do with the evolving nature of information technology. I’ll discuss those in another note.
—David Alan Grier for ComputingEdge