Is HPE’s “Machine” the Novel Architecture to Fit the Exascale Bill?

June 18, 2017

The exascale effort in the U.S. got a fresh injection of R&D funding, set to course through six HPC vendors to develop scalable, reliable, and efficient architectures and components for new systems in the post-2020 timeframe.

However, this investment, coming rather late in the game for machines that need to hit sustained exaflop performance in a 20-30 megawatt envelope in less than five years, raises a few questions about potential shifts in what the Department of Energy (DoE) is looking for in next-generation architectures. From changes in the exascale timeline and a new focus on “novel architectures” to solve exascale challenges, to questions about planned pre-exascale machines like Aurora, it is clear there is a shakeup. As we noted in the PathForward funding announcement today, this represents a recognition that architectures and applications are changing quickly and the DoE wants to invest in systems that will be viable for the long haul, but it also causes us to circle back to the question of how a novel approach to extreme-scale computing fits into the bigger 2021 picture.

Among the six vendors selected for PathForward funding, three are full systems companies: Cray, IBM, and HPE. For these, the emphasis will be on building a hardware and software stack whose components can scale reliably and efficiently and provide a programmable springboard for exascale application developers.

Unlike Cray and IBM, which have been building custom-engineered HPC systems for many years, HPE is unique on this list in that The Machine architecture (which we have detailed extensively) is not rooted in HPC; rather, it was designed with pooled memory across large datasets and random access patterns in mind. To be fair, HPE is the top systems supplier in HPC, at least according to…
