Memory-driven computing: Has HPE nailed it?
In what it describes as a major milestone and a world first, Hewlett Packard Enterprise has successfully demonstrated its Memory-Driven Computing architecture.
Memory-Driven Computing is a concept that puts memory, not processing, at the centre of the computing platform to realise performance and efficiency gains not possible today.
Developed as part of The Machine research program, HPE says its proof-of-concept prototype represents a “major milestone” in the company’s efforts to “transform the fundamental architecture on which all computers have been built for the past 60 years”.
Gartner predicts that by 2020, the number of connected devices will reach 20.8 billion and generate an unprecedented volume of data, which is growing at a faster rate than the ability to process, store, manage and secure it with existing computing architectures.
“We have achieved a major milestone with The Machine research project – one of the largest and most complex research projects in our company’s history,” says Antonio Neri, executive vice president and general manager of the Enterprise Group at HPE.
“With this prototype, we have demonstrated the potential of Memory-Driven Computing and also opened the door to immediate innovation,” he says.
“Our customers and the industry as a whole can expect to benefit from these advancements as we continue our pursuit of game-changing technologies,” adds Neri.
According to a statement, HPE is committed to rapidly commercialising the technologies developed under The Machine research project into new and existing products.
These technologies currently fall into four categories: Non-volatile memory, fabric (including photonics), ecosystem enablement and security.
Neri says the proof-of-concept prototype, which was brought online in October, shows the fundamental building blocks of the new architecture working together, just as they had been designed by researchers at HPE and its research arm, Hewlett Packard Labs. HPE has demonstrated:
- Compute nodes accessing a shared pool of Fabric-Attached Memory;
- An optimised Linux-based operating system (OS) running on a customised System on a Chip (SOC);
- Photonics/optical communication links, including the new X1 photonics module, online and operational; and
- New software programming tools designed to take advantage of abundant persistent memory.
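HPE has not published the programming interface behind these tools, but the core idea of the last two items can be sketched: with abundant, byte-addressable persistent memory, software reads and writes shared data with ordinary loads and stores instead of routing it through storage I/O. Below is a minimal Python sketch of that pattern, using an ordinary memory-mapped file as a stand-in for a Fabric-Attached Memory pool; the file name, size, and layout are invented for illustration.

```python
import mmap
import struct

# Hypothetical stand-in for a shared Fabric-Attached Memory pool.
POOL_FILE = "fam_pool.bin"
POOL_SIZE = 4096

# Create and size the backing file (a real FAM pool would simply exist
# as memory visible to every compute node on the fabric).
with open(POOL_FILE, "wb") as f:
    f.truncate(POOL_SIZE)

def store(offset, value):
    """One 'compute node' writes a 64-bit integer straight into the pool."""
    with open(POOL_FILE, "r+b") as f, mmap.mmap(f.fileno(), POOL_SIZE) as pool:
        struct.pack_into("<q", pool, offset, value)
        pool.flush()  # analogous to making a store durable in non-volatile memory

def load(offset):
    """Another 'compute node' reads the same location -- no copy through storage I/O."""
    with open(POOL_FILE, "r+b") as f, mmap.mmap(f.fileno(), POOL_SIZE) as pool:
        return struct.unpack_from("<q", pool, offset)[0]

store(0, 42)
result = load(0)
print(result)  # 42
```

The point of the sketch is the programming model, not the mechanism: data written by one process is immediately visible to another at a memory address, which is the behaviour Memory-Driven Computing promises at rack scale over a photonic fabric.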
According to HPE, during the design phase of the prototype, simulations predicted the speed of this architecture would improve current computing by multiple orders of magnitude.
The company says it has run the new software programming tools on existing products, demonstrating execution speeds improved by up to 8,000 times on a variety of workloads.
“HPE expects to achieve similar results as it expands the capacity of the prototype with more nodes and memory,” the company says.
In addition to bringing added capacity online, The Machine research project will increase focus on exascale computing, HPE says.
Exascale is a developing area of High Performance Computing (HPC) that aims to create computers several orders of magnitude more powerful than any system online today.
“HPE’s Memory-Driven Computing architecture is incredibly scalable, from tiny IoT devices to the exascale, making it an ideal foundation for a wide range of emerging high-performance compute and data intensive workloads, including big data analytics,” the company explains.