Deep learning, self-driving cars, and AI are all huge topics these days, with companies like Nvidia, IBM, AMD, and Intel all throwing their hats into the ring. Now Cray, which helped pioneer the very concept of a supercomputer, is also bringing its own solutions to market.
Cray announced a pair of new systems: the Cray CS-Storm 500GT and the CS-Storm 500NX. Both are designed to work with Nvidia’s Pascal-based Tesla GPUs, but they offer different feature sets and capabilities. The CS-Storm 500GT supports up to 8x 450W or 10x 400W accelerators, including Nvidia’s Tesla P40 or P100 GPU accelerators. Add-in boards like Intel’s Knights Landing and FPGAs built by Nallatech are also supported in this system, which uses PCI Express for its peripheral interconnect. The 500GT platform uses Intel’s Skylake Xeon processors.
The Cray CS-Storm 500NX, in contrast, supports up to eight P100 GPUs and taps Nvidia’s NVLink connector rather than PCI Express. Xeon Phi and Nallatech devices aren’t listed as being compatible with this system architecture. Full specs on each are listed below:
The CS-Storm 500NX uses NVLink, which is why Cray can list it as supporting up to eight P100 SXM2 GPUs without needing eight PCIe 3.0 slots (just in case that was unclear).
“Customer demand for AI-capable infrastructure is growing quickly, and the introduction of our new CS-Storm systems will give our customers a powerful solution for tackling a broad range of deep learning and machine learning workloads at scale with the power of a Cray supercomputer,” said Fred Kohout, Cray’s senior vice president of products and chief marketing officer. “The exponential growth of data sizes, coupled with the need for faster time-to-solutions in AI, dictates the need for a highly-scalable and tuned infrastructure.”
The surge in self-driving cars, AI, and deep learning technology could be a huge boon to companies like Cray, which once dominated the supercomputing industry. Cray went from an early leader in the space to a shadow of its former self after a string of acquisitions and unsuccessful products in the late 1990s and early 2000s. From 2004 onward, the company has enjoyed more success, with multiple high-profile design wins using AMD, Intel, and Nvidia hardware.
So far, Nvidia has emerged as the overall leader in HPC workload accelerators. Of the 86 systems listed as using an accelerator on the TOP500 list, 60 of them use Fermi, Kepler, or Pascal GPUs (Kepler is the clear winner, with 50 designs). The next-closest accelerator vendor is Intel, which has 21 Xeon Phi wins.
AMD has made plans to enter these markets with deep learning accelerators based on its Polaris and Vega architectures, but those chips haven’t actually launched yet. By all accounts, these are the killer growth markets for the industry as a whole, and they help explain why even some game developers like Blizzard want to get in on the AI craze. As compute resources shift toward Amazon, Microsoft, and other cloud service providers, the companies that can provide the hardware these workloads run on will be best positioned for the future. Smartphones and tablets didn’t really work out for Nvidia or Intel (making AMD’s decision to stay out of those markets look very wise in retrospect), but both are well positioned to capitalize on these new dense-server trends. AMD is obviously playing catch-up on the CPU and GPU front, but Ryzen should deliver strong server performance when Naples launches later this quarter.