AMAX Launches The World’s First Elastic Deep Learning Cloud At GTC 2017
FREMONT, CA, May 8th, 2017 - AMAX, a leading provider of Cloud/IaaS, Deep Learning, and enterprise computing infrastructure solutions, today announced the release of the MATRIX, the world’s first elastic on-premise deep learning cloud platform developed for AI, Machine Learning, and HPC. AMAX will showcase the MATRIX platform and technology at Booth #400 at the GPU Technology Conference (GTC) 2017, May 9th through May 11th.
The MATRIX combines AMAX’s award-winning deep learning platforms with first-in-industry GPU over Fabrics technology, developed to transcend physical system limitations by aggregating and sharing GPU resources across multiple nodes within a single network. The MATRIX breaks through current CUDA limitations to maximize GPU utilization, consolidate GPU compute power on demand, and spin up elastic on-premise GPU clouds for unlimited resource distribution and flexibility.
“The MATRIX unleashes GPU computing in the same way VMware revolutionized general computing years ago,” said Dr. Rene Meyer, VP of Technology, AMAX. “The MATRIX takes the high-performance capabilities of GPU computing one step further by removing restrictions on resource distribution while eliminating processing inefficiencies, with cost-saving features thrown in for extra value.”
How The MATRIX Works
With NVIDIA GPUs, applications communicate with the hardware via CUDA APIs, which call CUDA libraries to execute kernels on local GPUs. The MATRIX GPU virtualization framework replaces the CUDA APIs with MATRIX APIs, rerouting API calls over high-speed Ethernet or InfiniBand fabrics to one or more remote GPU host servers (GPU over Fabrics). To the client application, the MATRIX presents virtual (remote) GPUs as local. The framework supports up to 64 vGPUs per client, limited only by network bandwidth. Through the resource manager’s dynamic resource allocation, GPU sharing among multiple kernels and vGPU overprovisioning within VMware are also supported. MATRIX features include:
- Increases hardware resource utilization across multiple jobs and users.
- Supports major virtualization frameworks such as VMware and Docker.
- Provides unprecedented flexibility in allocating GPUs to clients and virtual machines.
- Enables dynamic, concurrent GPU access across multiple users.
- Creates virtual GPU clusters on demand from workstations and servers.
- Upgrades non-GPU clusters to virtual GPU clusters via GPU over Fabrics.
- Adds less than 5% network-traffic overhead on high-speed networks (10Gb and above).
- Reduces training and inference times to accelerate Deep Learning development.
- Reduces processing time for HPC applications (Monte Carlo simulation, gene sequencing, etc.).
- Minimizes infrastructure costs through improved resource efficiency.
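Conceptually, the call rerouting described above works like a remote procedure call: a client-side shim serializes each API call, forwards it over the network to a GPU host server, and returns the result as if the device were local. The following is a minimal, illustrative Python sketch of that pattern; the operation names, wire format, and loopback transport are invented for illustration only, whereas the actual MATRIX framework intercepts CUDA calls and runs over high-speed Ethernet or InfiniBand fabrics:

```python
import json
import socket
import threading

# Hypothetical "GPU host" operations, standing in for kernels a real
# framework would execute on remote GPUs after intercepting CUDA calls.
OPS = {
    "vector_add": lambda a, b: [x + y for x, y in zip(a, b)],
    "scale": lambda a, k: [x * k for x in a],
}

def serve(listener):
    """GPU host side: receive serialized calls, execute, reply with results."""
    conn, _ = listener.accept()
    f = conn.makefile("rw")
    while True:
        line = f.readline()
        if not line:          # client disconnected
            break
        req = json.loads(line)
        result = OPS[req["op"]](*req["args"])
        f.write(json.dumps({"result": result}) + "\n")
        f.flush()

class RemoteGPU:
    """Client-side shim: presents remote operations as local calls."""
    def __init__(self, host, port):
        self.f = socket.create_connection((host, port)).makefile("rw")

    def call(self, op, *args):
        # Serialize the call, send it over the "fabric", wait for the result.
        self.f.write(json.dumps({"op": op, "args": args}) + "\n")
        self.f.flush()
        return json.loads(self.f.readline())["result"]

# Wire client and host together over a loopback socket for the demo.
listener = socket.socket()
listener.bind(("127.0.0.1", 0))
listener.listen(1)
threading.Thread(target=serve, args=(listener,), daemon=True).start()

gpu = RemoteGPU("127.0.0.1", listener.getsockname()[1])
print(gpu.call("vector_add", [1, 2, 3], [4, 5, 6]))  # [5, 7, 9]
print(gpu.call("scale", [1, 2], 10))                 # [10, 20]
```

To the calling code, `gpu.call(...)` behaves like a local operation even though execution happens on the remote end, which is the essence of presenting remote GPUs as local vGPUs.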
The MATRIX product line includes workstations ideal for startups, incubators, and universities, allowing developers to work as individual pods yet leverage collective resources to increase compute power on demand. The MATRIX also includes high-performance servers and the Machine Learning [SMART]Rack, a data-center-ready, rack-scale Machine Learning and Analytics platform featuring 64x NVIDIA Tesla P100 cards per rack, as well as All-Flash storage, 25Gb high-speed networking, the [SMART]DC Data Center Manager, and an in-rack battery for graceful shutdown in power-loss scenarios.
All MATRIX solutions come pre-bundled with Deep Learning software tools and libraries, as well as a one-year subscription to the MATRIX GPU Virtualization software. Throughout GTC 2017, AMAX will host a Presenter Series on topics spanning GPU and Cloud Computing for AI/Machine Learning and HPC. To learn more about the AMAX MATRIX GPU virtualization solution or the Presenter Series schedule, please visit Booth #400.
About AMAX AI
AMAX AI specializes in AI/Deep Learning infrastructure design and solutions, working with today’s most cutting-edge AI startups as well as global enterprises and leading universities to build out AI infrastructure for development, training, and inference at scale. As an NVIDIA Preferred Partner, AMAX AI focuses on high-performance GPU solutions ranging from rack-scale DL-as-a-Service platforms to compute-dense servers and developer workstations. The MATRIX Deep-Learning-in-a-Box product line features containerized Deep Learning solutions bundled with everything needed to fast-track development and deployment, and its systems can be combined as MATRIX building blocks to create a highly elastic, self-service, on-premise Deep Learning cloud.