AMAX GPU Solutions Based On Latest NVIDIA Tesla P100 GPU Accelerators Available To Ship
FREMONT, CA, September 8, 2016 - AMAX, a leading provider of Cloud/IaaS, GPU, HPC and Server Appliance platforms, today announced that its GPU Solutions and HPC Clusters are now available integrated with the latest NVIDIA® Tesla P100 GPU accelerator for PCIe, powered by the new NVIDIA® Pascal architecture.
AMAX's Tesla P100 for PCIe-based computing solutions are an ideal fit for data analytics and deep learning applications running at scale in data centers. The solutions pair the latest 22-core Intel® Xeon® E5-2600 v4 processor series and DDR4 2400/2133 MHz memory with the 15.3 billion transistors of each Tesla P100 GPU, enabling a single node to replace up to half a rack of commodity CPU nodes while delivering lightning-fast performance across a broad range of HPC applications. Handling the same workload with far fewer nodes can save customers up to 70% in overall data center costs, with leading performance and power efficiency for applications across many industries.
"We are very excited about the new Pascal architecture and what it means for deep learning and rendering workloads," said James Huang, Product Marketing Manager, AMAX. "This product further closes the gap between HPC and the data center, bringing significantly more compute power to large data centers within a smaller footprint."
"Deep learning is the new computing model for the modern data center, where HPC and AI intersect," said Roy Kim, Tesla Product Lead for the Accelerated Computing Group at NVIDIA. "AMAX's server platforms designed for Tesla P100 are optimized for deep learning and data analytics applications to run at scale, delivering the fastest performance and highest efficiency."
NVIDIA® Tesla P100 accelerators are the most advanced data center GPUs ever built, designed to boost throughput and save money for HPC and hyperscale data centers. The P100 features four technology breakthroughs:
- New Pascal Architecture: Delivering 4.7 TeraFLOPS of double-precision and 9.3 TeraFLOPS of single-precision performance for HPC, plus 18.7 TeraFLOPS of FP16 for deep learning, the new architecture achieves more than a 10x performance increase in neural network training compared to Maxwell-based solutions.
- 16nm FinFET: The Pascal architecture features the largest FinFET chip ever built, featuring 15.3 billion transistors built on 16 nanometer FinFET fabrication technology. This translates to both high performance and energy efficiency.
- CoWoS® with HBM2: Chip-on-Wafer-on-Substrate packaging unifies data and compute in a single package, delivering up to 3X the memory bandwidth of the prior-generation solution.
- Optimized AI Algorithms: New half-precision instructions allow the GPU to achieve more than 18 TeraFLOPS of peak performance for deep learning applications.
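The half-precision (FP16) format behind that last breakthrough stores each value in 16 bits instead of 32, halving memory traffic at the cost of precision. The trade-off can be illustrated off-GPU with a minimal Python sketch using the standard library's IEEE-754 binary16 packing (not NVIDIA's CUDA intrinsics; `to_fp16` is a hypothetical helper for illustration):

```python
import struct

def to_fp16(x):
    """Round a Python float to the nearest IEEE-754 half-precision value."""
    # Pack to 16-bit 'e' format and unpack: the round-trip applies
    # the same rounding an FP16 register would.
    return struct.unpack('<e', struct.pack('<e', x))[0]

# FP16 uses 2 bytes per value vs. 4 for FP32 -- half the memory traffic,
# which is how paired FP16 operations can double peak throughput.
print(struct.calcsize('<e'), struct.calcsize('<f'))  # 2 4

# FP16 has a 10-bit mantissa, so at magnitude 1024 the spacing between
# representable values is 1.0: small increments are rounded away.
print(to_fp16(1024.0))         # 1024.0 (exactly representable)
print(to_fp16(1024.0 + 0.25))  # 1024.0 -- the increment is lost
```

Deep learning training tolerates this reduced precision well, which is why FP16 is attractive for neural network workloads despite the rounding behavior shown above.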
As an Elite member of the NVIDIA Partner Network Program, AMAX is committed to providing cutting-edge technologies, delivering enhanced, energy-efficient performance for the HPC and parallel computing industries with platforms featuring the NVIDIA® DGX-1, Tesla K80, and Tesla K40 series. AMAX is now accepting pre-orders for the NVIDIA® DGX-1, as well as quote and consultation requests for the Tesla P100. To learn more about AMAX, please visit www.amax.ai or contact AMAX.
About AMAX AI
AMAX AI specializes in AI / Deep Learning infrastructure design and solutions. AMAX AI works with today's most cutting-edge AI startups as well as global enterprises and leading universities to build out AI infrastructures for development, training, and inference at scale. As an NVIDIA Preferred Partner, AMAX AI focuses on high-performance GPU solutions ranging from rackscale DL-as-a-Service platforms to compute-dense servers and dev workstations. The MATRIX Deep-Learning-in-a-Box product line features containerized Deep Learning solutions bundled with everything you need to fast-track your development and deployment, and can be combined as MATRIX building blocks to build a highly elastic, self-service on-premises Deep Learning cloud.