Deep Learning Everywhere, For Everyone

Get Started with NVIDIA GPU Cloud and Amazon EC2

NVIDIA GPU Cloud (NGC) is a GPU-accelerated cloud platform that makes it easy to get started quickly with the top deep learning frameworks on-premises or on Amazon Elastic Compute Cloud (Amazon EC2), with support for other cloud providers coming soon.

Innovate in Minutes, not Weeks

Pre-integrated, performance-optimized, and containerized deep learning frameworks.

Deep Learning Across Platforms

Works on the latest NVIDIA GPUs on the desktop, in the datacenter, and in the cloud on AWS, with other cloud providers coming soon.

Always Up to Date

Updates are delivered monthly to continually optimize libraries, drivers, and containers.

Three Reasons Why: NVIDIA GPU Cloud


    Data scientists and researchers can quickly tap into the power of NVIDIA AI with the world’s fastest GPU architecture and the optimized deep learning framework containers available on NVIDIA GPU Cloud. NVIDIA AI is helping to solve some of the most complex problems facing humankind: providing early detection and finding cures for infectious diseases, reducing traffic fatalities, finding imperfections in critical infrastructure, delivering deep business insights from large-scale data, and much more. Every industry, from automotive and healthcare to fintech, is being transformed by NVIDIA AI.

    NVIDIA GPU Cloud empowers AI researchers with performance-engineered containers featuring deep learning frameworks such as TensorFlow, PyTorch, MXNet, and more. These pre-integrated, GPU-accelerated frameworks, which include all necessary dependencies such as the CUDA runtime, NVIDIA libraries, and an operating system, are tuned, tested, and certified by NVIDIA to run on Amazon EC2 P3 instances with NVIDIA Volta, as well as on NVIDIA DGX-1™ and NVIDIA DGX Station™. This eliminates time-consuming and difficult do-it-yourself software integration, letting users tackle AI challenges that were once thought impossible.

    The catalog of optimized deep learning framework containers on NGC is available for everyone at no cost, for use on participating cloud service providers and NVIDIA DGX™ Systems. Containerized software allows for portability of deep learning jobs across environments, reducing the overhead typically required to scale AI workloads. Developers and data scientists experimenting on an NVIDIA DGX Station, enterprises with NVIDIA DGX-1 in the datacenter, or organizations using NVIDIA GPUs in the cloud now have access to a consistent, optimized set of tools. With NGC, researchers can spend less time on IT and more time experimenting, gaining insights, and driving results with deep learning.

Truly scalable deep learning


NGC offers performance and flexibility for the evolving needs of deep learning projects and gives immediate access to the power of NVIDIA AI for every industry. Now, scientists and researchers can rapidly build, train, and deploy neural network models to address the most complex AI challenges. NGC manages a catalog of fully integrated and optimized deep learning framework containers that take full advantage of NVIDIA GPUs in the cloud or on-premises.

Frequently Asked Questions

  • What do I get by signing up for NGC?

    You get access to a comprehensive catalog of fully integrated and optimized deep learning framework containers, at no cost. 

  • Who can access the containers on NGC?

    The containers are available to anyone who signs up for an account on NGC. By maintaining an account, users can download and use the latest versions of the GPU-optimized containers on supported platforms.

  • What is in the containers?

    Each container has a pre-integrated stack of software optimized for deep learning on NVIDIA GPUs, including a Linux OS, CUDA runtime, required libraries, and the chosen framework (NVCaffe, TensorFlow, etc.). These pieces are all tuned to work together immediately without additional setup work.
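
    One way to make this stack concrete is to query each layer's version from inside a container. The image tag below is a placeholder (check the NGC container registry for currently published versions), and the commands are printed rather than executed here, since actually running them requires Docker and an NVIDIA GPU:

```shell
# Placeholder image tag -- consult the NGC registry for current versions.
IMAGE="nvcr.io/nvidia/tensorflow:17.12"

# Each printed command queries one layer of the pre-integrated stack:
echo "nvidia-docker run --rm $IMAGE cat /etc/os-release"    # the Linux OS layer
echo "nvidia-docker run --rm $IMAGE nvcc --version"         # the CUDA layer (if the toolkit is included)
echo "nvidia-docker run --rm $IMAGE python -c 'import tensorflow; print(tensorflow.__version__)'"  # the framework layer
```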

  • Which frameworks are available on NGC?

    The NGC container registry hosts NVIDIA GPU-accelerated releases of the most popular frameworks: NVCaffe, Caffe2, Microsoft Cognitive Toolkit (CNTK), DIGITS, MXNet, PyTorch, TensorFlow, Theano, and Torch, plus CUDA (a base-level container for developers).

  • Where can I run the containers?

    The GPU-accelerated containers are tuned, tested, and certified by NVIDIA to run on Amazon EC2 P3 instances with NVIDIA Volta and on NVIDIA DGX Systems. Support for additional cloud service providers is coming soon.

  • Can I run containers from NGC on my PC with a Titan Xp or GeForce GTX 1080 Ti?

    Yes, the terms of use allow the NGC framework containers to be used on desktop PCs with Pascal- or Volta-based GPUs.

  • How do I run these containers on Amazon EC2?

    NVIDIA provides a free Amazon Machine Image (AMI), the NVIDIA Volta Deep Learning AMI for NGC, available from the AWS Marketplace. This AMI is an optimized environment for running the deep learning frameworks available from the NGC container registry. You simply launch an instance of the AMI, pull the desired framework container from NGC into the instance, and start running deep learning jobs right away. Note that other deep learning AMIs exist on the AWS Marketplace, but they are not tested or optimized by NVIDIA.
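
    The workflow above can be sketched as a few shell commands. The registry path and tag below are placeholders to check against your NGC account, and the commands are printed rather than executed so the sketch stands alone outside an actual EC2 instance:

```shell
# Sketch of the NGC-on-EC2 workflow, intended for an EC2 P3 instance
# launched from the NVIDIA Volta Deep Learning AMI for NGC.
REGISTRY=nvcr.io/nvidia   # NGC container registry namespace
FRAMEWORK=tensorflow      # any framework published on NGC
TAG=17.12                 # placeholder version tag

# 1. Authenticate to the registry with your NGC API key.
echo "docker login ${REGISTRY%%/*}"

# 2. Pull the optimized framework container.
echo "docker pull ${REGISTRY}/${FRAMEWORK}:${TAG}"

# 3. Start an interactive GPU-enabled container, mounting a local data directory.
echo "nvidia-docker run -it --rm -v \$HOME/data:/data ${REGISTRY}/${FRAMEWORK}:${TAG}"
```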

  • How often are the containers and frameworks updated?

    Monthly. The containers on NGC benefit from continuous R&D investment by NVIDIA and joint engineering with the framework teams to ensure each deep learning framework is tuned for the fastest training possible. NVIDIA engineers continually optimize the software, delivering monthly container updates to ensure that your deep learning investment reaps greater returns over time.

  • What kind of support does NVIDIA offer for these containers?

    All NGC users get access to the NVIDIA DevTalk Developer Forum (https://devtalk.nvidia.com/), which is supported by a large community of deep learning and GPU experts from NVIDIA's customer, partner, and employee ecosystem.

  • Why is NVIDIA providing these containers?

    NVIDIA is accelerating the democratization of AI by giving deep learning researchers and developers simplified access to GPU-accelerated deep learning frameworks, making it easy for them to run these optimized frameworks on Volta-enabled cloud providers or locally on systems with the latest NVIDIA GPUs.

  • The containers are free, so do I pay for compute time?

    There is no charge for the containers from the NGC container registry (subject to the terms of use). However, each cloud service provider sets its own pricing for accelerated computing services.

Download the NGC Deep Learning Frameworks Brief

Learn more about the optimization of the top deep learning frameworks in this brief. Get started with NGC and all the major frameworks, including TensorFlow, PyTorch, MXNet, Theano, Caffe2, Microsoft Cognitive Toolkit (CNTK), and more.

Get access to performance-engineered deep learning frameworks with NGC