NVIDIA Tesla V100

The most advanced data centre GPU ever built

Welcome to the Era of AI

Every industry wants intelligence. Within their ever-growing lakes of data lie insights that can revolutionise entire industries. From personalised cancer therapy to predicting the next big hurricane to virtual personal assistants that converse naturally, these opportunities can become a reality when data scientists are given the tools they need to realise their life’s work.

NVIDIA® Tesla® V100 is the most advanced data centre GPU ever built, engineered to accelerate AI, HPC, and graphics. Powered by NVIDIA Volta, the latest GPU architecture, Tesla V100 offers the performance of 100 CPUs in a single GPU, enabling data scientists, researchers, and engineers to tackle challenges that were once impossible.



AI Training

From recognising speech to training virtual personal assistants to converse naturally, and from detecting lanes on the road to teaching autonomous cars to drive, data scientists are taking on increasingly complex challenges with AI. Solving these kinds of problems requires training exponentially more complex deep learning models in a practical amount of time.

With 640 Tensor Cores, Tesla V100 is the world’s first GPU to break the 100 teraflops (TFLOPS) barrier of deep learning performance. The next generation of NVIDIA NVLink™ connects multiple V100 GPUs at up to 300 GB/s to create the world’s most powerful computing servers. AI models that would consume weeks of computing resources on previous systems can now be trained in a few days. With this dramatic reduction in training time, a whole new world of problems will now be solvable with AI.
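As a rough illustration of why raw throughput turns weeks into days, here is a back-of-envelope estimate. The total-FLOP workload and the utilisation figure are hypothetical placeholders, not NVIDIA measurements; only the 120 TFLOPS-class peak comes from the text above.

```python
# Back-of-envelope training-time estimate (illustrative numbers only).
# Assumes a hypothetical workload of 3e19 total FLOPs and the same
# hardware utilisation on both systems; real efficiency varies widely.

def training_days(total_flops, peak_tflops, utilisation=0.3):
    """Days to finish `total_flops` at a given peak rate and utilisation."""
    sustained = peak_tflops * 1e12 * utilisation  # FLOP/s actually achieved
    return total_flops / sustained / 86_400       # seconds -> days

workload = 3e19  # hypothetical model, total training FLOPs

prev_gen = training_days(workload, peak_tflops=10)   # ~10 TFLOPS-class GPU
v100_dl  = training_days(workload, peak_tflops=120)  # V100 deep learning peak

print(f"previous system: {prev_gen:.0f} days")  # ~116 days
print(f"Tesla V100:      {v100_dl:.0f} days")   # ~10 days
```

Whatever the absolute workload, the ratio of training times tracks the ratio of sustained throughput, which is how a multi-week job compresses into days.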


AI Inference

To connect us with the most relevant information, services, and products, hyperscale companies have started to tap into AI. However, keeping up with user demand is a daunting challenge. For example, the world’s largest hyperscale company recently estimated that it would need to double its data centre capacity if every user spent just three minutes a day using its speech recognition service.

Tesla V100 is engineered to provide maximum performance in existing hyperscale server racks. With AI at its core, one 12 kW server rack with Tesla V100 GPUs delivers the same deep learning inference performance as 40 racks of CPU servers. This giant leap in throughput and efficiency will make the scale-out of AI services practical.
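The rack-consolidation arithmetic behind a claim like "one GPU rack replaces 40 CPU racks" can be sketched as follows. The per-server inference rates and rack sizes below are hypothetical placeholders chosen to reproduce the 40:1 ratio, not NVIDIA's measurements.

```python
import math

# Rack-consolidation sketch: how many racks are needed to sustain a
# given aggregate inference demand. All rates below are hypothetical.

def racks_needed(target_rate, per_server_rate, servers_per_rack):
    """Racks required to sustain `target_rate` inferences per second."""
    servers = math.ceil(target_rate / per_server_rate)
    return math.ceil(servers / servers_per_rack)

target = 1_000_000  # hypothetical aggregate demand, inferences/second

cpu_racks = racks_needed(target, per_server_rate=500,    servers_per_rack=50)
gpu_racks = racks_needed(target, per_server_rate=20_000, servers_per_rack=50)

print(f"CPU racks: {cpu_racks}, V100 racks: {gpu_racks}")  # CPU racks: 40, V100 racks: 1
```

A 40x per-server throughput advantage translates directly into a 40x reduction in racks, which is where the power and space savings come from.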



High Performance Computing (HPC)

HPC is a fundamental pillar of modern science. From predicting weather to discovering drugs to finding new energy sources, researchers use large computing systems to simulate and predict our world. AI extends traditional HPC by allowing researchers to analyse large volumes of data for rapid insights where simulation alone cannot fully predict the real world.

Tesla V100 is engineered for the convergence of HPC and AI. It offers a platform for HPC systems to excel at both computational science for scientific simulation and data science for finding insights in data. By pairing CUDA Cores and Tensor Cores within a unified architecture, a single server with Tesla V100 GPUs can replace hundreds of commodity CPU-only servers for both traditional HPC and AI workloads. Every researcher and engineer can now afford an AI supercomputer to tackle their most challenging work.




[Performance charts: "Ultimate performance for deep learning" and "Highest versatility for all workloads"]

NVIDIA Tesla V100 Specifications

Tesla V100 for NVLink, powered by NVIDIA Volta (performance figures with NVIDIA GPU Boost™):

  Double-Precision Performance: 7.5 TeraFLOPS
  Single-Precision Performance: 15 TeraFLOPS
  Deep Learning Performance: 120 TeraFLOPS
  NVLink Interconnect Bandwidth (bi-directional): 300 GB/s
  Memory: 16 GB CoWoS Stacked HBM2
  Memory Bandwidth: 900 GB/s
  Max Power Consumption: 300 W
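As a quick sanity check on how the specification figures relate to each other, the following sketch uses only the numbers from the specifications above:

```python
# Putting the spec-sheet numbers together: at 900 GB/s, one full pass
# over the 16 GB of HBM2 takes under 20 ms, and on-package memory
# bandwidth is 3x the 300 GB/s bi-directional NVLink rate.

capacity_gb = 16     # CoWoS stacked HBM2 capacity
bandwidth_gbs = 900  # peak memory bandwidth
nvlink_gbs = 300     # bi-directional NVLink bandwidth

sweep_ms = capacity_gb / bandwidth_gbs * 1000  # ms for one full-memory pass
ratio = bandwidth_gbs / nvlink_gbs             # HBM2 vs NVLink bandwidth

print(f"full-memory sweep: {sweep_ms:.1f} ms, HBM2/NVLink ratio: {ratio:.0f}x")
```

In other words, the GPU can stream through its entire memory roughly 56 times per second, while NVLink moves data between GPUs at a third of that on-package rate.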
