Graphics processing units, or GPUs, are used in various fields: cryptocurrency mining, data analysis, and rendering. One application of video cards is training neural networks with deep learning. Let’s figure out how exactly GPUs are used for machine learning and which video cards are suitable for training neural networks.

How are video cards used in ML?

Today, machine learning with neural networks is used widely: large amounts of data have become available, and processing them has become far more efficient. The GPU is one of the key tools that makes this possible.

In machine learning, GPUs make it possible to classify images, recognize speech, and process text.

GPUs can train neural networks on large training sets in a short time. Video cards are also capable of running trained models for inference, for example to perform classification tasks.
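As an illustration, here is a minimal sketch of what training on a GPU looks like in practice. It assumes PyTorch and a CUDA-capable card; the tiny classifier and the random stand-in data are hypothetical and only show where the GPU comes in: the model and tensors are moved to the CUDA device, and both the forward and backward passes then run there.

```python
import torch
import torch.nn as nn

# Minimal sketch (assumes PyTorch and a CUDA-capable GPU).
device = "cuda" if torch.cuda.is_available() else "cpu"

# Hypothetical toy classifier: 784 inputs (e.g. flattened 28x28 images), 10 classes.
model = nn.Sequential(nn.Linear(784, 128), nn.ReLU(), nn.Linear(128, 10)).to(device)
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
loss_fn = nn.CrossEntropyLoss()

# Random stand-in for a real training batch.
inputs = torch.randn(64, 784, device=device)
labels = torch.randint(0, 10, (64,), device=device)

for step in range(100):
    optimizer.zero_grad()
    loss = loss_fn(model(inputs), labels)  # forward pass runs on the GPU
    loss.backward()                        # backward pass runs on the GPU
    optimizer.step()

# The same model can then be used for inference (classification).
predictions = model(inputs).argmax(dim=1)
```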

Advantages of graphics processors for deep learning

GPUs deliver high throughput: thanks to their core architecture, they handle a very large number of simple, uniform tasks at the same time. This is one of the key differences between GPUs and CPUs.

Training a neural network on CPUs can take months, while GPUs can handle the same task in a few days. At the same time, GPUs consume less energy overall, because far less infrastructure is required.

In addition, GPUs take advantage of parallelism: the core computations of neural networks are inherently parallel, so GPUs are well suited for machine learning.

Video cards are also optimized for matrix operations and accelerate them significantly; these operations are exactly what a neural network performs to produce its results.
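To make that speed-up concrete, the sketch below (assuming PyTorch and a CUDA-capable card; the matrix size is arbitrary) times the same large matrix multiplication on the CPU and on the GPU.

```python
import time
import torch

# Minimal sketch: time one large matrix multiplication on CPU and on GPU.
n = 4096
a_cpu = torch.randn(n, n)
b_cpu = torch.randn(n, n)

start = time.perf_counter()
c_cpu = a_cpu @ b_cpu
cpu_time = time.perf_counter() - start

if torch.cuda.is_available():
    a_gpu = a_cpu.to("cuda")
    b_gpu = b_cpu.to("cuda")
    torch.cuda.synchronize()   # wait for the transfer to finish
    start = time.perf_counter()
    c_gpu = a_gpu @ b_gpu
    torch.cuda.synchronize()   # wait for the kernel to finish before stopping the timer
    gpu_time = time.perf_counter() - start
    print(f"CPU: {cpu_time:.3f} s, GPU: {gpu_time:.3f} s")
else:
    print(f"CPU: {cpu_time:.3f} s (no CUDA device found)")
```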


Which GPU should you choose to train a neural network?

Since different video cards are aimed at different tasks, to train neural networks effectively you need to choose a graphics processor suited to machine and deep learning.

  • GeForce GTX 1080 Ti

NVIDIA GeForce GTX 1080 Ti is an effective solution for machine learning. According to the manufacturer’s website, the GPU carries 11 GB of next-generation GDDR5X memory with a data rate of 11 Gbps.

According to the manufacturer, at the time of release the GeForce GTX 1080 Ti was the fastest video card NVIDIA had produced, delivering roughly 35% higher performance than the GeForce GTX 1080.

The card is even faster than the NVIDIA Titan X (Pascal), a model designed for machine learning and AI systems.

  • GeForce GTX Titan X

This NVIDIA graphics processor is designed specifically for fast training of deep neural networks.

It has 12 GB of memory and 3072 CUDA cores, which makes the card efficient at single-precision calculations.

  • Tesla P100

NVIDIA’s Tesla P100 graphics processor is a good fit when machine learning requires a large amount of memory: 16 GB is available here.

The processor handles both single-precision (FP32) and double-precision (FP64) computing. Common applications include financial computing and CFD simulation.
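As an illustration of those two precision modes, the sketch below (again assuming PyTorch and a CUDA device; the matrix size is arbitrary) reads the installed card’s name and memory and runs the same matrix multiplication in FP32 and FP64.

```python
import torch

# Minimal sketch: inspect the installed GPU and run the same matmul
# in single precision (FP32) and double precision (FP64).
if torch.cuda.is_available():
    props = torch.cuda.get_device_properties(0)
    print(f"{props.name}: {props.total_memory / 1024**3:.1f} GB of memory")

    x32 = torch.randn(2048, 2048, device="cuda", dtype=torch.float32)
    x64 = x32.double()      # same data in double precision

    y32 = x32 @ x32         # FP32 matrix multiplication
    y64 = x64 @ x64         # FP64 matrix multiplication, uses the card's FP64 units
    print(y32.dtype, y64.dtype)
```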

Video cards significantly speed up deep learning for neural networks: their computing power makes it possible to process large volumes of data effectively while reducing energy consumption.


Machine learning technologies will be discussed at AI Conference in Moscow.

Details and registration