List of NVIDIA Desktop Graphics Card Models for Building Deep Learning AI System

Last update: 16 November 2023

If you are doing deep learning AI research and/or development with GPUs, chances are you will be using an NVIDIA graphics card to perform the deep learning tasks. One practical aspect of GPU computing is that the graphics card occupies a PCI / PCIe slot. From a frugality point of view, it may therefore seem like a brilliant idea to scavenge unused graphics cards from the fading PC world and line them up on another unused desktop motherboard to create a somewhat powerful compute node for AI tasks. Maybe not.

With the increasing popularity of container-based deployment, a system architect may consider creating several containers, each running a different AI task. This means that the underlying GPU resources should be shared among the containers. NVIDIA provides a utility called NVIDIA Docker or nvidia-docker2 that enables containerization on a GPU-accelerated machine. As the name suggests, the utility targets Docker containers.
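To get a feel for how this works in practice, here is a minimal sketch in Python (assuming Docker and the nvidia-docker2 runtime are already installed, and that the CUDA base image tag used below is available locally or pullable) that checks whether a container launched with the NVIDIA runtime can actually see the GPU:

```python
# Minimal sketch: verify that a container started through the NVIDIA runtime
# (registered by nvidia-docker2) can see the GPU by running nvidia-smi inside it.
# The image tag below is an assumption; substitute any CUDA base image you have.
import subprocess


def gpu_container_smoke_test(image: str = "nvidia/cuda:11.8.0-base-ubuntu22.04") -> bool:
    """Run nvidia-smi inside a throwaway container and report success."""
    result = subprocess.run(
        ["docker", "run", "--rm", "--runtime=nvidia", image, "nvidia-smi"],
        capture_output=True,
        text=True,
    )
    # Print whatever the container produced so the GPU listing (or error) is visible.
    print(result.stdout or result.stderr)
    return result.returncode == 0


if __name__ == "__main__":
    print("GPU visible inside container:", gpu_container_smoke_test())
```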

This utility, however, is not immediately usable with every NVIDIA graphics card model. Only graphics cards with GPU architectures newer than Fermi can benefit from this feature. In 2023, this means that NVIDIA GPUs with the following architectures support containerization of GPU resources:

  • Kepler architecture
  • Maxwell architecture
  • Pascal architecture
  • Volta architecture
  • Turing architecture
  • Ampere architecture
  • Ada Lovelace architecture

CUDA-enabled vs Deep-Learning Ready GPUs

It is important to note that not all CUDA-enabled GPUs can perform deep learning tasks. NVIDIA uses the term CUDA compute capability to refer to the general specifications and available features of a CUDA-enabled GPU. GPUs built on the Fermi architecture have a maximum compute capability of 2.1, while the Kepler architecture starts at a minimum compute capability of 3.0.

Deep learning frameworks that rely on CUDA for GPU computing work by invoking CUDA-specific, GPU-accelerated deep learning routines to speed up the computation. These routines are provided by the cuDNN library, which is compatible with CUDA-enabled GPUs of compute capability 3.0 or higher. Suffice it to say that only GPUs with the Kepler architecture or newer are capable of accomplishing deep learning tasks, or in other words, are deep-learning ready.
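As a quick sanity check, the snippet below (a minimal sketch, assuming PyTorch with CUDA support is installed; other frameworks expose similar queries) reads the compute capability of the installed GPU and compares it with the 3.0 threshold mentioned above:

```python
# Minimal sketch: check whether the installed GPU meets the minimum CUDA
# compute capability (3.0, i.e. Kepler or newer) required by cuDNN-backed
# deep learning frameworks. Assumes PyTorch with CUDA support is installed.
import torch

MIN_CAPABILITY = (3, 0)  # Kepler or newer


def is_deep_learning_ready(device_index: int = 0) -> bool:
    if not torch.cuda.is_available():
        print("No CUDA-enabled GPU detected.")
        return False
    major, minor = torch.cuda.get_device_capability(device_index)
    name = torch.cuda.get_device_name(device_index)
    print(f"{name}: compute capability {major}.{minor}")
    return (major, minor) >= MIN_CAPABILITY


if __name__ == "__main__":
    print("Deep-learning ready:", is_deep_learning_ready())
```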

Desktop GPUs vs Datacenter GPUs

When purchasing a GPU, especially an NVIDIA GPU, you need to be clear about its intended use. Each GPU model has a specific environment in which it is supposed to run. Desktop GPUs shall only be installed in PCs. Installing NVIDIA desktop GPUs (i.e. GeForce GPUs) in rack servers in a data center is not permitted and may render the GPU warranty void.

Datacenter GPUs come with higher specs and hence a higher price point compared to desktop GPUs. It is also natural that datacenter GPUs deliver higher performance and are less bound by feature constraints. As an example, CUDA forward compatibility and virtual GPUs are only supported on datacenter GPUs.

If you are exploring deep learning and plan to experiment with popular algorithms such as CNNs, RNNs, or LSTMs, desktop GPUs should suffice. Alternatively, if you are working on state-of-the-art computer vision models, transformers like BERT or GPT, or style transfer, a datacenter GPU might be a better option, especially from a performance point of view.

Desktop GPUs for Deep Learning

The table below summarizes the NVIDIA desktop GPU models that are a better fit for building a deep learning AI system, especially when containerization of AI tasks is required. The list is sourced from the Wikipedia entry on NVIDIA GPUs and from NVIDIA product pages and datasheets.

Series | Model Name | Architecture | Chip Name | Bus Interface | Launch Date
------ | ---------- | ------------ | --------- | ------------- | -----------
GeForce 600 | GeForce GT 630 | Kepler | GK107 | PCIe 2.0 x16 | April 24th, 2012
GeForce 600 | GeForce GT 630 | Kepler | GK208-301-A1 | PCIe 2.0 x8 | May 29th, 2013
GeForce 600 | GeForce GT 635 | Kepler | GK208 | PCIe 3.0 x8 | February 19th, 2013
GeForce 600 | GeForce GT 640 | Kepler | GK107-301-A2 | PCIe 3.0 x16 | April 24th, 2012
GeForce 600 | GeForce GT 640 | Kepler | GK107 | PCIe 3.0 x16 | June 5th, 2012
GeForce 600 | GeForce GT 640 | Kepler | GK208-400-A1 | PCIe 2.0 x8 | May 29th, 2013
GeForce 600 | GeForce GTX 645 | Kepler | GK106 | PCIe 3.0 x16 | April 22nd, 2013
GeForce 600 | GeForce GTX 650 | Kepler | GK107-450-A2 | PCIe 3.0 x16 | September 13th, 2012
GeForce 600 | GeForce GTX 650 Ti | Kepler | GK106-220-A1 | PCIe 3.0 x16 | October 9th, 2012
GeForce 600 | GeForce GTX 650 Ti Boost | Kepler | GK106-240-A1 | PCIe 3.0 x16 | March 26th, 2013
GeForce 600 | GeForce GTX 660 | Kepler | GK106-400-A1 | PCIe 3.0 x16 | September 13th, 2012
GeForce 600 | GeForce GTX 660 | Kepler | GK104-200-KD-A2 | PCIe 3.0 x16 | August 22nd, 2012
GeForce 600 | GeForce GTX 660 Ti | Kepler | GK104-300-KD-A2 | PCIe 3.0 x16 | August 16th, 2012
GeForce 600 | GeForce GTX 670 | Kepler | GK104-325-A2 | PCIe 3.0 x16 | May 10th, 2012
GeForce 600 | GeForce GTX 680 | Kepler | GK104-400-A2 | PCIe 3.0 x16 | March 22nd, 2013
GeForce 600 | GeForce GTX 690 | Kepler | 2× GK104-355-A2 | PCIe 3.0 x16 | April 29th, 2012
GeForce 700 | GeForce GT 710 | Kepler | GK208-301-A1 | PCIe 2.0 x8 | March 27th, 2014
GeForce 700 | GeForce GT 710 | Kepler | GK208-203-B1 | PCIe 2.0 x8 | January 26th, 2016
GeForce 700 | GeForce GT 720 | Kepler | GK208-201-B1 | PCIe 2.0 x8 | March 27th, 2014
GeForce 700 | GeForce GT 730 | Kepler | GK208-301-A1 | PCIe 2.0 x8 | June 18th, 2014
GeForce 700 | GeForce GT 730 | Kepler | GK208-400-A1 | PCIe 2.0 x8 | June 18th, 2014
GeForce 700 | GeForce GT 740 | Kepler | GK107-425-A2 | PCIe 3.0 x16 | May 29th, 2014
GeForce 700 | GeForce GTX 745 | Maxwell | GM107-300-A2 | PCIe 3.0 x16 | February 18th, 2014
GeForce 700 | GeForce GTX 750 | Maxwell | GM107-300-A2 | PCIe 3.0 x16 | February 18th, 2014
GeForce 700 | GeForce GTX 750 Ti | Maxwell | GM107-400-A2 | PCIe 3.0 x16 | February 18th, 2014
GeForce 700 | GeForce GTX 760 192-bit | Kepler | GK104-200-KD-A2 | PCIe 3.0 x16 | October 17th, 2013
GeForce 700 | GeForce GTX 760 | Kepler | GK104-225-A2 | PCIe 3.0 x16 | June 25th, 2013
GeForce 700 | GeForce GTX 760 Ti | Kepler | GK104 | PCIe 3.0 x16 | September 27th, 2013
GeForce 700 | GeForce GTX 770 | Kepler | GK104-425-A2 | PCIe 3.0 x16 | May 30th, 2013
GeForce 700 | GeForce GTX 780 | Kepler | GK110-300-A1 | PCIe 3.0 x16 | May 23rd, 2013
GeForce 700 | GeForce GTX 780 Ti | Kepler | GK110-425-B1 | PCIe 3.0 x16 | November 7th, 2013
GeForce 700 | GeForce GTX TITAN | Kepler | GK110-400-A1 | PCIe 3.0 x16 | February 21st, 2013
GeForce 700 | GeForce GTX TITAN Black | Kepler | GK110-430-B1 | PCIe 3.0 x16 | February 18th, 2014
GeForce 700 | GeForce GTX TITAN Z | Kepler | 2× GK110 | PCIe 3.0 x16 | March 25th, 2014
GeForce 900 | GeForce GT 945A | Maxwell | GM108 | PCIe 3.0 x8 | March 13th, 2015
GeForce 900 | GeForce GTX 950 | Maxwell | GM206-250 | PCIe 3.0 x16 | August 20th, 2015
GeForce 900 | GeForce GTX 960 | Maxwell | GM206-300 | PCIe 3.0 x16 | January 22nd, 2015
GeForce 900 | GeForce GTX 970 | Maxwell | GM204-200 | PCIe 3.0 x16 | September 18th, 2014
GeForce 900 | GeForce GTX 980 | Maxwell | GM204-400 | PCIe 3.0 x16 | September 18th, 2014
GeForce 900 | GeForce GTX 980 Ti | Maxwell | GM200-310 | PCIe 3.0 x16 | June 1st, 2015
GeForce 900 | GeForce GTX TITAN X | Maxwell | GM200-400 | PCIe 3.0 x16 | March 17th, 2015
GeForce 10 | GeForce GT 1030 | Pascal | GP108-300 | PCIe 3.0 x4 | May 17th, 2017
GeForce 10 | GeForce GTX 1050 | Pascal | GP107-300 | PCIe 3.0 x16 | October 25th, 2016
GeForce 10 | GeForce GTX 1050 Ti | Pascal | GP107-400 | PCIe 3.0 x16 | October 25th, 2016
GeForce 10 | GeForce GTX 1060 | Pascal | GP106-300 | PCIe 3.0 x16 | August 18th, 2016
GeForce 10 | GeForce GTX 1070 | Pascal | GP104-200 | PCIe 3.0 x16 | June 10th, 2016
GeForce 10 | GeForce GTX 1070 Ti | Pascal | GP104-300 | PCIe 3.0 x16 | November 2nd, 2017
GeForce 10 | GeForce GTX 1080 | Pascal | GP104-400 | PCIe 3.0 x16 | May 27th, 2016
GeForce 10 | GeForce GTX 1080 Ti | Pascal | GP102-350 | PCIe 3.0 x16 | March 5th, 2017
GeForce 10 | Nvidia TITAN X | Pascal | GP102-400 | PCIe 3.0 x16 | August 2nd, 2016
GeForce 10 | Nvidia TITAN Xp | Pascal | GP102-450 | PCIe 3.0 x16 | April 6th, 2017
Volta | Nvidia TITAN V | Volta | GV100-400-A1 | PCIe 3.0 x16 | December 7th, 2017
GeForce 16 | GeForce GTX 1660 Ti | Turing | TU102-300A-K1-A1 | PCIe 3.0 x16 | February 22nd, 2019
GeForce 16 | GeForce GTX 1660 Super | Turing | TU104-450-A1 | PCIe 3.0 x16 | October 29th, 2019
GeForce 16 | GeForce GTX 1660 | Turing | TU104-400A-A1 | PCIe 3.0 x16 | March 14th, 2019
GeForce 16 | GeForce GTX 1650 Super | Turing | TU104-410-A1 | PCIe 3.0 x16 | November 22nd, 2019
GeForce 16 | GeForce GTX 1650 | Turing | TU106-400A-A1 | PCIe 3.0 x16 | April 23rd, 2019
GeForce 20 | GeForce RTX 2080 Ti | Turing | TU102-300A-K1-A1 | PCIe 3.0 x16 | September 20th, 2018
GeForce 20 | GeForce RTX 2080 Super | Turing | TU104-450-A1 | PCIe 3.0 x16 | July 23rd, 2019
GeForce 20 | GeForce RTX 2080 | Turing | TU104-400A-A1 | PCIe 3.0 x16 | September 20th, 2018
GeForce 20 | GeForce RTX 2070 Super | Turing | TU104-410-A1 | PCIe 3.0 x16 | July 9th, 2019
GeForce 20 | GeForce RTX 2070 | Turing | TU106-400A-A1 | PCIe 3.0 x16 | October 17th, 2018
GeForce 20 | GeForce RTX 2060 Super | Turing | TU106-410-A1 | PCIe 3.0 x16 | July 9th, 2019
GeForce 20 | GeForce RTX 2060 | Turing | TU106-200A-KA-A1 | PCIe 3.0 x16 | January 7th, 2019
GeForce 30 | GeForce RTX 3090 Ti | Ampere | GA102-350-A1 | PCIe 4.0 x16 | March 29th, 2022
GeForce 30 | GeForce RTX 3090 | Ampere | GA102-300-A1 | PCIe 4.0 x16 | September 1st, 2020
GeForce 30 | GeForce RTX 3080 Ti | Ampere | GA102-225-A1 | PCIe 4.0 x16 | May 31st, 2021
GeForce 30 | GeForce RTX 3080 | Ampere | GA102-200-KD-A1 | PCIe 4.0 x16 | September 1st, 2020
GeForce 30 | GeForce RTX 3070 Ti | Ampere | GA104-400-A1 | PCIe 4.0 x16 | May 31st, 2021
GeForce 30 | GeForce RTX 3070 | Ampere | GA104-300-A1 | PCIe 4.0 x16 | September 1st, 2020
GeForce 30 | GeForce RTX 3060 Ti | Ampere | GA104-200-A1 | PCIe 4.0 x16 | December 1st, 2020
GeForce 40 | GeForce RTX 4090 | Ada Lovelace | AD102-300 | PCIe 4.0 x16 | October 12th, 2022
GeForce 40 | GeForce RTX 4080 | Ada Lovelace | AD103-300 | PCIe 4.0 x16 | November 16th, 2022
GeForce 40 | GeForce RTX 4070 Ti | Ada Lovelace | AD104-400 | PCIe 4.0 x16 | January 5th, 2023
GeForce 40 | GeForce RTX 4070 | Ada Lovelace | AD104-250 | PCIe 4.0 x16 | April 13th, 2023
GeForce 40 | GeForce RTX 4060 Ti | Ada Lovelace | AD106-350 | PCIe 4.0 x8 | May 24th, 2023
GeForce 40 | GeForce RTX 4060 Ti | Ada Lovelace | AD106-351 | PCIe 4.0 x8 | July 18th, 2023
GeForce 40 | GeForce RTX 4060 | Ada Lovelace | AD107-400 | PCIe 4.0 x8 | June 29th, 2023

What about non-desktop GPUs that are also not datacenter GPUs, such as those in workstations and laptops? Can we also use them for deep learning? NVIDIA provides a list of all products with CUDA support on this page: https://developer.nvidia.com/cuda-gpus.

GPU and CUDA Version

Each version of the CUDA toolkit has a minimum compute capability that it supports. If you are using an older GPU, you may be unable to run the latest version of CUDA. After obtaining the compute capability of your chosen NVIDIA GPU, you can check it against CUDA compatibility in this article.
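If you prefer to query this programmatically, the following sketch (again assuming PyTorch with CUDA support is installed) prints the CUDA version your framework build was compiled against together with the compute capability of each visible GPU, which you can then check against the compatibility tables:

```python
# Minimal sketch: report the CUDA toolkit version this framework build was
# compiled against and the compute capability of each visible GPU, so the pair
# can be checked against NVIDIA's CUDA compatibility tables.
import torch


def report_cuda_compatibility() -> None:
    print("CUDA version used by this PyTorch build:", torch.version.cuda)
    if not torch.cuda.is_available():
        print("No CUDA-enabled GPU detected.")
        return
    for idx in range(torch.cuda.device_count()):
        major, minor = torch.cuda.get_device_capability(idx)
        print(f"GPU {idx}: {torch.cuda.get_device_name(idx)} "
              f"(compute capability {major}.{minor})")


if __name__ == "__main__":
    report_cuda_compatibility()
```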
