GeForce 40 series

*An RTX 4090 Founders Edition in its packaging*

| | |
|---|---|
| Release date | October 12, 2022 |
| Codename | AD10x |
| Architecture | Ada Lovelace |
| Models | GeForce RTX series |
| Fabrication process | TSMC 4N[1] |
| **API support** | |
| Direct3D | Direct3D 12.0 Ultimate (feature level 12_2) |
| OpenCL | OpenCL 3.0 |
| OpenGL | OpenGL 4.6 |
| Vulkan | Vulkan 1.3 |
| **History** | |
| Predecessor | GeForce 30 series |
The GeForce 40 series is a family of graphics processing units developed by Nvidia, succeeding the GeForce 30 series. The series was announced on September 20, 2022, at the GPU Technology Conference (GTC) 2022 event; the RTX 4090 was released on October 12, 2022,[1] and the RTX 4080 followed on November 16, 2022. The cards are based on the Ada Lovelace architecture and feature hardware-accelerated ray tracing (RTX) with Nvidia's third-generation RT cores and fourth-generation Tensor Cores.
Details
Architectural highlights of the Ada Lovelace architecture include the following:[2]
- CUDA Compute Capability 8.9[3]
- TSMC 4N process (custom designed for NVIDIA)[1] – not to be confused with N4
- Fourth-generation Tensor Cores with FP8, FP16, bfloat16, TensorFloat-32 (TF32) and sparsity acceleration
- Third-generation Ray Tracing Cores, along with concurrent ray tracing, shading and compute
- Shader Execution Reordering (SER), which must be enabled by the developer[4]
- NVENC with 8K 10-bit 60 FPS AV1 fixed-function hardware encoding[5][6]
- A new generation of Optical Flow Accelerator to aid DLSS 3.0 intermediate AI-based frame generation[7]
- No NVLink support[8]
Products
- Double-precision (FP64) performance of the Ada Lovelace chips is 1/64 of single-precision (FP32) performance.
- All the cards feature GDDR6X video memory.
| Model | Launch | Launch MSRP (USD) | Code name | Transistors (billion) | Die size (mm²) | Core config[a] | SM count[b] | L2 cache (MB) | Core clock, base (boost) (MHz)[c] | Memory (GT/s) | Pixel fillrate (Gpx/s)[d] | Texture fillrate (Gtex/s)[e] | Memory size (GB) | Memory bandwidth (GB/s) | Bus width (bit) | Half precision (boost) (TFLOPS) | Single precision (boost) (TFLOPS) | Double precision (boost) (TFLOPS) | Tensor compute [sparse] (TFLOPS) | TDP (W) |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| GeForce RTX 4080 (12 GB)[9][10] | Cancelled | $899 | AD104-400 | 35.8 | 294.5 | 7680 240:80:60:240 | 60 | 48 | 2310 (2610) | 21.0 | 184.8 (208.8) | 554.4 (626.4) | 12 | 504 | 192 | 35.482 (40.090) | 35.482 (40.090) | 0.554 (0.626) | 160.4 [320.8] | 285 |
| GeForce RTX 4080[11] | Nov 16, 2022 | $1,199 | AD103-300 | 45.9 | 378.6 | 9728 304:112:76:304 | 76 | 64 | 2210 (2505) | 22.4 | 247.5 (280.6) | 671.8 (761.5) | 16 | 716.8 | 256 | 42.998 (48.737) | 42.998 (48.737) | 0.672 (0.762) | 194.9 [389.8] | 320 |
| GeForce RTX 4090[10] | Oct 12, 2022 | $1,599 | AD102-300 | 76.3 | 608.5 | 16384 512:176:128:512 | 128 | 72 | 2230 (2520) | 21.0 | 392.5 (443.5) | 1141.8 (1290.2) | 24 | 1008 | 384 | 73.073 (82.575) | 73.073 (82.575) | 1.142 (1.290) | 330.3 [660.6] | 450 |
- ^ Shader Processors : Texture mapping units : Render output units : Ray tracing cores : Tensor Cores
- ^ The number of streaming multiprocessors (SMs) on the GPU.
- ^ Core boost values (if available) are stated after the base value, in parentheses.
- ^ Pixel fillrate is calculated as the lowest of three numbers: number of ROPs multiplied by the base core clock speed, number of rasterizers multiplied by the number of fragments they can generate per rasterizer multiplied by the base core clock speed, and the number of streaming multiprocessors multiplied by the number of fragments per clock that they can output multiplied by the base clock rate.
- ^ Texture fillrate is calculated as the number of texture mapping units (TMUs) multiplied by the base (or boost) core clock speed.
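The footnoted formulas can be checked against the table's RTX 4090 row. The sketch below (Python; variable names are illustrative) recomputes the fillrates and memory bandwidth from the core configuration, along with shader throughput using the standard 2-FLOPs-per-FMA convention that these TFLOPS figures follow, and the FP64 = FP32/64 ratio noted above:

```python
# Worked check of the RTX 4090 row, using the footnote formulas above.
# Inputs are taken from the table: core config 16384 / 512:176:128:512,
# clocks 2230 (2520) MHz, memory 21.0 GT/s on a 384-bit bus.

shaders, tmus, rops = 16384, 512, 176
base_mhz, boost_mhz = 2230, 2520
mem_gts, bus_bits = 21.0, 384

texture_fill = tmus * base_mhz / 1000       # Gtex/s: TMUs x base clock
pixel_fill = rops * base_mhz / 1000         # Gpx/s: ROPs x base clock (the limiting factor here)
bandwidth = mem_gts * bus_bits / 8          # GB/s: transfer rate x bytes per transfer
fp32_boost = 2 * shaders * boost_mhz / 1e6  # TFLOPS: 2 FLOPs per FMA, per shader, per clock
fp64_boost = fp32_boost / 64                # FP64 is 1/64 of FP32 on Ada Lovelace

print(f"{texture_fill:.1f} Gtex/s")  # 1141.8, matching the table
print(f"{pixel_fill:.1f} Gpx/s")     # 392.5
print(f"{bandwidth:.0f} GB/s")       # 1008
print(f"{fp32_boost:.3f} TFLOPS")    # 82.575
print(f"{fp64_boost:.3f} TFLOPS")    # 1.290
```

The same arithmetic reproduces the other two rows, e.g. the 16 GB RTX 4080's 716.8 GB/s comes from 22.4 GT/s x 256 bits / 8.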
RTX 4080 12 GB controversy
When the 12 GB RTX 4080 was announced, numerous outlets, prominent YouTubers, reviewers, and the community criticized Nvidia for calling it an RTX 4080 rather than an RTX 4070, given the naming of previous Nvidia GPU generations and the large gap in specifications and performance compared with the 16 GB card.[12][13][14][15] Unlike earlier cases of same-named products with differing memory configurations but otherwise nearly identical performance, the 12 GB RTX 4080 used a completely different chip and configuration: built on the AD104 die, it was to feature about 21% fewer CUDA cores (7,680 vs. 9,728), along with a cut-down 192-bit memory bus of the kind typically used for xx60-class cards[citation needed]. This made the card up to 30% slower than the 16 GB RTX 4080 in raw performance while being priced significantly higher than previous xx70 cards ($900 vs. $500 for the RTX 3070, approximately 80% more expensive).
On October 14, 2022, Nvidia announced that due to the confusion caused by the naming scheme, it would be "unlaunching"—i.e. pausing the launch of—the 12 GB RTX 4080, with the 16 GB RTX 4080's launch remaining unaffected. Future marketing plans for the card have not been revealed.[16][17]
See also
- GeForce 10 series
- GeForce 16 series
- GeForce 20 series
- GeForce 30 series
- Nvidia Workstation GPUs (formerly Quadro)
- Nvidia Data Center GPUs (formerly Tesla)
- List of Nvidia graphics processing units
References
- ^ a b c "NVIDIA Delivers Quantum Leap in Performance, Introduces New Era of Neural Rendering With GeForce RTX 40 Series". NVIDIA Newsroom. September 20, 2022. Retrieved October 8, 2022.
- ^ "NVIDIA Ada Lovelace Architecture". NVIDIA.
- ^ "I.7. Compute Capability 9.x". docs.nvidia.com.
- ^ Palumbo, Alessio (September 23, 2022). "NVIDIA Ada Lovelace Follow-Up Q&A - DLSS 3, SER, OMM, DMM and More". Wccftech. Retrieved September 25, 2022.
- ^ "Creativity At The Speed of Light: GeForce RTX 40 Series Graphics Cards Unleash Up To 2X Performance in 3D Rendering, AI, and Video Exports For Gamers and Creators". NVIDIA.
- ^ "Nvidia Video Codec SDK". August 23, 2013.
- ^ Chiappetta, Marco (September 22, 2022). "NVIDIA GeForce RTX 40 Architecture Overview: Ada's Special Sauce Unveiled". HotHardware. Retrieved September 25, 2022.
- ^ "Jensen Confirms: NVLink Support in Ada Lovelace is Gone". TechPowerUp. September 21, 2022.
- ^ "NVIDIA GeForce RTX 4080 Graphics Cards for Gaming". NVIDIA. Archived from the original on September 28, 2022. Retrieved October 30, 2022.
- ^ a b "NVIDIA Ada GPU Architecture" (PDF). Nvidia. Retrieved October 1, 2022.
- ^ "NVIDIA GeForce RTX 4080 Graphics Cards for Gaming". Nvidia. Retrieved October 14, 2022.
- ^ Laird, Jeremy (September 22, 2022). "We've run the numbers and Nvidia's RTX 4080 cards don't add up". PC Gamer. Retrieved September 27, 2022.
- ^ "Why the RTX 4080 12GB feels a lot like a rebranded RTX 4070". Digital Trends. September 21, 2022. Retrieved September 27, 2022.
- ^ Walton, Jarred (September 23, 2022). "Why Nvidia's RTX 4080, 4090 Cost so Damn Much". Tom's Hardware. Retrieved September 27, 2022.
- ^ Guyton, Christian (September 22, 2022). "Buyer beware: the 12GB RTX 4080 is hiding a dirty little secret". TechRadar. Retrieved September 27, 2022.
- ^ "Unlaunching The 12 GB 4080". NVIDIA. Retrieved October 14, 2022.
- ^ Warren, Tom (October 14, 2022). "Nvidia says it's 'unlaunching' the 12GB RTX 4080 after backlash". The Verge. Retrieved October 14, 2022.