Our GPU benchmarks hierarchy ranks all the current and previous generation graphics cards by performance, including all of the best graphics cards. Whether you're playing games or doing high-end creative work like 4K video editing, your graphics card typically plays the biggest role in determining performance; even the best CPUs for gaming take a secondary role.
We're still retesting all of the ray tracing capable GPUs on a slightly revamped test suite, using a Core i9-13900K in place of the current 12900K. (We had to restart testing, unfortunately, due to game updates.) For now, we have the same test suite we used in 2022. No new GPUs have been added since last month, but we've retested some of the cards that had odd results, and we should see a few new cards launch this month if the rumors are correct.
Our full GPU hierarchy using traditional rendering (aka rasterization) comes first, and below that we have our ray tracing GPU benchmarks hierarchy. Those of course require a ray tracing capable GPU, so only AMD's RX 7000/6000-series, Intel's Arc, and Nvidia's RTX cards are present. Note that all of the results are without DLSS, FSR, or XeSS enabled on the various cards.
Nvidia's Ada Lovelace architecture powers its latest generation RTX 40-series, with new features like DLSS 3 Frame Generation. AMD's RDNA 3 architecture powers the RX 7000-series, with only two desktop cards presently released. Meanwhile, Intel's Arc Alchemist architecture brings a third player into the dedicated GPU party, though it's more of a competitor for the previous generation midrange offerings.
On page two, you'll find our 2020–2021 benchmark suite, which has all of the previous generation GPUs running our older test suite on a Core i9-9900K testbed. We also have the legacy GPU hierarchy (without benchmarks, sorted by theoretical performance) for reference purposes.
The following tables sort everything solely by our performance-based GPU gaming benchmarks, at 1080p "ultra" for the main suite and at 1080p "medium" for the DXR suite. Price, graphics card power consumption, overall efficiency, and features aren't taken into account in the rankings here. The current 2022/2023 results use an Alder Lake Core i9-12900K testbed. Now let's hit the benchmarks and tables.
GPU Benchmarks Ranking 2023
For our latest benchmarks, we test (nearly) all GPUs at 1080p medium and 1080p ultra, and sort the table by the 1080p ultra results. Where it makes sense, we also test at 1440p ultra and 4K ultra. All of the scores are scaled relative to the fastest card in each column, which in our new suite is the RTX 4090 across the board.
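To make the scaling concrete, here's a quick Python sketch of how the percentages are derived (the fps values come from the 1080p ultra column of the table below; the function name is our own):

```python
def scale_to_leader(fps_by_card):
    """Scale each card's fps to a percentage of the fastest card in the column."""
    top = max(fps_by_card.values())
    return {card: round(100 * fps / top, 1) for card, fps in fps_by_card.items()}

# 1080p ultra fps for the top two cards in the table below:
results = {"GeForce RTX 4090": 149.7, "Radeon RX 7900 XTX": 148.3}
print(scale_to_leader(results))
# {'GeForce RTX 4090': 100.0, 'Radeon RX 7900 XTX': 99.1}
```

The same per-column scaling is repeated for 1080p medium, 1440p ultra, and 4K ultra, which is why the leader shows 100.0% in every column.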
You can also see the above summary chart showing the relative performance of the cards we've tested across the past several generations of hardware at 1080p ultra — swipe through the above gallery if you want to see the 1080p medium, 1440p and 4K ultra images. There are a few missing options (e.g., the GT 1030, RX 550, and several Titan cards), but otherwise it's basically complete. We do have data in the table below for some of the other (older) GPUs.
The eight games we're using for our standard GPU benchmarks hierarchy are Borderlands 3 (DX12), Far Cry 6 (DX12), Flight Simulator (DX11/DX12), Forza Horizon 5 (DX12), Horizon Zero Dawn (DX12), Red Dead Redemption 2 (Vulkan), Total War Warhammer 3 (DX11), and Watch Dogs Legion (DX12). The fps score is the geometric mean (equal weighting) of the eight games.
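The composite score is a plain equal-weighted geometric mean, which keeps one unusually high-fps game from dominating the average. A minimal sketch (the per-game fps values here are made up purely for illustration):

```python
import math

def suite_score(per_game_fps):
    """Equal-weighted geometric mean of per-game fps results."""
    return math.prod(per_game_fps) ** (1 / len(per_game_fps))

# Hypothetical per-game averages for one card across an eight-game suite:
fps = [142.0, 155.3, 98.7, 170.2, 128.4, 133.9, 146.5, 151.8]
print(round(suite_score(fps), 1))
```

Unlike an arithmetic mean, doubling one game's fps raises the geometric mean by the same factor no matter which game it is, so every title gets equal weight.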
Graphics Card | 1080p Ultra | 1080p Medium | 1440p Ultra | 4K Ultra | Specifications (Links to Review) |
---|---|---|---|---|---|
GeForce RTX 4090 | 100.0% (149.7fps) | 100.0% (188.8fps) | 100.0% (143.7fps) | 100.0% (116.0fps) | AD102, 16384 shaders, 2520MHz, 24GB GDDR6X@21Gbps, 1008GB/s, 450W |
Radeon RX 7900 XTX | 99.1% (148.3fps) | 99.5% (187.9fps) | 94.5% (135.9fps) | 81.6% (94.6fps) | Navi 31, 12288 shaders, 2500MHz, 24GB GDDR6@20Gbps, 960GB/s, 355W |
Radeon RX 7900 XT | 95.2% (142.5fps) | 96.8% (182.9fps) | 87.8% (126.2fps) | 69.6% (80.7fps) | Navi 31, 10752 shaders, 2400MHz, 20GB GDDR6@20Gbps, 800GB/s, 315W |
GeForce RTX 4080 | 94.4% (141.3fps) | 97.2% (183.5fps) | 90.5% (130.1fps) | 77.5% (89.9fps) | AD103, 9728 shaders, 2505MHz, 16GB GDDR6X@22.4Gbps, 717GB/s, 320W |
Radeon RX 6950 XT | 91.5% (137.0fps) | 98.9% (186.9fps) | 80.1% (115.2fps) | 59.1% (68.5fps) | Navi 21, 5120 shaders, 2310MHz, 16GB GDDR6@18Gbps, 576GB/s, 335W |
Radeon RX 6900 XT | 88.8% (132.9fps) | 96.5% (182.2fps) | 76.2% (109.5fps) | 55.7% (64.6fps) | Navi 21, 5120 shaders, 2250MHz, 16GB GDDR6@16Gbps, 512GB/s, 300W |
GeForce RTX 3090 Ti | 88.7% (132.8fps) | 94.2% (177.9fps) | 80.9% (116.3fps) | 67.5% (78.3fps) | GA102, 10752 shaders, 1860MHz, 24GB GDDR6X@21Gbps, 1008GB/s, 450W |
GeForce RTX 4070 Ti | 87.7% (131.4fps) | 90.9% (171.7fps) | 79.9% (114.8fps) | 62.7% (72.8fps) | AD104, 7680 shaders, 2610MHz, 12GB GDDR6X@21Gbps, 504GB/s, 285W |
GeForce RTX 3090 | 85.8% (128.5fps) | 92.8% (175.2fps) | 76.0% (109.2fps) | 62.1% (72.1fps) | GA102, 10496 shaders, 1695MHz, 24GB GDDR6X@19.5Gbps, 936GB/s, 350W |
Radeon RX 6800 XT | 85.6% (128.1fps) | 95.3% (179.9fps) | 72.3% (104.0fps) | 52.6% (61.0fps) | Navi 21, 4608 shaders, 2250MHz, 16GB GDDR6@16Gbps, 512GB/s, 300W |
GeForce RTX 3080 Ti | 84.9% (127.0fps) | 92.3% (174.2fps) | 74.5% (107.1fps) | 60.4% (70.1fps) | GA102, 10240 shaders, 1665MHz, 12GB GDDR6X@19Gbps, 912GB/s, 350W |
GeForce RTX 3080 12GB | 84.4% (126.4fps) | 92.2% (174.1fps) | 73.9% (106.2fps) | 59.3% (68.8fps) | GA102, 8960 shaders, 1845MHz, 12GB GDDR6X@19Gbps, 912GB/s, 400W |
GeForce RTX 3080 | 80.9% (121.1fps) | 91.1% (172.0fps) | 69.0% (99.1fps) | 54.6% (63.3fps) | GA102, 8704 shaders, 1710MHz, 10GB GDDR6X@19Gbps, 760GB/s, 320W |
Radeon RX 6800 | 78.9% (118.0fps) | 93.3% (176.1fps) | 64.4% (92.6fps) | 45.6% (52.9fps) | Navi 21, 3840 shaders, 2105MHz, 16GB GDDR6@16Gbps, 512GB/s, 250W |
Radeon RX 6750 XT | 71.1% (106.5fps) | 91.2% (172.3fps) | 55.0% (79.0fps) | 37.1% (43.1fps) | Navi 22, 2560 shaders, 2600MHz, 12GB GDDR6@18Gbps, 432GB/s, 250W |
GeForce RTX 3070 Ti | 70.0% (104.8fps) | 85.1% (160.7fps) | 57.9% (83.2fps) | 40.1% (46.5fps) | GA104, 6144 shaders, 1770MHz, 8GB GDDR6X@19Gbps, 608GB/s, 290W |
Radeon RX 6700 XT | 68.9% (103.1fps) | 89.3% (168.7fps) | 52.0% (74.7fps) | 35.0% (40.6fps) | Navi 22, 2560 shaders, 2581MHz, 12GB GDDR6@16Gbps, 384GB/s, 230W |
Titan RTX | 66.9% (100.1fps) | 82.4% (155.6fps) | 55.2% (79.3fps) | 41.2% (47.7fps) | TU102, 4608 shaders, 1770MHz, 24GB GDDR6@14Gbps, 672GB/s, 280W |
GeForce RTX 3070 | 66.5% (99.6fps) | 82.7% (156.1fps) | 53.9% (77.5fps) | 37.3% (43.3fps) | GA104, 5888 shaders, 1725MHz, 8GB GDDR6@14Gbps, 448GB/s, 220W |
GeForce RTX 2080 Ti | 64.1% (96.0fps) | 80.3% (151.6fps) | 52.4% (75.3fps) | 38.5% (44.6fps) | TU102, 4352 shaders, 1545MHz, 11GB GDDR6@14Gbps, 616GB/s, 250W |
GeForce RTX 3060 Ti | 60.9% (91.2fps) | 78.6% (148.5fps) | 48.7% (69.9fps) | - | GA104, 4864 shaders, 1665MHz, 8GB GDDR6@14Gbps, 448GB/s, 200W |
Radeon RX 6700 10GB | 59.7% (89.4fps) | 81.2% (153.3fps) | 44.7% (64.3fps) | 28.8% (33.4fps) | Navi 22, 2304 shaders, 2450MHz, 10GB GDDR6@16Gbps, 320GB/s, 175W |
GeForce RTX 2080 Super | 56.8% (85.0fps) | 73.1% (138.0fps) | 45.2% (65.0fps) | 31.5% (36.5fps) | TU104, 3072 shaders, 1815MHz, 8GB GDDR6@15.5Gbps, 496GB/s, 250W |
GeForce RTX 2080 | 54.9% (82.2fps) | 70.5% (133.1fps) | 43.4% (62.4fps) | - | TU104, 2944 shaders, 1710MHz, 8GB GDDR6@14Gbps, 448GB/s, 215W |
Radeon RX 6650 XT | 54.4% (81.4fps) | 75.3% (142.3fps) | 39.7% (57.0fps) | - | Navi 23, 2048 shaders, 2635MHz, 8GB GDDR6@18Gbps, 280GB/s, 180W |
Radeon RX 6600 XT | 53.2% (79.6fps) | 74.3% (140.4fps) | 38.8% (55.8fps) | - | Navi 23, 2048 shaders, 2589MHz, 8GB GDDR6@16Gbps, 256GB/s, 160W |
Intel Arc A770 16GB | 51.9% (77.6fps) | 62.2% (117.4fps) | 41.8% (60.1fps) | 30.5% (35.4fps) | ACM-G10, 4096 shaders, 2100MHz, 16GB GDDR6@17.5Gbps, 560GB/s, 225W |
GeForce RTX 2070 Super | 51.1% (76.4fps) | 65.8% (124.2fps) | 40.0% (57.4fps) | - | TU104, 2560 shaders, 1770MHz, 8GB GDDR6@14Gbps, 448GB/s, 215W |
Intel Arc A770 8GB | 49.5% (74.1fps) | 61.3% (115.7fps) | 39.7% (57.0fps) | 28.6% (33.2fps) | ACM-G10, 4096 shaders, 2100MHz, 8GB GDDR6@16Gbps, 512GB/s, 225W |
Radeon RX 5700 XT | 49.2% (73.7fps) | 66.6% (125.8fps) | 37.1% (53.3fps) | 25.2% (29.3fps) | Navi 10, 2560 shaders, 1905MHz, 8GB GDDR6@14Gbps, 448GB/s, 225W |
GeForce RTX 3060 | 48.0% (71.8fps) | 63.6% (120.2fps) | 36.9% (53.1fps) | - | GA106, 3584 shaders, 1777MHz, 12GB GDDR6@15Gbps, 360GB/s, 170W |
Radeon VII | 46.5% (69.7fps) | 60.4% (114.0fps) | 36.9% (53.0fps) | 27.1% (31.4fps) | Vega 20, 3840 shaders, 1750MHz, 16GB HBM2@2.0Gbps, 1024GB/s, 300W |
Radeon RX 6600 | 45.9% (68.8fps) | 65.7% (124.1fps) | 32.7% (47.1fps) | - | Navi 23, 1792 shaders, 2491MHz, 8GB GDDR6@14Gbps, 224GB/s, 132W |
GeForce RTX 2070 | 45.4% (67.9fps) | 58.8% (111.0fps) | 35.5% (51.0fps) | - | TU106, 2304 shaders, 1620MHz, 8GB GDDR6@14Gbps, 448GB/s, 175W |
Intel Arc A750 | 45.0% (67.3fps) | 57.4% (108.3fps) | 35.7% (51.3fps) | 25.3% (29.4fps) | ACM-G10, 3584 shaders, 2050MHz, 8GB GDDR6@16Gbps, 512GB/s, 225W |
GeForce GTX 1080 Ti | 44.5% (66.5fps) | 58.6% (110.6fps) | 35.0% (50.3fps) | 25.5% (29.5fps) | GP102, 3584 shaders, 1582MHz, 11GB GDDR5X@11Gbps, 484GB/s, 250W |
GeForce RTX 2060 Super | 43.5% (65.1fps) | 56.3% (106.3fps) | 33.6% (48.2fps) | - | TU106, 2176 shaders, 1650MHz, 8GB GDDR6@14Gbps, 448GB/s, 175W |
Radeon RX 5700 | 43.3% (64.8fps) | 58.9% (111.3fps) | 32.8% (47.2fps) | - | Navi 10, 2304 shaders, 1725MHz, 8GB GDDR6@14Gbps, 448GB/s, 180W |
Radeon RX 5600 XT | 38.8% (58.1fps) | 53.2% (100.6fps) | 29.2% (42.0fps) | - | Navi 10, 2304 shaders, 1750MHz, 8GB GDDR6@14Gbps, 336GB/s, 160W |
Radeon RX Vega 64 | 37.9% (56.8fps) | 50.0% (94.3fps) | 28.9% (41.6fps) | 20.2% (23.5fps) | Vega 10, 4096 shaders, 1546MHz, 8GB HBM2@1.89Gbps, 484GB/s, 295W |
GeForce RTX 2060 | 36.9% (55.2fps) | 51.3% (96.8fps) | 27.2% (39.1fps) | - | TU106, 1920 shaders, 1680MHz, 6GB GDDR6@14Gbps, 336GB/s, 160W |
GeForce GTX 1080 | 35.5% (53.1fps) | 47.7% (90.0fps) | 27.4% (39.4fps) | - | GP104, 2560 shaders, 1733MHz, 8GB GDDR5X@10Gbps, 320GB/s, 180W |
GeForce RTX 3050 | 35.4% (53.0fps) | 48.3% (91.2fps) | 27.0% (38.7fps) | - | GA106, 2560 shaders, 1777MHz, 8GB GDDR6@14Gbps, 224GB/s, 130W |
GeForce GTX 1070 Ti | 34.1% (51.1fps) | 45.4% (85.8fps) | 26.4% (37.9fps) | - | GP104, 2432 shaders, 1683MHz, 8GB GDDR5@8Gbps, 256GB/s, 180W |
Radeon RX Vega 56 | 33.8% (50.6fps) | 44.7% (84.4fps) | 25.7% (37.0fps) | - | Vega 10, 3584 shaders, 1471MHz, 8GB HBM2@1.6Gbps, 410GB/s, 210W |
GeForce GTX 1660 Super | 30.3% (45.3fps) | 43.9% (82.8fps) | 22.5% (32.4fps) | - | TU116, 1408 shaders, 1785MHz, 6GB GDDR6@14Gbps, 336GB/s, 125W |
GeForce GTX 1660 Ti | 30.1% (45.0fps) | 43.7% (82.4fps) | 22.4% (32.2fps) | - | TU116, 1536 shaders, 1770MHz, 6GB GDDR6@12Gbps, 288GB/s, 120W |
GeForce GTX 1070 | 29.9% (44.8fps) | 39.7% (75.1fps) | 23.0% (33.1fps) | - | GP104, 1920 shaders, 1683MHz, 8GB GDDR5@8Gbps, 256GB/s, 150W |
GeForce GTX 1660 | 26.9% (40.2fps) | 39.7% (75.1fps) | 19.8% (28.5fps) | - | TU116, 1408 shaders, 1785MHz, 6GB GDDR5@8Gbps, 192GB/s, 120W |
Radeon RX 5500 XT 8GB | 26.6% (39.8fps) | 38.4% (72.6fps) | 19.8% (28.5fps) | - | Navi 14, 1408 shaders, 1845MHz, 8GB GDDR6@14Gbps, 224GB/s, 130W |
Radeon RX 590 | 26.3% (39.4fps) | 36.3% (68.6fps) | 20.2% (29.1fps) | - | Polaris 30, 2304 shaders, 1545MHz, 8GB GDDR5@8Gbps, 256GB/s, 225W |
GeForce GTX 980 Ti | 24.0% (35.9fps) | 33.1% (62.6fps) | 18.5% (26.7fps) | - | GM200, 2816 shaders, 1075MHz, 6GB GDDR5@7Gbps, 336GB/s, 250W |
Radeon R9 Fury X | 23.7% (35.4fps) | 34.1% (64.4fps) | - | - | Fiji, 4096 shaders, 1050MHz, 4GB HBM2@2Gbps, 512GB/s, 275W |
Radeon RX 580 8GB | 23.6% (35.3fps) | 32.7% (61.7fps) | 18.1% (26.0fps) | - | Polaris 20, 2304 shaders, 1340MHz, 8GB GDDR5@8Gbps, 256GB/s, 185W |
GeForce GTX 1650 Super | 22.6% (33.9fps) | 36.0% (68.0fps) | 16.0% (23.0fps) | - | TU116, 1280 shaders, 1725MHz, 4GB GDDR6@12Gbps, 192GB/s, 100W |
Radeon RX 5500 XT 4GB | 22.4% (33.5fps) | 35.4% (66.9fps) | - | - | Navi 14, 1408 shaders, 1845MHz, 4GB GDDR6@14Gbps, 224GB/s, 130W |
GeForce GTX 1060 6GB | 21.5% (32.2fps) | 30.7% (58.0fps) | 16.0% (23.0fps) | - | GP106, 1280 shaders, 1708MHz, 6GB GDDR5@8Gbps, 192GB/s, 120W |
Radeon RX 6500 XT | 20.6% (30.8fps) | 34.8% (65.8fps) | 12.5% (18.0fps) | - | Navi 24, 1024 shaders, 2815MHz, 4GB GDDR6@18Gbps, 144GB/s, 107W |
Radeon R9 390 | 19.9% (29.8fps) | 27.1% (51.2fps) | - | - | Grenada, 2560 shaders, 1000MHz, 8GB GDDR5@6Gbps, 384GB/s, 275W |
GeForce GTX 980 | 19.3% (28.9fps) | 28.4% (53.7fps) | - | - | GM204, 2048 shaders, 1216MHz, 4GB GDDR5@7Gbps, 256GB/s, 165W |
GeForce GTX 1650 GDDR6 | 19.2% (28.8fps) | 30.0% (56.7fps) | - | - | TU117, 896 shaders, 1590MHz, 4GB GDDR6@12Gbps, 192GB/s, 75W |
Intel Arc A380 | 18.9% (28.3fps) | 29.0% (54.7fps) | 13.5% (19.5fps) | - | ACM-G11, 1024 shaders, 2450MHz, 6GB GDDR6@15.5Gbps, 186GB/s, 75W |
Radeon RX 570 4GB | 18.9% (28.3fps) | 28.4% (53.6fps) | 13.9% (20.0fps) | - | Polaris 20, 2048 shaders, 1244MHz, 4GB GDDR5@7Gbps, 224GB/s, 150W |
GeForce GTX 1060 3GB | 18.6% (27.8fps) | 27.8% (52.6fps) | - | - | GP106, 1152 shaders, 1708MHz, 3GB GDDR5@8Gbps, 192GB/s, 120W |
GeForce GTX 1650 | 18.0% (26.9fps) | 27.1% (51.1fps) | - | - | TU117, 896 shaders, 1665MHz, 4GB GDDR5@8Gbps, 128GB/s, 75W |
GeForce GTX 970 | 17.7% (26.5fps) | 26.0% (49.1fps) | - | - | GM204, 1664 shaders, 1178MHz, 4GB GDDR5@7Gbps, 256GB/s, 145W |
Radeon RX 6400 | 15.8% (23.7fps) | 27.5% (52.0fps) | - | - | Navi 24, 768 shaders, 2321MHz, 4GB GDDR6@16Gbps, 128GB/s, 53W |
GeForce GTX 780 | 14.7% (22.0fps) | 20.4% (38.5fps) | - | - | GK110, 2304 shaders, 900MHz, 3GB GDDR5@6Gbps, 288GB/s, 230W |
GeForce GTX 1050 Ti | 13.2% (19.8fps) | 20.1% (38.0fps) | - | - | GP107, 768 shaders, 1392MHz, 4GB GDDR5@7Gbps, 112GB/s, 75W |
GeForce GTX 1630 | 11.3% (16.9fps) | 17.9% (33.9fps) | - | - | TU117, 512 shaders, 1785MHz, 4GB GDDR6@12Gbps, 96GB/s, 75W |
GeForce GTX 1050 | 9.9% (14.8fps) | 15.8% (29.8fps) | - | - | GP107, 640 shaders, 1455MHz, 2GB GDDR5@7Gbps, 112GB/s, 75W |
Radeon RX 560 4GB | 9.9% (14.8fps) | 16.9% (31.8fps) | - | - | Baffin, 1024 shaders, 1275MHz, 4GB GDDR5@7Gbps, 112GB/s, 60-80W |
Radeon RX 550 4GB | - | 10.4% (19.6fps) | - | - | Lexa, 640 shaders, 1183MHz, 4GB GDDR5@7Gbps, 112GB/s, 50W |
GeForce GT 1030 | - | 7.7% (14.5fps) | - | - | GP108, 384 shaders, 1468MHz, 2GB GDDR5@6Gbps, 48GB/s, 30W |
*: GPU couldn't run all tests, so the overall score is slightly skewed at 1080p ultra.
While the RTX 4090 does technically take first place at 1080p ultra, it's the 1440p and especially 4K numbers that impress. It's only 1% faster than the next closest RX 7900 XTX at 1080p ultra, but that increases to 6% at 1440p and 23% at 4K. Against the RTX 3090 Ti, it's also a major upgrade: 13% faster at 1080p, 24% faster at 1440p, and 48% faster at 4K.
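Those leads come straight out of the fps columns in the table above; the arithmetic is simply:

```python
def percent_faster(a_fps, b_fps):
    """How much faster card A is than card B, expressed as a percentage."""
    return round(100 * (a_fps / b_fps - 1))

# RTX 4090 vs. RX 7900 XTX, using the table's fps figures:
print(percent_faster(149.7, 148.3))  # 1080p ultra: 1
print(percent_faster(143.7, 135.9))  # 1440p ultra: 6
print(percent_faster(116.0, 94.6))   # 4K ultra: 23
```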
(Just in case you check our reviews and notice a difference in scores, the above fps numbers incorporate both the average and minimum fps into a single score — with the average given more weight than the 99th percentile fps.)
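Conceptually, that per-game score looks something like the following sketch. The exact weights aren't published; the 0.75/0.25 split below is purely an assumption to illustrate "average weighted more heavily than the 99th percentile":

```python
def composite_fps(avg_fps, p99_fps, avg_weight=0.75):
    """Blend average fps and 99th percentile (minimum) fps into one score.

    The 0.75/0.25 split is illustrative only; the article says merely that
    the average gets more weight than the 99th percentile fps.
    """
    return avg_weight * avg_fps + (1 - avg_weight) * p99_fps

print(composite_fps(150.0, 110.0))  # pulled toward the average: 140.0
```

A card with poor frame pacing (a low 99th percentile relative to its average) is thus penalized slightly compared to one that delivers the same average more consistently.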
Again, keep in mind that we're not including any ray tracing or DLSS results in the above table, as we intend to use the same test suite with the same settings on all current and previous generation graphics cards. Since only RTX cards support DLSS (and RTX 40-series if you want DLSS 3), that would drastically limit which cards we could directly compare.
Of course the RTX 4090 comes at a steep price, though it's not that much higher than the previous generation RTX 3090's launch price. In fact, we'd say the 4090 is the far better buy, as the 3090 was only a modest performance improvement over the 3080 at launch. Nvidia seems to have pulled out all the stops with the 4090, increasing core counts, clock speeds, and power limits to push it beyond all contenders.
Stepping down from the RTX 4090, the RTX 4080 and RX 7900 XTX trade blows at higher resolutions, while AMD's previous generation RX 6950 XT technically takes the lead at 1080p medium, where CPU bottlenecks become the limiting factor. We've updated our test PC (you can see early results in our latest GeForce RTX 4070 Ti and Radeon RX 7900 XTX and 7900 XT reviews), and it's a good time for the change, as we need to retest with all the latest game patches and driver updates... but that will take some time.
Speaking of the RTX 4070 Ti, it ended up falling below the RX 7900 XT by 8–10 percent on average in our rasterization benchmarks. It turns the tables in ray tracing, but if you don't care about RT (and discount DLSS as well), you can certainly argue that the 7900 XT represents the better value.
Outside of the latest releases from AMD and Nvidia, the RX 6000- and RTX 30-series chips still perform reasonably well and in many cases represent a better 'deal' — even though the hardware is in some cases over two years old now. Intel's Arc GPUs also fall into this category and are something of a wild card.
We've been testing and retesting GPUs periodically, and the Arc chips had some erratic behavior that we eventually sorted out (it was caused by Windows VBS getting turned on). Compared to our launch reviews, performance hasn't changed much outside of DirectX 9 games. There are a few other fluctuations, mostly from game updates rather than drivers. Overall, the A770 8GB lands just a bit ahead of the A750, at a slightly higher street price. Also note that the 8GB A770 comes with a factory overclock, which is why it sometimes outperforms the 16GB model.
Turning to the previous generation GPUs, the RTX 20-series and GTX 16-series chips end up scattered throughout the results, along with the RX 5000-series. The general rule of thumb is that you get one or two "model upgrades" with the newer architectures, so for example the RTX 2080 Super comes in just below the RTX 3060 Ti, while the RX 5700 XT lands a few percent behind the RX 6600 XT.
Go back far enough and you can see how modern games at ultra settings severely punish cards with 4GB or less VRAM. We've been saying for a few years now that 4GB is just scraping by, and 6GB or more is desirable. The GTX 1060 3GB, GTX 1050, and GTX 780 actually failed to run some of our tests, which skews their results a bit, even though they fare better at 1080p medium.
Now let's switch over to the ray tracing hierarchy.
Ray Tracing GPU Benchmarks Ranking 2023
Enabling ray tracing, particularly with demanding games like those we're using in our DXR test suite, can cause framerates to drop off a cliff. We're testing with "medium" and "ultra" ray tracing settings. Medium means using medium graphics settings but turning on ray tracing effects (set to "medium" if that's an option; otherwise, "on"), while ultra turns on all of the RT options at more or less maximum quality.
Because ray tracing is so much more demanding, we're sorting these results by the 1080p medium scores. That's also because the RX 6500 XT and RX 6400 along with the Arc A380 basically can't handle ray tracing even at these settings, and testing at anything more than 1080p medium would be fruitless. We've finished testing all the current ray tracing capable GPUs, though there will undoubtedly be more cards in the future.
The five ray tracing games we're using are Bright Memory Infinite, Control Ultimate Edition, Cyberpunk 2077, Metro Exodus Enhanced, and Minecraft — all of these use the DirectX 12 / DX12 Ultimate API. (Note that we have had to drop Fortnite from our latest reviews, as the new version broke our benchmarks and changed the available settings. Thanks, Epic!) The fps score is the geometric mean (equal weighting) of the five games, and the percentage is scaled relative to the fastest GPU in the list, which again is the GeForce RTX 4090.
Graphics Card | 1080p Medium | 1080p Ultra | 1440p Ultra | 4K Ultra | Specifications (Links to Review) |
---|---|---|---|---|---|
GeForce RTX 4090 | 100.0% (161.9fps) | 100.0% (132.8fps) | 100.0% (97.6fps) | 100.0% (53.6fps) | AD102, 16384 shaders, 2520MHz, 24GB GDDR6X@21Gbps, 1008GB/s, 450W |
GeForce RTX 4080 | 83.2% (134.7fps) | 79.9% (106.1fps) | 73.8% (72.0fps) | 69.0% (37.0fps) | AD103, 9728 shaders, 2505MHz, 16GB GDDR6X@22.4Gbps, 717GB/s, 320W |
GeForce RTX 3090 Ti | 71.5% (115.7fps) | 65.6% (87.1fps) | 61.2% (59.8fps) | 57.8% (31.0fps) | GA102, 10752 shaders, 1860MHz, 24GB GDDR6X@21Gbps, 1008GB/s, 450W |
GeForce RTX 4070 Ti | 71.0% (114.9fps) | 65.6% (87.1fps) | 58.7% (57.3fps) | 53.2% (28.5fps) | AD104, 7680 shaders, 2610MHz, 12GB GDDR6X@21Gbps, 504GB/s, 285W |
Radeon RX 7900 XTX | 67.5% (109.2fps) | 60.8% (80.7fps) | 54.7% (53.4fps) | 49.1% (26.3fps) | Navi 31, 12288 shaders, 2500MHz, 24GB GDDR6@20Gbps, 960GB/s, 355W |
GeForce RTX 3090 | 66.1% (107.0fps) | 58.9% (78.2fps) | 54.9% (53.6fps) | 50.8% (27.2fps) | GA102, 10496 shaders, 1695MHz, 24GB GDDR6X@19.5Gbps, 936GB/s, 350W |
GeForce RTX 3080 Ti | 64.2% (103.9fps) | 57.6% (76.5fps) | 53.4% (52.1fps) | 49.3% (26.4fps) | GA102, 10240 shaders, 1665MHz, 12GB GDDR6X@19Gbps, 912GB/s, 350W |
Radeon RX 7900 XT | 64.0% (103.6fps) | 55.0% (73.0fps) | 48.8% (47.7fps) | 43.0% (23.1fps) | Navi 31, 10752 shaders, 2400MHz, 20GB GDDR6@20Gbps, 800GB/s, 315W |
GeForce RTX 3080 12GB | 63.6% (103.0fps) | 56.5% (75.0fps) | 51.9% (50.7fps) | 47.3% (25.3fps) | GA102, 8960 shaders, 1845MHz, 12GB GDDR6X@19Gbps, 912GB/s, 400W |
GeForce RTX 3080 | 58.7% (95.0fps) | 51.7% (68.7fps) | 47.4% (46.3fps) | 41.9% (22.4fps) | GA102, 8704 shaders, 1710MHz, 10GB GDDR6X@19Gbps, 760GB/s, 320W |
Radeon RX 6950 XT | 50.0% (80.9fps) | 42.9% (56.9fps) | 36.9% (36.0fps) | 32.6% (17.5fps) | Navi 21, 5120 shaders, 2310MHz, 16GB GDDR6@18Gbps, 576GB/s, 335W |
GeForce RTX 3070 Ti | 48.9% (79.1fps) | 42.0% (55.8fps) | 37.0% (36.1fps) | - | GA104, 6144 shaders, 1770MHz, 8GB GDDR6X@19Gbps, 608GB/s, 290W |
Radeon RX 6900 XT | 46.5% (75.3fps) | 39.6% (52.6fps) | 34.4% (33.6fps) | 30.1% (16.1fps) | Navi 21, 5120 shaders, 2250MHz, 16GB GDDR6@16Gbps, 512GB/s, 300W |
Titan RTX | 45.9% (74.3fps) | 40.1% (53.3fps) | 35.9% (35.0fps) | 32.4% (17.4fps) | TU102, 4608 shaders, 1770MHz, 24GB GDDR6@14Gbps, 672GB/s, 280W |
GeForce RTX 3070 | 45.8% (74.2fps) | 39.3% (52.2fps) | 34.3% (33.5fps) | - | GA104, 5888 shaders, 1725MHz, 8GB GDDR6@14Gbps, 448GB/s, 220W |
GeForce RTX 2080 Ti | 43.8% (70.9fps) | 38.2% (50.7fps) | 33.6% (32.9fps) | - | TU102, 4352 shaders, 1545MHz, 11GB GDDR6@14Gbps, 616GB/s, 250W |
Radeon RX 6800 XT | 43.5% (70.4fps) | 36.7% (48.7fps) | 31.9% (31.2fps) | 28.1% (15.0fps) | Navi 21, 4608 shaders, 2250MHz, 16GB GDDR6@16Gbps, 512GB/s, 300W |
GeForce RTX 3060 Ti | 40.8% (66.0fps) | 34.5% (45.9fps) | 30.1% (29.4fps) | - | GA104, 4864 shaders, 1665MHz, 8GB GDDR6@14Gbps, 448GB/s, 200W |
Radeon RX 6800 | 37.3% (60.4fps) | 31.3% (41.5fps) | 27.1% (26.4fps) | - | Navi 21, 3840 shaders, 2105MHz, 16GB GDDR6@16Gbps, 512GB/s, 250W |
GeForce RTX 2080 Super | 36.7% (59.4fps) | 31.6% (42.0fps) | 27.8% (27.1fps) | - | TU104, 3072 shaders, 1815MHz, 8GB GDDR6@15.5Gbps, 496GB/s, 250W |
GeForce RTX 2080 | 35.2% (57.0fps) | 29.9% (39.7fps) | 26.2% (25.5fps) | - | TU104, 2944 shaders, 1710MHz, 8GB GDDR6@14Gbps, 448GB/s, 215W |
GeForce RTX 2070 Super | 32.3% (52.4fps) | 27.5% (36.5fps) | 23.7% (23.1fps) | - | TU104, 2560 shaders, 1770MHz, 8GB GDDR6@14Gbps, 448GB/s, 215W |
Radeon RX 6750 XT | 30.8% (49.8fps) | 26.1% (34.6fps) | 22.2% (21.7fps) | - | Navi 22, 2560 shaders, 2600MHz, 12GB GDDR6@18Gbps, 432GB/s, 250W |
GeForce RTX 3060 | 30.6% (49.5fps) | 25.7% (34.2fps) | 22.0% (21.5fps) | - | GA106, 3584 shaders, 1777MHz, 12GB GDDR6@15Gbps, 360GB/s, 170W |
Radeon RX 6700 XT | 28.8% (46.5fps) | 24.4% (32.4fps) | 20.6% (20.1fps) | - | Navi 22, 2560 shaders, 2581MHz, 12GB GDDR6@16Gbps, 384GB/s, 230W |
GeForce RTX 2070 | 28.5% (46.2fps) | 24.1% (32.0fps) | 20.9% (20.4fps) | - | TU106, 2304 shaders, 1620MHz, 8GB GDDR6@14Gbps, 448GB/s, 175W |
Intel Arc A770 8GB | 28.5% (46.1fps) | 21.5% (28.6fps) | 16.8% (16.4fps) | - | ACM-G10, 4096 shaders, 2100MHz, 8GB GDDR6@16Gbps, 512GB/s, 225W |
Intel Arc A770 16GB | 28.0% (45.3fps) | 23.5% (31.3fps) | 21.6% (21.1fps) | - | ACM-G10, 4096 shaders, 2100MHz, 16GB GDDR6@17.5Gbps, 560GB/s, 225W |
GeForce RTX 2060 Super | 27.4% (44.3fps) | 22.9% (30.4fps) | 19.7% (19.2fps) | - | TU106, 2176 shaders, 1650MHz, 8GB GDDR6@14Gbps, 448GB/s, 175W |
Intel Arc A750 | 26.8% (43.4fps) | 19.6% (26.1fps) | 15.7% (15.4fps) | - | ACM-G10, 3584 shaders, 2050MHz, 8GB GDDR6@16Gbps, 512GB/s, 225W |
Radeon RX 6700 10GB | 26.5% (42.8fps) | 22.0% (29.3fps) | 17.9% (17.5fps) | - | Navi 22, 2304 shaders, 2450MHz, 10GB GDDR6@16Gbps, 320GB/s, 175W |
GeForce RTX 2060 | 23.8% (38.4fps) | 19.1% (25.3fps) | - | - | TU106, 1920 shaders, 1680MHz, 6GB GDDR6@14Gbps, 336GB/s, 160W |
Radeon RX 6650 XT | 23.2% (37.6fps) | 19.3% (25.6fps) | - | - | Navi 23, 2048 shaders, 2635MHz, 8GB GDDR6@18Gbps, 280GB/s, 180W |
Radeon RX 6600 XT | 22.6% (36.6fps) | 18.5% (24.6fps) | - | - | Navi 23, 2048 shaders, 2589MHz, 8GB GDDR6@16Gbps, 256GB/s, 160W |
GeForce RTX 3050 | 21.7% (35.2fps) | 18.2% (24.1fps) | - | - | GA106, 2560 shaders, 1777MHz, 8GB GDDR6@14Gbps, 224GB/s, 130W |
Radeon RX 6600 | 19.0% (30.7fps) | 15.4% (20.5fps) | - | - | Navi 23, 1792 shaders, 2491MHz, 8GB GDDR6@14Gbps, 224GB/s, 132W |
Intel Arc A380 | 9.1% (14.7fps) | - | - | - | ACM-G11, 1024 shaders, 2450MHz, 6GB GDDR6@15.5Gbps, 186GB/s, 75W |
Radeon RX 6500 XT | 6.1% (9.9fps) | - | - | - | Navi 24, 1024 shaders, 2815MHz, 4GB GDDR6@18Gbps, 144GB/s, 107W |
If you felt the RTX 4090 performance was impressive at 4K in our standard test suite, just take a look at the results with ray tracing. Nvidia put even more ray tracing enhancements into the Ada Lovelace architecture, and those start to show up here. There are still further potential ray tracing performance improvements from SER, OMM, and DMM, not to mention DLSS 3, though that last ends up being a bit of a mixed bag, since the generated frames don't include new user input and add latency.
Even at 1080p medium, a relatively tame setting for DXR (DirectX Raytracing), the RTX 4090 roars past all contenders and leads the previous generation RTX 3090 Ti by 40%. At 1080p ultra, the lead grows to 56%, and it's nearly 70% at 1440p. Nvidia made claims before the RTX 4090 launch that it was "2x to 4x faster than the RTX 3090 Ti" — factoring in DLSS 3's Frame Generation technology — but even without DLSS 3, the 4090 is 80% faster than the 3090 Ti.
AMD continues to treat DXR and ray tracing as a secondary concern, focusing more on improving rasterization performance and on reducing manufacturing costs through the use of chiplets in the new RDNA 3 GPUs. As such, AMD's ray tracing performance isn't particularly impressive. The new RX 7900 XTX basically matches Nvidia's previous generation RTX 3090, which puts it just a bit behind the RTX 3090 Ti, and the new 4070 Ti outpaces it by 5–8 percent depending on the resolution. The step-down RX 7900 XT meanwhile lands above the RX 6950 XT and roughly in RTX 3080 territory.
Intel's Arc A7-series parts again show some strange behavior, with performance either besting the RTX 3060 in some cases or trailing the RTX 3050 in others. Minecraft and the Bright Memory Infinite benchmark are the two big problems: the former performs quite poorly across all Arc GPUs, while the latter appears to have a memory leak or something similar that kills performance after a bit. The 16GB card managed to complete one run of each test setting, but the 8GB cards had performance plummet about halfway through the run, which explains the terrible 99th percentile fps results. Intel is aware of these issues and is working to improve performance with a future driver.
You can also see what DLSS Quality mode did for performance in DXR games on the RTX 4090 in our review, but the short summary is that it boosted performance by 78% at 4K ultra. DLSS 3 meanwhile improved framerates another 30% to 100% in our preview testing, though we recommend exercising caution when looking at performance with Frame Generation enabled. It can boost frame rates in benchmarks, but when actually playing games it often doesn't feel much faster than without the feature. Overall, with DLSS 2, the 4090 in our ray tracing test suite is nearly four times as fast as AMD's RX 7900 XTX. Ouch.
AMD's FSR 2.0 would prove beneficial here, if AMD can get widespread adoption, but it still trails DLSS. Right now, only one of the games in our DXR suite (Cyberpunk 2077) has FSR2 support, while three more from our rasterization suite support FSR2. By comparison, all of the DXR games we're testing support DLSS2, plus another five from our rasterization suite — and three of the games even support DLSS3.
Without FSR2, AMD's fastest GPUs can only clear 60 fps at 1080p ultra, while remaining decently playable at 1440p with 40–50 fps on average. But native 4K DXR remains out of reach for just about every GPU, with only the 3090 Ti, 4080, and 4090 breaking the 30 fps mark on the composite score — and a couple of games still come up short on the 4080 and 3090 Ti.
The midrange GPUs like the RTX 3070 and RX 6700 XT basically manage 1080p ultra and not much more, while the bottom tier of DXR-capable GPUs barely manage 1080p medium — and the RX 6500 XT can't even do that, with single digit framerates in most of our test suite, and one game that wouldn't even work at our chosen "medium" settings. (Control requires at least 6GB of VRAM to let you enable ray tracing.)
Intel's Arc A380 ends up just ahead of the RX 6500 XT in ray tracing performance, which is interesting considering it only has 8 RTUs going up against AMD's 16 Ray Accelerators. Intel posted a deep dive into its ray tracing hardware, and Arc sounds reasonably impressive, except for the fact that the number of RTUs in the A380 severely limits performance. The top-end A770 still only has 32 RTUs, which proves sufficient for it to pull ahead (barely) of the RTX 3060 in DXR testing, but it can't go much further than that. Arc A770 also ends up ahead of AMD's RX 6800 in DXR performance, showing just how poor AMD's RDNA 2 hardware is when it comes to ray tracing.
It's also interesting to look at the generational performance of Nvidia's RTX cards. The slowest 20-series GPU, the RTX 2060, still outperforms the new RTX 3050 by a bit, but the fastest RTX 2080 Ti comes in a bit behind the RTX 3070. Where the 2080 Ti basically doubled the performance of the 2060, the 3090 delivers about triple the performance of the 3050. Hopefully a future RTX 4050 will deliver gains similar to the 4090's, at a far more affordable price point.
Choosing a Graphics Card
Which graphics card do you need? To help you decide, we created this GPU benchmarks hierarchy consisting of dozens of GPUs from the past four generations of hardware. Not surprisingly, the fastest cards use Nvidia's Ada Lovelace architecture or AMD's RDNA 3. AMD's latest graphics cards perform well without ray tracing, but tend to fall behind once RT gets enabled — even more so if you enable DLSS on the Nvidia cards, which you should. GPU prices are finally hitting reasonable levels, however, making it a better time to upgrade.
Of course it's not just about playing games. Many applications use the GPU for other work, and we covered some professional GPU benchmarks in our RTX 3090 Ti review. But a good graphics card for gaming will typically do equally well in complex GPU computational workloads. Buy one of the top cards and you can run games at high resolutions and frame rates with the effects turned all the way up, and you'll be able to do content creation work equally well. Drop down to the middle and lower portions of the list and you'll need to start dialing down the settings to get acceptable performance in regular game play and GPU benchmarks.
It's not just about high-end GPUs either, of course. We tested Intel's Xe Graphics DG1, which basically competes with integrated graphics solutions. The results weren't pretty, and we didn't even try running any of those at settings beyond 1080p medium. Still, you can see where those GPUs land at the very bottom of the 2020-2021 GPU benchmarks list. Thankfully, Intel's Arc Alchemist, aka DG2, appears to be cut from entirely different cloth... well, mostly.
If your main goal is gaming, you can't forget about the CPU. Getting the best possible gaming GPU won't help you much if your CPU is underpowered and/or out of date. So be sure to check out the Best CPUs for gaming page, as well as our CPU Benchmarks Hierarchy to make sure you have the right CPU for the level of gaming you're looking to achieve.
Test System and How We Test for GPU Benchmarks
We've used two different PCs for our testing. The latest 2022/2023 configuration uses an Alder Lake CPU and platform, while our previous testbed uses Coffee Lake and Z390. Here are the details of the two PCs.
Tom's Hardware 2022–2023 GPU Testbed
Intel Core i9-12900K
MSI Pro Z690-A WiFi DDR4
Corsair 2x16GB DDR4-3600 CL16
Crucial P5 Plus 2TB
Cooler Master MWE 1250 V2 Gold
Cooler Master PL360 Flux
Cooler Master HAF500
Windows 11 Pro 64-bit
Tom's Hardware 2020–2021 GPU Testbed
Intel Core i9-9900K
Corsair H150i Pro RGB
MSI MEG Z390 Ace
Corsair 2x16GB DDR4-3200
XPG SX8200 Pro 2TB
Windows 10 Pro (21H1)
For each graphics card, we follow the same testing procedure. We run one pass of each benchmark to "warm up" the GPU after launching the game, then run at least two passes at each setting/resolution combination. If the two runs are basically identical (within 0.5% of each other), we use the faster of the two runs. If there's more than a small difference, we run the test at least twice more to determine what "normal" performance is supposed to be.
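That run-selection rule can be expressed as a short function. This is a minimal Python sketch under our stated methodology; the function name, the five-run cap, and the median fallback are my own illustration, not the exact script we use:

```python
def representative_fps(run_benchmark, tolerance=0.005, max_runs=5):
    """Pick a representative FPS result from repeated benchmark passes.

    run_benchmark() returns the average FPS of one pass. If the two
    fastest runs agree within `tolerance` (0.5%), keep the faster one;
    otherwise keep running passes, falling back to the median.
    """
    runs = [run_benchmark(), run_benchmark()]
    while len(runs) < max_runs:
        best, second = sorted(runs, reverse=True)[:2]
        # Agreement check: are the two fastest runs basically identical?
        if (best - second) / best <= tolerance:
            return best
        runs.append(run_benchmark())
    # Runs never converged: use the median as the "normal" result.
    runs.sort()
    return runs[len(runs) // 2]
```

For example, two passes of 100.0 and 100.3 fps agree within 0.5%, so the function immediately returns 100.3; noisy runs like 100, 90, 95, 92, 94 never converge and fall back to the median of 94.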
We also look at all the data and check for anomalies. For example, the RTX 3070 Ti, RTX 3070, and RTX 3060 Ti should all perform within a narrow range — the 3070 Ti is about 5% faster than the 3070, which is about 5% faster than the 3060 Ti. If we see games with clear outliers (e.g., performance more than 10% off the expected result for the cards just mentioned), we'll go back and retest whatever cards are showing the anomaly and figure out what the "correct" result would be.
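The outlier check boils down to comparing each game's measured ratio between two adjacent cards against the expected gap in the product stack. Here's a hypothetical sketch of that logic; the function name, data shape, and card pairing are illustrative, not our actual tooling:

```python
def find_outliers(fps_by_game, expected_ratio, threshold=0.10):
    """Flag games where two adjacent cards stray from their expected gap.

    fps_by_game maps game -> (fps_card_a, fps_card_b) for two cards that
    sit next to each other in the stack; expected_ratio is card A's
    typical lead over card B (e.g. 1.05 for a 5% gap).
    """
    outliers = []
    for game, (fps_a, fps_b) in fps_by_game.items():
        ratio = fps_a / fps_b
        # More than `threshold` away from the expected gap means a retest.
        if abs(ratio - expected_ratio) / expected_ratio > threshold:
            outliers.append(game)
    return outliers
```

So with a 5% expected gap, a game where card A leads by 30% would be flagged for retesting, while a game with a 5% lead would pass.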
Due to the length of time required for testing each GPU, updated drivers and game patches inevitably will come out that can impact performance. We periodically retest a few sample cards to verify our results are still valid, and if not, we go through and retest the affected game(s) and GPU(s). We may also add games to our test suite over the coming year, if one comes out that is popular and conducive to testing — see our article on what makes a good game benchmark for our selection criteria.
GPU Benchmarks: Individual Game Charts
The above tables provide a summary of performance, but for those who want to see the individual game charts, for both the standard and ray tracing test suites, we've got those as well. These charts were up to date as of December 13, 2022, with testing conducted using the latest Nvidia and AMD drivers in most cases, though some of the cards were tested with slightly older drivers. We've added more cards since then, with newer drivers, and retested the Intel Arc GPUs.
Note that we're only including the current and previous generations of hardware in these charts, as otherwise things get too cramped — and you can argue that with 29 cards in the 1080p charts, we're already well past that point. (Hint: Click the enlarge icon if you're on PC.)
These charts are up to date as of April 6, 2023.
GPU Benchmarks Hierarchy — 1080p Medium
GPU Benchmarks Hierarchy — 1080p Ultra
GPU Benchmarks Hierarchy — 1440p Ultra
GPU Benchmarks Hierarchy — 4K Ultra
Power, Clocks, Temperatures, and Fan Speeds
While our GPU benchmarks hierarchy sorts things solely by performance, for those interested in power and other aspects of the GPUs, here are the appropriate charts.
If you're looking for the legacy GPU hierarchy, head over to page two! We moved it to a separate page to help improve load times in our CMS as well as for the main website. And if you're looking to comment on the GPU benchmarks hierarchy, head over to our forums and join the discussion!