Nvidia GeForce RTX 3080 Founders Edition Review
If you’re a current (or aspiring) 4K gamer, Nvidia's ferocious, field-redefining GeForce RTX 3080 graphics card is the only one worth considering.

Is that a pumpkin? A leaf falling off a tree? Well then, it must be that time of year again when Nvidia invites us back into CEO Jensen Huang's kitchen to watch new graphics cards pulled from his oven of technological delight. This time around, his team has baked up a sequel to the original GeForce RTX line of 20-Series cards. The flagship of the stack, the $699 Nvidia GeForce RTX 3080 Founders Edition, features new architecture ("Ampere," the follow-on to the GeForce 20-Series' "Turing"), a brand-new cooler, a revolutionary PCB design, and tons more compute cores, to boot.

This launch represents a new approach for Nvidia. In part, it falls back on technological revolutions past, but it also forges its own path forward, with fresh solutions to old problems that could, over the next few years (if not decades), revolutionize how PC gamers interact with their favorite games and creative applications. With a slew of new software offerings juiced up by incredibly powerful hardware, the new RTX 3080 Founders Edition is a triumph of graphics-card engineering and performance, earning our Editors' Choice as the new king of 4K gaming and the next great step in GPU evolution.

A Cool New Way to...Cool a Card

Ready for some nitty-gritty technical details about GPU cooling systems? First, though, here's a beauty shot of the aesthetic design of the Nvidia GeForce RTX 3080 Founders Edition, and by extension the rest of Nvidia's new line of Founders Edition cards.

Nvidia GeForce RTX 3080 Founders Edition

Just...awesome. The visual design of the two-slot, 10.5-inch-long Nvidia GeForce RTX 3080 Founders Edition is modern, elegant, and almost completely devoid of the kind of "gamerzzzz" accents that (to my eyes, anyway) have plagued the component industry for too many years. The RTX 3080 Founders card looks like that kid from middle school who stopped bleaching his tips, gelling his hair, and wearing skate shoes to Thanksgiving...but still kept a little bit of his edge from those rebellious years. (That would be the angular cooling system and black-on-silver color palette.) Of course, third-party card makers will take some different approaches, and not all will conform to the Founders Edition's unique design, which has a fan on top of the card, as well as a fan on its bottom.


Two strips of LEDs line the card in total: one that accents the upward-facing fan, and another underneath the GeForce RTX 3080 badge on the side of the card. Other than that, the card's brushed-metal casing is interrupted by little else. Of all the reference GPU designs I've seen over the decades, this is by far the most "grownup"-looking of the bunch. The card is simultaneously professional and intimidating, a hard balance to hit when you're swimming in the world of industrial design and also trying to keep esports types happy.

Now, onto that cooling situation. I'll let Nvidia handle a portion of the explanation here, quoting from its reviewers' materials:

"Our engineers architected a super dense PCB design that’s 50 percent smaller than previous designs to allow room for a fan to flow air directly through the module. We designed it from first principles to deliver the highest possible thermal performance—without compromising fan acoustics. It’s exceptionally cool and quiet. We call it a “flow-through” design, and it works harmoniously with the PC chassis cooling systems, pulling fresh air through and pushing warm air out of the case."

If you hadn't already gathered from the photos above, this means a whole new approach to cooling the printed circuit board (PCB) of a graphics card. Nvidia's is a fresh, unique approach to an age-old problem that could upend how card manufacturers design their heatsink and fan combos from here forward.

Nvidia GeForce RTX 3080 Founders Edition (Ampere New PCB): A PCB that's twice as strong at half the size (Credit: Nvidia)

It's the same approach to the problem of cooling PCBs that "blower"-style cards have been utilizing for years, with one exception: the forward-most fan. This fan is designed to handle just one job: pulling in air from the front of the case, passing it through the heatsink, and ejecting it out the top of the fan toward the back of the case, like so...

Nvidia GeForce RTX 3080 Founders Edition (RTX 3080 Cooling Scheme) (Credit: Nvidia)

But in early discussions across the video-card-o-sphere after Nvidia's revelation of the design, the question has remained...won't that blow hot air from the GPU across the CPU?

Nvidia GeForce RTX 3080 Founders Edition (Forward Fan)

To this, Nvidia has several counters. The first is that in traditional PCB cooling approaches, a significant portion of the heat is already dumped to the GPU's backplate, which sits right next to the CPU cooler anyway (at least in air-cooled PC builds). Second, the company says that for cards in the GeForce RTX 3080's price range ($600-plus), most users have moved on to CPU liquid cooling, where the air around the actual CPU matters far less. In a liquid-cooling scheme, radiators tend to be placed at the front of a case or along the top, both of which lend themselves more favorably to the airflow pattern created by the RTX 3080: Air is sucked in through the front of the card, and dumped out through the backplate and the top of the PCB.

Nvidia GeForce RTX 3080 Founders Edition (Flip Side)

We'll put this whole apparatus to the test in our cooling evaluation below, but, if nothing else, Nvidia should be commended, just as AMD tentatively was at the launch of its first blower-style cards, for trying something different. Cooling a GPU without heating up every other component around it is one of the most complicated aspects of designing a modern-day graphics card, and Nvidia has taken an extreme approach to solving it that might be celebrated, but could just as easily be derided as "over-engineered." Let's see in our tests. But first: some specs. Lots of specs.

Specs Compared: Nvidia GeForce RTX 3080 Founders Edition vs. the World

To start, it's only fair that we compare the current iteration of Nvidia's lineup to the cards it will succeed, the GeForce RTX 20-Series. While the Nvidia GeForce RTX 2080 may have been exorbitantly priced for its time, the release of the RTX 2080 Super brought things into perspective, though not nearly as much as our favorite GeForce release of last year, the GeForce RTX 2070 Super.

The differences between the Ampere and Turing families clearly don't start with the launch pricing, which is identical. The RTX 3080 packs in nearly three times as many CUDA cores as the previous generation, likely thanks to the boost provided by moving from the aging 12nm process down to Samsung's 8nm node.

Interestingly, though, it looks on the surface as though the RTX 3080 may have less horsepower for pushing Nvidia's own DLSS technology, given its lower Tensor-core count compared with the previous generation. But these "third generation" Tensor cores are supposedly twice as powerful as the previous gen's. So in the case of Tensor cores, less really does equal more. (As a reminder, DLSS is a technology that uses Nvidia's AI-based neural network to "train" the handful of games that support it to run anti-aliasing a whole heck of a lot faster than it used to. This is made possible thanks to some very fancy math happening behind the scenes and under the shroud of RTX-based video cards equipped with the company's "Tensor" cores.)

The company also claims a boost of 1.9x in performance per watt (PPW) over the previous generation, a huge jump versus Turing's overall PPW gains over the generation before that, "Pascal." This represents a new paradigm for more-efficient GPUs that could, in theory at least, be scaled down to Nvidia's budget offerings to greater effect than what we're seeing at the top end of the stack in the RTX 3080.

Ampere PPW (Credit: Nvidia)
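Performance per watt is just a ratio, but it's worth spelling out the math, because a card can draw more total power and still be more efficient. Here's a minimal sketch of how that comparison works; the 4K frame rates are hypothetical placeholders, not our measured results, while the board-power figures are the official ratings cited in this review.

```python
# A minimal sketch of the performance-per-watt math. The frame rates below are
# hypothetical placeholders (not measured results); the board-power figures are
# the official ratings cited in this review.
cards = {
    "RTX 2080": {"avg_fps_4k": 60.0, "board_power_w": 225},   # fps is a made-up example
    "RTX 3080": {"avg_fps_4k": 100.0, "board_power_w": 320},  # fps is a made-up example
}

for name, card in cards.items():
    ppw = card["avg_fps_4k"] / card["board_power_w"]
    print(f"{name}: {ppw:.3f} fps per watt")

# A card can draw more total power and still be more efficient, as long as the
# performance gain outpaces the power increase.
ratio = (cards["RTX 3080"]["avg_fps_4k"] / cards["RTX 3080"]["board_power_w"]) / (
    cards["RTX 2080"]["avg_fps_4k"] / cards["RTX 2080"]["board_power_w"]
)
print(f"RTX 3080 vs. RTX 2080 performance per watt: {ratio:.2f}x")
```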

Despite that increase in efficiency, though, the RTX 3080 still bumps up the power requirements compared to the RTX 2080: 225 watts in the latter, versus 320 watts in the new card. This left the engineers at Nvidia with an interesting problem, wherein the PCB still needed to shrink in size, but it also needed more PSU power delivered to it than a traditional pairing of an eight-pin and a six-pin power connector could provide.

Nvidia GeForce RTX 3080 (Top Edge)

Enter: the new 12-pin connector. Placed smack-dab in the middle of the card's top edge, the new 12-pin power connector is much smaller than any other we've seen before, and it comes with an adapter that turns two traditional eight-pin PSU power connectors into one 12-pin. The result is a PCB and GPU that can still be plugged into any classic power supply but also retains its smaller size without sacrificing any performance. Note that Nvidia recommends a minimum 750-watt PSU for use with the RTX 3080 Founders Edition.

Nvidia GeForce RTX 3080 (Power Connector)

Meanwhile, the closest point of spec comparison we can find on AMD's side is the discontinued AMD Radeon VII, a card that was fine in its own right, but a bit too pricey and a bit too slow even at the time of its release to garner a tip-top recommendation from us. Here, the GeForce RTX 3080 laps the Radeon VII in almost every component aspect save for memory size and bandwidth. But as we've since seen, much of that brute-force horsepower for the Radeon VII never really translated to better numbers in gaming, though the card is still favored for heavy content-creation loads.

Speaking of which, improvements have also been extended to the memory pool on the RTX 3080. It is both larger (8GB in the RTX 2080 vs. 10GB in the RTX 3080) and smarter. The GDDR6X memory onboard the RTX 3080 uses new signaling techniques to boost the per-pin data rate, from 14Gbps in the RTX 2080 to 19Gbps in the RTX 3080. This works in concert with the card's support for the emerging PCI Express 4.0 bus, which I'll get into a bit later.
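Those per-pin data rates translate into total memory bandwidth once you factor in the width of the memory bus. Here's a quick back-of-the-envelope sketch; the 320-bit and 256-bit bus widths are the commonly cited figures for these cards rather than specs quoted in this review, so treat them as assumptions for illustration.

```python
# Back-of-the-envelope memory bandwidth. The 256-bit and 320-bit bus widths are
# commonly cited figures, not specs quoted in this review; treat them as
# assumptions for illustration.
def memory_bandwidth_gb_s(data_rate_gbps: float, bus_width_bits: int) -> float:
    """Peak bandwidth in GB/s = per-pin data rate (Gbps) * bus width (bits) / 8 bits per byte."""
    return data_rate_gbps * bus_width_bits / 8

print(f"RTX 2080 (GDDR6, 14Gbps x 256-bit):  {memory_bandwidth_gb_s(14, 256):.0f} GB/s")  # ~448 GB/s
print(f"RTX 3080 (GDDR6X, 19Gbps x 320-bit): {memory_bandwidth_gb_s(19, 320):.0f} GB/s")  # ~760 GB/s
```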

Nvidia GeForce RTX 3080 Ports

The card doesn't break as many conventions with its port scheme as it does with the rest of its design. The Founders Edition has three DisplayPort 1.4b outputs and one HDMI 2.1 port (more on that in a second). One interesting bit here is the lack of a VR-focused USB-C/VirtualLink port, something that some Nvidia RTX cards released over the past few years have included. Why ditch it? Well, if we had to wager a guess, it's because two years after its debut, there still aren't any VR headsets that gain anything by using the single-cable approach over their current wired setups. Valve even canceled production of its VirtualLink adapter for the Valve Index, its flagship VR device that would have been one of the main beneficiaries of the kind of tech that VirtualLink offers. Why? "Lack of adoption," according to the company.

The RTX 3080 does come with a bit of future-looking tech, though, like HDMI 2.1 (currently, no monitors support this spec, and only a few TVs do), as well as 8K HDR capture through GeForce Experience and AV1 decode support that will make 8K streaming faster and more efficient than current codecs like H.264 allow.

A Slew of Fresh Software (and Something More Familiar)

With the Ampere launch, Nvidia wasn't just announcing new graphics cards. The company also unveiled a host of new software features that will work in tandem with GeForce RTX cards to improve the performance of certain games, potentially increase the performance of gamers themselves, and help give everyone a bit more control over their WFH (or game-from-home) space, too.

Same DLSS, Different Day

Based on previous leaks said to consist of internal slides, a big expectation in the minds of serious gamers going into this launch was the possible reveal of DLSS 3.0, which, sadly, didn't happen. What we got instead was something we've had for a while already: DLSS 2.1, a version that iterates on the original 2.0 (which itself was an iteration on DLSS 1.0) rather than upending it completely.

We've tested DLSS extensively over the past year, pitting it against "competing" technologies (Nvidia might take issue with that "competing" association) such as CAS and FidelityFX from AMD, both of which are now equally integrated into many DLSS-enabled titles like Shadow of the Tomb Raider and Death Stranding (more on that below). 

RTX and DLSS Cores (Credit: Nvidia)

In our testing, we found that while FidelityFX and CAS are great for what they are, and still offer up a viable way for AMD users to squeeze anywhere between 5 and 15 percent more performance out of their GPUs without sacrificing visual fidelity, no sharpening tech (FidelityFX, Nvidia Freestyle, or otherwise) is even on the same plane of reality as DLSS 2.1. Its ability to simultaneously make games look better and run up to 40 percent faster is a magic trick that can't be replicated with patched-in options like sharpening tech.

Nvidia DLSS 4K Death Stranding (Credit: Nvidia)

But about that missing piece of the launch. While DLSS 1.0 and DLSS 2.0 have both had their shining moments, many (including myself) were hoping that DLSS 3.0 would be the true card-selling killer app for the 30-Series. This is because one of the slides in the original leak claimed that DLSS 3.0 wouldn't work with just one or two more games (as was the case in the transition between DLSS 1.0 and 2.0), but with all games that have TAA integrated into their list of anti-aliasing options.

This is a huge number of titles, and if this degree of DLSS support were to emerge, it would expand the reach of DLSS from six games in total as of this writing, to potentially hundreds or even thousands more. All it would take is an Nvidia Game Ready driver update and the participation of the developer. This could be Nvidia's silver bullet, the thing that sets rival AMD back in the race by years, potentially. If in one fell swoop Nvidia could give all GeForce RTX card owners 40 percent more performance on lots of their favorite games with nothing more than a firmware update? Move over discrete hardware...AI has got the heavy lifting covered from here.

But of course, it's all speculation still. Until we see hard evidence that DLSS 3.0 is even real, let alone that it can do all the things that the leaks suggest it might, it remains one of the few question marks left on an otherwise stellar report card for Nvidia this launch.

Nvidia Reflex

Another new addition this year was the introduction of Nvidia Reflex, a new software-and-hardware solution that aims to reduce the latency that can occur among the CPU, the GPU, and the monitor you're playing on. Again quoting from Nvidia's materials:

"Our published research shows that reducing system latency is the key metric that contributes to improving a player’s mechanical aiming performance. Nvidia Reflex is a new suite of technologies built to optimize and measure system latency for competitive games."

To test the feature, with the RTX 3080 Founders Edition fired up, I opened the multiplayer shooter Valorant on the recent Editors' Choice Asus ROG Swift 360Hz PG259QN monitor, the fastest-refresh monitor of the moment, and loaded into the training grounds to give the tech its best chance at improving my gameplay.

Nvidia Reflex Graph (Credit: Nvidia)

At these frame rates, I'm a little too old and a lot too slow to notice much of a difference anecdotally. It was also difficult to tell if the increase in responsiveness I was feeling was a result of playing on a 360Hz monitor (my daily driver at home "only" goes up to 120Hz), or if it was Reflex actually making me a better, faster, more accurate gamer than ever...which is why I devised a test.

Nvidia Reflex Render Queue (Credit: Nvidia)

To find out if there really was some secret sauce at work here, I ran two tests: one with the RTX 3080 running Reflex while at the practice range in Valorant, and another with an AMD Radeon RX 5700 XT, just to see what the effect of removing both G-Sync and Reflex from the equation would look like. According to the slide below, Nvidia claims that Valorant actually sees the most benefit from the integration of Reflex, with Apex Legends, Fortnite, and Destiny 2 less so.

GeForce RTX 3080 Reflex Chart (Credit: Nvidia)

On the first test (with RTX 3080) I scored 11 of 30 hits, which is about what I expected out of myself with the practice mode turned to the Hard setting. In this mode, the target bots are only on the screen for fractions of a second, in which time you need to spot them, track to them, and hit a headshot to score a point. No body shots here: This is all precision and speed at its most extreme. It's a lot to manage in a very tiny sliver of time, which is exactly the type of scenario that Reflex was built for.

Reflex Boost On

On the second test, I was actually surprised by how much more difficult the task became. Despite three separate tries, I wasn't able to get a score above 8 out of 30 while using the Radeon RX 5700 XT. Each time, the frame rate was well above the refresh rate of the monitor (it never dipped below 500fps), and I could just as easily switch the feature on and off with the Nvidia card and see my score rise and drop accordingly.

Reflex Boost Off

I've been a competitive FPS player for decades now, and if all it took was Reflex to increase the headshot percentage of an aging 33-year-old by almost 30 percent? Hey, sign me up, I'm sold. But this is only my anecdotal experience, that of an ex-aspiring pro StarCraft II player (for the two months I thought I could make it) who has long since hung up his dreams alongside his mechanical keyboard. So I may not be the best judge of whether this system is useful.

Who this is really marketed to is the cream of the esports crop, the 16-to-25-year-olds that still have both the reaction times and the physical dexterity to notice and be able to take advantage of the difference between 120ms response times and 50ms response times. We didn't have anyone on hand who might fit this profile, but I'd imagine as time goes on, there will be plenty of esports professionals who take the tech out for a test drive and push it to the limits of what's possible.

Ready to Broadcast?

Nvidia also unveiled a whole new broadcaster suite (aptly named "Nvidia Broadcast"), built for everyone from aspiring streamers to the WFH warrior who just wants a little more privacy or a way to keep their Zoom calls in focus during the most hectic of situations.

Nvidia Broadcast (Credit: Nvidia)

As anyone who's been working from home during the past few months can tell you: distractions, for both you and the people you're trying to have meetings with, are everywhere. Plus, the pets and people behind those distractions somehow seem to know how to pick just the perfect moment to start barking, begging for attention, dropping things in the kitchen, forgetting to wear pants in the background, and on and on...

Jumping off the same tech that made RTX Voice such a hit, Nvidia's Broadcast suite includes a number of tools that use AI to do everything from background noise removal to tracking your face on a webcam and even creating an all-digital green screen out of any background, a big plus for aspiring and professional streamers alike.

We didn't have the right equipment at the time of this review to put this feature through the wringer, but stay tuned as we continue to test all cards in the RTX line for more info on just how effective Broadcast is at turning any room into a professional streaming studio.

RTX IO: Speeding Up Game Loads

RTX IO is a new solution to an age-old problem for game developers: game loading speeds. Since the dawn of consoles that use platter hard drives as the primary way to store digital downloads and installed games, developers have been stuck behind the speed limits of 5,400rpm spinning metal and the Serial ATA interface. In practice, that translates to load speeds that peak at about 100MBps, which means that large levels, dynamic elements, and any interactive set pieces bigger than that budget (which many assets are, in modern AAA games) take time to load from the hard drive into the GPU.
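To put that ceiling in perspective, here's a rough load-time sketch. The asset-payload size and the SSD throughput figures are illustrative assumptions, not numbers from Nvidia or from our own testing.

```python
# Rough load-time math under the storage ceilings described above. The asset
# payload and SSD throughput figures are illustrative assumptions.
def load_time_seconds(payload_gb: float, throughput_mb_per_s: float) -> float:
    return payload_gb * 1000 / throughput_mb_per_s  # convert GB to MB, then divide by MB/s

payload_gb = 8  # a hypothetical level's worth of assets
print(f"5,400rpm HDD (~100 MB/s):        {load_time_seconds(payload_gb, 100):.0f} s")   # ~80 s
print(f"SATA SSD (~550 MB/s):            {load_time_seconds(payload_gb, 550):.0f} s")   # ~15 s
print(f"PCIe 4.0 NVMe SSD (~7,000 MB/s): {load_time_seconds(payload_gb, 7000):.1f} s")  # ~1.1 s
```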

RTX IO Bandwidth (Credit: Nvidia)

Until now, this time has been filled with things like characters taking elevator rides, crawling through air vents, or squeezing between rock faces as the level loads in the background. Now, however, with the advent of consoles like the Sony PS5 and its innovative approach to using PCI Express 4.0 SSDs, game designers are free to start thinking about all new ways in which faster load times from storage into the VRAM of your GPU could change how we play video games from the ground up.

RTX IO Pathing (Credit: Nvidia)

In brief, think of RTX IO as a PC gamer's solution to the same problem. RTX IO is an API that game developers can integrate at the engine level, which helps to offload some of the work that's created when a PCIe 4.0-based SSD is providing data to the system. RTX IO bypasses the normal pathing of an SSD using cores of the CPU to unpack compressed data, and instead allows for GPU-based lossless decompression that should, in theory, improve the performance of games that are built to take advantage of it.
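Here's a conceptual sketch of why moving decompression to the GPU matters; this is not the actual RTX IO API, and all of the throughput numbers are assumptions for illustration. The point is simply that a streaming pipeline runs no faster than its slowest stage, so once a PCIe 4.0 SSD is fast enough, CPU-side decompression becomes the bottleneck that GPU-side decompression removes.

```python
# Conceptual model only: this is not the actual RTX IO API, and the throughput
# numbers (MB/s) are assumptions for illustration.
def effective_throughput(ssd_mb_s: float, decompress_mb_s: float) -> float:
    # A streaming pipeline moves data no faster than its slowest stage.
    return min(ssd_mb_s, decompress_mb_s)

ssd = 7000              # hypothetical PCIe 4.0 NVMe read speed
cpu_decompress = 1500   # hypothetical CPU-side decompression rate
gpu_decompress = 14000  # hypothetical GPU-side lossless decompression rate

print("Traditional path (CPU unpacks):", effective_throughput(ssd, cpu_decompress), "MB/s")
print("RTX IO-style path (GPU unpacks):", effective_throughput(ssd, gpu_decompress), "MB/s")
```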

But, before you leave this section thinking that we've entered an entirely new paradigm of gaming technology, let me be the one to temper that expectation (if not squash it completely): developers actually need to design and build the games that are able to take advantage of the new loading times before we'll even get a chance to see them in action. Everything that RTX IO enables for games has to be built at the level of the engine, and for now, nearly all games that are cross-platform (read: basically any AAA title that wants to recoup its cost of development) are still designed to load at the same speed as the previous console generation, about 100MBps peak in most scenarios.

This means that for now, as well as the foreseeable future, RTX IO remains an on-the-horizon technology that doesn't have any real-world implementations today, nor do we expect to see any for quite a while (or at least as long as a AAA title takes to develop, give or take a few years).

A Peek Into the Future: 8K Gaming (?!?)

Finally, we come to some major meat of Nvidia's launch of Ampere: the (tentative) arrival of 8K 60fps gaming. (See our primer, What Is 8K?)

That's right, 8K. Now, Nvidia hasn't exactly changed everything we know about gaming just yet; the claim of being able to play games at 8K and get 60fps was made only for some games on the yet-to-be-released GeForce RTX 3090. That card will cost more than twice as much as the RTX 3080 and be aimed primarily at content creators (though Nvidia certainly isn't going to discourage anyone from buying it strictly for high-resolution gaming, either).

In its presentation, Nvidia claims the GeForce RTX 3090 can get many popular esports titles (such as Apex Legends, Rocket League, and World of Tanks) well above the 60fps threshold at 8K in some cases. And with the help of DLSS, it might accelerate even the most demanding RTX-enabled AAA games, like Control, from under 10fps to up to 57fps with all settings maxed out.

8K Gaming (Credit: Nvidia)

That is a hugely impressive claim, but we won't be able to verify it for ourselves at PC Labs just yet. There aren't many 8K display manufacturers out there right now, and the few that exist aren't exactly tossing out 8K testing samples by the handful. However, when the time comes that we have both the GeForce RTX 3090 and an 8K display ready to benchmark with, expect a slew of frame-rate charts filled with numbers from our test runs!

Let's Get Testing! Time to Play

So, back to the card on hand. PC Labs ran the Nvidia GeForce RTX 3080 through a series of DirectX 11- and 12-based synthetic and real-world benchmarks. Our spanking-new PC Labs test rig is Intel-based and employs a PCI Express 3.0, not 4.0, motherboard. It's equipped with an Intel Core i9-10900K processor, 16GB of G.Skill DDR4 memory, a solid-state boot drive, and an Asus ROG Maximus XII Hero (Wi-Fi) motherboard. All cards below were retested on this rig. Given our tests with the Core i9-10900K and recent Ryzen 9 CPUs, this rig is the best reasonable configuration of the moment in 2020 to cut the CPU out of the equation for frame rates.

For our testing, I focused some of the effort on the esports aspect of the Nvidia GeForce RTX 3080 with games like Counter-Strike: Global Offensive (CS:GO) and Rainbow Six: Siege, and I also ran the card through the rest of our new standard benchmark regimen, which tests a card's abilities to handle AAA games at the highest possible quality settings, as well as how it handles during synthetic benchmarks that stress the card in a variety of ways.

A quick note: You'll notice there's no talk of "PCI Express 3.0 vs. PCI Express 4.0 performance" here, and there's a reason for that. Namely, our new main graphics testing rig is using an Intel Core i9-10900K, on Intel's Z490 platform...and Intel's latest chipsets and boards don't support PCI Express 4.0 (at least not yet). The only mainstream platforms that support PCIe 4.0 are AMD's X570 and B550, and we'd need to run another full set of games on, say, a Ryzen 9 3900X or 3950X to come close to a comparison with the Core i9-10900K. Even then, between disparate motherboards, firmware versions, RAM modules, and especially CPUs, the results, by nature, would be only roughly comparable.

In short: We will need to fiddle with things a bit more, given the limited time we have had with the GeForce RTX 3080 so far, to see what the effect of PCI Express 4.0, if any, is. We should have a concrete answer well before the time Intel's platforms get their PCIe 4.0 functionality "switched on," as it were.

Nvidia GeForce RTX 3080 (Detail)

Also remember that almost every test we run (aside from the esports titles) is done at the highest possible quality preset or settings. If you have a higher-hertz monitor and you're worried your card might not make the frame-rate grade, hitting those rates could still be possible with the right card and a combination of lower settings. Not only that, but many of these titles (including Death Stranding, Shadow of the Tomb Raider, and F1 2020) have both DLSS and FidelityFX CAS with Upscaling integrated directly into the game. This can mean boosts of up to 40 percent more performance on top, depending on the setting and the card you're playing with.

And so, onward to our test results. Note: If you want to narrow down our results below to a specific resolution (say, the resolution of the monitor you plan to game on), click the other two resolution dots in the chart legends below to suppress them and see a single set of results. Our new benchmark list includes a mix of recent AAA titles like Red Dead Redemption 2 and F1 2020, as well as some older-but-still-reliable pillars of the benchmarker's toolkit, like Shadow of the Tomb Raider and Far Cry 5.

Testing Results: Synthetic Benchmarks

Synthetic benchmarks can be good predictors of real-world gaming performance. UL's circa-2013 Fire Strike Ultra is still a go-to as an approximation of the load levied by mainstream 4K gaming. We're looking only at the test's Graphics Subscore, not the Overall Score, to isolate the card performance. Meanwhile, we also ran 3DMark's Time Spy Extreme test, which is a good test of how well a card will do specifically in DirectX 12 games at 4K resolution. Finally there's Port Royal, which is strictly a test for RTX cards right now, measuring how well they handle ray-tracing tasks. (Hence the blank results for the AMD cards on that one.)

Tests like FurMark, V-Ray, and LuxMark measure disparate aspects of a card's performance, everything from how powerful the onboard VRAM is to what the card is capable of when it's stress-tested to the limit. While FurMark, 3DMark, and Superposition all returned results within expectations for the roughly 30 percent improvement we were hoping to get out of the card, VRAM-bound runs like V-Ray and LuxMark really showcase the true strength of GDDR6X over the previous generation, posting a performance increase of nearly 75 percent in the case of the former.

Testing Results: Recent AAA Games

The following benchmarks are games that you can play. We typically used in each case the highest in-game preset and, if available, DirectX 12. As mentioned, we've got a host of AAA titles in here; multiplayer-focused and esports titles are in another chart further down.

Across every test we ran, the Nvidia GeForce RTX 3080 Founders Edition proves itself as the premium GPU of a new 4K gaming era. While gains in both 1080p and 1440p resolutions were substantial, the real meat of the RTX 3080's gaming-benchmark dominance becomes apparent once you take a look at 4K numbers.

In games like F1 2020, the RTX 3080 Founders Edition saw results that were up to 70 percent faster than the previous-generation card it's replacing (the RTX 2080 Founders Edition) and still manages to beat out both the GeForce RTX 2080 Ti and the GeForce RTX 2080 Super by a substantial margin.

For a possible look at the future of gaming, you'll want to pay special attention to the chart for Death Stranding. This is because, like Shadow of the Tomb Raider and Call of Duty: Modern Warfare, it's one of the few titles out there that integrates DLSS directly into the engine of the game. This means it can achieve serious performance gains over results we'd see with the feature turned off, and it represents the potential for DLSS 2.0 (and hopefully, 3.0), as the upscaling method continues to evolve.

So, how did those "third generation" Tensor cores pay off? On the GeForce RTX 2080 Ti, Death Stranding hits an impressive 121fps with DLSS turned on to Performance mode, but the RTX 3080 just has that little bit of extra juice to carry it through, posting a finish of 148fps (which is just crazy when you remember this is all happening at 4K resolution). This result makes it the first single-card solution we've seen that finally lives up to the (possibly premature) expectations set by 4K 144Hz monitors like the Editors' Choice Acer Predator XB3, succeeding where even dual GeForce RTX 2080s strung together in NVLink have previously struggled to keep up.

Going through the rest of the AAA gaming results, it's clear that the GeForce RTX 3080 is a card that makes a statement and has the horsepower to back it up: 4K gaming isn't just here, it's here and then some. With that in mind, the percentage gains in both 1080p and 1440p aren't as substantial as the ones in 4K. That makes sense when you consider that 4K performance is usually highly GPU-dependent, while lower resolutions generally lean on some combination of the CPU and GPU, depending on the engine and how the game itself has been optimized.

For example, while the difference between 4K results in Total War: Warhammer II represented a gain of more than 35 percent between the GeForce RTX 2080 Ti and the RTX 3080, it translated to just a 3 percent lift once we dialed the game resolution down to 1080p. This wasn't always the case, though. In Red Dead Redemption 2, the 1080p and 1440p results saw gains of roughly 20 percent each, while 4K results were around 30 percent faster.

Again, it's all down to the game and the engine, but one thing holds regardless: No matter what subset of game benchmarks we put this beast through, the $699 RTX 3080 always comes out looking like a substantially better deal, in aggregate, than anything the RTX 20-Series has to offer.

Testing Results: How About Some Legacy AAA Titles?

We also ran some quick tests on some oldies-but-goodies that still offer the AAA gaming experience. These legacy tests include runs of Hitman: Absolution, Tomb Raider (2013), and Bioshock: Infinite, the last being a game that has no business still being as well optimized as it is here in 2020.

If you want steady, longstanding, reliable numbers, look no further than these legacy titles to tell the tale. Though they're far from the most popular games on the Steam Charts nowadays, these titles still offer a picture of how well modern graphics cards handle older AAA titles built on aging engines.

The RTX 3080 Founders Edition continues its reign of dominance here, in places increasing the performance of the GeForce RTX 2080 in 4K on games like Tomb Raider and Sleeping Dogs by more than 50 percent. Given Nvidia's claim that an RTX 3080 should be thought of as having the same amount of power as "two RTX 2080s," that's not quite borne out here in every instance, but the jumps are still gigantic from RTX 2080 to RTX 3080.

Testing Results: Multiplayer/Esports Games

Though most of PC Labs' game tests are maxed out in graphical fidelity to push the cards to their limit, multiplayer gaming is all about maintaining the best balance between graphical fidelity and frame rate. With that in mind, we've kept CS:GO, Rainbow Six: Siege, and Final Fantasy XIV tuned to the best combination of necessary improvements in settings (higher anti-aliasing and lower shadows, for example), while still trying to keep frame rates for 1080p games above 144fps.

Why 144fps? See our Death Stranding discussion above. That's a coveted target for highly competitive esports gamers who have high-refresh-rate 120Hz or 144Hz monitors. For more casual players with ordinary 60Hz monitors, a solid 80fps or 90fps at your target resolution, with some overhead to account for dips under 60fps, is fine.
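For reference, those refresh-rate targets map directly to frame-time budgets: a card has to finish each frame inside 1000ms divided by the refresh rate. A quick calculation makes those budgets explicit.

```python
# Frame-time budget: to sustain a given refresh rate, every frame must be
# rendered within 1000 ms / refresh rate.
for hz in (60, 120, 144, 360):
    print(f"{hz:>3} Hz -> {1000 / hz:.2f} ms per frame")
# 60 Hz -> 16.67 ms, 120 Hz -> 8.33 ms, 144 Hz -> 6.94 ms, 360 Hz -> 2.78 ms
```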

Now, it's rare that even a mid-tier card like the AMD Radeon RX 5700 XT would struggle in these tests, so it's no surprise to see new category leaders like the RTX 3080 run all these titles with ease. Even Rainbow Six: Siege beat its old record, one that used to require two RTX 2080 Super Founders Edition cards to achieve similar results, and that was back when we tested at just a 50 percent render level of the full resolution.

The one narrative to take away from the RTX 3080 multiplayer testing is that if you're only going to play these games (or something like them in the realm of competitive esports titles), this card is mega-overkill, overshadowed of course only by whatever results we may see out of the upcoming Nvidia GeForce RTX 3090 Founders Edition in the next few weeks. If you're on a budget or don't mind sacrificing quality in AAA titles, we'd recommend finding a better balance between a higher-refresh monitor, a mid-tier card like a GeForce GTX 1660 Ti, and maybe an extra keyboard for when you wear down the WASD keys on the first one!

Overclocking and Thermals: The New Cool

This is where a significant chunk of rubber meets the road for Nvidia, as the company looks to be banking big on its new compact PCB and cooling-solution design. Though we can't get into specifics just yet, what we can say is that in looking at other add-in-board (AIB) partner cards that are due for their turn down the testing pipeline, Nvidia seems to be the only one approaching the problem of cooling the RTX 3080 differently this time around. Many of the coming AIB cards are about as traditional as they come, with big honking heatsinks, even bigger PCBs, and enough fans to keep a Texas church cool in August. So, how did the Founders Edition's new cooling system fare when its feet were set to the (GPU) fire?

Nvidia Pullthrough Design (Credit: Nvidia)

We ran a 10-minute stress test in FurMark on the Nvidia GeForce RTX 3080 Founders Edition, and the reference card peaked at a temperature of 77 degrees C. This is pretty chill for a card at this level of power, but still quite a bit hotter than what we saw out of the original RTX 2080 Founders Edition, which peaked at just 66 degrees C in our testing.

We also attempted to use a thermal camera to track where the air and heat were collecting in our case but...there's a problem with that. We use a FLIR One Pro to take our measurements, which is a great thermal camera when you've got a direct line of sight with what you're trying to measure. But when you obscure it with something like, say, a glass pane on the side of a computer case, the image gets way fuzzier, as you can see...

FLIR thermal images: front, back, and case

Keeping the case side on is a necessity, though. Much of the airflow "pathing" that Nvidia is keen on with this latest style of cooler doesn't work unless the case is closed up tight and every fan is tuned to send air in very specific directions. So even after I swapped some fans around and turned them all to full speed in the BIOS, the resulting heat map of where air comes in and where it leaves the case is obscured by the fact that the case has to be closed to be effective in the first place. Thus the fuzzy imaging.

When it came time to overclock the card, using EVGA's Precision X1 utility, I was able to achieve a stable and reproducible overclock profile of around 10 percent across both the memory clock and the GPU clock. This translated to gains of just about 5 percent in gaming and synthetic benchmarks (title-dependent), though the overclock remained highly stable across multiple tests. With so much tuning happening under the hood of the RTX 3080 Founders Edition, I'm not surprised to see so little headroom here, replicating much of the same story we saw across the Turing line, including in the Founders Edition versions we tested of the GeForce RTX 2080 Ti, the GeForce RTX 2080 Super, and the GeForce RTX 2080.

Nope, Nvidia Is NOT Messing Around This Time

If the GeForce RTX 3080 Founders Edition is any indication, the launch of the new GeForce RTX line of Nvidia graphics cards looks poised to be nothing short of a blowout, one that sets a new expectation of performance that (given the lack of leaks) we can only imagine AMD's coming Radeon cards might have a difficult time living up to. Never mind AMD, really; even Nvidia's previous Turing efforts look like they pale in comparison, based on what we've seen from this initial Ampere offering.

The thing is, as good as the various Turing cards have been in their time as raw performers, some aspects of Turing have felt unfulfilled since its launch. Example: The flagship feature of the original RTX line, ray tracing, is just barely (and I mean barely) seeing wider adoption among game developers, two years after the initial release of the 20-Series. Nvidia was excited to debut a brand-new type of gaming technology but put the cart a little too far ahead of the horse for most gamers' palates.

Nvidia GeForce RTX 3080 Founders Edition

Now, however, RTX (and by extension, DLSS) are starting to come into their own, and the pricing of Nvidia's new line of cards is following suit. These two factors combined make the launch of the 30-Series far more compelling and enticing to a greater swath of gamers than the 20-Series was. A third factor could well be the mainstreaming of, and price drops on, high-refresh-rate gaming monitors. With Ampere, it feels like now, here in 2020, the next decade of PC gaming is starting to come together: The technology for fast game loading is emerging; the horsepower for high refresh rates (above 60Hz) is moving toward becoming a new expectation, rather than a niche; and AI tech is starting to supplement the raw pixel-pushing performance of GPUs.

In its marketing materials, Nvidia is calling the new RTX 3000 Series its "greatest generational leap," and although that was a mighty claim going in, we are a lot more convinced about it coming out of our benchmark suite. At this price, and at this level of power, and with this temperature profile, the RTX 3080 Founders Edition is exceeding all expectations set for it ahead of time (save for some middling gains in 1080p and 1440p results), and setting new frame-rate targets that all competing GPUs will need to aspire to on the road forward.

In short, the GeForce RTX 3080 Founders Edition is among the most beautifully designed, well-supported, and uniquely powerful GPUs ever released. It's a sea change in modern graphics-card manufacturing, and it represents a new era for a venerable company that shows no signs of slowing down anytime soon. You want the best that 4K gaming has to offer in the second half of 2020? Then you want the Nvidia GeForce RTX 3080 Founders Edition. Simple as that.

Editors' Note: Joseph Maldonado assisted with the testing of the legacy video cards for this review.

Nvidia GeForce RTX 3080 Founders Edition
4.5

Editors' Choice

Pros
  • Chart-topping performance for the price

  • Whisper-quiet operation

  • Innovative cooling system

  • Gorgeous shroud design

  • Tons of complimentary software features

The Bottom Line

If you’re a current (or aspiring) 4K gamer, Nvidia's ferocious, field-redefining GeForce RTX 3080 graphics card is the only one worth considering.

Nvidia GeForce RTX 3080 Founders Edition Specs
Graphics Processor: Nvidia Ampere GA102
GPU Base Clock: 1440 MHz
GPU Boost Clock: 1710 MHz
Graphics Memory Type: GDDR6X
Graphics Memory Amount: 10 GB
DVI Outputs: 0
HDMI Outputs: 1
DisplayPort Outputs: 3
VirtualLink Outputs: No
Number of Fans: 2
Card Width: Double
Card Length: 10.5 inches
Board Power or TDP: 320 watts
Power Connector(s): 1 12-pin