AMD Radeon R9 295X2 Review: A Dual-GPU Beast


We first caught wind of an upcoming dual-GPU Hawaii graphics card this time last month when AMD teased us with its top-secret “Two is Better Than One” campaign. Although AMD didn’t actually reveal anything, it was clearly planning a successor to the Radeon HD 7990, which is essentially two Tahiti dies on a single board, or in other words a pair of slightly underclocked Radeon HD 7970 GHz Edition GPUs.

Back when we tested the 7990 in April 2013, it was a formidable rival for the GeForce GTX Titan. The biggest problem the card faced was AMD’s frame latency performance, which was quite poor at the time, especially compared to a single-GPU solution like the Titan. The 7990 also suffered from enormous power consumption figures compared to the Titan, as we found it pulled almost 40% more power.

Nonetheless, putting a pair of 28nm GPUs — each containing 4.3 billion transistors — onto a single PCB measuring just 12in (30cm) long was an impressive feat of engineering. In fact, this is what made the single-GPU flagship R9 290X even more impressive last October. Although it was built using the same 28nm process, Hawaii XT packs 6.2 billion transistors, blowing the die size up to 438mm2 from 352mm2.

That expansion allows for 38% more SPUs than the HD 7970 GHz Edition, though it also makes the 290X 20% more power hungry, giving it a TDP of roughly 300 watts! The card's enormous power draw resulted in a huge thermal output, so much so that R9 290Xs often throttled just to maintain stability. Realising this, we weren't sure if AMD was seriously considering two Hawaii XT GPUs on a single PCB.

Apparently so, as today marks the arrival of the Radeon R9 295X2, the most extreme graphics card we have ever seen. It's hard not to be impressed when specs like 12.4 billion transistors, 5632 stream processors and 11.5 TFLOPS of compute power are being thrown around, not to mention the card's 8GB of GDDR5 memory and dual 512-bit memory buses, which provide a total memory bandwidth of 640GB/s.

AMD says that this graphics card is “not for the faint of heart” and that users should “handle with extreme caution.” The company faced two main challenges in developing the R9 295X2: keeping it cool and keeping it fed. The former is tackled by a dual-block closed-loop liquid cooler made by Asetek, and with a 500-watt thermal design power, the latter is more like a rite of passage for your power supply.

Measuring a motherboard-bending 12in (30cm) long, the R9 295X2 is roughly 3cm longer than the R9 290X. It's also a very heavy graphics card, featuring full metal construction including the backplate and fan shroud.

Speaking of the fan, we never expected such an insane graphics card would be cooled via a single fan and, well, it’s not. In fact the fan is used only to cool the GDDR5 memory and power regulators.

The GPUs are cooled using a pair of Asetek liquid cooling blocks, but we will get to those shortly. For now, let's check out the card's specifications…

The card's GPU cores are clocked at up to 1018MHz and it's the same deal as the R9 290X: if the thermal load reaches 95°C, the clock speed will be throttled down to keep temperatures in check. AMD hasn't said how low the GPUs will underclock, so we will have to look at this when testing.
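To make the behaviour concrete, here is a minimal sketch of temperature-triggered throttling. The 1018MHz boost clock and 95°C limit come from AMD's stated specs; the step size and clock floor are invented for illustration, since AMD hasn't published those figures.

```python
# Illustrative sketch of temperature-triggered clock throttling.
# BOOST_MHZ and LIMIT_C are AMD's published figures; STEP_MHZ and
# FLOOR_MHZ are hypothetical values chosen for this example.

BOOST_MHZ = 1018   # advertised "up to" core clock
FLOOR_MHZ = 727    # hypothetical lowest throttle state
STEP_MHZ = 13      # hypothetical adjustment per control interval
LIMIT_C = 95       # thermal limit, per AMD

def next_clock(current_mhz, temp_c):
    """Step the clock down while at the limit, back up below it."""
    if temp_c >= LIMIT_C:
        return max(FLOOR_MHZ, current_mhz - STEP_MHZ)
    return min(BOOST_MHZ, current_mhz + STEP_MHZ)
```

The point of the sketch is simply that the advertised clock is a ceiling, not a guarantee: under sustained load the control loop converges on whatever clock the cooler can actually sustain.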

Given the inherent problems AMD had trying to stop the R9 290X from throttling down while gaming, we failed to see how it could keep two of those GPUs in check.

What do you do when dealing with connector-melting levels of current? Add water of course.

Traditionally, GPUs are cooled using a heatsink made up of a copper and aluminium cocktail, while higher-end models have adopted performance-enhancing features such as vapor chambers and heatpipes.

Unfortunately for AMD, given the space it has to work with, it just isn't possible to fit a big enough heatsink, certainly not one capable of dispersing 500 watts of heat. No amount of heatpipes is going to help with that.

Water conducts heat roughly 24 times better than air, and closed-loop liquid cooling has become increasingly popular over the past few years, to the point where it has started to become a mainstream product.

At the forefront of liquid cooling, including the popular closed-loop systems, is Asetek, so it comes as little surprise that AMD tasked the company with designing a solution to cool the R9 295X2.

Installing a graphics card is typically quick and easy, so whatever Asetek came up with had to be in keeping with this. Although it is highly unusual for a graphics card to come with a radiator attached as standard (in fact, this is the first time we have ever seen it), we feel Asetek has come up with a relatively practical solution for what is otherwise a highly impractical graphics card.

The solution is simple enough: a dual-block closed-loop system circulates through a 120mm radiator that is 38mm thick, though 64mm of space will be needed once the fan is attached. More importantly there is plenty of hose separating the graphics card and the radiator, 380mm in total, which should be enough to reach most places around the case.

As we mentioned earlier heat from memory and regulators is dissipated by a separate heatsink and fan. The memory/regulator fan will automatically adjust based on regulator current.

Now that we have seen how AMD is keeping the R9 295X2 cool, there is also the matter of powering it. Power is fed into the card through a pair of 8-pin PCIe connectors, but with a 500W TDP graphics card, a little more thought needs to go into how and where the cables are connected.

First and foremost, the power supply needs to support at least two 8-pin PCIe power connectors that are each capable of supplying 28A of dedicated current. The system power supply must support a combined 50A of current over the two 8-pin power connectors, in addition to providing power requirements for other components.
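As a sanity check on those figures, the arithmetic works out roughly as follows. The combined 50A requirement is AMD's; the 75W slot allowance is the standard PCIe x16 value and is an addition of ours for illustration.

```python
# Rough power-budget arithmetic for the R9 295X2's requirements.
RAIL_VOLTS = 12
COMBINED_AMPS = 50    # AMD's required combined current, both 8-pins
SLOT_WATTS = 75       # standard PCIe x16 slot allowance (assumed here)
TDP_WATTS = 500

connector_watts = COMBINED_AMPS * RAIL_VOLTS   # 600 W over the connectors
total_watts = connector_watts + SLOT_WATTS     # 675 W available in total
headroom = total_watts - TDP_WATTS             # 175 W of margin
```

In other words, AMD's 50A requirement leaves only modest margin above the card's 500W rating, which is why the per-cable rail layout matters.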

For a dual-card installation, a second pair of 8-pin power connectors is required, along with additional current capacity. It isn't recommended to use power adapters or dongles for the 8-pin power connectors.

If your PSU doesn't have a single +12V rail (the PC Power & Cooling Silencer Mk III 1200W unit we used does), then it is important to work out how the +12V rails are distributed among the physical cables on the PSU. Thankfully, most enthusiast PSUs now feature a single +12V rail, which makes this step easy.

While it's a safe bet to pair a 1000 to 1200 watt power supply with a graphics card such as the R9 295X2, which is rated for 500W, there are less extreme units that will work. The Cooler Master GX 750W, for example, boasts a 60A single +12V rail, though you will have to be careful about what else is plugged in, as there is little headroom to work with.

As usual, we tested each card with Fraps to record its average frame rate over a set amount of time; we typically run tests for 60 seconds. Reporting the average frames per second is how things have been done for… well, forever. It's a fantastic metric in the sense that it's easy to record and easy to understand, but it doesn't tell the whole story, as The Tech Report and others have shown.

To get a fuller picture, it's increasingly apparent that you need to factor in a card's frame latency, which looks at how quickly each frame is delivered. Regardless of how many frames a graphics card produces on average over 60 seconds, if it can't deliver them all at roughly the same pace, you may see brief jittery moments on one GPU that you don't on another, something we've witnessed but hadn't fully understood.

Assuming two cards deliver equal average frame rates, the one with the lowest, most stable frame latency is going to offer the smoothest picture, and that's a pretty important detail to consider if you're about to drop a wad of cash. As such, we'll be including this information from now on by measuring how long in milliseconds it takes each card to render every individual frame, then graphing that data in a digestible way.

We'll be using the latency-focused 99th percentile metric, which reports the time within which 99% of frames were rendered; the lower that number, the faster and smoother the overall performance. By removing the most extreme 1% of results, it's possible to filter out anomalies that might have been caused by other components. Kudos to The Tech Report and other sites like PC Perspective for shining a light on this issue.
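As a concrete illustration of the metric (a sketch of the idea, not the exact method any site uses), here is how the average frame rate and the 99th percentile frame time can be derived from a log of per-frame render times:

```python
import math

def summarize(frame_times_ms):
    """Return (average fps, 99th percentile frame time in ms)."""
    total_s = sum(frame_times_ms) / 1000.0
    avg_fps = len(frame_times_ms) / total_s
    ranked = sorted(frame_times_ms)
    # The time within which 99% of frames were delivered;
    # the worst 1% of frames are excluded as anomalies.
    idx = max(0, math.ceil(0.99 * len(ranked)) - 1)
    return avg_fps, ranked[idx]
```

For example, 99 smooth 10ms frames plus a single 50ms hitch barely dent the average fps, and the lone hitch falls in the discarded 1%; but if hitches make up more than 1% of frames, they show up directly in the 99th percentile number.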

The R9 295X2 averaged 135fps at 2560×1600 when playing Battlefield 3, 99% faster than a single R9 290X, which is almost perfect scaling. Even compared to a pair of GTX 780s in SLI, the R9 295X2 was still 22% faster, while it was 75% faster than the GTX 780 Ti.

Compared to the previous-generation dual-GPU graphics cards, the R9 295X2 was an impressive 42% faster than the HD 7990 and 48% faster than the GTX 690.
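For reference, the "X% faster" figures throughout are straightforward ratios of average frame rates. The single-card fps value in the example below is inferred from the percentages quoted above rather than separately measured.

```python
def pct_faster(fps_a, fps_b):
    """Percentage by which fps_a exceeds fps_b, rounded."""
    return round((fps_a / fps_b - 1) * 100)

# 135 fps for the R9 295X2 vs roughly 68 fps for a single
# R9 290X (inferred) reproduces the ~99% scaling figure quoted.
print(pct_faster(135, 68))  # prints 99
```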

Frame time performance isn’t an issue for the R9 295X2 in Battlefield 3 as it only took 9.9ms between frames, which is faster than anything else we tested.

Those of you who have moved on to Battlefield 4 will be happy to see that the R9 295X2 offers some pretty impressive results here as well. With an average of 67fps at 2560×1600 it was the fastest graphics card tested, beating the GTX 780 SLI combo by a 27% performance margin.

Therefore, it came as no surprise that the R9 295X2 was 89% faster than a single GTX 780 Ti, the world’s fastest single-GPU gaming graphics card. Compared to the R9 290X, the R9 295X2 was 74% faster and 41% faster than the HD 7990.

Again, we see that frame time performance isn’t an issue here either as the R9 295X2 took just 24.2ms between frames, making it faster than the R9 290X and GTX 780 Ti.

Steven Walton is a writer at TechSpot. TechSpot is a computer technology publication serving PC enthusiasts, gamers and IT pros since 1998.

