
Comparison of Processor Dependency between GeForce and Radeon in DX12

 

Good day to all! Without a long preface: what prompted this comparison was a video from the popular Australian YouTube channel Hardware Unboxed, which compared the processor dependency of Radeon and GeForce video cards in low-level APIs (DX12, Vulkan) and arrived at results that surprised many.

Since the policy of our publication is truth in all its forms and manifestations, we decided to check this claim directly, comparing Radeon and GeForce video cards head-to-head with the same processor, a Ryzen 5 3600. Our two benches, however, have different RAM with different overclocking potential, and the video cards are, to put it mildly, of different classes: an RTX 3090 and a Radeon RX Vega 64 Liquid Cooling (hereafter LC). So we resorted to a trick: we fixed the CPU frequency at 3 GHz and set the RAM to 3000 MHz with the timings from the XMP profile, so that the RX Vega 64 LC would not become the limiting factor. As a result, the two processors performed almost identically, with differences within the margin of error.

So, test benches: 

1.

Processor: AMD Ryzen 5 3600X (Matisse, 32 MB L3), 6 cores/12 threads, fixed at 3000 MHz

Video card: Palit GamingPro RTX 3090, 1830/22000 MHz

Motherboard: ASRock B450 Pro4, AM4

RAM: Crucial Ballistix Tactical 2×8 GB, 3000 MHz CL16

Storage: NVMe A-Data XPG SX8200 Pro, 2 TB

 

2. 

Processor: AMD Ryzen 5 3600 (Matisse, 32 MB L3), 6 cores/12 threads, fixed at 3000 MHz

Video card: AMD Radeon RX Vega 64 Liquid Cooling, ~1700/1108 MHz (+timings)

Motherboard: ASUS TUF B450-Pro Gaming, AM4

RAM: Crucial Ballistix Sport LT 2×8 GB, 3000 MHz CL15

Storage: NVMe Samsung 970 EVO Plus, 512 GB

 

With these adjustments, our "trimmed-down" setups look like this:

This R5 3600 is paired with the RX Vega 64 LC:


[Screenshots: cachemem, CPU-Z]

 

And this one is paired with the RTX 3090:

[Screenshots: cachemem, CPU-Z]

 

In the end, even with this "trimmed-down" R5 3600, we could not avoid running into the limits of the RX Vega 64 LC at minimum resolution with maximum graphics settings, so we decided to test at minimum settings. This suits our purpose just fine, because the goal is not to measure absolute processor performance, but to check how much performance the same processor can deliver with video cards from the two competing camps. At maximum settings we only managed to put the bald man through his paces... ahem, to test Hitman 3, and then only in one location, on a static frame that loads the processor the most.

1. Hitman 3 - Mission "Mendoza, Argentina"

Settings: 

[Screenshot: graphics settings]

The measurements themselves:

RX Vega 64 LC: [screenshot]

RTX 3090: [screenshot]

[Chart: Hitman 3 results]

And immediately we are struck by a rather tangible difference of 22.4%. 
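For reference, the percentage differences quoted throughout this review can be read as the relative FPS advantage of the Radeon result over the GeForce result. A minimal sketch of that calculation (the FPS numbers here are purely hypothetical, chosen only to illustrate the formula; the review's screenshots held the real values):

```python
def percent_diff(fps_radeon: float, fps_geforce: float) -> float:
    """Relative advantage of the Radeon result over the GeForce result, in %."""
    return (fps_radeon - fps_geforce) / fps_geforce * 100.0

# Hypothetical example: 120 FPS on the Radeon vs 98 FPS on the GeForce
print(round(percent_diff(120, 98), 1))  # prints 22.4
```

Note the choice of base: dividing by the GeForce figure is our assumption; a percentage relative to the lower of the two values would give slightly different numbers.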

2. Horizon Zero Dawn

Settings: Minimum, 720p, Minimum Resolution Scale

RX Vega 64 LC: [screenshot]

RTX 3090: [screenshot]

[Chart: Horizon Zero Dawn results]

Here we see a difference of 11.7%.

3. Forza Horizon 4 Demo

Settings: everything at minimum, 720p

RX Vega 64 LC: [screenshot]

RTX 3090: [screenshot]

[Chart: Forza Horizon 4 results]

Here the difference was 21.1%.

Now let's look at a couple of games released "under the patronage" of Nvidia. Will the situation change?

4. Shadow of the Tomb Raider

Settings: minimum, 720p, minimum resolution scale

RX Vega 64 LC: [screenshot]

RTX 3090: [screenshot]

[Chart: Shadow of the Tomb Raider results]

And here we immediately see a noticeable improvement: the difference is a modest 4.6%, which, in our opinion, is within the normal range.

5. Watch Dogs Legion

Settings: minimum, 720p, minimum resolution scale, FOV 110

RX Vega 64 LC: [screenshot]

RTX 3090: [screenshot]

[Chart: Watch Dogs Legion results]

Well, what can we say: the difference is no more and no less than 22.1%.

We certainly did not expect this: of all five games tested, the biggest difference was recorded in a game bearing the Nvidia logo (along with Hitman 3, which showed practically the same difference). It also seemed strange that in the only game in our review with the AMD logo, Horizon Zero Dawn, the difference turned out to be quite modest compared to what we saw in the "green" Watch Dogs Legion...

Let's try to comment on all this. Of course, we won't pretend we haven't known about this situation for the past six years (since the first DX12 games appeared), but it always seemed strange to us that no one ever touched on it: not a single blogger, not a single publication (except, of course, GameGPU, but they publish dry numbers, and not everyone studies the charts closely enough to notice the trend). Yet at one time everyone discussed the high processor dependency of Radeon in DX11, where it really existed and was confirmed by tests more than once. Why was it actively discussed then, and seemingly deliberately kept quiet now? In our opinion, this is fundamentally the wrong approach: if a problem exists, it needs to be solved, and old Jen-Hsun knows how to solve problems like no one else.
It's time to realize that the era of low-level APIs has arrived and the tables have turned, yet to this day many inattentive publications and bloggers continue to recommend GeForce for weak processors and to repeat the claim about the high processor dependency of Radeon, whether out of ignorance or for some other reason unknown to me.

I would not call this a fatal problem, as it was for Radeon in DX11, provided you have a reasonably capable processor. But if you have an unbalanced build with a weak processor (FX, an ancient Xeon, a 4-thread chip, etc.) and you plan to pair it with a powerful video card, you should at least check which API is used by the games the card is being bought for. If a game is CPU-demanding and runs on DX11, then with 99% probability GeForce is the better choice, even if it trails the competing Radeon in pure performance with a top processor: you will mostly be CPU-limited anyway, and in such situations Nvidia in DX11 will usually give you more FPS. If it is DX12 (we will get to Vulkan in the next review), then Radeon will be the better choice for a weak processor. And our review is the best confirmation of this.
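The rule of thumb above can be condensed into a toy sketch. This is only an encoding of the article's own recommendation for CPU-limited builds, not a real benchmarking tool; the function name and string labels are our invention:

```python
def recommend_gpu(api: str) -> str:
    """Toy encoding of this review's rule of thumb for weak-CPU builds."""
    api = api.strip().lower()
    if api == "dx11":
        return "GeForce"       # Nvidia's DX11 driver is lighter on weak CPUs
    if api == "dx12":
        return "Radeon"        # in DX12 the same CPU yields more FPS with Radeon
    return "undetermined"      # Vulkan and other APIs are left for the next review

print(recommend_gpu("DX12"))  # prints Radeon
```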

CONCLUSION:

Let's summarize the essence of the article and the tests: in DX12, the same processor delivers the highest FPS with AMD Radeon video cards, and the difference can easily exceed 20%. This should be taken into account first of all by owners of high-refresh-rate monitors and owners of weak or mediocre processors. If you hear yet another blogger claim that Radeons are more processor-dependent than GeForces, then at best he is simply not in the know (though how could that be, if this has been going on for six years!), and at worst he is either incompetent or an interested party.
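To put the five results in one place, here is a simple aggregation of the differences already quoted above (equal weighting per game is our own choice; the review itself does not compute an average):

```python
# Differences reported in this review (Radeon advantage over GeForce, %)
results = {
    "Hitman 3": 22.4,
    "Horizon Zero Dawn": 11.7,
    "Forza Horizon 4": 21.1,
    "Shadow of the Tomb Raider": 4.6,
    "Watch Dogs Legion": 22.1,
}

average = sum(results.values()) / len(results)
print(f"average advantage: {average:.1f}%")  # prints average advantage: 16.4%
```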

I repeat: Radeons are more processor-dependent in DX11. In DX12 the situation is reversed. The situation in Vulkan and other APIs (some already outdated, but covered for general educational purposes) will be analyzed later.

I hope this review helped someone and all this was not in vain. Thank you for your attention and understanding!