A Look Into Next Gen: AMD CES Keynote Thread

#51 Edited by ronvalencia (26633 posts) -

@loco145 said:
@ronvalencia said:
@loco145 said:

So, the price and performance of a 2080 with significantly higher power consumption and without the tensor and ray tracing cores? Meh.

VII's AI instructions are integrated with the main compute units like Tegra X1's SMs.

Which means that if it uses them for AI then they are unavailable for anything else.

The argument is similar to discrete vs. unified shaders. The RTX 2080 also has lower CUDA-core FMAD throughput, i.e. ~11 TFLOPS at its ~1900 MHz stealth boost clocks.

#52 Posted by loco145 (12075 posts) -

@ronvalencia said:
@loco145 said:
@ronvalencia said:
@loco145 said:

So, the price and performance of a 2080 with significantly higher power consumption and without the tensor and ray tracing cores? Meh.

VII's AI instructions are integrated with the main compute units like Tegra X1's SMs.

Which means that if it uses them for AI then they are unavailable for anything else.

The argument is similar to discrete vs. unified shaders. The RTX 2080 also has lower CUDA-core FMAD throughput, i.e. ~11 TFLOPS at its ~1900 MHz stealth boost clocks.

Yet, the Radeon VII is projected by AMD to perform about on par in actual games, where only the regular CUDA cores are being used. Nvidia has been much better at utilizing its (on-paper) flops than AMD for a long time now.

#53 Edited by ronvalencia (26633 posts) -

@loco145 said:
@ronvalencia said:
@loco145 said:
@ronvalencia said:

VII's AI instructions are integrated with the main compute units like Tegra X1's SMs.

Which means that if it uses them for AI then they are unavailable for anything else.

The argument is similar to discrete vs. unified shaders. The RTX 2080 also has lower CUDA-core FMAD throughput, i.e. ~11 TFLOPS at its ~1900 MHz stealth boost clocks.

Yet, the Radeon VII is projected by AMD to perform about on par in actual games, where only the regular CUDA cores are being used. Nvidia has been much better at utilizing its (on-paper) flops than AMD for a long time now.

Vega 56 at 1710 MHz (~400 watts) with 12 TFLOPS already beats the RTX 2070. This is why I'd like to see VII with 56 CUs at an 1800 MHz clock speed instead, i.e. reduce the CU count while increasing the clock speed.

I'd rather see a 44 CU setup with a 2 GHz clock speed and 128 ROPs.
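The TFLOPS figures traded back and forth here all fall out of the standard FMAD arithmetic (64 shader lanes per GCN CU, 2 FLOPs per lane per cycle for a fused multiply-add). A quick sketch to check the numbers; the 60 CU configuration is the hypothetical cut-down being discussed, not a shipping part:

```python
def fmad_tflops(cu_count: int, clock_mhz: float, lanes_per_cu: int = 64) -> float:
    """Peak FP32 throughput: each lane retires one FMAD (2 FLOPs) per cycle."""
    return cu_count * lanes_per_cu * 2 * clock_mhz * 1e6 / 1e12

print(fmad_tflops(56, 1710))  # overclocked Vega 56: ~12.26 TFLOPS
print(fmad_tflops(64, 1590))  # Strix Vega 64:       ~13.03 TFLOPS
print(fmad_tflops(60, 1800))  # hypothetical 60 CU VII at 1800 MHz: ~13.82 TFLOPS
```

This is why a 1710 MHz Vega 56 lands at "12 TFLOPS" while a 1590 MHz Vega 64 lands at "13.02 TFLOPS": the extra 8 CUs outweigh the 120 MHz clock deficit on paper, which is exactly what makes the benchmark result below counterintuitive.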

#54 Edited by loco145 (12075 posts) -

The Vega 56 is nowhere near a 2070 in actual games. If you are significantly OC'ing it, then do the same with the 2070.

#55 Edited by ronvalencia (26633 posts) -

@loco145 said:

The Vega 56 is nowhere near a 2070 in actual games. If you are significantly OC'ing it, then do the same with the 2070.


Vega 56 at a 1.71 GHz clock speed with 12 TFLOPS beats both the RTX 2070 and the Strix Vega 64 (1.59 GHz with 13.02 TFLOPS).

My point is that a clock speed increase on Vega 56 improves rasterization and ROPs performance despite lower TFLOPS when compared to the Strix Vega 64.

VII with 60 CUs at 1800 MHz rivals the RTX 2080, which is why I want to see VII with 56 CUs at an 1800 MHz clock speed. TFLOPS is meaningless without factoring in the ROPs' read-write and rasterization (mass float-to-integer pixel conversion) performance.
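The fill-rate side of that argument is easy to sketch: peak pixel fill rate scales with ROP count times clock, and since Vega 56 and Vega 64 both carry 64 ROPs, the higher-clocked Vega 56 has the higher ceiling. A back-of-the-envelope sketch, assuming each ROP retires one pixel per cycle:

```python
def pixel_fill_gpixels(rops: int, clock_mhz: float) -> float:
    """Peak fill rate in gigapixels/s, assuming one pixel per ROP per cycle."""
    return rops * clock_mhz / 1000.0

# Both chips have 64 ROPs, so clock speed alone decides the fill-rate ceiling:
print(pixel_fill_gpixels(64, 1710))  # overclocked Vega 56: ~109.4 GP/s
print(pixel_fill_gpixels(64, 1590))  # Strix Vega 64:       ~101.8 GP/s
```

On this metric the 12 TFLOPS Vega 56 is ahead of the 13 TFLOPS Vega 64, which is the poster's point about clocks mattering more than shader throughput for raster-bound workloads.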

#56 Posted by R4gn4r0k (29774 posts) -
@davillain- said:
@R4gn4r0k said:

Anyone who didn't buy those overpriced RTX cards and waited is being rewarded for their patience.

Or they could've waited for a better RTX mid-range card like the latest RTX 2060? I admit, I was really impressed by what I saw from the benchmarks, and for only $350, not too shabby.

The RTX 2080, 2080 Ti, and 2070 cards only offer mild improvements over the previous gen.

The RTX 2060 seems to offer 1070 Ti performance.

The less RTX technology that is crammed into these things, the better.

#57 Posted by PC_Rocks (1589 posts) -

@R4gn4r0k said:
@davillain- said:
@R4gn4r0k said:

Anyone who didn't buy those overpriced RTX cards and waited is being rewarded for their patience.

Or they could've waited for a better RTX mid-range card like the latest RTX 2060? I admit, I was really impressed by what I saw from the benchmarks, and for only $350, not too shabby.

The RTX 2080, 2080 Ti, and 2070 cards only offer mild improvements over the previous gen.

The RTX 2060 seems to offer 1070 Ti performance.

The less RTX technology that is crammed into these things, the better.

Nah, RTX or ray tracing is good. I won't buy the cards, for the same reasons as yours, until it matures and evolves a bit more, but I'm grateful early adopters are paying so that people like me will enjoy it eventually. A technology has to start somewhere.

#58 Posted by R4gn4r0k (29774 posts) -
@pc_rocks said:

Nah, RTX or ray tracing is good. I won't buy the cards, for the same reasons as yours, until it matures and evolves a bit more, but I'm grateful early adopters are paying so that people like me will enjoy it eventually. A technology has to start somewhere.

For sure, it's great for those people willing to invest, because we are all getting ray tracing in our games in the future.

But there should've been a segment aimed at RTX enthusiasts, and there should've been a segment aimed at people who want very high resolutions or very high framerates. That is exactly why I bought a 1080 Ti, and that's exactly why I won't buy an RTX card.

#59 Posted by PC_Rocks (1589 posts) -

@R4gn4r0k said:
@pc_rocks said:

Nah, RTX or ray tracing is good. I won't buy the cards, for the same reasons as yours, until it matures and evolves a bit more, but I'm grateful early adopters are paying so that people like me will enjoy it eventually. A technology has to start somewhere.

For sure, it's great for those people willing to invest, because we are all getting ray tracing in our games in the future.

But there should've been a segment aimed at RTX enthusiasts, and there should've been a segment aimed at people who want very high resolutions or very high framerates. That is exactly why I bought a 1080 Ti, and that's exactly why I won't buy an RTX card.

I believe they did it to drive adoption and convince the devs to incorporate RT; otherwise they'd run into a chicken-and-egg problem.

#60 Edited by KungfuKitten (26046 posts) -

So can someone translate this a little? If I look to buy a mid-to-high-end gaming PC by the end of 2019, is AMD going to be a viable option in terms of price/performance? It sounds like their GPUs are still not going to be top end, but close to it, so a little shaky? And their CPUs are kicking butt?

#61 Posted by ronvalencia (26633 posts) -

@neutrinoworks said:

Sounds like this thing will still require 285 watts just like Vega 64 to get the performance being shown...

This thing is a fail.

Navi is the savior we needed

The Navi 10 flagship, not Navi 12.

#62 Posted by DaVillain- (33689 posts) -

@KungfuKitten said:

So can someone translate this a little? If I look to buy a mid-to-high-end gaming PC by the end of 2019, is AMD going to be a viable option in terms of price/performance? It sounds like their GPUs are still not going to be top end, but close to it, so a little shaky? And their CPUs are kicking butt?

When buying a GPU, mid-range or high-end, the question is always going to be: what resolution/framerate are you targeting? Me, for example, I game at 1440p/60fps+ in all of my games and I have a 1080 Ti. I also have a Ryzen 7 2700X and I love the CPU, worth every penny too.

But looking at this Radeon VII, for $700 I could find a better GPU deal than what AMD is offering. The RTX 2070 cards, however, are all priced in the $500-$600 range, but then again, the Radeon VII comes with 16GB, and it's gonna cost ya $700. Ask yourself this: do you really need all that 16GB? Since you said you're looking for a new GPU by the end of the year, I recommend you wait for actual benchmarks on the Radeon VII and compare how they stack up against RTX.

I'm more interested in how this will turn out by the end of the year, and I'm not really in the market for a new GPU myself anytime soon.

#63 Posted by loco145 (12075 posts) -

@ronvalencia: High

@ronvalencia said:
@loco145 said:

The Vega 56 is nowhere near a 2070 in actual games. If you are significantly OC'ing it, then do the same with the 2070.


Vega 56 at a 1.71 GHz clock speed with 12 TFLOPS beats both the RTX 2070 and the Strix Vega 64 (1.59 GHz with 13.02 TFLOPS).

My point is that a clock speed increase on Vega 56 improves rasterization and ROPs performance despite lower TFLOPS when compared to the Strix Vega 64.

VII with 60 CUs at 1800 MHz rivals the RTX 2080, which is why I want to see VII with 56 CUs at an 1800 MHz clock speed. TFLOPS is meaningless without factoring in the ROPs' read-write and rasterization (mass float-to-integer pixel conversion) performance.

I wouldn't expect high clocks on consoles. The boxes have to be small, so the GPUs are always underclocked compared to desktop parts.

#64 Posted by Uruz7laevatein (33 posts) -

@neutrinoworks: You do realize Nvidia and AMD measure TDP/power consumption differently? For example, a 185W/250W 1080/1080 Ti easily hits 250W+/350W+ power consumption and its thermal-throttle points (82C/92C) at stock speeds. Or, a better offender: a "95W" i7/i9 drawing more power and putting out more heat than a 16-core Threadripper CPU.

#65 Edited by ronvalencia (26633 posts) -

@loco145 said:

@ronvalencia: High

@ronvalencia said:
@loco145 said:

The Vega 56 is nowhere near a 2070 in actual games. If you are significantly OC'ing it, then do the same with the 2070.

Vega 56 at a 1.71 GHz clock speed with 12 TFLOPS beats both the RTX 2070 and the Strix Vega 64 (1.59 GHz with 13.02 TFLOPS).

My point is that a clock speed increase on Vega 56 improves rasterization and ROPs performance despite lower TFLOPS when compared to the Strix Vega 64.

VII with 60 CUs at 1800 MHz rivals the RTX 2080, which is why I want to see VII with 56 CUs at an 1800 MHz clock speed. TFLOPS is meaningless without factoring in the ROPs' read-write and rasterization (mass float-to-integer pixel conversion) performance.

I wouldn't expect high clocks on consoles. The boxes have to be small, so the GPUs are always underclocked compared to desktop parts.

For raster graphics, my point is that rasterization performance has higher priority than TFLOPS, and it's shown by the lower-TFLOPS Vega 56 at a higher 1710 MHz clock speed being superior to the higher-TFLOPS Vega 64 at a lower 1590 MHz clock speed (Strix Vega 64).

Reducing the CU count also reduces power consumption.

For consoles, MS or Sony should push AMD towards rasterization performance, since it's bottlenecking the CUs' TFLOPS.

AMD GPUs' rasterization performance hasn't been scaling with TFLOPS increases, e.g. you have a situation where an overclocked R9-390X rivals the Fury Pro.

Assuming Anandtech's 128 ROPs claim is correct, this improvement is halfway towards better rasterization performance, but it's not properly done without also increasing the raster units (the mass float-to-integer pixel conversion hardware). CrossFire Vega 64, which doubles the raster units to 8 and the ROP read-write units to 128, can beat the RTX 2080 Ti.

I'd rather see a 48 CU configuration at 1925 MHz.

#66 Edited by ronvalencia (26633 posts) -

@goldenelementxl:

The Vega graphics IP belongs to the generation 9 family. The minor ISA/IP changes within this family are reflected in the additional revision number.

  • VEGA 10 GFX9.00
  • RAVEN RIDGE GFX9.02
  • VEGA 12 GFX9.04
  • VEGA 20 GFX9.06 <-------- Vega II
  • RAVEN RIDGE 2 (PICASSO?) GFX9.09

----------

More information on VII's 128 ROPs improvements:

From http://www.reddit.com/r/Amd/comments/aei49q/post_ces_radeon_vii_details_145ghz_base_175ghz/edscpuq

The ROPs are tied to the memory controller and can essentially be doubled or halved when designing. The first GCN chip, Tahiti (7970), did not have them closely connected, but this caused some performance problems, so since then AMD has generally tied the ROPs to the memory controller.

AMD has had 8 ROPs per 64 bits of memory bus for some time now. For Fury they had 16 per memory controller, for 64 in total. Vega doubled the number of ROPs per controller to 32 but cut the number of controllers to two. Because Vega 20 has four controllers but keeps the Vega design, we now get 128 ROPs.

This is one of the differences versus Nvidia. Nvidia moved to 16 ROPs per 64-bit memory controller with Maxwell and has seen good results with that change. AMD arguably has more capable ROPs, but it is past time for them to change. Both Vega 64 and Polaris 10 are bottlenecked on ROPs to some extent.
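The ROP counts in that quote fall out of a simple controllers-times-ROPs-per-controller product. A sketch using the figures given above:

```python
def total_rops(memory_controllers: int, rops_per_controller: int) -> int:
    """Total ROPs when the ROP partitions are tied to the memory controllers."""
    return memory_controllers * rops_per_controller

print(total_rops(4, 16))  # Fiji (Fury):        4 controllers x 16 =  64 ROPs
print(total_rops(2, 32))  # Vega 10 (Vega 64):  2 controllers x 32 =  64 ROPs
print(total_rops(4, 32))  # Vega 20 (VII):      4 controllers x 32 = 128 ROPs
```

In other words, Vega 20's jump to 128 ROPs comes "for free" from widening the HBM2 interface back to four stacks while keeping Vega 10's 32-ROPs-per-controller arrangement.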

VII could be faster with Vega 64 LC's cooling solution.