lemurbutton - Friday, June 10, 2022 - link
Again, not interesting.
M1 Ultra is beating the pants off both Nvidia and AMD in perf/watt.
Che0063 - Friday, June 10, 2022 - link
Yeah, let's post a comment in response to news.
Why do people like you feel the need to be so vocally negative? If it's not interesting, then leave. This article was meant to be informative, not groundbreaking.
Kangal - Friday, June 10, 2022 - link
Didn't you hear? He is not interested. :P
DannyH246 - Friday, June 10, 2022 - link
Agreed. Your comment clearly would not be interesting to him.
Khanan - Friday, June 10, 2022 - link
There are really people who defend this delusional troll? That's sad.
brucethemoose - Saturday, June 11, 2022 - link
Something about consumer CPUs/GPUs in general drives commenters to be quite tribal and negative.
You don't see many negative comments about AI accelerator or network switch or exotic memory coverage here.
Fulljack - Friday, June 10, 2022 - link
If I have a workstation with access to 1200W of power, I couldn't care less about the performance of a mobile chip that can only push as high as 300W.
mukiex - Friday, June 10, 2022 - link
I mean, what are you using a $4,000 M1 Ultra for? If it's Blender rendering, it will be beaten, on the Metal back-end, by a laptop 1050 Ti on OptiX or a Radeon 6800 XT on Metal. The latter draws 300W, but you could buy a lot of them for the $2,000 *extra* that the M1 Ultra costs over the M1 Max. It gets embarrassing once you look at your options in gaming.
The M1 Ultra, at the moment, is a solution in search of a problem, or a solution terribly kneecapped by a problem: the thorough lack of need for a high-end GPU anywhere in Apple's ecosystem.
Khanan - Friday, June 10, 2022 - link
Correct, it's a good product but also way too expensive. Something which only Apple can afford; anyone else would've been called delusional.
Khanan - Friday, June 10, 2022 - link
The first comment is, as per usual, complete trash.
mode_13h - Sunday, June 12, 2022 - link
> M1 Ultra is beating the pants off both Nvidia and AMD in perf/watt.
As a 5 nm chip, it should be compared against other 5 nm chips. So, you'll have to wait for RDNA 3 and Lovelace mobile GPUs, if you want to know which architecture is truly more efficient.
Note: I say *mobile* GPUs, because the Ultra is using what's essentially a laptop chip. Desktop GPUs operate well above their peak efficiency point. So, it's not until we have them in mobile form that we can make meaningful efficiency comparisons.
Kangal - Thursday, June 30, 2022 - link
This is true.
In the end, it'll come down mainly to "Nvidia's mature support" versus "Apple's vertical optimisations". Apple's solution should win out in the long run, but it will be very interesting to see how far away that is. Still, we have to compare products that are available in the here and now, and plot performance against battery life.
So I'm mostly curious about a direct/indirect comparison:
Resident Evil 8, Metal3, macOS 13/Ventura, M2 Max chipset, 32cu iGPU, 32GB uniRAM
Resident Evil 8, Vulkan 1.3, Windows10 Pro, AMD r7-6800H, RX-6800S-8gb, 16GB RAM
Resident Evil 8, DirectX 12.5, Windows11, Intel Core i7-1280P, RTX-3070Q-8gb, 16GB RAM
e.g. 2022 16in MacBook Pro vs. 15in ASUS Zephyrus G15 vs. 17in Razer Blade
Khanan - Friday, June 10, 2022 - link
Again that annoying “RNDA” typo, this time in an article. :D
brucethemoose - Saturday, June 11, 2022 - link
Would this imply big driver changes too?
I hear tiled rendering GPUs are easier to split up and tend to conserve memory (and therefore interconnect??) bandwidth, but AFAIK that's not what the AMD drivers do now.
Surely this will manifest in the Linux drivers too, and what they change will be public.
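Purely as an illustration of the binning idea above, and not AMD's actual DSBR or driver logic: the toy CPU-side sketch below sorts triangles into screen tiles, and each tile can then be shaded out of a small on-chip buffer and written back once. That single write-out per tile is where the memory/interconnect bandwidth saving comes from, and the per-tile independence is what makes splitting the work across dies natural. All triangle data and tile sizes here are made up.

```c
/* Toy tile-binning sketch (illustrative only, not AMD's DSBR or driver code). */
#include <stdio.h>

#define SCREEN_W 256
#define SCREEN_H 256
#define TILE     32
#define TILES_X  (SCREEN_W / TILE)
#define TILES_Y  (SCREEN_H / TILE)

typedef struct { float x0, y0, x1, y1, x2, y2; } Tri;

static float minf(float a, float b) { return a < b ? a : b; }
static float maxf(float a, float b) { return a > b ? a : b; }

int main(void) {
    /* Hypothetical input triangles, just for the example. */
    Tri tris[] = {
        {  10,  10, 120,  20,  60, 100 },
        { 130, 140, 250, 150, 200, 240 },
        {   5, 200,  90, 210,  40, 250 },
    };
    int bin_count[TILES_Y][TILES_X] = {0};

    /* Binning pass: assign each triangle to every tile its bounding box touches. */
    for (int i = 0; i < (int)(sizeof tris / sizeof tris[0]); ++i) {
        Tri t = tris[i];
        int tx0 = (int)minf(minf(t.x0, t.x1), t.x2) / TILE;
        int ty0 = (int)minf(minf(t.y0, t.y1), t.y2) / TILE;
        int tx1 = (int)maxf(maxf(t.x0, t.x1), t.x2) / TILE;
        int ty1 = (int)maxf(maxf(t.y0, t.y1), t.y2) / TILE;
        for (int ty = ty0; ty <= ty1 && ty < TILES_Y; ++ty)
            for (int tx = tx0; tx <= tx1 && tx < TILES_X; ++tx)
                bin_count[ty][tx]++;
    }

    /* "Render" pass: each tile is independent, so tiles could be handed to
     * different shader engines -- or different chiplets -- with no cross-traffic
     * except the final write of the finished tile to memory. */
    for (int ty = 0; ty < TILES_Y; ++ty)
        for (int tx = 0; tx < TILES_X; ++tx)
            if (bin_count[ty][tx])
                printf("tile (%d,%d): %d triangle(s)\n", tx, ty, bin_count[ty][tx]);
    return 0;
}
```

In a real binning rasterizer the per-tile lists carry full primitive and state references, but the bandwidth argument is the same: pixels stay in on-chip tile memory until the tile is finished.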
mode_13h - Sunday, June 12, 2022 - link
All I know about that is that Vega first added support for tiled rendering, with the DSBR. I'm not sure whether RDNA has some version of that engine, or if it uses a different approach.
However, for Infinity Cache to work as well as it does, one would expect they're doing TBDR.
Khanan - Sunday, June 12, 2022 - link
I don't think so. I bet they are trying hard to make this GPU work and be seen by the driver as a single chip, so any driver optimizations would be towards the new architecture, not the chiplet design per se.
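As a rough sketch of what "seen by the driver as a single chip" would mean on the application side (generic Vulkan, nothing specific to AMD's driver): if the chiplet GPU is exposed as one device, a standard enumeration loop like the one below simply reports a single VkPhysicalDevice, so applications need no awareness of the dies underneath.

```c
/* Generic Vulkan enumeration sketch (not AMD-specific). A multi-chiplet GPU
 * presented by the driver as a single device appears here as one
 * VkPhysicalDevice, so applications need no special handling. */
#include <stdio.h>
#include <vulkan/vulkan.h>

int main(void) {
    VkInstanceCreateInfo ci = { .sType = VK_STRUCTURE_TYPE_INSTANCE_CREATE_INFO };
    VkInstance instance;
    if (vkCreateInstance(&ci, NULL, &instance) != VK_SUCCESS) {
        fprintf(stderr, "vkCreateInstance failed\n");
        return 1;
    }

    uint32_t count = 0;
    vkEnumeratePhysicalDevices(instance, &count, NULL);   /* query device count */

    VkPhysicalDevice devs[8];
    if (count > 8) count = 8;
    vkEnumeratePhysicalDevices(instance, &count, devs);   /* fetch device handles */

    for (uint32_t i = 0; i < count; ++i) {
        VkPhysicalDeviceProperties props;
        vkGetPhysicalDeviceProperties(devs[i], &props);
        /* One line per GPU the driver chooses to expose: a chiplet part built
         * to look like "a single chip" prints a single line here. */
        printf("physical device %u: %s\n", i, props.deviceName);
    }

    vkDestroyInstance(instance, NULL);
    return 0;
}
```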
Victor_Voropaev717 - Tuesday, June 14, 2022 - link
I think it is interesting.