A 12" tablet with Ryzen APUs please! I've pretty much ditched laptops in favor of Surface-style tablets. A Ryzen tablet at $500 would be fantastic news.
Yeah, while I still have a really powerful workstation and laptop, I know where you are coming from. I use a Surface Pro 3 exactly like this, with Linux, instead of an iPad, specifically to be able to run really powerful and professional open source software (like XEmacs, GCC, LLVM & GDB ;-) on the go: https://www.youtube.com/watch?v=uqpXJV3XKrU&t=...
Obviously you don't know much about software development if you are laughing at a person who uses XEmacs to write code. Yes, developers who prefer XEmacs are old school, but in my experience those are generally the best programmers. XEmacs has a steeper learning curve, but the ability to write macros in Lisp means that they can do some amazing things inside of a terminal. I learned long ago that open source programming tools may not look as flashy, but they are often better.
Are you talking about the originals or the X/Pro versions? If it's the originals, I think it would probably be similar, but the X/Pro versions would definitely pull ahead
Newer is not always better; just look at the P4 and Bulldozer for examples of that (not that Vega is bad, far from it). But honestly you're comparing a 15W part with something that's like 110W. Yes, efficiency and node shrinks help immensely, but there's still something to be said for higher power limits on older hardware. Now it does have support for newer modes and newer instruction sets, so in time it could get better, but I don't think it will ever really run away from the performance level of the Xbox One or PS4 in any meaningful way. All that being said, you'll have all of that performance in a thin and light that you can travel with.
https://www.anandtech.com/show/7528/the-xbox-one-m...
Vega is really no faster, clock for clock, than the previous generation.
Also, this APU doesn't have nearly the bandwidth that the PS4 has, nor the ESRAM cache that the XB1 has, so GPU performance will not match up to those.
That's only true when comparing legacy software under both platforms. There are pretty substantial IPC gains, but they require optimization. It's the same reason MMX, SSE, 3DNOW, etc, all had pretty substantial benefits to programs that utilized them, and those were simply extensions of current architecture. Zen is an entirely new x86-compatible model.
Vega is much faster than previous generations, especially when compared to GCN 1.0, and especially in bandwidth-constrained scenarios where AMD's Delta Colour Compression and improved culling come into play.
DDR4-2400 is one such bandwidth-constrained scenario. AMD should have thrown DDR4-3200 or faster support at the problem.
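For context, a rough back-of-envelope of the peak bandwidth those memory speeds imply, assuming standard 64-bit DDR4 channels (a sketch, not measured numbers):

    # Peak DRAM bandwidth ~= channels x 8 bytes per transfer x MT/s
    def peak_gb_s(channels, mts):
        return channels * 8 * mts / 1000

    print(peak_gb_s(2, 2400))  # dual-channel DDR4-2400: ~38.4 GB/s
    print(peak_gb_s(2, 3200))  # dual-channel DDR4-3200: ~51.2 GB/s (about a third more)
    print(peak_gb_s(1, 2400))  # single-channel, as in some launch laptops: ~19.2 GB/s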
Frequency by itself doesn't tell you the whole picture. Ryzen on the desktop had good bandwidth at a given frequency, so I think Ryzen Mobile will do fine against Kaby Lake Refresh. It doesn't have to worry about cross-CCX latency, so there's reduced need for faster clocked RAM.
Also, it's not as easy as "turn this knob and get a perfect IMC that does all the things for zero power". They did the best they could. Not to mention you won't see support for high speed RAM in most laptops running integrated graphics. So for Ryzen Mobile it's actually better to extract as much bandwidth as they can from more common RAM.
Consoles also don't have delta colour compression or the tiled rasterizer. So whilst it won't have the same theoretical bandwidth as, say, the PS4, it would likely beat the Xbox One S.
No. XB1 has way more bandwidth than delta compression and other efficiency boosts can make up for. Dual channel DDR4 2400 vs quad channel DDR3 2133. That's even BEFORE you count a halfway talented developer's usage of the ESRAM, which when used properly takes a lot of pressure off the main memory. No, compute resources are the real limiting factor on XB1. The new XBOX on the other hand has plenty of both.
Zen is more powerful than the Jaguar cores in the consoles, and even the 4C/8T Zen will beat out the 8C Jaguar, especially when boost is applied. That said, RR should be better than the original Xbox One (S), as that console has to make do with DDR3. The original PS4 uses GDDR5 and has a beefier GPU. I'd estimate it's something like X1X > PS4 Pro > PS4 > RR > Xbox One/S. The qualifier here is that RR is limited to 25W, where some consoles go over 100W. I am pretty excited to see what sort of RR implementation awaits the PS5 and Xbox(4).
The DDR3 in XB1 isn't really a massive handicap like you're claiming, at least not in the hands of a halfway decent developer. First, it's quad-channel, not dual-channel like Raven Ridge. Second, there's a chunk of ESRAM that greatly boosts overall effective bandwidth. Bandwidth isn't the biggest limiting factor for the XB1. I suspect XB1 will still best even the 10 cluster Ryzen Mobile. However, for a 12-25W (15W nominal) design, RR is really impressive.
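To put rough numbers on the comparison being made here (approximate published peak figures; the ESRAM number is the commonly quoted theoretical peak):

    # Approximate theoretical peak bandwidth figures
    raven_ridge_dual_ddr4_2400 = 2 * 8 * 2400 / 1000   # ~38.4 GB/s
    xbox_one_ddr3_2133_256bit  = 32 * 2133 / 1000      # ~68.3 GB/s over a 256-bit bus
    xbox_one_esram_peak        = 204                    # ~204 GB/s quoted for the 32 MB ESRAM
    ps4_gddr5_5500_256bit      = 32 * 5500 / 1000       # ~176 GB/s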
I forgot that XboxOne used 256bit memory. Still, I would suspect that RR still might beat it, at least if initial 1080P benchmark claims hold true. Many XboxOne titles didn’t render at 1080P, but more like 900P. If RR can do decent 1080P gaming even at medium, that’s pretty promising.
If you scale detail level to match, I highly suspect the framerate on XB1 will be higher. Match framerates and resolution and XB1 will have more detail. Don't get me wrong, again, I think RR will be entry-level game-capable which is more than I would say for any Intel chip that isn't paired with discrete graphics. But there will be compromises.
Still, it is HP that is the worst here. The Lenovo is a light ultraportable with small dimensions at 15W, not meant or able to do gaming anyway. RAM is soldered, the same as on its Intel 'APUs'. The Acer Swift is configured with 25W cooling (so it should get mXFR certification) and dual channel, IPS, SSD, and it's not heavy. Good specs, and it can do gaming. But the HP is the heaviest, and I'm not sure about the cooling; it could very well be 15W, where the iGPU simply will not have enough power to provide good results. By the way, the game results on AMD's slide come from this HP x360.
Even at 15W I'd take the HP over the Lenovo just for that single channel cluster. The Acer does look really interesting, but you'd definitely be buying it for performance first and battery life second.
The bottom line is there's a tradeoff for all the above models, though the Acer looks most appealing to me on paper.
I'd definitely pick the Lenovo, as a laptop is only of value to me if it is extremely portable. I was even measuring it up as an upgrade to my Surface Pro 2, and the 13.3" Lenovo is about the same weight, and only 3 cm wider and 4 cm longer. The R5 2500U is much faster than the i5-4300U, it comes with 8GB vs 4GB RAM, and it would perform better in old games as well.
A pity Lenovo has an IdeaPad 720S in the pipeline instead of a ThinkPad. Given that they just came out with the AMD-based A275 and A475, it will probably take some time until we see a Ryzen ThinkPad - if ever.
One could get the impression that some companies intentionally cripple the AMD options, e.g. a ThinkPad with old, underperforming silicon and a crippled low-cost model with the Ryzen (IdeaPad); and the often-seen single memory channel.
Maybe Intel even pays them to make sure the Intel flavors intentionally outperform the AMD flavors, ... :-/
I think the situation might be the complete opposite - after all, these chips use the exact same socket/BGA array as Bristol Ridge, meaning that all Lenovo needs to do is update their BIOSes and change the CPUs, and they'd be off to the races. I (and a few others with me at the time) assumed that those laptops were launched specifically for this purpose - to have a ready-made, already-tested ThinkPad platform into which to slot Raven Ridge. After all, pro-level QC takes time, and it'd be easier to just test the CPU on a known platform rather than start from scratch.
Still, I want a ThinkPad Yoga with a Ryzen 7 2700U. ThinkPad keyboard, this APU, good battery life and pen input? YES. Preferably with the 25W option, although if Intel parallelism is anything to go by, that would have to be a ThinkPad X1 Yoga (the 370 runs at 15W, the X1 at 25). Oh, and a 3:2 display. Are you listening, Lenovo?
I think ThinkPads (and HP EliteBooks) will be reserved for Intel at least through next year. HP does have an Elite platform for AMD in the 7xx series (6xx, 8xx, and 10xx are Intel only, and technically the 6xx is a Pro platform), so it's possible we will see an EliteBook 745/755 with Ryzen Mobile... or they could introduce a whole new category like an EliteBook 940/950 as a premium platform option sitting between their entry EliteBook 800s and EliteBook 1000s.
On page 2 AMD is claiming a score of 707 in Cinebench nT. On page 3, discussing mXFR, those graphs show 450 without and 550 with boost after 5 minutes of the test. A lot of throttling here.
(For reference, a 65W 1500X gets about 800 points, so it isn't totally unexpected to be seeing just 450 points sustained at 15W.)
You guys missed one of the most important points; how do the integrated GPUs compare to Intel's? Are they significantly faster? I know you don't have hardware yourself to test on, but some projections would be very useful.
There's a single TimeSpy slide that AMD gave us, showing a score of 915 for Ryzen Mobile and 350 for Kaby Lake-R. It had 7th gen performing ahead of 8th gen, and TimeSpy is a synthetic.
But what's with those serious RAM limitations? Up to 8GB? What is this, 2015? And single-channel for the Lenovo? Laptops should be coming with 16GB by default, and options for 32GB, these days. And all RAM should be ECC RAM, to prevent RowHammer and other types of memory attacks, too.
16GB as base for a mainstream laptop is silly as a significant number of people don't require more than 8GB and RAM requirements haven't gone up much in recent years for general home usage. Add in the current high cost of RAM and it just makes that idea plain wasteful as you'd be adding in a cost of maybe $75 which would be a complete waste.
Your information is heavily dated -- non-technical people still tend to use a lot of RAM even if most of this is only for idling webpages. 8GB is a minimum for most workflows.
What are you basing that on? I tend to have about 8 to 10 applications open all the time including 2 browsers using 20+ tabs between them and use between 5 and 7GB. When I look at friends and family and their usage they multi-task much less than me and use from 3 to 6GB so 8GB gives all of us some headroom. I think geeks are often out of touch with what 'civilians' require from their PCs. 8GB is fine and has been for years as RAM use has stagnated. Of course it's easy to go above that as a power user but that's very much a minority sport.
Actually the typical use case IS a billion windows and tabs open, at least among the 200+ people at my office. I don't know how they can stand that, or the desktop with 50 icons and files.
8GB is plenty unless you are an actual power user, based on plenty of checking my users' resources. Even at home running a bunch of stuff I rarely hit over 11GB usage.
Again, those people are either not average Joes, or by "a billion windows" you mean lots of Windows Explorer instances, which is nothing too taxing on the system. And yeah, your average person has the desktop space filled with tons of icons, but that's nothing unusual nor memory consuming.
It is typical, at least if you have ever worked in an office. Anyways my point is, 8gb is plenty for them - more than that would be a waste. So I think we agree ;)
Indeed. I've got 3 windows of Firefox running with a total of 50 tabs plus LibreOffice - all on an Atom tablet with 4 GB RAM. Typical office workflows don't need more than 4 GB especially as Windows 10 is quite memory-efficient. I've even run Ubuntu desktop VMs on this tablet with no hiccups.
If you're using 7GB then you should have 16GB in there, otherwise you will have no room for disk cache. You really don't want your actual apps to be using all of your physical memory.
I'm hoping AMD adopts the new codec (AV1 or NETVC, whichever it is), ASAP, if it's finalized by the end of the year. I guess it will be too late to go into Zen 2 for next spring, but it should be supported in Zen 3 at least.
The codec support will track the GPUs, so based on that timetable it is unlikely to get into Navi; so it will be whatever comes after Navi, and then whatever APU uses that post-Navi GPU.
They'll never adopt a VP9 based codec in hardware. Why? Google has abandoned it and isn't developing it further. They're transferring it to Cisco.
However, Google decided to incorporate VP10 into AOMedia Video 1 (AV1). The AV1 codec will use elements of VP10 as well as the experimental formats Daala (Xiph/Mozilla) and Thor (Cisco). Accordingly, Google has stated that they will not deploy VP10 internally or officially release it, making VP9 the last of the VPx-based codecs to be released by Google.
NETVC will also never be added to hardware. Unless you think AMD or Nvidia will bank on anything outside of the MPEG LA you're dreaming.
Promising, promising indeed. Ryzen did show that the mobile parts can deliver this time too. But as others have said, we need these in cheap models, medium-priced models, and high-end computers too - and in different-sized devices. A 10" hybrid tablet would give Intel Atom based models a good ride! And bigger versions may have at least a fighting chance against Intel's m-series.
I'm quite pleased that they managed to come in at 15w. I was reading around prior and thought they were going to be a 35w design, color me shocked. Now we just need to get some in for some testing - how convenient - right when I'm possibly in the market for a new laptop.
It's not. 40-45W power use under load seems to be standard on all Intel 8th gen so called "15W TDP" mobile CPUs. Previously, or at least from 4th to 7th gen you could be pretty sure that power use under load was almost always just under twice the stated TDP. My own i7-6660U uses 29W under load measured at the voltage rails for example.
Intel has lost all credibility when it comes to giving accurate power consumption data.
15W is the TDP, not the max power use. Due to turbo the chips will go past that by a fair amount in bursts, but the average over long periods will be ~15w. This applies to Intel AND AMD. (Either company would be stupid to not do that as you would be leaving tons of burst performance on the table and that is what a lot of real-world use is)
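A toy illustration of that point, with made-up numbers: a chip can boost well above its TDP for a short burst and still average out near the rated figure over a longer window.

    # Hypothetical burst/steady profile for a "15W TDP" part (all values assumed)
    burst_s, burst_w   = 10, 40.0    # short turbo excursion
    steady_s, steady_w = 110, 12.7   # settled sustained load
    avg_w = (burst_s * burst_w + steady_s * steady_w) / (burst_s + steady_s)
    print(round(avg_w, 1))           # ~15.0 W averaged over the two-minute window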
200/127/.50. Sounds less impressive when you look at the size of the chip vs. others (although the recent chips appear to be roughly half covered) and realize they are Bulldozer-based.
The size appears similar to the full Zeppelin (8 core) die. Don't expect it to be cheap. Also it appears that Vega will have to make do with shared DDR4 DRAM; no idea how that will work (I've been dreaming of an HBM area the CPU can use as a cache when the GPU doesn't need it, don't expect such things anytime soon).
It's on 14nm; the actual silicon is maybe $15, and then there is test and packaging. The cost of the die is not much of an issue. They can easily price the average SKU between $80-100, with the fastest SKUs a bit above and the lesser SKUs below. Not that Intel has much different pricing; the prices they list have nothing to do with reality.
While it likely costs the same to make as an R3 Ryzen (closer to an R7, because all CPU cores have to work), it will take a long time to dig themselves out of their hole by pricing it at $80-100.
Don't forget they have to pay for the mask with Ryzen Mobile sales, while Ryzen and Epyc paid for the Zeppelin die tooling. I don't expect it to be a cheap chip unless AMD is absolutely forced to (like they have been forced to for years and are hungry for Intel-level margins).
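For anyone curious about the arithmetic behind these cost guesses, here is a minimal sketch. The wafer cost and defect density below are pure assumptions, not AMD figures; the ~210 mm² number is simply the Zeppelin-class die size mentioned above.

    import math

    wafer_cost_usd  = 6500   # assumed 14nm wafer price
    die_area_mm2    = 210    # roughly Zeppelin-sized, per the comment above
    wafer_d_mm      = 300
    defects_per_cm2 = 0.1    # assumed defect density

    wafer_area = math.pi * (wafer_d_mm / 2) ** 2
    gross_dies = wafer_area / die_area_mm2 - math.pi * wafer_d_mm / math.sqrt(2 * die_area_mm2)
    yield_frac = math.exp(-defects_per_cm2 * die_area_mm2 / 100)   # simple Poisson yield model
    print(round(gross_dies), round(yield_frac, 2),
          round(wafer_cost_usd / (gross_dies * yield_frac), 2))    # ~291 dies, ~0.81, ~$28/good die

With these (debatable) inputs the bare die lands closer to $25-30 than $15; tweak the assumptions and the answer moves a lot, which is really the crux of the disagreement above.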
A very small difference between both processors, yet one is Ryzen 5 and the other is Ryzen 7. I really hope these are the lowest R7 and the highest R5.
A 14" Acer Swift 3 with the Kaby Lake-R (4 core/8 thread), 8GB RAM and an MX150 from NVIDIA gets better FPS in every game outlined by AMD here. Why go for this? It's not lighter, it's not more efficient, it's not faster.
In the real world, GPU performance is below a GT 1030 (MX150). The only upside is the fact that you don't have to deal with Intel's iGPU and NVIDIA's discrete GPU in the same package. Other than that... not worth the hassle.
There appears to be a power advantage over KBL + MX150, since the combined consumption of the CPU and dGPU is higher than mobile Ryzen alone. All things equal, you're going to give up some GPU performance in exchange for more battery life. It's a trade-off some people will be willing to make and others will reject. Cool either way; just buy what works best for you and don't worry about it.
With that said, I think Vega would do better with dedicated video memory of some sort, which is why I would have liked to see these chips released with a small HBM cache that can be used to supplement the system's DDR4, but that's probably an unrealistic pipe dream for the time being. The associated added costs would make mobile Ryzen more expensive... maybe more than a CPU + dGPU combination capable of the same performance.
Anyone got any hard numbers on how this compares to KBL + MX150? I saw some commentary that Ryzen Mobile was comparable to the 950M... IIRC the MX150 was a perf bump on the old 940MX. So are they on a similar level, or does the MX150 have a material advantage?
Was thinking that if they are in the same zip code perf-wise then Ryzen is a no-brainer given the power draw.
The 960M is consistently faster in synthetics (by a small margin) and in gaming benchmarks (by a much larger margin) than the MX150. Here's a couple of notebookcheck links to compare:
https://www.notebookcheck.net/NVIDIA-GeForce-GTX-9...
https://www.notebookcheck.net/NVIDIA-GeForce-MX150...
"If we look at processors from Intel that are 4C/8T, like the 35W Core i7-7700T, this scores 777 in our testing, which kind of drives away from AMD’s point here."
Not really. You're still comparing a 35W part to a 15W part, so 90% of the performance for 43% the power consumption is still a massive gain. What it really says is the 4T processor, be it with 2C or 4C is truly dead for anything but the lowest tiers. Something Intel have also realized in their current line up.
" In our desktop Ryzen reviews, we saw per-core power go as low as 8W per core, and AMD’s Vega was power hungry"
Vega is really only power hungry when pushed beyond ~1200MHz. Underclock it to that and Vega is extremely power efficient.
"Vega is really only power hungry when pushed beyond ~1200MHz. Underclock it to that and Vega is extremely power efficient."
This is very relevant for laptops but most desktop users don't seem to care as they want performance rather than efficiency and that is how AMD configure them.
Correct. It's also relevant to miners, for example, who generally run their GPUs at around 1GHz as it's the optimum for power cost vs. performance. However, the article is about mobile CPUs, and in that respect it might have been a good idea to mention that running lower clock rates drastically lowers Vega's power consumption, which is exactly what has been done in these APUs.
Yeah, but that 35W part could sustain that performance for a much longer time, if not indefinitely. The 15W AMD part (and likewise 15W Intel parts) will throttle down a fair bit after sustained use. According to AMD the R7 2700U drops to ~550 in Cinebench after a 5-min loop. (Last slide on page 3)
It's possible it could sustain it for longer. We don't know that though. And even 550 points is still a massive performance per watt advantage to the AMD part. 770 points at 35W is 22 points per watt, while 550 points (sustained) at 15W is 36.67 points per watt. A whopping 66.7% performance per watt advantage.
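For reference, the per-watt arithmetic in this exchange works out as follows, taking the quoted Cinebench R15 nT scores at face value (whether the sustained figure belongs to the 15W or 25W configuration is disputed further down the thread):

    scores = {
        "i7-7700T (35W, quoted)":         (777, 35),
        "R7 2700U burst (15W, quoted)":   (707, 15),
        "R7 2700U sustained (as quoted)": (550, 15),
    }
    for name, (pts, watts) in scores.items():
        print(f"{name}: {pts / watts:.1f} pts/W")   # ~22.2, ~47.1, ~36.7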
If AMD's TDP values can be trusted, it could finally be a good CPU/GPU for SmachZ or GPD gaming devices. The present Atom and AMD solutions are very slow for decent mobile gaming.
You cannot compare Intel's Atom with AMD's Carrizo.
It would be nonsense. Atom is an extremely low performance part that is barely able to equal the gaming performance of a Pentium 4.
Moreover, AMD's TDP values are the ONLY ONES TRUSTWORTHY, while Intel was proven to respect the TDP only when they feel like it, and they also change the TDP definition almost every year.
AMD's Ryzen processors don't step 2% outside their rated TDP no matter the load scenario.
Intel's 8700K and 7980XE can reach +16% over their rated TDP.
So I'm inclined to trust AMD's specifications 100% more than Intel's.
I was very interested in that Lenovo IdeaPad 720S model, 13.3in with a 4K screen and the 2700U. Seems like it'd be a nice upgrade from my current Yoga 900 after underwhelming options in the 910 and 920. Then I read that it was going to be stuck in single channel with only PC4-2133. That makes it significantly less attractive.
This is impressive and I'm glad to see AMD chips that can finally compete with Intel in the low TDP range. I am, however, disappointed LPDDR4 compatibility isn't included in the initial parts.
But these are only the first two and there are more to come, so I'm hopeful we'll see chips that support power-sipping memory. Any 15W TDP chip intended for the ultrathin mobile market should at least allow for LP-DRAM. Let's not forget Intel has opened up Thunderbolt 3 and made it royalty-free. Adding these two technologies to AMD's Infinity Fabric "interconnect" onboard Raven Ridge would allow manufacturers to build sleeker devices. Board space is at a serious premium, and that's often why it's hard to find low power AMD chips in these premium thin and lights.
“If we look at processors from Intel that are 4C/8T, like the 35W Core i7-7700T, this scores 777 in our testing, which kind of drives away from AMD’s point here. AMD succeeds in touting that it has ‘desktop-class performance’ in a small power package, attempting to redefine its status as high performance. Part of me thinks at this level, it could be said that all the mobile processors in this range have ‘desktop-class performance’, so this is a case of AMD now catching up to the competition.“
You just said that in Cinebench R15, AMD’s Ryzen 7 2700U achieved 707 at 15W and compare it to a 35W Intel product that achieved 777. But you call this catching up; I would call that blowing past the competition! That score is nearly double the performance per watt, considering that you just compared AMD’s product with 15W TDP with an Intel product with a 35W TDP.
Looking more closely, a 15W Ryzen 7 2700U appears to fall right in line with an Intel Skull Canyon NUC’s 45W Intel Core i7-6700HQ in CPU performance and slightly outperforms it in GPU performance. Per the official AnandTech review, the Skull Canyon NUC got a Cinebench R15 ST/MT score of 148.24/711.04. Per NotebookCheck, its Iris Pro Graphics 580 achieves a score of 3510 in 3DMark 11 - Performance.
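Using the figures quoted in that comparison, the CPU side works out roughly as follows (a sketch; the 2700U number is AMD's burst score, so sustained results will land lower):

    i7_6700hq_pts, i7_6700hq_w = 711.04, 45   # Skull Canyon NUC, Cinebench R15 MT
    r7_2700u_pts,  r7_2700u_w  = 707, 15      # AMD's quoted Cinebench R15 nT score
    print(round(i7_6700hq_pts / i7_6700hq_w, 1))  # ~15.8 pts/W
    print(round(r7_2700u_pts / r7_2700u_w, 1))    # ~47.1 pts/W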
I was similarly perplexed by the wording used here. How is more than double the performance per watt "catching up"? The examples have the i7-7700T scoring 22.2 points per watt while the R7 2700U completely annihilates that at 47.1 points per watt. Seems to me that it is Intel that has a lot of catching up to do.
It could be a combination of years of Intel having a lion’s share of the media mindshare (before Ryzen, for the longest time, the fact of the matter was that Intel was far and away the superior architecture) combined with the fact that there may have been very limited time given between receipt date and embargo time, giving way to more errors cropping up in a highly rushed journalism process.
I probably missed this, but any word on bulk pricing in comparison with Intel U series?
Other than that, I'm pretty damn sure 14nm LPP will shine at 15W and lower power envelopes. This is where the performance per watt comparisons matter for consumers. The same should be applicable to mobile Vega. I wonder if pairing the APU with a discrete mobile Vega part would have any advantages over an Intel/NVIDIA pair. Hopefully it would have better harmony and better switching drivers.
I would also love to see benchmarks emphasising latency vs Intel Speed Shift. I just hate to admit Intel might have an advantage there.
Too early to tell, but boy am I excited since what feels like ages.
Samsung's 14nm LPP process, licensed by GF, just doesn't do Ryzen and Vega much justice on high performance desktop parts. Given 14nm LPP's smartphone SoC heritage, it sure does let these low power AMD designs shine, though. I'm also anticipating great things from IBM's 7nm process, so long as it isn't delayed for an extra year. Bring on the 4.5-6 watt fanless APUs.
Fingers crossed for 6 or even 8 Zen 2 cores and 14-16 CUs at 7nm, with higher max clocks for ST. HBM would be the icing on the cake. Throw in a dGPU with twice or thrice the CUs, and put your hand in my pocket and help yourself to my wallet, AMD.
Good for AMD! Last quarter was merely the prelude--in the next few quarters when mobile Ryzen hits its production stride and is fully optimized, along with the Integrated Vega gpu, AMD is going to begin breaking all kinds of records. With the dedicated Ryzen/Vega teams at AMD looking ahead to Ryzen 2 for the desktop--AMD will keep the "pedal to the metal" concerning IPC improvements, die shrinks and all the rest of it. What they won't do is what the old management teams at AMD did, which was to break records and zoom out far ahead of Intel's best (the original Pentium architecture at the time)--only to stupidly *sit on it* after that and hang out a shingle in the nutty idea that it might take Intel a decade to catch them...;) That won't be happening ever again, thanks to Su and the rest of the magnificent design and development teams! Thanks, AMD--nothing smells as sweet to me as the renewed and real competition in the x86 cpu marketplace! Make Intel earn every penny from now on!
They are now finally on par with Intel's 2015 part, Broadwell, but using a quad core. I fear that the mobile Ryzen 3 parts are dual cores. AMD will get even closer once they get to 7nm, putting them on the same playing field as Intel.
We'll have to wait & see. However, since it seems like their R5 2500U (4C/8T) is a mobile counterpart to their desktop R5 1500X (also 4C/8T), with a lower base clock but similar Turbo clock, I would suspect that the mobile R3 is going to be a 4C/4T CPU. In fact, I wouldn't be surprised if a) they call it the Ryzen 3 2300U (to keep with the naming convention) & b) it comes with the same clocks as the 2500U (maybe less L3 cache, but I doubt it).
What should be interesting is seeing how these compare to Intel's Kaby Lake-refresh, low-power CPUs, like the i5-8250U/8350U or i7-8550U/8650U. Except for a Skylake-H i5 & a Skylake-H i7, those were the first true quad-core mobile CPUs with that low of a TDP (& those 2 had a 25W TDP instead of the 15W the Kaby Lakes & these Ryzens have)...
They'll succeed regardless of marketing if enthusiasts swear by the product. OEMs will also follow if there's enough buzz. I could see these as good competition to Kaby Lake R, where once again, AMD dominates the graphics-limited benchmarks.
Hmm, afaik the HP Envy x360 with the previous-generation Bristol Ridge A10/A12 was/is single channel, so if the socket is the same and HP just reuses the rest of the machine, I don't see that one as a good laptop.
Wow! Finally some 1080p screens... Anything more average in the laptop world? HP not using the top-model 2700U, Lenovo sticking to a single channel, Acer not putting in a touch display and only a 256GB SSD.
If HP didn't take the 2500U, they might make a loss on chips they couldn't sell. They're doing AMD a favour by taking the slightly-defective chips off their hands. If they didn't, the 2700U you want would have to increase in price to compensate.
1. These would be great replacements for all those shitty $500 laptops that have an i7 CPU + NVIDIA 950M or similar. Use the saved BOM for better screens, keyboards and SSDs and I think they would steal the show.
2. If AMD could put together an MX150-style card for laptop GPUs that could team with the APU (via CrossFire?), they might be able to get some truly excellent gaming results for less than $1000. Especially with FreeSync 2 in the mix.
Kudos AMD (from someone who wishes they hadn't just now bought a laptop).
If a 15W part does this well... I'd say a 35-45W APU (which can be undervolted if necessary) with 4C/8T and ECC would make for a pretty awesome ZFS NAS....
That's because of the manufacturing process AMD uses. The GloFo 14nm currently used for Ryzen and Vega is more suited to lower clocks, not high ones. It's one of the reasons why Vega has a higher power draw than Pascal in the desktop space... it's currently clocked too high and has too much unnecessary voltage (and we noted that undervolting Vega 56 can bring power consumption down to 1070 levels, while simultaneously overclocking it on the core and HBM would bring performance up to 1080 level, but at a lower power draw than the 1080).
At lower frequencies though, I would imagine that power draw would go down substantially for both Ryzen and Vega... plus, Ryzen is only using one CCX here, so that likely helps as well. It would be really good if AMD decides to make a Raven Ridge 2, for example an 8C/16T APU with a Vega iGPU... all connected via Infinity Fabric of course, on 7nm and at the same power draw.
I would imagine it would be doable, but at the same time, clocks would increase by a fairly good amount... the desktop could run on a 5 GHz base using the upcoming 7nm manufacturing process from IBM... as for Raven Ridge 2... possibly a 3.2 GHz base with 8 cores and a Navi iGP.
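The undervolting point above comes down to the usual dynamic-power relationship, roughly P ∝ C·V²·f; here is an illustrative-only sketch of why a modest clock and voltage cut buys a disproportionately large power reduction:

    # Relative dynamic power for a frequency/voltage change (illustrative numbers only)
    def rel_power(f_rel, v_rel):
        return v_rel ** 2 * f_rel

    baseline = rel_power(1.0, 1.0)
    lowered  = rel_power(1100 / 1500, 0.85)   # e.g. ~1.5 GHz -> ~1.1 GHz with a -15% voltage cut
    print(round(lowered / baseline, 2))       # ~0.53, i.e. roughly half the dynamic power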
They look good, but if they cannot support 16/32GB RAM, it is a deal breaker. I have an Asus G551JW with a 4200H, which performs very well, but I want an upgrade to a quad core which does not throttle and whose graphics performance is not much worse than the 960M.
I'd expect ~80% of the performance in 3DMark, and ~50% in games. Not just because of less bandwidth, but also because it runs with a total 25W TDP for both CPU and GPU, while an MX150 system will run on approximately 40W (15W + 25W).
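The power-budget comparison behind that estimate, using the figures as quoted in the comment:

    apu_budget_w     = 25        # Raven Ridge CPU + GPU sharing one ~25W envelope
    kbl_r_w, mx150_w = 15, 25    # separate CPU and dGPU budgets as quoted
    combined_w = kbl_r_w + mx150_w
    print(combined_w, round(apu_budget_w / combined_w, 3))   # 40W vs 25W -> ~62.5% of the power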
serendip - Thursday, October 26, 2017 - link
A 12" tablet with Ryzen APUs please!I've pretty much ditched laptops in favor of Surface-style tablets. A Ryzen tablet at $500 would be fantastic news.
Rene23 - Thursday, October 26, 2017 - link
Yeah, while I still have real powerful workstation and laptop I know where you are coming from. I use the Surface 3 Pro exactly like this, with Linux, instead of an iPad, exactly to be able to run really powerful and professional open source software (like an XEmacs, GCC, LLVM & GDB ;-) on-the-go: https://www.youtube.com/watch?v=uqpXJV3XKrU&t=...timecop1818 - Friday, October 27, 2017 - link
Nice troll. Professional open source software (like Emacs). I'm still laughing.pSupaNova - Friday, October 27, 2017 - link
I don't get the joke.I thought if you can use Emacs then you must be way beyond Professional.
ddrіver - Thursday, November 16, 2017 - link
Sure, you need to write bits directly into memory to be a professional now. Emacs is very advanced, dork. And he's talking about XEmacs, not Emacs.amosbatto - Wednesday, December 13, 2017 - link
Obviously you don't don't know much about software development if you are laughing at a person who uses XEmacs to write code. Yes, developers who prefer XEmacs are old school, but in my experience those are generally the best programmers. XEmacs has a steeper learning curve, but the ability the write macros in Lisp means that they can do some amazing things inside of a terminal. I learned long ago that open source programming tools may not look as flashy, but they are often better.kulareddy - Thursday, October 26, 2017 - link
I would like to see how the 2700U performs compared to Xbox One and PS4.MajGenRelativity - Thursday, October 26, 2017 - link
Are you talking about the originals or the X/Pro versions? If it's the originals, I think it would probably be similar, but the X/Pro versions would definitely pull aheadkulareddy - Thursday, October 26, 2017 - link
No, Original Xbox One (S) and PS4MajGenRelativity - Thursday, October 26, 2017 - link
I did a spec comparison, and it has much better CPU performance, with probably similar GPU performance, maybe a little slower.kulareddy - Thursday, October 26, 2017 - link
But GPU architecture is Vega.MajGenRelativity - Thursday, October 26, 2017 - link
And?artk2219 - Thursday, October 26, 2017 - link
Newer is not always better, just look at the P4 and Bulldozer for examples of that (Not that Vega is bad, far from it). But honestly you're comparing a 15w part with something that's like 110w. Yes efficiency and node shrinks help immensely, but there's still something to be said for higher power limits on older hardware. Now it does have support for newer modes and newer instruction sets so in time it could get better, but I dont think it will ever really run away from the performance level of the xbox one or ps4 in any meaningful way. All that being said, you'll have all of that performance in a thin and light that you can travel with.https://www.anandtech.com/show/7528/the-xbox-one-m...
MajGenRelativity - Thursday, October 26, 2017 - link
Exactly. Vega has a lot of promise, but its performance remains mostly untapped for now.guidryp - Thursday, October 26, 2017 - link
Vega is really no faster clock for clock, than previous generation.Also, this APU doesn't have nearly the bandwidth that the PS4 has, nor the EDRAM cache that the XB1 has, so GPU performance will not match up to those.
Samus - Thursday, October 26, 2017 - link
That's only true when comparing legacy software under both platforms. There are pretty substantial IPC gains, but they require optimization. It's the same reason MMX, SSE, 3DNOW, etc, all had pretty substantial benefits to programs that utilized them, and those were simply extensions of current architecture. Zen is an entirely new x86-compatible model.StevoLincolnite - Friday, October 27, 2017 - link
Vega is much faster than previous generations especially when compared to GCN1.0.And especially in bandwidth constrained scenario where AMD's Delta Colour Compression and improved culling comes into play.
DDR4 2400Mhz is one such bandwidth constrained scenario.
AMD should have thrown DDR4 3200Mhz or more support at the problem.
Alexvrb - Saturday, October 28, 2017 - link
Frequency by itself doesn't tell you the whole picture. Ryzen on the desktop had good bandwidth at a given frequency, so I think Ryzen Mobile will do fine against Kaby Lake Refresh. It doesn't have to worry about cross-CCX latency, so there's reduced need for faster clocked RAM.Also, it's not as easy as "turn this knob and get a perfect IMC that does all the things for zero power". They did the best they could. Not to mention you won't see support for high speed RAM in most laptops running integrated graphics. So for Ryzen Mobile it's actually better to extract as much bandwidth as they can from more common RAM.
Gigaplex - Thursday, October 26, 2017 - link
You're not taking into account the memory architecture. The consoles use faster memory, and these APUs are generally memory bandwidth limited.StevoLincolnite - Friday, October 27, 2017 - link
Consoles also don't have delta colour compression and the tiled rasterizer.So whilst it wont have the same bandwidth as say... The PS4 in theoretical terms. It would likely beat the Xbox One S.
Alexvrb - Saturday, October 28, 2017 - link
No. XB1 has way more bandwidth than delta compression and other efficiency boosts can make up for. Dual channel DDR4 2400 vs quad channel DDR3 2133. That's even BEFORE you count a halfway talented developer's usage of the ESRAM, which when used properly takes a lot of pressure off the main memory. No, compute resources are the real limiting factor on XB1. The new XBOX on the other hand has plenty of both.nightyknight - Monday, October 30, 2017 - link
There is no way they will have the similar GPU performance lol.MonkeyPaw - Thursday, October 26, 2017 - link
Zen is more powerful than the Jaguar cores in the consoles, and even the 4C/8T Zen will beat out the 8C Jaguar, especially when boost is applied. That said, RR should be better than the original XboxOne (S), as that console has to make due with DDR3. The original PS4 uses GDDR5 and has a beefier GPU. I’d estimate it’s something like X1X > PS4pro > PS4 > RR > XboxOne/SThe qualifier here is that RR is limited to 25W, where some consoles go over 100W. I am pretty excited to see what sort of RR implementation awaits PS5 and Xbox(4).
Alexvrb - Thursday, October 26, 2017 - link
The DDR3 in XB1 isn't really a massive handicap like you're claiming, at least not in the hands of a halfway decent developer. First, it's quad-channel, not dual-channel like Raven Ridge. Second, there's a chunk of ESRAM that greatly boosts overall effective bandwidth. Bandwidth isn't the biggest limiting factor for the XB1. I suspect XB1 will still best even the 10 cluster Ryzen Mobile. However, for a 12-25W (15W nominal) design, RR is really impressive.MonkeyPaw - Thursday, October 26, 2017 - link
I forgot that XboxOne used 256bit memory. Still, I would suspect that RR still might beat it, at least if initial 1080P benchmark claims hold true. Many XboxOne titles didn’t render at 1080P, but more like 900P. If RR can do decent 1080P gaming even at medium, that’s pretty promising.Alexvrb - Saturday, October 28, 2017 - link
If you scale detail level to match, I highly suspect the framerate on XB1 will be higher. Match framerates and resolution and XB1 will have more detail. Don't get me wrong, again, I think RR will be entry-level game-capable which is more than I would say for any Intel chip that isn't paired with discrete graphics. But there will be compromises.Lolimaster - Saturday, October 28, 2017 - link
In cpu is probably way fast but in GPU it had like 2X less performance and probably more once you factor the lack of GDDR5 or HBM2 as dedicated ram.tipoo - Monday, October 30, 2017 - link
Or the MX150, which is popular in this segment.ddriver - Thursday, October 26, 2017 - link
That ideapad looked good until I saw "single channel"...Samus - Thursday, October 26, 2017 - link
Typical Lenovo fuckup right there. At least HP didn't commit their usual crime of cramming a 768p screen in a 15" laptop...neblogai - Friday, October 27, 2017 - link
Still, it is HP that is the worst here. Lenovo is light ultra portable, small dimentions at 15W- not meant/able to do gaming anyway. Ram is soldered same like on it's Intels 'APUs'. Acer Swift- configured with 25W cooling(so should get mXFR certification) and dual channel, IPS, SSD- and not heavy. Good specs and can do gaming. But HP- the heaviest, and not sure what cooling- could very well be 15W, where iGPU simply will not have enough power to provide good results. By the way- game results on AMD's slide come from this HP x360.Alexvrb - Saturday, October 28, 2017 - link
Even at 15W I'd take the HP over the Lenovo just for that single channel cluster. The Acer does look really interesting, but you'd definitely be buying it for performance first and battery life second.The bottom line is there's a tradeoff for all the above models, though the Acer looks most appealing to me on paper.
neblogai - Sunday, October 29, 2017 - link
I'd definitely pick the Lenovo, as laptop is only of value to me if it is extremely portable. Was even measuring it as an upgrade to my Surface Pro2- and 13.3" Lenovo is about the same weight, and only 3cm wider, 4 cm longer. R5 2500U is much faster than i5-4300U, comes with 8 GB vs 4GB RAM, and would perform better in old games as well.aebiv - Thursday, October 26, 2017 - link
I'm looking forward to Compulab updating their older AMD APU powered bricks to these chips.I just wish Thunderbolt 3 wasn't tied to Intel.
Computer Bottleneck - Thursday, October 26, 2017 - link
Thunderbolt 3 will be royalty free next year.Rene23 - Thursday, October 26, 2017 - link
A pity has a Ideapad 720S in the pipeline instead of a ThinkPad. Given that they just came out with the AMD based A275 und A475 it probably takes some time until we see a Ryzen Thinkpad - if ever.One could get the impression that some companies intentionally cripple the AMD options. E.g. old, underperforming silicon ThinkPad and crippled low cost mode with the Ryzen (Ideapad); and the often seen single memory channel.
Maybe Intel even pays them to intentionally outperform the AMD flavors, ... :-/
Valantar - Thursday, October 26, 2017 - link
I think the situation might be the complete opposite - after all, these chips use the exact same socket/BGA array as Bristol Ridge, meaning that all Lenovo needs to do is update their BIOSes and change the CPUs, and they'd be off to the races. I (and a few others with me at the time) assumed that those laptops were launched specifically for this purpose - to have a ready-made, already-tested ThinkPad platform into which to slot Raven Ridge. After all, pro-level QC takes time, and it'd be easier to just test the CPU on a known platform rather than start from scratch.Still, I want a ThinkPad Yoga with a Ryzen 7 2700U. ThinkPad keyboard, this APU, good battery life and pen input? YES. Preferably with the 25W option, although if Intel parallelism is anything to go by, that would have to be a ThinkPad X1 Yoga (the 370 runs at 15W, the X1 at 25). Oh, and a 3:2 display. Are you listening, Lenovo?
Samus - Thursday, October 26, 2017 - link
I think ThinkPad (and HP Elitebooks) will be reserved for Intel at least through next year. HP does has an Elite-platform for AMD in the 7xx series (6xx, 8xx, 10xx are Intel only, and technically 6xx and Pro-platform) so it's possible we will see an Elitebook 745/755 that are Ryzen mobile...or they could introduced a whole new category like Elitebook 940/950 as a premium platform option sitting between their entry Elitebook 800's and Elitebook 1000's.mdriftmeyer - Friday, October 27, 2017 - link
These aren't the flag ship mobile APUs from AMD. Those arrive in Feb '18 and demonstrated at CES.Zizy - Thursday, October 26, 2017 - link
On page 2 AMD is claiming score of 707 in cinebench nT. On page 3, discussing mXFR, those graphs show 450 without and 550 with boost after 5 minutes of the test.A lot of throttling here.
(For reference, 65W 1500X gets about 800 points, so it isn't totally unexpected to be seeing just 450 points for sustained 15W)
ddriver - Thursday, October 26, 2017 - link
Amd does not make the cooling solution. Thin devices cooling sucks, higher end CPUs always throttle back heavily.MajGenRelativity - Thursday, October 26, 2017 - link
Looking good!schizoide - Thursday, October 26, 2017 - link
You guys missed one of the most important points; how do the integrated GPUs compare to Intel's? Are they significantly faster? I know you don't have hardware yourself to test on, but some projections would be very useful.MajGenRelativity - Thursday, October 26, 2017 - link
The iGPUs are significantly faster, based on previous gen comparisons, and Raven Ridge only being fastermdriftmeyer - Friday, October 27, 2017 - link
They're not iGPUs. They are APUs for a reason, not just branding.Ian Cutress - Thursday, October 26, 2017 - link
There's a single TimeSpy slide that AMD gave us, showing a score of 915 for Ryzen Mobile and 350 for Kaby Lake-R. It had 7th gen performing ahead of 8th gen, and TimeSpy is a synthetic.MajGenRelativity - Thursday, October 26, 2017 - link
Thank you for the information Ian! Hopefully a driver update can push 8th ahead of 7th...schizoide - Thursday, October 26, 2017 - link
Nice! So up to 2.5-3x faster. That should be capable of running modern 1080p games at medium settings and 30fps.Krysto - Thursday, October 26, 2017 - link
Finally!But what's with those serious RAM limitations? Up to 8GB? What's this, 2015? And single-channel for the Lenovo? Laptops should be coming with 16GB by default, and options for 32GB these days. And all RAM should be ECC RAM, to prevent RowHammer and other type of memory attacks, too.
MajGenRelativity - Thursday, October 26, 2017 - link
8GB is probably base, with upgrades available. ECC RAM is more expensive, and usually unneeded.Pork@III - Thursday, October 26, 2017 - link
Rather it looks like it is from 2005...smilingcrow - Thursday, October 26, 2017 - link
16GB as base for a mainstream laptop is silly as a significant number of people don't require more than 8GB and RAM requirements haven't gone up much in recent years for general home usage.Add in the current high cost of RAM and it just makes that idea plain wasteful as you'd be adding in a cost of maybe $75 which would be a complete waste.
Pork@III - Thursday, October 26, 2017 - link
We talk for limit(?) of RAM volume support not for RAM prices.lmcd - Thursday, October 26, 2017 - link
Your information is heavily dated -- non-technical people still tend to use a lot of RAM even if most of this is only for idling webpages. 8GB is a minimum for most workflows.smilingcrow - Thursday, October 26, 2017 - link
What are you basing that on?I tend to have about 8 to 10 applications open all the time including 2 browsers using 20+ tabs between them and use between 5 and 7GB.
When I look at friends and family and their usage they multi-task much less than me and use from 3 to 6GB so 8GB gives all of us some headroom.
I think geeks are often out of touch with what 'civilians' require from their PCs.
8GB is fine and has been for years as RAM use has stagnated.
Of course it's easy to go above that as a power user but that's very much a minority sport.
vladx - Thursday, October 26, 2017 - link
"I think geeks are often out of touch with what 'civilians' require from their PCs."Indeed, and very few average Joes would even get to your use case of 20+ tabs and 2 browsers.
smilingcrow - Thursday, October 26, 2017 - link
Definitely, hence me quoting much lower RAM usage for average Jolenes.Icehawk - Thursday, October 26, 2017 - link
Actually typical use case IS a billion windows and tabs open, at least among the 200+ people at my office. I don’t know how they can stand that or the desktop with 50 icons and files.8gb is plenty unless you are an actual power user based on plenty of checking my user’s resources. Even at home running a bunch of stuff I rarely hit over 11gb usage.
vladx - Thursday, October 26, 2017 - link
Again, those people are either not average Joes or by "a billionwindows" you mean lots of Windows Explorer instances which is nothing too taxing on the system. And yeah, your average person has the desktop space filled with tons of icons but that's nothing unusual nor memory consuming.Icehawk - Friday, October 27, 2017 - link
It is typical, at least if you have ever worked in an office. Anyways my point is, 8gb is plenty for them - more than that would be a waste. So I think we agree ;)serendip - Friday, October 27, 2017 - link
Indeed. I've got 3 windows of Firefox running with a total of 50 tabs plus LibreOffice - all on an Atom tablet with 4 GB RAM. Typical office workflows don't need more than 4 GB especially as Windows 10 is quite memory-efficient. I've even run Ubuntu desktop VMs on this tablet with no hiccups.extide - Friday, October 27, 2017 - link
If you're using 7GB then you should have 16GB in there otherwise you will have no room for disk cache. You really don't want your actual apps to be using all of your physical memory..mdriftmeyer - Friday, October 27, 2017 - link
5 to 7 GB in Brower on top of several GBs for the OS. Do the math.Icehawk - Friday, October 27, 2017 - link
Would love to see a SS showing 5gb of explorer windows, good luck with that.extide - Friday, October 27, 2017 - link
ECC isn't really effective against rowhammer as it induces multi-bit errors. Really, rowhammer isnt much of an issue in the real world.Krysto - Thursday, October 26, 2017 - link
I'm hoping AMD adopts the new codec (AV1 or NETVC, whichever it is), ASAP, if it's finalized by the end of the year. I guess it will be too late to go into Zen 2 for next spring, but it should be supported in Zen 3 at least.MajGenRelativity - Thursday, October 26, 2017 - link
It takes time for CPUs to be developed, so it probably depends on how far the codec is along, and how closely AMD is involved.extide - Friday, October 27, 2017 - link
The codecs will track the GPU's so based on that timetable it is unlikely to get into Navi, so whatever comes after Navi and then whatever APU uses that post-Navi GPU.mdriftmeyer - Friday, November 3, 2017 - link
They'll never adopt a VP9 based codec in hardware. Why? Google has abandoned it and isn't developing it further. They're transferring it to Cisco.However, Google decided to incorporate VP10 into AOMedia Video 1 (AV1). The AV1 codec will use elements of VP10 as well as the experimental formats Daala (Xiph/Mozilla) and Thor (Cisco).[94][95][96] Accordingly, Google has stated that they will not deploy VP10 internally or officially release it, making VP9 the last of the VPx-based codecs to be released by Google.
NETVC will also never be added to hardware. Unless you think AMD or Nvidia will bank on anything outside of the MPEG LA you're dreaming.
peevee - Thursday, October 26, 2017 - link
Sounds very very sweet. Especially if laptops with all that power can be made with passive cooling (15W should be fine in a proper metal design).haukionkannel - Thursday, October 26, 2017 - link
Promising, promising indeed. But Ryzen did show that mobile parts can delivery this time too.But as other have said. We need these in cheap models, medium prised models and high end Computers too!
Also differen sized devices. A 10” hyprid tablet would give Intel Atom based models good ride!
And bigger versions may have at least a fighting chance against intel m-series.
bill.rookard - Thursday, October 26, 2017 - link
I'm quite pleased that they managed to come in at 15w. I was reading around prior and thought they were going to be a 35w design, color me shocked. Now we just need to get some in for some testing - how convenient - right when I'm possibly in the market for a new laptop.smilingcrow - Thursday, October 26, 2017 - link
The vast majority of laptops are 15W these days and not just the Ultrabooks so they had to hit the 15W mark to compete.Krysto - Thursday, October 26, 2017 - link
And yet Intel hasn't hit that mark for quad-core chips yet...But I agree, all laptop chips should come in at 15W. Perhaps beefy gaming laptops could still go to 30W or so.
lmcd - Thursday, October 26, 2017 - link
Intel's TDP numbers tend to run more accurately. I wouldn't get too excited.smilingcrow - Thursday, October 26, 2017 - link
Intel recently released a quad core 15W TDP laptop CPU:https://ark.intel.com/products/122589/Intel-Core-i...
Here's a review showing that the actual power usage under load is much higher than for the dual core Kaby Lake:
https://www.notebookcheck.net/Dell-XPS-13-i7-8550U...
Not sure if that system has using a TD-up of 25W or not!
SaturnusDK - Thursday, October 26, 2017 - link
It's not. 40-45W power use under load seems to be standard on all Intel 8th gen so called "15W TDP" mobile CPUs. Previously, or at least from 4th to 7th gen you could be pretty sure that power use under load was almost always just under twice the stated TDP. My own i7-6660U uses 29W under load measured at the voltage rails for example.Intel has lost all credibility when it comes to giving accurate power consumption data.
SaturnusDK - Thursday, October 26, 2017 - link
I have to mention. The 29W is with the CPU and GPU undervolted by 100mV.extide - Friday, October 27, 2017 - link
15W is the TDP, not the max power use. Due to turbo the chips will go past that by a fair amount in bursts, but the average over long periods will be ~15w. This applies to Intel AND AMD. (Either company would be stupid to not do that as you would be leaving tons of burst performance on the table and that is what a lot of real-world use is)wumpus - Thursday, October 26, 2017 - link
200/127/.50 Sounds less impressive when you look at the size of the chip vs. others (although the recent chips appear to be roughly half covered) and realize they are bulldozer-based.The size appears similar to the full Zepplin (8 core) die. Don't expect it to be cheap. Also it appears that Vega will have to make due with shared DDR4 dram, no idea how that will work (I've been dreaming of an HBM area the CPU can use as a cache when the GPU doesn't need it, don't expect such things anytime soon).
jjj - Thursday, October 26, 2017 - link
It's on 14nm,the actual silicon is maybe 15$ and then there is test and packaging.The cost of the die is not much of an issue.They can easily price the average SKU between 80-100$ with the fastest SKUs a bit above and the lesser SKUs bellow. Not that Intel has much different pricing, the prices they list have nothing to do with reality,.wumpus - Thursday, October 26, 2017 - link
While it likely costs the same to make as a R3 Ryzen (closer to an R7, because all CPU cores have to work), it will take a long time to dig themselves out of their hole by pricing it at $80-100.Don't forget they have to pay for the mask with "ryzen mobile" sales, while ryzen and epyc paid for Zepplin die tooling. I don't expect it to be a cheap chip unless AMD is absolutely forced to (like they have been forced to for years and are hungry for Intel level margins).
velanapontinha - Thursday, October 26, 2017 - link
A very small difference between both processors, yet one is Ryzen 5 and the other is Ryzen 7. I really hope these are the lowest R7 and the highest R5.zodiacfml - Friday, October 27, 2017 - link
Appears to me that these are the best parts already. I can get by with a mobile R3 without hyperthreading.stanleyipkiss - Thursday, October 26, 2017 - link
A 14" Acer Swift 3 with the Kaby Lake-R (4 core/8 thread) 8 GB RAM and a MX150 from nVidia gets better FPS in every game outlined by AMD here. Why go for this? It's not lighter, it's not more efficient, it's not faster.In the real world, GPU performance is sub-par a 1030 (MX150). The only upside is the fact that you don't have to deal with Intel's iGPU and nVidia's discrete GPU in the same package. Other than that... not worth the hassle.
BrokenCrayons - Thursday, October 26, 2017 - link
There appears to be a power advantage over a KB + MX150 since the combined consumption of the CPU and dGPU are higher than mobile Ryzen alone. All things equal, you're going to give up some GPU performance in exchange for more battery life. It's a trade-off some people will be willing to make and others will reject. Cool either way, just buy what works best for you and don't worry about it.With that said, I think Vega would do better with dedicated video memory of some sort which is why I would have liked to see these chips released with a small HBM cache that can be used to supplement the system's DDR4, but that's probably an unrealistic pipe dream for the time being. The added costs of associated would make mobile Ryzen more expensive...maybe more than a CPU + dGPU combination capable of the same performance.
Jon Tseng - Thursday, October 26, 2017 - link
Anyone got any hard numbs on how this compares to KBL + MX150? I saw some commentary that Ryzen Mobile was comparable to the 950M... IIRC MX150 was a perf bump on the old 940MX. So are they on a similar level or does MX150 have a material advantage?Was thinking that if they are in the same zip code perf wise then Ryzen is a no brainer given power draw..
stanleyipkiss - Thursday, October 26, 2017 - link
The MX150 is on par if not better than the old 960M. It's a huge step up from all iGPUs.BrokenCrayons - Thursday, October 26, 2017 - link
The 960M is consistently faster in synthetics (by a small margin) and in gaming benchmarks (by a much larger margin) than the MX150. Here's a couple of notebookcheck links to compare:https://www.notebookcheck.net/NVIDIA-GeForce-GTX-9...
https://www.notebookcheck.net/NVIDIA-GeForce-MX150...
vladx - Thursday, October 26, 2017 - link
Indeed GPU performance is vey dissapointing, but par for the course for Vega.SaturnusDK - Thursday, October 26, 2017 - link
A few comments"If we look at processors from Intel that are 4C/8T, like the 35W Core i7-7700T, this scores 777 in our testing, which kind of drives away from AMD’s point here."
Not really. You're still comparing a 35W part to a 15W part, so 90% of the performance for 43% of the power consumption is still a massive gain. What it really says is that the 4-thread processor, be it with 2C or 4C, is truly dead for anything but the lowest tiers. Something Intel has also realized in its current lineup.
" In our desktop Ryzen reviews, we saw per-core power go as low as 8W per core, and AMD’s Vega was power hungry"
Vega is really only power hungry when pushed beyond ~1200MHz. Underclock it to that and Vega is extremely power efficient.
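To put the perf-per-watt point above in concrete numbers, here is a rough sketch using only the Cinebench R15 nT scores and TDPs quoted in this thread (707 at 15W for the R7 2700U, 777 at 35W for the i7-7700T); it is illustrative arithmetic, not new benchmark data:

# Rough performance-per-watt comparison from the figures quoted in this thread.
parts = {
    "Ryzen 7 2700U (15W)": (707, 15),
    "Core i7-7700T (35W)": (777, 35),
}

for name, (score, tdp) in parts.items():
    print(f"{name}: {score / tdp:.1f} Cinebench points per watt")

# 707/777 is about 91% of the performance at 15/35, about 43% of the power.
print(f"performance ratio: {707 / 777:.0%}, power ratio: {15 / 35:.0%}")

Run as-is, this prints roughly 47.1 points per watt for the Ryzen part versus 22.2 for the Intel part, which is where the "more than double" figure further down the thread comes from.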
smilingcrow - Thursday, October 26, 2017 - link
"Vega is really only power hungry when pushed beyond ~1200MHz. Underclock it to that and Vega is extremely power efficient."This is very relevant for laptops but most desktop users don't seem to care as they want performance rather than efficiency and that is how AMD configure them.
SaturnusDK - Thursday, October 26, 2017 - link
Correct. It's also relevant to miners, for example, who generally run their GPUs at around 1GHz as that's the optimum for power cost vs. performance. However, the article is about mobile CPUs, and in that respect it might have been a good idea to mention that running lower clock rates drastically lowers Vega's power consumption, which is exactly what has been done in these APUs.
extide - Friday, October 27, 2017 - link
Yeah, but that 35W part could sustain that performance for a much longer time, if not indefinitely. The 15W AMD part (and likewise 15W Intel parts) will throttle down a fair bit after sustained use. According to AMD, the R7 2700U drops to ~550 in Cinebench after a 5-minute loop. (Last slide on page 3.)
SaturnusDK - Friday, October 27, 2017 - link
It's possible it could sustain it for longer. We don't know that, though. And even 550 points is still a massive performance-per-watt advantage for the AMD part: 770 points at 35W is 22 points per watt, while 550 points (sustained) at 15W is 36.67 points per watt. A whopping 66.7% performance-per-watt advantage.
neblogai - Friday, October 27, 2017 - link
The 550 points are at 25W; it's 450 at 15W, as noted regarding AMD's slide 19.
ruthan - Thursday, October 26, 2017 - link
If AMD's TDP values can be trusted, this could finally be a good CPU/GPU for SmachZ or GPD gaming devices. The present Atoms and AMD solutions are very slow for decent mobile gaming.
IGTrading - Thursday, October 26, 2017 - link
You cannot compare Intel's Atom with AMD's Carrizo. It would be nonsense. Atom is an extremely low-performance part that is barely able to equal the gaming performance of a Pentium 4.
Moreover, AMD's TDP values are the ONLY trustworthy ones, while Intel has been shown to respect its TDP only when it feels like it, and it also changes the TDP definition almost every year.
AMD's Ryzen processors don't step even 2% outside their rated TDP, no matter the load scenario.
Intel's 8700K and 7980XE can exceed their rated TDP by 16%.
So I'm inclined to trust AMD's specifications 100% more than Intel's.
t.s - Thursday, October 26, 2017 - link
A good read, Ian! Thanks!
Bateluer - Thursday, October 26, 2017 - link
I was very interested in that Lenovo IdeaPad 720S model, 13.3in with a 4K screen and the 2700U. It seemed like it'd be a nice upgrade from my current Yoga 900 after underwhelming options in the 910 and 920. Then I read that it was going to be stuck in single channel with only PC4-2133. That makes it significantly less attractive.
xemone - Thursday, October 26, 2017 - link
This is impressive, and I'm glad to see AMD chips that can finally compete with Intel in the low-TDP range. I am, however, disappointed that LPDDR4 compatibility isn't included in the initial parts. But these are only the first two and there are more to come, so I'm hopeful we'll see chips that support power-sipping memory. Any 15W TDP chip intended for the ultrathin mobile market should at least allow for LP-DRAM. Let's not forget Intel has opened up Thunderbolt 3 and made it royalty-free. Adding these two technologies to AMD's Infinity Fabric "interconnect" onboard Raven Ridge would allow manufacturers to build sleeker devices. Board space is at a serious premium, and that's often why it's hard to find low-power AMD chips in these premium thin-and-lights.
Things are about to change!
[email protected] - Thursday, October 26, 2017 - link
“If we look at processors from Intel that are 4C/8T, like the 35W Core i7-7700T, this scores 777 in our testing, which kind of drives away from AMD’s point here. AMD succeeds in touting that it has ‘desktop-class performance’ in a small power package, attempting to redefine its status as high performance. Part of me thinks at this level, it could be said that all the mobile processors in this range have ‘desktop-class performance’, so this is a case of AMD now catching up to the competition.“
You just said that in Cinebench R15, AMD’s Ryzen 7 2700U achieved 707 at 15W and compared it to a 35W Intel product that achieved 777. But you call this catching up; I would call it blowing past the competition! That score is nearly double the performance per watt, considering that you just compared AMD’s product with a 15W TDP to an Intel product with a 35W TDP.
[email protected] - Thursday, October 26, 2017 - link
Looking more closely, a 15W Ryzen 7 2700U appears to fall right in line with an Intel Skull Canyon NUC’s 45W Core i7-6700HQ in CPU performance and slightly outperforms it in GPU performance. Per the official AnandTech review, the Skull Canyon NUC got a Cinebench R15 ST/MT score of 148.24/711.04. Per NotebookCheck, its Iris Pro Graphics 580 achieves a score of 3510 in 3DMark 11 - Performance.
SaturnusDK - Thursday, October 26, 2017 - link
I was similarly perplexed by the wording used here. How is more than double the performance per watt "catching up"? The examples have the i7-7700T scoring 22.2 points per watt, while the R7 2700U completely annihilates that with 47.1 points per watt. Seems to me that it is Intel that has a lot of catching up to do.
[email protected] - Thursday, October 26, 2017 - link
It could be a combination of Intel having the lion’s share of media mindshare for years (before Ryzen, for the longest time, the fact of the matter was that Intel was far and away the superior architecture) and the fact that there may have been very limited time between the receipt date and the embargo, giving way to more errors cropping up in a highly rushed journalism process.
extide - Friday, October 27, 2017 - link
Yeah, but that 35W part could sustain that performance for a much longer time, if not indefinitely. The 15W AMD part (and likewise 15W Intel parts) will throttle down a fair bit after sustained use. According to AMD, the R7 2700U drops to ~550 in Cinebench after a 5-minute loop. (Last slide on page 3.)
SaturnusDK - Friday, October 27, 2017 - link
It's possible it could sustain it for longer. We don't know that, though. And even 550 points is still a massive performance-per-watt advantage for the AMD part: 770 points at 35W is 22 points per watt, while 550 points (sustained) at 15W is 36.67 points per watt. A whopping 66.7% performance-per-watt advantage.
lilmoe - Thursday, October 26, 2017 - link
I probably missed this, but any word on bulk pricing in comparison with Intel's U series?
Other than that, I'm pretty damn sure the 14nm LPP will shine at the 15W and lower power envelopes. This is where the performance-per-watt comparisons matter for consumers. The same should be applicable to mobile Vega. I wonder if pairing the APU with a discrete mobile Vega part would have any advantages over an Intel/NVIDIA pair. Hopefully it would have better harmony and better switching drivers.
I would also love to see benchmarks emphasising latency vs. Intel's Speed Shift. I just hate to admit Intel might have an advantage there.
Too early to tell, but boy am I excited, for the first time in what feels like ages.
Kamen75 - Thursday, October 26, 2017 - link
Samsung's 14nm LPP process, licensed by GF, just doesn't do Ryzen and Vega much justice on high-performance desktop parts. Given 14nm LPP's smartphone SoC heritage, it sure does let these low-power AMD designs shine, though. I'm also anticipating great things from IBM's 7nm process, so long as it isn't delayed for an extra year. Bring on the 4.5 - 6 watt fanless APUs.
I too am excited for the first time in years.
lilmoe - Thursday, October 26, 2017 - link
Fingers crossed for 6 or even 8 Zen 2 cores and 14-16 CUs at 7nm, with higher max clocks for ST. HBM would be the icing on the cake. Throw in a dGPU with twice or thrice the CUs, and put your hand in my pocket and help yourself to my wallet, AMD.
Rocket321 - Thursday, October 26, 2017 - link
If you can dodge a wrench, you can dodge a ball.
MrCommunistGen - Thursday, October 26, 2017 - link
Hahahaha! I was thinking along the same lines:
"Remember the 5 D's of dodgeball: Dodge, duck, dip, dive and dodge."
when I read:
"Because mobile systems are thermally limited, battery limited, power limited, and battery limited"
WaltC - Thursday, October 26, 2017 - link
Good for AMD! Last quarter was merely the prelude--in the next few quarters, when mobile Ryzen hits its production stride and is fully optimized, along with the integrated Vega GPU, AMD is going to begin breaking all kinds of records. With the dedicated Ryzen/Vega teams at AMD looking ahead to Ryzen 2 for the desktop, AMD will keep the "pedal to the metal" on IPC improvements, die shrinks and all the rest of it. What they won't do is what the old management teams at AMD did, which was to break records and zoom out far ahead of Intel's best (the original Pentium architecture at the time)--only to stupidly *sit on it* after that and hang out a shingle in the nutty idea that it might take Intel a decade to catch them...;) That won't be happening ever again, thanks to Su and the rest of the magnificent design and development teams! Thanks, AMD--nothing smells as sweet to me as renewed and real competition in the x86 CPU marketplace! Make Intel earn every penny from now on!
zodiacfml - Friday, October 27, 2017 - link
They are now finally on par with Intel's 2015 part, Broadwell, but using a quad core. I fear that the mobile Ryzen 3 parts will be dual cores. AMD will get even closer once they reach 7nm, putting them on the same playing field as Intel.
spdragoo - Monday, October 30, 2017 - link
We'll have to wait & see. However, since it seems like their R5 2500U (4C/8T) is a mobile counterpart to their desktop R5 1500X (also 4C/8T), with a lower base clock but similar Turbo clock, I would suspect that the mobile R3 is going to be a 4C/4T CPU. In fact, I wouldn't be surprised if a) they call it the Ryzen 3 2300U (to keep with the naming convention) & b) it comes with the same clocks as the 2500U (maybe less L3 cache, but I doubt it).
What should be interesting is seeing how these compare to Intel's Kaby Lake Refresh low-power CPUs, like the i5-8250U/8350U or i7-8550U/8650U. Except for a Skylake-H i5 & a Skylake-H i7, those were the first true quad-core mobile CPUs with that low of a TDP (& those 2 had a 25W TDP instead of the 15W the Kaby Lakes & these Ryzens have)...
del42sa - Thursday, October 26, 2017 - link
Manufacturing process ???SaturnusDK - Thursday, October 26, 2017 - link
It says 14nm in the very first info box on the first page.
Finestar - Thursday, October 26, 2017 - link
I'm glad that Acer is included in this. They are my brand. https://www.finestar.com/
twtech - Thursday, October 26, 2017 - link
That Envy X360 looks nice - except for the presence of that number pad, which makes it an automatic do-not-buy.
SomeCodeJunkie - Thursday, October 26, 2017 - link
Typo in your chart:
Quad-Core with SMT
2.0 GHz Base
3.8 GHz Turbo <- Turbo on the 2500U is 3.6 GHz
zodiacfml - Friday, October 27, 2017 - link
They'll succeed regardless of marketing if enthusiasts swear by the product. OEMs will also follow if there's enough buzz. I could see these as good competition to Kaby Lake R, where, once again, AMD dominates the graphics-limited benchmarks.
haplo602 - Friday, October 27, 2017 - link
Hmm, AFAIK the HP Envy x360 with the previous (pre-Raven Ridge) A10/A12 was/is single channel, so if the socket is the same and HP just reuses the rest of the machine, I don't see that one as a good laptop.
haplo602 - Friday, October 27, 2017 - link
Hmm... can't edit comments? Oh well. I managed to find the maintenance and service guide; it is actually dual channel. Sorry for the misinformation.
Lolimaster - Saturday, October 28, 2017 - link
The previous gen was Carrizo, and IT'S NOT THE SAME SOCKET; IT'S A TOTALLY DIFFERENT ARCH.
BaldFat - Friday, October 27, 2017 - link
I want this for my home server. 15W and enough power to keep my programs running quickly.
rocketbuddha - Friday, October 27, 2017 - link
Wow! Finally some 1080p screens... Could anything be more average in the laptop world?
HP not using the top-model 2700U.
Lenovo sticking to a single memory channel.
Acer not including a touch display, and only a 256GB SSD.
AMD! With friends like these, who needs enemies?
GreenReaper - Thursday, November 9, 2017 - link
If HP didn't take the 2500U, AMD might make a loss on chips it couldn't sell. HP is doing AMD a favour by taking the slightly defective chips off its hands.
If they didn't, the 2700U you want would have to increase in price to compensate.
doggface - Saturday, October 28, 2017 - link
Two things come to mind.
1. These would be a great replacement for all those shitty $500 laptops that have an i7 CPU + NVIDIA 950M or similar. Use the saved BOM cost for better screens, keyboards, and SSDs and I think they would steal the show.
2. If AMD could put together an MX150-style card for laptop GPUs that could team with the APU (via CrossFire?), they might be able to get some truly excellent gaming results for less than $1000. Especially with FreeSync 2 in the mix.
Kudos, AMD (from someone who wishes he hadn't just bought a laptop).
Lolimaster - Saturday, October 28, 2017 - link
They don't need a dGPU; a full Raven Ridge with 704 Vega SPs + 2GB of dedicated HBM2 would murder the MX150. Another option is to pair a 25-35W RR with a high-end mobile GPU for gaming laptops.
YoloPascual - Saturday, October 28, 2017 - link
Where are my ThinkPads at?
Rickyxds - Saturday, October 28, 2017 - link
i5 better than i7
LordanSS - Saturday, October 28, 2017 - link
If a 15W part does this well... I'd say a 35-45W APU (which can be undervolted if necessary) with 4C/8T and ECC would make for a pretty awesome ZFS NAS...
Lolimaster - Saturday, October 28, 2017 - link
The absurd efficiency of Zen cores at low frequency really helps squeeze extra frequency out of the GPU. Going from 700-800MHz on average for mobile to 1.3GHz is brutal.
deksman2 - Tuesday, November 7, 2017 - link
That's because of the manufacturing process AMD uses. The GloFo 14nm currently used for Ryzen and Vega is better suited to lower clocks, not high ones.
It's one of the reasons why Vega has a higher power draw than Pascal in the desktop space... it's currently clocked too high and carries too much unnecessary voltage (and we've seen that undervolting a Vega 56 can bring power consumption down to 1070 levels, while simultaneously overclocking its core and HBM brings performance up to 1080 levels, at a lower power draw than a 1080).
At lower frequencies, though, I would imagine that power draw would go down substantially for both Ryzen and Vega... plus, Ryzen is only using one CCX here, so that likely helps as well.
It would be really good if AMD decided to make a Raven Ridge 2, for example an 8C/16T APU with a Vega iGPU... all connected via Infinity Fabric, of course, on 7nm and at the same power draw.
I would imagine it would be doable, but at the same time clocks could increase by a fairly good amount... the desktop could run at a 5GHz base using the upcoming 7nm manufacturing process from IBM... as for Raven Ridge 2... possibly a 3.2GHz base with 8 cores and 2 Navi iGP.
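As a rough illustration of why backing off clocks pays off so disproportionately, here is a toy dynamic-power model (P scales roughly with C * f * V^2). The voltage points below are made-up assumptions, not AMD's actual DVFS tables; only the shape of the curve matters:

def relative_dynamic_power(freq_ghz, volts, ref_freq=1.5, ref_volts=1.20):
    # Dynamic power scales roughly with frequency times voltage squared.
    return (freq_ghz * volts**2) / (ref_freq * ref_volts**2)

# Illustrative operating points only (assumed, not measured):
for f, v in [(1.5, 1.20), (1.3, 1.00), (1.2, 0.95), (1.0, 0.85)]:
    print(f"{f:.1f} GHz @ {v:.2f} V -> {relative_dynamic_power(f, v):.0%} of reference power")

With numbers like these, dropping from ~1.5GHz to the ~1.2-1.3GHz range roughly halves dynamic power while giving up only 15-20% of the clock, which is consistent with the earlier observation that Vega is power hungry mainly when pushed past ~1200MHz.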
hanselltc - Sunday, October 29, 2017 - link
Now all I need is an HP Envy 13 or a Razer Blade Stealth with this inside.
Ro_Ja - Monday, October 30, 2017 - link
I'm looking forward to replacing my ASUS X550ZE-XX026H. Dual graphics is annoying; I can't even disable the onboard GPU to use the R5 M230.
tyaty1 - Monday, October 30, 2017 - link
They look good, but if they cannot support 16/32GB of RAM, it is a deal breaker. I have an Asus G551JW with a 4200H, which performs very well, but I want to upgrade to a quad core that does not throttle and whose graphics performance is not much worse than the 960M.
tipoo - Monday, October 30, 2017 - link
The Swift 3 will be interesting; I wonder how this will compare to the MX150.
neblogai - Monday, October 30, 2017 - link
I'd expect ~80% of the performance in 3DMark, and ~50% in games. Not just because of less bandwidth, but also because it runs with a total 25W TDP for both CPU and GPU, while the MX150 system will run at approximately 40W (15W + 25W).
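Taking those estimates at face value (they are guesses, not benchmark results), the efficiency math would look roughly like this:

# Hypothetical figures from the estimate above: ~80% of MX150 3DMark performance
# at a combined 25W, versus a KBL-R + MX150 baseline at roughly 40W.
apu_perf, apu_power = 0.80, 25
base_perf, base_power = 1.00, 40

ratio_3dmark = (apu_perf / apu_power) / (base_perf / base_power)
ratio_games = (0.50 / apu_power) / (base_perf / base_power)

print(f"Relative perf/W, 3DMark estimate: {ratio_3dmark:.0%}")  # ~128%
print(f"Relative perf/W, games estimate:  {ratio_games:.0%}")   # ~80%

In other words, even on these conservative guesses the APU would still come out ahead on efficiency in synthetic loads, while in games the lower absolute performance would roughly cancel out the power savings.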