70 Comments

  • RedGreenBlue - Thursday, April 26, 2018 - link

    I never could understand the strategy AMD could have had in letting him go. Now, the worst case of all, he goes to Intel. I expect he’s working on that rumored brand new architecture that rewrites or abandons x86. Maybe that’s his dream job, to usher in a new era beyond x86. Seriously though, if that’s what he wanted, maybe AMD, specifically Lisa Su, should have asked him what he wanted to do and paid him to do it. This is the freelance Da Vinci of architecture design.... and they let him go?
  • mukiex - Thursday, April 26, 2018 - link

    Intel has abandoned x86 before; it didn't work out for them. Internally, though, they dumped x86 well over a decade ago: pretty much every modern x86 processor is some type of RISC hybrid internally.
  • peevee - Thursday, April 26, 2018 - link

    Both x86 (more correctly, x64) and any RISC are outdated. Both are concepts based on the understanding of the '80s (and many ideas from the '70s or earlier). Technology has fundamentally shifted the requirements for processor architectures, and these concepts are now the main roadblocks to further performance increases, starting with the 1940s idea of separating the CPU from memory, which has been hitting speed-of-light limitations for a couple of decades now and has resulted in a proliferation of inefficiency in the form of caches, prefetch, speculative execution, etc.
  • FunBunny2 - Thursday, April 26, 2018 - link

    "memory which hit physical speed of light limitations "

    leetle electrons don't push through wires at anywhere near speed of light. not even light in fiber does.
  • peevee - Friday, April 27, 2018 - link

    The physical speed of the electrons has nothing to do with the signal speed, which is the speed of the electromagnetic wave in the medium (slower than c, the speed of light in vacuum).
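    For a very rough sense of the scale involved, here is a back-of-the-envelope sketch (assumed ballpark numbers, not figures from the article): at a few GHz, even a vacuum-speed signal covers only a few centimetres per clock cycle, and signals on real wires propagate at roughly half of c.

        # Back-of-the-envelope: how far can a signal travel in one clock cycle?
        # All constants below are assumed ballpark values, not measurements of any specific chip.
        C = 299_792_458            # speed of light in vacuum, m/s
        PROPAGATION_FACTOR = 0.5   # rough fraction of c for a signal on a long wire or PCB trace
        CLOCK_HZ = 4e9             # a typical ~4 GHz CPU clock

        cycle_time_s = 1 / CLOCK_HZ
        vacuum_cm = C * cycle_time_s * 100
        wire_cm = vacuum_cm * PROPAGATION_FACTOR

        print(f"One cycle at {CLOCK_HZ / 1e9:.0f} GHz lasts {cycle_time_s * 1e12:.0f} ps")
        print(f"Light in vacuum covers ~{vacuum_cm:.1f} cm per cycle")
        print(f"A signal at ~0.5c covers ~{wire_cm:.1f} cm per cycle, or ~{wire_cm / 2:.1f} cm out and back")

    So a DIMM sitting several centimetres from the core cannot even be round-tripped in a single cycle, before any actual DRAM access time is counted, which is the physical limit being pointed at above.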
  • Dragonstongue - Thursday, April 26, 2018 - link

    x64 (AMD64) is AMD's property and x86 is Intel's property, so please make sure you know what you are referencing... The physical speed of light only matters so much when the transmitting parts are at nm scale and only mm apart from each other; these are still electronic chips, and optical interconnects have barely been used beyond test beds.

    As far as x86, and especially x64, being "outdated": that is laughable at best and foolish to say at minimum, seeing as many chips these days are still based on 32-bit architectures, 64-bit is in most places still very much in its infancy, and Intel, AMD and many others use CISC and RISC designs onto which x86 and x64 are "bolted" where applicable.

    The point is, 64-bit computing is anything but "outdated". x64 was only "completed" by AMD around the year 2000, and its possibilities are absolutely not even close to being "ancient", even though you did not use that word.

    Not everyone builds on x86 (no license), and not everyone builds RISC or any of the other handful of design types for processors and OSes. I suppose they were all based on ideas thought up in the '70s/'80s, but funny, is it not, how far they have pushed the boundaries since then... all based on earlier "ideas" which they have found ways of making happen, or have yet to make happen.

    Anyways, AHAHAH is all I can really say, peevee.
  • close - Friday, April 27, 2018 - link

    The only thing that makes x86 (including x86-64) feel outdated is the insistence on carrying so much legacy going forward. This is great of course because it means you can run 20 year old software on your modern CPU. Imagine running any of today's software on a phone 10 years from now... You'd think it's ridiculous. But running old software on PCs is a different matter.

    This is also not so great because it prevents the architecture from being lean and optimized. This is coincidentally also the biggest burden on the Windows OS. Getting rid of legacy would cause an uproar on one side, but on the other it would really improve performance and security.
  • Kevin G - Friday, April 27, 2018 - link

    Legacy support always has a cost associated with it. This has to be compared to the costs associated with porting software to the new architecture and transitioning to the new platform. Generally speaking, keeping legacy x86 support has always won versus developing a new architecture from scratch.

    The last time a new architecture won was recently, in ultra-mobile: there was always going to be a cost to port software to mobile or to develop new applications from scratch, and ARM has clearly won this market. The only foothold x86 has here, on the tablet side, mainly leverages legacy support as a selling feature (MS Surface etc.). I would not discount the idea of running some of today's ultra-mobile software on platforms 10 years from now. Even now I've come across cases where using a slightly older version of Android on newer hardware is preferable, to better match application support.
  • FunBunny2 - Friday, April 27, 2018 - link

    "This has to be compared to the costs associated with porting software to the new architecture and transitioning to the new platform. "

    the problem isn't, and hasn't been since *nix/C, the hardware. it's the OS calls. strictly speaking, all you need is a compiler. the tough part is the balance of sys calls to language code. at one time it was estimated that Windoze application code was 80% sys calls. that's going to be a bear to port.
  • peevee - Friday, April 27, 2018 - link

    "at one time it was estimated that Windoze application code was 80% sys calls"

    Estimated by whom?
  • Kevin G - Friday, April 27, 2018 - link

    There is still a cost here even with the widespread use of C and libraries. Various libraries do leverage assembler for performance reasons, and drivers often touch assembler a bit too. Compilers need to be written. Various interpreted languages need to be ported and validated. Even if a project is entirely C, it'll need to be tested thoroughly before it is production ready.
  • peevee - Monday, April 30, 2018 - link

    Thankfully, supporting a new hardware platform got much easier with LLVM.
  • FunBunny2 - Friday, April 27, 2018 - link

    "This is great of course because it means you can run 20 year old software on your modern CPU."

    there's at least millions, if not billions, of lines of COBOL and C/C++ code older than that running on *nix on x86 machines. like it or not, x86 is the System/360 of today.
  • peevee - Friday, April 27, 2018 - link

    "The only thing that makes x86 (including x86-64) feel outdated is the insistence on carrying so much legacy going forward. "

    Unfortunately, that is not it. That might have been the case 20 years ago; we are way past it. The legacy 386 instruction-set decoder takes something like 0.0000001% of the silicon now. Things have gotten WAY worse than that due to the disconnect between feature scale and architecture. Simplifying, only big-data processing (like video etc.) matters anymore, and that data is stored millions of times farther away than the registers, meaning it takes far more time to carry the signal and far more energy, both to overcome resistance and to fight RF noise in the long, long lines.
  • Kevin G - Friday, April 27, 2018 - link

    There are still 8-bit and 16-bit modes supported in hardware. Various x86 prefixes have been retired only to come back in x86-64 under a different mode (looking at you, VEX). The x87 FPU has been deprecated, but you still need those registers in hardware even if most of those instructions are likely microcoded into SSE/AVX operations. That also highlights a hidden cost of legacy: special-mode bypasses that otherwise wouldn't exist. Going back to x87, an 80-bit FPU operation could be microcoded to run on the more modern SSE/AVX hardware as two 64-bit operations for 128 bits of precision, but additional hardware/microcode would be necessary to cast that 128-bit result down to 80 bits. This additional hardware can increase the difficulty of pipelining operations and/or make additional stages necessary.

    It is true that the number of transistors for this support consumes less and less die space by percentage. This is a natural result of putting increasingly more cache on-die relative to the amount of processing logic. The top-of-the-line Xeon Platinum has 66.5 MB of L2 and L3 cache, which is ~3.2 billion transistors just in SRAM for the data, not including tags or controller logic: around half of the transistor count is just cache. The reason the extra transistors matter when carrying legacy is how often a cache line is accessed vs. the transistors used for an instruction decoder. Removing legacy is about optimizing the most frequently used parts of the execution pipeline, not freeing up aggregate transistors to utilize elsewhere.
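    For what it's worth, the ~3.2 billion figure falls straight out of the standard six-transistor SRAM cell. A quick sanity check (assuming 6T cells and decimal megabytes; tags, ECC and controller logic excluded, as above):

        # Sanity-check the "~3.2 billion transistors just for the cache data" estimate.
        # Assumes a classic 6-transistor (6T) SRAM cell and decimal MB.
        CACHE_MB = 66.5            # L2 + L3 on the top Xeon Platinum (28 x 1 MB L2 + 38.5 MB L3)
        TRANSISTORS_PER_BIT = 6    # one 6T cell per stored bit

        bits = CACHE_MB * 1e6 * 8
        transistors = bits * TRANSISTORS_PER_BIT
        print(f"{CACHE_MB} MB of SRAM data ~= {transistors / 1e9:.2f} billion transistors")
        # -> roughly 3.19 billion, i.e. the ~3.2 billion quoted above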
  • peevee - Friday, April 27, 2018 - link

    "as far as x86 and especially x64 being "outdated" that is laughable"

    You are clearly very ignorant.
    If you look at a modern computer (for example, the one in your pocket), no critical computation is performed on the CPU anymore.
    Modem? Special processor.
    Camera/stills? Special processor/ISP.
    Camera/video? Special processor.
    GPU? Special processor.
    Music? Special processor/DSP.
    Display control - special processor (not the same as GPU - it is for color space conversion etc).
    AI - special processor.
    Motion processing - special processor.
    Because the 1980s and earlier ideas and their implementation in ARM/Intel etc are GOOD FOR NOTHING these days.
    The von Neumann architecture was invented for the vacuum-tube computers of the 1940s, which filled one big room. If you scaled a modern PC so that the ALU filled one big room, the memory DIMMs would be on another continent, with the series of caches in the next city and the next state so it would not feel quite so far away. It just does not fit anymore.
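    To put some rough numbers on that analogy (typical order-of-magnitude latencies, assumed rather than measured from any particular CPU), here is the same scaling exercise: if a register access is shrunk to arm's reach, main memory ends up a few hundred metres away.

        # Order-of-magnitude memory-hierarchy latencies, scaled to everyday distances.
        # All latency figures are assumed ballpark values, not measurements of a specific CPU.
        latencies_ns = {
            "register": 0.25,   # ~1 cycle at 4 GHz
            "L1 cache": 1.0,    # ~4 cycles
            "L2 cache": 3.0,    # ~12 cycles
            "L3 cache": 10.0,   # ~40 cycles
            "DRAM":     100.0,  # main memory, including the trip off-chip
        }

        ARM_REACH_M = 1.0  # pretend a register access is "within arm's reach"
        scale = ARM_REACH_M / latencies_ns["register"]

        for level, ns in latencies_ns.items():
            print(f"{level:8s}: {ns:6.2f} ns -> scaled distance ~{ns * scale:6.1f} m")
        # DRAM lands about 400 m away on this scale: not another continent, but far
        # enough that caches, prefetch and speculation exist mostly to hide the trip.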
  • FunBunny2 - Friday, April 27, 2018 - link

    "Von Neumann architecture was invented for lamp computers of 1940s"

    makes not an iota of difference what hardware existed back then. the architecture is the result of mathematical logic. the fact that processor tech has gone back to chasing single-threaded performance is the tell: there just aren't many user-space embarrassingly parallel problems.
  • peevee - Monday, April 30, 2018 - link

    "makes not an iota of difference what hardware existed back then"

    It does, because it influenced their thinking.
  • T2k - Friday, April 27, 2018 - link

    "Both x86 (more correctly, x64) and any RISC are outdated. Both are concepts based in the understanding from the 80s (and many ideas from the 70s or earlier)
    (...)
    starting from the main idea from the 40s of separation of CPU and memory which hit physical speed of light limitations"

    Are you high?
  • peevee - Friday, April 27, 2018 - link

    Nope. I am enlightened. :)
  • FunBunny2 - Friday, April 27, 2018 - link

    he's not entirely high. google the TI-990 and read up on its architecture.
  • ilt24 - Thursday, April 26, 2018 - link

    RedGreenBlue, I'm sure AMD tried to keep him, but I think Keller felt he had completed the job they hired him for and wanted to try something different.

    We are talking about a guy that started out working on VAX and then Alpha at DEC, jumped to AMD to work on the K8, left AMD to go work on a MIPS processor, then went to P.A. Semi to work on ARM; Apple bought them and he worked on a couple of Apple SoCs before going back to AMD to do Zen, then jumped to Tesla and now to Intel.
  • tipoo - Thursday, April 26, 2018 - link

    Looking at his history, I don't think it's about AMD. He goes where he goes until his work is done, then moves on once a core/project is in good shape.
  • jjj - Thursday, April 26, 2018 - link

    From the start he was only gonna stay at AMD for 3 years, he did his part and moved on, just like now at Tesla. He does his job and moves to the next project, instead of babysitting it for 5 more boring years.
  • Cooe - Thursday, April 26, 2018 - link

    When you are literally the best person in the world at what you do, you pick and choose who you are going to work for and what you are going to work on, not the company. Not only did circa-2015, near-bankruptcy AMD not have the kind of bankroll to afford keeping him on long-term, but with the Zen design finished and Zen 2 laid out, there simply wasn't enough interesting stuff going on to keep him there anymore. Kind of sad he's now working for the company that pulled such egregiously illegal anti-competitive practices to torpedo the best designs he ever worked on (K7 & K8), but I can definitely understand the allure. Intel's architecture efforts have hit a massive brick wall, so just like AMD they are gonna give him completely free rein with all the keys to the castle, but unlike AMD, Intel has the bankroll on tap to let Jim do pretty much whatever the hell he wants, for as long as he wants. Zen otoh was more of a favor to save the company that launched him than a long-term destination, though honestly, with Jim's track record and penchant for big, paradigm-shifting projects, I wouldn't be surprised at all if this Intel stint also only lasts a few years before he picks up shop for somewhere else yet again.
  • FunBunny2 - Thursday, April 26, 2018 - link

    "Intel's arch efforts have hit a massive brick wall"

    it will be really interesting to see whether the maths will support any more significant increases in circuit performance. that's separate from migrating lots of off-chip logic on-chip. at some point the maths dictate the "best" instruction set (micro or otherwise) for a given problem space.
  • Samus - Thursday, April 26, 2018 - link

    Jim Keller and Hector Ruiz saved AMD. Without either of them, AMD would have probably filed for bankruptcy at some point. Two engineers at the top of their game.
  • Kevin G - Friday, April 27, 2018 - link

    Jim didn't get everything he wanted at AMD though. Where is the K12 ARM core?
  • patrickjp93 - Saturday, April 28, 2018 - link

    Put on hold until the world shows some real demand for ARM-based servers. Even Qualcomm Centriq isn't exactly turning heads.
  • DiamondPugs - Sunday, December 23, 2018 - link

    AMD didn't let Jim Keller go. He left because he didn't want to work for AMD anymore. He is a very interesting person, the kind you cannot buy simply with money. Micro-architectures are his passion, he works for joy, and he usually looks for a big challenge. He joined AMD again because they were looking to create a new architecture from scratch capable of competing with Intel, and after it was done there was nothing of interest for him in the company. He simply left to pursue a similar challenge somewhere else. Later he joined Tesla and helped them with their self-driving car processors since that was a big challenge. Intel is now looking to create a new architecture and that got his attention.
  • webdoctors - Thursday, April 26, 2018 - link

    This isn't even notable news. In Santa Clara, you can go from Nvidia to Intel or Qualcomm to AMD all within 1-2 km. Nvidia and Intel are right next to each other, separated by just a highway, and next to Nvidia is Huawei. And if you walk along the street from Nvidia toward Qualcomm and keep going past it, you'll get to AMD.

    My friends and I aren't even high level engineers like this guy and we've worked at most of these companies.
  • Ket_MANIAC - Thursday, April 26, 2018 - link

    Lol, this is the best!
  • wut - Thursday, April 26, 2018 - link

    It's only not notable news when small, no-name tater tots move around.
  • PeachNCream - Thursday, April 26, 2018 - link

    It's news when those tater tots are on your plate and they suddenly seek out employment on your kitchen or dining room floor without your active involvement in the termination process. Of course, in that case, you're probably in the midst of an earthquake and should be worrying about things other than where your tater tots are going so the point you're making still holds true.
  • lazarpandar - Thursday, April 26, 2018 - link

    Wait lol was this an analogy or not
  • PeachNCream - Friday, April 27, 2018 - link

    I'm not sure what I was going for there. I had a point, but um...it's gone now. Friday's PeachNCream has no idea what Thursday's PeachNCream was rambling about.
  • mode_13h - Saturday, April 28, 2018 - link

    Dunno, but I'm hungry.

    Now, where's me ketchup?
  • SkyBill40 - Friday, April 27, 2018 - link

    I suppose the proximity to one another keeps the travel times down should someone switch jobs, eh?
  • mode_13h - Saturday, April 28, 2018 - link

    If you think about it, they must've each located near the others to quickly attract a skilled labor force. The downside is that, once they've staffed up, their proximity makes it that much easier to lose said labor force.

    This is probably where some genius got the idea to establish tech startups on cruise ships. Once you staff up, then you just pull up anchor and set sail. Not to mention no more H1B restrictions in international waters.
  • jtd871 - Thursday, April 26, 2018 - link

    Heterogeneous? As in, HSA?!
  • mode_13h - Friday, April 27, 2018 - link

    Perhaps mixing CPU, GPU, FPGA, etc. in the same package, with those dies possibly fabbed at different manufacturing nodes.

    So, maybe Intel takes their iGPU off-die and gets adequate yield on just the CPU cluster at 10 nm, and then keeps making the GPU die on 14 nm.
  • patrickjp93 - Saturday, April 28, 2018 - link

    Likely the other way around given 10nm isn't so kind on clock speeds.
  • patrickjp93 - Saturday, April 28, 2018 - link

    HSA is dead. Get over it. OpenMP, SYCL, and HPX were doing what HSA is doing 10 years ago, and they do it better without code bloat.
  • patrickjp93 - Saturday, April 28, 2018 - link

    And heterogeneous as in one die may be 14nm, another 10nm, or Hell 22nm to put the chipset on the package.
  • baka_toroi - Thursday, April 26, 2018 - link

    WHY DID YOU HAVE TO GO WITH THAT WHORE, JIM!? YOU BETRAYED ME! :'(
  • CajunArson - Thursday, April 26, 2018 - link

    Don't you think you're a little late?

    He worked for Apple and quit there YEARS ago.
  • alphasquadron - Thursday, April 26, 2018 - link

    These people being at such a high level in their companies, don't they know specific trade secrets of their previous companies that could be used in the new company for efficiency gains? Do they just block that out of their head when thinking of designing a new processor?
  • casperes1996 - Thursday, April 26, 2018 - link

    That's California for you! They have relatively loose rules on all this. He can't explicitly implement ideas from competing products, but he can use general concepts for inspiration.
  • mode_13h - Friday, April 27, 2018 - link

    I think the idea is that AMD patents all of his ideas they use and discuss. Then, he can't just go and use those same ideas at Intel, because Intel will try to patent them and discover that they're already patented.

    Being a top performer, he can't afford to hold back any applicable ideas from his current project, because he's got a reputation to protect. Zen wouldn't be what it is if everyone didn't bring their A game.
  • jjj - Thursday, April 26, 2018 - link

    The other very interesting bit is that if he left Tesla, he was likely done with his part and Tesla should have its own silicon in a not too distant future.

    There were also some rumors about Tesla working with AMD but, AMD or not, Tesla having its own silicon is gonna be way interesting, as Tesla is maybe the most focused on machine vision. They are aiming much higher than others in this area, as they don't want to count too much on other sensors.
  • mode_13h - Friday, April 27, 2018 - link

    Nvidia's Drive PX platform does bring a lot of firepower. I doubt we'll ever hear how it compares to Tesla's own solution, but I'd be surprised if it wasn't more powerful (not to mention more power-hungry).
  • jjj - Thursday, April 26, 2018 - link

    Intel, in their Q1 results press release, confirms the extra 10nm delay.
    " Intel is currently shipping low-volume 10 nm product and now expects 10 nm volume production to shift to 2019 "
  • BillBear - Thursday, April 26, 2018 - link

    Some people deserve the description "Rock Star" in their chosen profession.
  • mode_13h - Friday, April 27, 2018 - link

    I do like the idea of a "rock star" engineer, as opposed to always focusing on the entrepreneurs heading up these companies and some high-profile venture capitalists.
  • RaduR - Thursday, April 26, 2018 - link

    I am sure this kind of person doesn't do it just for the money, but for being able to work on whatever he thinks is challenging.
    I'm sure anybody would have paid him more than AMD did. Even Intel would have been smart to prevent him from going there by offering him something to do for more money.

    I'm sure Intel came up with something far more interesting than AMD & Tesla, and that's the thing to watch. What that is would be the bigger news!
  • Kvaern1 - Thursday, April 26, 2018 - link

    2023 will be a good year for Intel.
  • mode_13h - Friday, April 27, 2018 - link

    Heh, a lot can happen between now and then.
  • Dragonstongue - Thursday, April 26, 2018 - link

    Jim Keller was hired back to AMD under contract to get them on solid footing; when the contractual obligations were met, he went to a completely different company that was not directly competing with AMD (Tesla, avoiding any non-compete clause), and now he is at Intel. I am sure he is very much sought after in the industry, since there are VERY few people of his calibre, which is absolute fact, so like any top 1% talent he can basically write his own ticket.

    Here's hoping it does not mean that Intel absolutely destroys any future plans Keller helped AMD bring to fruition, because lord knows Intel has done everything possible to bury AMD even though they should not have

    (and they kept their head in the sand and fought tooth and nail against paying AMD anything, even though they know 1000000000000% they were WRONG in what they did, and even though they signed agreements early in AMD's history not to do it).

    Anyways, time will tell I suppose. Who knows what magic sauce they will cook up; maybe, just maybe, it is to make a superior Xeon Phi or whatever, instead of substantially superior, much lower-power chips or something.

    It could even be that Intel has been "stuck" at 14nm far longer than they would have liked (from everything I have read), so maybe Keller is there to help make sure their next die shrinks go off without a hitch, so Intel can get back in the lead on the smallest process node, for example pushing to 7nm ASAP and on to 4nm, or at least keep density superiority even if the shrinks are only "matched" (maybe even get a mass-market photonic compute processor viable ASAP, not based on "standard" architectures, to forge ahead in a "new frontier").
  • IKeelU - Thursday, April 26, 2018 - link

    This is terrible news.
  • HStewart - Thursday, April 26, 2018 - link

    This guy is not the only one that left AMD - a senior Marking person also left, not sure where he's going. I would be seriously worried if I was working at AMD and senior personnel started leaving in major areas - wonder if this has any relationship to Raja.
  • mode_13h - Saturday, April 28, 2018 - link

    What's a senior Marking person do? Is that who draws the serial numbers and logo on the tops of their chips?

    Seriously, the "320x240" guy sounds like someone they can do well without.
  • wow&wow - Thursday, April 26, 2018 - link

    Intel hired him to learn why AMD CPUs aren't vulnerable to the "Meltdown" :-D
  • HStewart - Thursday, April 26, 2018 - link

    Old news - this is an advancement in this guy's career.
  • beginner99 - Friday, April 27, 2018 - link

    I still think Raja being gone is a good thing for AMD. I'm pretty sure there were some serious power struggles within AMD. The RTG, Kaby-G, Cannon Lake shipping with an AMD GPU... I think his plan was to sell RTG to Intel and profit from that. With him gone they can now focus on actually building GPUs.
  • systemBuilder33 - Sunday, April 29, 2018 - link

    I have it on good authority that Intel gutted its CPU architecture group 4-5 years ago. "Our process tech is so far ahead of everybody else, we don't need cpu designers any more". Idiots.
  • mode_13h - Sunday, April 29, 2018 - link

    That would be strange. It's not exactly like they were getting out of the CPU business...
  • DiamondPugs - Sunday, December 23, 2018 - link

    So he created an AMD processor capable of competing against high-performance Intel processors, then he helped Apple create an ARM processor capable of competing against low-power Intel processors, then he joined AMD again to create another processor, a modular and scalable one this time around, capable of competing against high-performance Intel processors, and now that Intel is in trouble he has joined them so they can create a new architecture that competes with AMD and ARM.

    Holy crap, this dude is in an arms race against himself!
