Intel Xeon W 'Sapphire Rapids': Up to 56 Cores with EMIB Packaging | Talking Tech


- Hi, welcome to Talking Tech. I'm your host, Marcus Yam, and we're here today to talk about something serious, or at least serious processors for serious applications. Those are workloads that may run pretty great on Intel Core PCs but run even better on platforms with Intel Xeon.

To tell me all about the latest in the world of Xeon is Jonathan Patton. Jon, thanks for joining Talking Tech. - Yeah, of course.

Thank you for inviting me. - Tell me about your role at Intel. - Yeah, so I'm part of our Creator and Workstation Solutions team, and I'm product marketing and do some technical marketing as well. - Fantastic, okay, so we're here to talk about Sapphire Rapids, right. That's the code name for Xeon W, the latest coming in Xeon.

And for those that have been following Intel code names or following just, like, what these things mean and what's latest in Xeon, Sapphire Rapids launched with data center in January. - Correct. - So what's new for Sapphire Rapids and Xeon W today? - Yeah, so Sapphire Rapids, you know, is a whole family of CPUs, and so we're bringing that to Xeon W.

And Xeon W is our workstation processor lineup. We'll have two different flavors. We have Sapphire Rapids, or Xeon W-3400. This is up to 56 cores. And then we have Sapphire Rapids, Xeon W-2400, and this goes up to 24 cores here. - Okay, so what's the relationship between Xeon W and Xeon Scalable? - Yeah, so they actually use the same silicon there.

Same silicon, but on different motherboards, you know, for those DIY builders. These will use the Intel W790 Chipset, and then our server counterparts use the C741 Chipset. - Okay, so basically, like, same core, no, same foundational technologies. - Mm-hmm. - Actually, some same "Core" technologies.

We'll talk about that in a second. - Yeah. - But just adapted for the workstation market versus the data center and the server market. - Correct, yeah, and so some of our workstation, you know, processors, they don't feature some of the accelerators that our server side, the 4th Gen Xeon Scalable do, because those accelerators are more towards server workloads.

You know, data encryption, that kinda thing. There's a whole bunch of content on that already, but on these guys, these are just raw core horsepower here. So these are 56 cores and 24 cores, and each of these cores is a Golden Cove core. They're built on the Intel 7 process. - And that's the "Core" technology that I talked about earlier. - Yeah! Mm-hmm.

- Okay, so there's kind of server technology that's been adapted for the workstation market, so just quickly, like, what kind of workloads, what kind of applications can you kinda bring that server technology to, like a workstation and workload, like- - What is a workstation, right? - Yeah, what is a workstation? - Yeah! - And what are some examples of who would need this type of big power for something that people can use on their workstations? - Yeah, definitely, workstation sits within the space between, you know, your desktop PCs, gaming PCs. You know, I game myself as well, and they sit between that and the server side, and the users who use a workstation are generally, like, 3D artists. They'll be media and entertainment professionals, so video effects, video composition as well, and a large portion of maybe even game development as well, so software, game engine development. Then we also have our engineers, right.

These are architects and product design engineers, silicon engineers as well. Even our own silicon engineers use workstations as well. And they will sometimes, you know, design the product either big or small, so like, think architecture or even just like a shoe or a laptop itself, and then they will simulate that design, which, you know, kinda helps save on prototyping costs if they can do all that digitally instead of actually making a physical prototype. Then our third set of users are the data scientists and AI researchers.

These are folks who are doing the cutting-edge AI research or the data scientists who will take large datasets. They will put that into system memory and do complex data analysis on it. They'll run statistical, you know, analysis on there, pattern recognition, that kind of thing. Those are the main types of users who would use a workstation here.

- And so you mentioned, you know, I was asking about the differences between the server Sapphire Rapids and workstation Sapphire Rapids. You mentioned a, you know, different chipset. Are there any other differences between them? - Mainly, it's just the different motherboard chipsets, as well as each individual processor: our Xeon W-3400 actually has more CPU PCIe Gen 5 lanes than the server counterpart there. But also know that our 4th Gen Xeon Scalable processors are also coming to workstations as well. So you can now have a dual-socket workstation, dual-socket 56 cores, for up to 112 cores in a workstation there. Those really help those very multi-threaded workloads, so think, like, 3D rendering.

Imagine if you're a 3D artist, and a single frame of your entire movie takes seven hours to render. Now, imagine you can shrink that down to maybe two hours, or, you know, there's a lot of factors in rendering there, but you can shrink that down, and imagine you can iterate more. You can be more creative. You can add more explosions or more smoke or that kinda thing. That's what these processors are really designed to accelerate: those innovators, those professional innovators who sit within a commercial segment. - Right, and you've been referencing the Xeon W-3400 and the 2400, so tell me about the differences.

Like, on top, there's some differences in the IHS, or the heat spreader. - Yeah. - But underneath, they're quite different, so tell me about the differences between those two. - Yeah, so our Xeon W-3400 features up to 56 cores, and how we're able to do that is using our Intel EMIB technology. So we were able to take four silicon tiles and put them on a single package there, and then embed bridge dies in there.

That's an embedded multi-die interconnect bridge. - That's right, EMIB. - EMIB. - Very good. - EMIB, and it will connect

the four silicon tiles so they can communicate with each other, and what that allows us to do is scale up to a processor with up to 56 cores there. But when this processor is actually booted up in a system, your Windows Operating System just sees it as one CPU. The software will just see it as one CPU. So when it goes and says, "Hey, I need this job done on this core," or, "I need this job done on multiple cores," it will do that and schedule that on each of 'em.
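
For readers who want to see that from the software side, here is a minimal sketch (Python, Linux) of how an application simply sees one big pool of logical CPUs, no matter how many tiles sit under the heat spreader. The 112 figure in the comment assumes a 56-core, Hyper-Threaded Xeon W-3400, and os.sched_getaffinity is Linux-only.

```python
import os

# The OS presents the whole package as one CPU: on a 56-core,
# Hyper-Threaded Xeon W-3400 this would report 112 logical CPUs.
logical_cpus = os.cpu_count()
print(f"Logical CPUs visible to software: {logical_cpus}")

# Linux-only: the set of CPUs this process is currently allowed to run on.
# By default it spans every core on every tile; the scheduler decides
# where each job actually lands.
allowed = os.sched_getaffinity(0)
print(f"This process may be scheduled on {len(allowed)} logical CPUs")
```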

So this one's using Intel EMIB, on the Xeon W-3400, and then our Xeon W-2400 is actually a single monolithic die here. And this will go up to 24 cores in our Xeon W-2400, and this also features our Golden Cove cores. So the Xeon W-3400 features EMIB for scalability.

That gives us more memory channels and more cores, and then the Xeon W-2400 will feature quad-channel memory versus eight-channel memory on the Xeon W-3400. And then it will feature 64 lanes of PCIe Gen 5 off of the CPU there. - Got it. - Mm-hmm.

- And the micro-architecture between them is the same. - Yeah, functionally, they're the same. Yeah, so Golden Cove cores and Golden Cove cores, yeah, functionally the same. It's just how they're packaged on the CPU that's different. - Pretty cool, and that's. - Yeah.

- That's tiles, so I know I asked about this earlier, right, so I think, "Yeah, I wanna see the tiles. I wanna see, like, let's dive deep into them." - Okay. - And you mentioned Golden Cove, and I'm like, "Golden Cove, those are the P-Cores." - Yes. - That was in, you know,

you may recognize this from some of the press kits. That's Golden Cove cores in here on the Alder Lake Performance Hybrid Architecture, and we also have, you know, this is near and dear to my heart. I had this printed up. Wanted to make it even bigger. - Okay.

- This is Raptor Lake-S. - Yeah, Raptor Lake, the desktop processors, yeah. - Or the HX. Kind of very similar P-Core architecture. And I said, "Okay, I wanna see one of these tiles." Actually, you know what, before we go here, I want to look at just relative size, before we go deep into just the size of the different cores, - Yeah. - I did bring this. - Yeah, so- - For a relative size of the chip. I always call this, like, this is a chip.

Like, what is that? - I would call this a small pastry. - Okay. - Right, this is a chip like you would take out of a bag of potato chips. This is probably a small pastry that you need multiple portions to eat. - Okay, so in relative terms, this is under this. - Is that, mm-hmm.

- Okay, but. - What do you got here? - I've got a tile, a single. - A single tile. Yeah, a single tile. - A single, a single tile, and internally, that's called the XCC tile? - Correct. Sapphire Rapids XCC tile. - XCC, whereas the W-2400.

- Is the MCC, yeah. - Is MCC, okay. So this is a single tile. A single tile. - Yes, a single tile. There's four of these in our Xeon W-3400 processor.

So yeah, you can see the cores here. On the edge, we have our PCIe lanes and then our memory channels, memory controllers on here, and each tile has its own memory controllers. So if there's four tiles and two channels per memory controller, that's up to eight channels there. And then what you can see along the bottom edge here, that is actually the part that connects into the EMIB bridge down below. So these two sides will face another set of tiles there as well. - And I recognize there's, you know, like, there's a little bit of family resemblance.

- There is! - These look, these look a lot like the P-Cores. - Yes, they do. - It's the Golden, the Golden Cove P-Cores. - Yeah. - So same, same. - Yeah, same, same architecture. - There's a lotta them. - Yeah, yeah, each of these core complexes has our AVX-512 units as well as our new AMX instructions for doing advanced matrix multiplication.

These are very data science kinda oriented instructions there. That's also featured in each of the cores as well. - Okay, so I had extra ones printed up. - Okay. - Do you mind, like, you know, constructing? - Sure! Yeah, let's go for it.

- Like a Xeon W-3400 right now? - Yeah, of course. - Okay, so I brought three other tiles. So show me how these are oriented.

- Yeah, so these are oriented with, ah, let's see here. There we go, so this right here, and then this goes right here. And then that will go right there.

So this is the total, you know, with the, in here. - This is what's inside here. - Yes, exactly. - Okay. - And, you know, for scale, you can put, where's the, - Oh, that's right. - Raptor Lake guy as well?

- I think these are approximate. - Approximate, yeah. - Approximate stuff. - They're not quite, they're, you know, probably a little bit off in size. You know, well, and our engineering friends will have to forgive me there, but yeah, so this is the Raptor Lake, you know, with the eight performance cores and the 16 efficient cores there as well.

And you can see, like, the efficient cores, right, are very space-efficient here. - Space efficient, yeah. - Yeah, and the performance cores, they take, you know, a lot of space. - This is 16 E-Cores. - 16 E-Cores in the same space as the eight.

- Yeah, yeah. - The eight P-Cores there. And so this is our, you know, Sapphire Rapids, the Xeon W-3400 with the Intel EMIB package. - So this is all P-Cores. We're talking up to, - Correct.

- up to 56, well, like, Golden Cove, like, I think of them as P-Cores, but yeah, similar performance. - Yeah, yeah. - And I'm counting them. - Okay. - There are, there are 15 Golden Cove cores on each one.

- Per tile, yeah. - So the total is 60. How did we get to 56? - Yeah, so, you know, the reason why we're not using all 60 cores here really just comes down to binning, right, and having a certain volume there. Yeah. - All right, oh, hey,

that makes sense, manufacturing these, especially four of them together, is a real challenge. I know that EMIB is a technology Intel's worked on for quite a while, and, you know, it's really showing what it can do in some of our upcoming technologies that were shown off recently, and of course here. So actually, can you point out, like, where does the EMIB sit? Like, in a- - Oh, where it would sit.

Yeah, so, you know, embedded multi-die interconnect bridge, right. It would actually sit in between each of these tiles, but a layer underneath; it actually sits in the CPU substrate. So right, there are parts of a CPU, right. We have our integrated heat spreader, or IHS, and then the substrate is the kinda green PCB that folks, you know, if you de-lid your processor, you can see there, which, you know, I do not encourage. But not with these.

There's a lot of capacitors. - Serious workload. Serious business. - Yeah, serious workload, serious business. - And you got it done. - Yeah, but the embedded EMIB bridge dies would sit underneath these tiles within that substrate.

- So there's one between each of these. - Yeah, so there would be four. So there'd be four in between, or one here, one here, one here, one here. - Okay. - And they allow, you know, cores over here to talk to a memory controller over there to grab data from another, you know, memory stick there, and it just, you know, allows for, allows Intel.

Like, Intel EMIB packaging allows us to scale, right, allows us to have a lot of different silicon tiles connected together. So, you know, it doesn't have to all be the same tiles necessarily. Like, this is the first kind of step with Sapphire Rapids that we're taking with Intel EMIB. - 56 cores on tap. - Mm-hmm. - What is performance like? - Yeah, performance: on individual cores, like, single-core performance, we're seeing up to 28% more performance over our previous generation Xeon W processors. And then, you know, compared to that same generation, going from 28 cores to 56 cores, we're seeing up to 120% more multi-threaded performance as well.
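
To put those "up to" figures against the earlier render example, here is an illustrative back-of-the-envelope calculation; the inputs are only the numbers quoted in this conversation, real scaling depends heavily on the renderer and scene, so treat it as a rough upper bound rather than a benchmark.

```python
# Illustrative only: applies the quoted "up to" uplifts to the hypothetical
# seven-hour render frame mentioned earlier. Not a benchmark.
prev_gen_render_hours = 7.0        # single frame on the previous-gen part
multi_thread_uplift = 1.0 + 1.20   # "up to 120% more" multi-threaded performance
single_thread_uplift = 1.28        # "up to 28% more" single-core performance

best_case_render = prev_gen_render_hours / multi_thread_uplift
lightly_threaded = prev_gen_render_hours / single_thread_uplift

print(f"Fully threaded best case: {best_case_render:.1f} hours")        # ~3.2 hours
print(f"If the job were single-threaded: {lightly_threaded:.1f} hours")  # ~5.5 hours
```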

- And it's more than just core count too. I know that in the previous generation, you could have a multi-socket - Correct. - setup, but here, we're using EMIB. - Mm-hmm. - To put it all in one. - Yeah, and here, we're actually seeing this outperform our previous generation dual-socket workstations as well. So, you know, this is just a whole giant leap of performance here.

- Okay, just before we move on, like, before I wanna dive in. - Yeah. - You know, we had kinda Raptor Lake here. So clearly, like, there's a very clear difference, and you know, it's not just positioning difference of, you know, what's an Intel Core PC versus an Intel Xeon workstation? But, you know, this is a very performant design. - It is. - Can you just quickly go,

like, what are some of the overall differences other than the size? - Differences, yeah. - Like, what are the differences between them and, like, everything from performance to just, you know, how it works? - On here, we have our ring bus. So data transfers in two directions.

So there's one that goes counterclockwise, one that goes clockwise, and there's some other connections as well, but on our Sapphire Rapids architecture with Intel EMIB, these are mesh-interconnected. So each one of these tiles has its own layers of cache and its own mesh interconnect. So this core can talk to this core, which can talk to this core. So if you wanted to go from here to maybe that core, you'd have to cross over EMIB, go to this core complex, and then hop your way over to there.

There's some, you know, latency penalties that kinda come with that kind of architecture, but keeping a workload within the same CPU still helps with, you know, performance, instead of going out to another CPU when you have these large, you know, workstations, that kinda thing. Really, you know, that latency penalty is just part of the architectural design needed to have this many cores. - And ultimately, it's not, you know, you're going for core count, you know what I mean? - Yes, you're going - Like, sure. - for core count, yeah. - Sure, workloads could be, like, you know, latency dependent, but the fact is that there's a clear reason why this workload would need. - Yeah! - Could use 56 cores versus, you know, eight P-Cores.
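
To make that latency point concrete, here is a minimal sketch (Python, Linux) of pinning a process to one block of logical CPUs so its threads stay on a single tile; the CPU-ID range is purely hypothetical, since the real core-to-tile mapping has to be read from the system topology, and os.sched_setaffinity is Linux-only.

```python
import os

# Hypothetical example: restrict this process to one contiguous block of
# logical CPUs so its threads stay on a single tile and avoid extra hops
# across the EMIB bridges. CPU IDs 0-13 are a placeholder; the real
# core-to-tile mapping should come from lscpu or
# /sys/devices/system/cpu/cpu*/topology/.
one_tile = set(range(0, 14))
os.sched_setaffinity(0, one_tile)  # 0 means "this process"; Linux-only

print("Now limited to logical CPUs:", sorted(os.sched_getaffinity(0)))
```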

Well, okay, so before we move on. - Yeah. - Just, you know, can you tell me what are the workloads that are more suited towards, like, a Raptor Lake-S, or actually on the mobile side, like, an HX system, that's also kind of in that, you know, entry-level workstation side. What does, you know- - What would you use this one for? - What do you use this one for? - Yeah! - And, like, what's the right tool for the job, and what's better for this? - Yeah, and in workstation, it really comes down to the right tool for the right job.

You know, with Raptor Lake, video editing does really great on here. We have our, you know, our Intel Quick Sync technology with our integrated graphics on Raptor Lake. Sapphire Rapids does not feature integrated graphics, but those integrated graphics and, you know, video editing will really fly on Raptor Lake. On- - Gaming.

- On gaming, of course! - No one's gaming on Sapphire. - You know, I sit within the creator/workstation space, so I think more about just content creation and, you know, making visual experiences, that kinda thing, but yeah, so gaming will definitely. This is your gaming processor, right. If you're there just to game, this is your processor. But if you're also doing video editing, or if you're also doing, like, 3D rendering, or you're doing video encoding.

So that's when, at the end of video editing, you hit render, and the progress bar, you know, fills up as it encodes. That's what these processors are really for. These help churn through a lot of basic calculations very fast, you know, across a wide array of cores here. You know, there's a lot of, there are CAD workloads, so computer-aided design. So that's Autodesk Revit, Inventor.

That will perform really well on something with a high frequency, you know, and medium core count. But if you're doing rendering, like, taking that 3D model that you've made of a building and you're trying to render that so that, you know, producers or a marketing professional can review your design, that's where Sapphire Rapids really comes in, for that part of the workflow there. - And we talked about, you mentioned frequency, so, I mean, you know, we're looking at basically almost six gigahertz on this. - Almost six gigahertz, yeah. - With the KS, a special edition processor. These are, you know, you're really looking at heavily multi-threaded workloads. - Heavily multi-threaded. - So, like, frequency,

what is the frequency of the highest end? - With this, we have, you know, our Intel Turbo Boost Max Technology 3.0, and that will go, we'll select, you know, in manufacturing, we know which one of the cores will perform, one or two cores will perform the best. And so that will go up to 4.8 gigahertz, so that really helps with, like, snappy workloads. "Hey, I'm loading up a project," that kinda thing.

But across all the cores, when you're loading up everything on this 56-core processor, that's gonna go up to 2.9 gigahertz there, and really, that just comes down to, you know, power, right? - Mm-hmm. - Thermals and electrical are really, you know, the big determining factors when talking about frequency across all these cores.

But think about it: with 56 cores, right, at 2.9 gigahertz, you know, thinking about that in contrast to Raptor Lake, which goes up to almost six gigahertz. You know, there's differences there, and it's just really the right tool for the right job at the end of the day. There's some workstation workloads like I was talking about: video editing, CAD, some, you know, exploratory data science that you could do on a Raptor Lake. But if you're starting to scale up, if you're a business where you're doing this professionally, and, let's say, your video editor needs to render five videos a day every day of the week, and they need to have a system for that, that's what our new Sapphire Rapids workstation processors are really for. - And, you know, there are other applications. Clearly, again, these are actually 56 cores, but you can't forget they're Hyper-Threaded.

- Yes, they're Hyper-Threaded. - Whereas the E-Cores don't have Hyper-Threading. - Correct. - But the P-Cores do. This is, like, you've got Hyper-Threading on all 56 cores. - On all of 'em, yeah.

- So that's, you know, if you're heavily threaded, you can definitely use all the capabilities of this. Speaking of capabilities, though, like, if you're talking about data scientists, AI workloads, anything that could use AVX-512. And there's a new extension too, but, you know, just review for me, this is AVX-512, but there's also a new set of extensions called AMX. So can you tell me about what AMX is and, you know, what's new about it? How's it different? And why it's good for this segment of the market? - Yeah, so AMX is our Advanced Matrix Extensions, and what they allow us to do is add additional types of accelerators onto the core. And each one of these cores has space for AMX. And so our first one is TMUL, or tile matrix multiply, to kinda simplify it.

What that allows us to do is very complex, repetitive multiply-and-add work there, where I need to take this number, multiply it by this number, you know, 50 times or 100,000 times, that kind of thing. And really, what that's gonna accelerate is things like AI research, machine learning, deep learning, and data science. In AI research and in deep learning, you do a lot of repetitive tasks, right. You set up an AI model, and in order to train it, you need it to.

Let's say you're training an AI for a video game, and you need to have your little AI jump over, you know, a ditch or something, or else they'll fall in the ditch. They have to do that over and over again. It's the same type of multiplication and, you know, computation that's going on in the background, and with AMX and the TMUL instructions, int8 and bfloat16 data types are really accelerated. So I'm going very deep into, like, data science and, you know, computation, scientific computing there, but that's really what these are gonna help with. And those are emerging workloads that we see going into the future, and we really want to accelerate that, as well as having our AVX-512 in here as well. - And AVX2, so basically, these are extensions for these specialized workloads, and that's just why it's found a good home, like, here in this segment.
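
For readers who want to check whether a given Linux box actually exposes these instructions, here is a minimal sketch that reads the CPU flags the kernel reports; the flag names (avx512f, amx_tile, amx_bf16, amx_int8) are the standard Linux /proc/cpuinfo names, and whether they show up also depends on kernel and BIOS support, not just the silicon.

```python
# Minimal sketch (Linux): list the AVX and AMX feature flags the kernel
# reports for this CPU. Flag availability depends on kernel/BIOS support too.
def cpu_flags():
    with open("/proc/cpuinfo") as f:
        for line in f:
            if line.startswith("flags"):
                return set(line.split(":", 1)[1].split())
    return set()

flags = cpu_flags()
for feature in ("avx2", "avx512f", "amx_tile", "amx_bf16", "amx_int8"):
    print(f"{feature}: {'present' if feature in flags else 'absent'}")
```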

I also noticed, looking at, like, the SKU table, that these are unlocked to allow whoever's in the segment that wants to overclock. But I'm thinking these are mission-critical applications. I'm always for having user choice, but yeah, like, what's the thinking behind having unlocked processors? - In this space? Yeah, yeah. So when we sat down and designed Sapphire Rapids, looking at it from a workstation standpoint, really, the big thing, the feedback that we were getting, is user choice, right. Give me the tools I need to do my job. Like you were saying, there is a portion of the workstation market that will, you know, value stability and reliability.

Then there's another portion of the market that we looked at and said, "Hey, I want all the performance I can get no matter what the power or thermal is." So what we're allowing is, you know, this kind of frequency tuning. So you can tune individual cores.

You can tune the turbo frequencies that each core goes up to. You can also tune the interconnect. We were talking about the mesh interconnect, and even in EMIB, you can tune the interconnect speed in between those to kind of get those latencies down that we were talking about. And, you know, it's all gonna be controlled by our Intel Extreme Tuning Utility, XTU, that you can, you know, tune these processors.

And it comes from, just, you know, that portion of the market that just really wants to tune every single aspect of their system for a given workload. - Okay, and I mean, as you mentioned, it's XTU, so I've seen, my XTU can kinda see all the different cores. - Yes. - This is gonna be a really long list if you have 56 cores and can do two in each one individually.

- Yeah, depending on the resolution of your monitor, you may have to scroll down a lot to get all, you know, 112 threads to tune there, so it's gonna be great. - And obviously with, you know, AVX-512, AMX, can you set the offset on those in XTU as well? Like, you've got that level of detail? - Yeah, so when looking at the overclocking tool, shout-out to the overclocking team, they're great, they sat down and said, "Well, how can we, you know, add as much user choice here?" So yeah, we're treating AMX instructions, so TMUL, like we did with AVX-512, where you can tune the negative offset. So if on regular instructions something could run at 4.4 gigahertz, you would have, like, a negative ratio offset to lower that by maybe three or four bins, so like .3 or .4 gigahertz there.

And with AMX and AVX-512, you can tune that offset, so if you have the power or thermal budget, you know, you can maybe not have a negative ratio offset at all, and when you're doing AVX or AMX instructions, you can have it run at the same 4.4 gigahertz or, you know, whatever the given frequency is. So we're giving them that choice as well.
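
Since a bin here is one 100 MHz step of the core multiplier, the offset arithmetic Jon describes works out as in this small illustrative sketch; the 4.4 GHz base and the 3-bin offset are just the example numbers from the conversation.

```python
# Illustrative ratio-offset arithmetic for AVX-512/AMX workloads.
# One "bin" is one 100 MHz step of the core multiplier.
BIN_GHZ = 0.1

base_turbo_ghz = 4.4       # example turbo frequency from the conversation
negative_offset_bins = 3   # a -3 bin AVX/AMX ratio offset

heavy_inst_turbo = base_turbo_ghz - negative_offset_bins * BIN_GHZ
print(f"AVX-512/AMX turbo with a -{negative_offset_bins} bin offset: "
      f"{heavy_inst_turbo:.1f} GHz")  # 4.1 GHz

# With enough power and thermal budget, the offset can be set to 0 so heavy
# instructions run at the full 4.4 GHz.
```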

- Okay, going back to just the type of workloads that are great for this platform, I know, you know, we're talking about some very high, like, I/O requirements, bandwidth requirements. Like, everything is, you gotta feed the processors. So can you walk me through what I/O is available off the processor itself? And then I wanna jump into the platform. - The platform, okay. - And what's on the platform, what's on the processor? - Yeah, so with Sapphire Rapids, the XCC, the Xeon W-3400, excuse me, you get up to 112 CPU PCIe Gen 5 lanes. - Yeah. - So if we're putting that

in the context of, you know, our friends in gaming, right, that's about seven graphics cards. - Okay. - Seven graphics cards that you can fit, and these are PCIe Gen 5 lanes. So when graphics cards that use PCIe Gen 5, you know, come along in the future, we'll be able to saturate that.

You can fit up to seven of them off of the processor itself. You can also do large storage arrays if you wanna have your own NAS server, network-attached storage there. You can also do things like attach multiple hardware accelerators, so if you're doing data science again, and you have something that accelerates one part of that data science workflow, you can attach that here as well. And then there's additional connections to the platform, the chipset, as well.
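
As a rough illustration of what 112 CPU-attached Gen 5 lanes buy you, here is a hypothetical lane budget; the device mix and lane widths are invented for the example, not taken from any specific system.

```python
# Hypothetical lane budget against the 112 CPU-attached PCIe Gen 5 lanes
# on the Xeon W-3400. The device list below is made up for illustration.
TOTAL_LANES = 112
devices = {
    "three x16 GPUs / accelerators": 3 * 16,
    "eight x4 NVMe SSDs":            8 * 4,
    "one x16 network card":          1 * 16,
}

used = sum(devices.values())
for name, lanes in devices.items():
    print(f"{name}: {lanes} lanes")
print(f"Total used: {used} of {TOTAL_LANES}, {TOTAL_LANES - used} lanes spare")
```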

- Okay, let's take a look at the platform. I know you brought one. Let's. - Yeah, let's take a look at it. This one is a Sapphire Rapids motherboard. This is from our friends at ASUS, so shout-out to them.

Thank you so much. This is using the LGA 4677 socket. That's 4,677 pins. - How's this different than the previous generation of the same workstation? - Oh, yeah, so for the previous generation, if we're taking a look at maybe our previous gen Intel Core X-Series, that used LGA 2066, so 2,066 pins.

- There's a huge increase. - It is, yes. - In pin count. And is that just down to just the sheer amount of I/O and power that that's going to? - Yeah, it's down to the additional PCIe lanes and memory channels that you need when you need to, you know, have up to 56 cores and eight memory channels there.

So yeah, this is the platform. It has eight memory channels. On this platform, it's one DIMM per channel, so this is eight memory channels, and each one of these DIMMs goes to one memory channel. And that will support speeds up to DDR5 RDIMM at 4,800 megatransfers per second. So this doesn't use the same memory as, like, a desktop Raptor Lake processor.
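
For a rough sense of what eight channels of DDR5-4800 means, here is a small theoretical-peak calculation; it assumes the usual 64-bit (8-byte) data path per channel and ignores ECC bits, and real sustained bandwidth will land below this.

```python
# Theoretical peak memory bandwidth for an eight-channel DDR5-4800 setup.
# Assumes 8 bytes of data per transfer per channel (ECC bits excluded).
channels = 8
transfers_per_second = 4800e6   # DDR5-4800: 4,800 megatransfers per second
bytes_per_transfer = 8

peak_gb_per_s = channels * transfers_per_second * bytes_per_transfer / 1e9
print(f"Theoretical peak: {peak_gb_per_s:.1f} GB/s")  # 307.2 GB/s
```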

It uses server-grade memory, and that's- - So RDIMM versus UDIMM. - Yeah, RDIMM instead of UDIMM there, and that's just, you know, due to the fact that, hey, we're using the same architecture and the same memory controller, as well as there being some physical differences in DDR5 between registered DIMMs, or RDIMMs, and UDIMMs, so they're actually not pin compatible. So, you know, this motherboard, and this platform, only supports registered DIMMs, so RDIMMs. Then going on to the chipset here, underneath here is the Intel W790 chipset.

Right, all the W790 chipsets, along with the X processors, right, they're tuned and can overclock here, and you can see the robust VRMs to feed the electrical and power requirements for that. And then you can see the numerous PCIe lanes that you have here, all connected to the CPU, excuse me. You do have some, what we call x16 PCIe Gen 4, that comes off of the chipset as well, if you need that. We've also, compared to previous generations, increased the data bandwidth between the chipset and the CPU for all of that data transfer as well.

Off of the chipset, we also have Intel Wi-Fi 6E, right. These are getting put into enterprise environments, and as Wi-Fi 6E comes out into the market, we wanna make sure that these platforms are compatible with that. It also features USB 3.2 Gen 2x2, and that's up to 20 gigabits per second of data transfer there. So that's a lot of data transfer, right.

If you're working with large file sizes, you're a video editor. You're a data scientist. So that's kinda the highlights of the platform here, so. - So very feature-rich, W790.

- Mm-hmm? - Is this, like, a very close relative of, like, the Z690 or Z790? - So yeah, that's actually a good little secret with our workstation. What we did with Sapphire Rapids workstation is we took the performance and the architecture of our data center processors and paired it with a client chipset. So this is very similar to our Alder Lake chipset, similar feature set. There's, you know, USB, Intel Wi-Fi 6E, so yeah, it's a very feature-rich motherboard. - So this is, like, a great overview of, you know, what's Xeon W with the platform. But, you know, let's look outside of Intel for a second.

So what has the work been like with, you know, with ISVs to make sure that their software and their usages are tuned for this? And of course, let's talk about software first, but then I wanna talk about, you know, like, the partners. Like, where are people gonna get these? And who, you know, is gonna put them into the hands of the people that are really gonna, you know, make this technology useful? So ISVs, how's that been going? - It's been great. You know, our ISV partners are super-excited about Sapphire Rapids, right. This actually is a platform that can help them in their software development, accelerate their code compiling as well, and it's also for their user base. So folks at Adobe and software developers there. You know, we have multiple ISV partnerships, and we're super-excited to get this in their hands as we roll out, and they're just super-excited to work with us as well, and so it's been a fun process.

- And of course, so, I mean, you're, you know, we're looking at the chip here. Or sorry, the pastry. - The pastry, yes, in this case. - The pastry. This is the board. I mean, these are usually, like, the foundational parts for, you know, people who like to build their own PCs. - Yeah! - But realistically in this market, like, for the most part, people aren't gonna be buying these things in the channel.

They're really just gonna be working with, you know, our partners to help them integrate these into their commercial or professional applications. So is that kinda the right sorta split? This is really for the partners who integrate this with businesses. - Yeah, this is mainly, I would say, mainly targeted at the enterprise, right, for, you know, buying a thousand workstations at a time in an enterprise environment.

You know, but with our motherboard partners, we will have motherboards available. We are boxing some of our Sapphire Rapids workstation SKUs as well. - So people can just buy- - Yeah, people can just buy this as well. It doesn't come with a thermal solution like, you know, like some of our desktop processors do.

You know, we've worked with our thermal solution vendors as well. I think we've worked with EK, Cooler Master, Noctua as well, to have some thermal solutions for this LGA 4677 socket, and working with our industry partners and ecosystem has been fantastic. They've all been anxiously awaiting, you know, Sapphire Rapids coming to market, and we're super-excited to finally bring it to you.

- All right, that's kinda, like, everything about Xeon W, code name Sapphire Rapids, kind of in a nutshell for this launch day. Anything else people should know about this new Xeon? - Yeah, I will say that the folks behind creator and workstation, we're super-passionate and super-excited to bring this to market, and, you know, to bring this, like, ultimate workstation solution, the Xeon W-3400. This'll help power the next, you know, five years of workstations. Like, you know, with the amount of I/O, the amount of compute horsepower that we went over, you know, this is really built for those professional innovators, those 3D artists, those software developers, those data scientists, and, you know, I'm just super-excited to see what those folks can create with this platform. - And you mentioned technologies too.

Like, we're still looking at, you know, even from, I'll talk about, you know, like, the more consumer side, like PCIe Gen 5, DDR5. These are, like, future-proof technologies for, you know, everything else that's coming soon, right, in the future, and you've got these special extensions as well, like AVX-512 and AMX, which is new.

So yeah, no, you're absolutely right. This is pretty cool. Well, Jonathan, thank you very much.

Thank you for talking tech. - Yeah, thank you so much.
