Computex 2025: Intel Core Ultra Update for Workstations and More | Talking Tech | Intel Technology

- Welcome to our Computex 2025 Press Briefing. So I'm Roger Chandler from Intel. I lead the workstation segment within our client computing group.

And today, we're gonna talk about a lot of our new products. So it's gonna be led by our Intel Core Ultra 200 series products. Now these products fall into several categories. There's our 200S class product. These are for desktops. These are fantastic products for entry desktop workstations, for gamers, for creators.

And then we have our 200HX-based platforms. These are for mobile workstations and mobile gaming platforms, and they deliver the absolute best performance. Then, we have our 200H-based platforms. These are for premium thin-and-light systems. Be it for entry mobile workstations or premium consumer, these have big built-in graphics and lots of compute capability.

And finally, our 200V series is based upon our Lunar Lake platform, and it delivers stunning power efficiency and AI compute capabilities. Now before I jump into these products, one thing we are announcing this week is our Intel AI Assistant Builder. The Intel AI Assistant Builder is a tool and a platform with which customers can install and create customized, differentiated, vertically specific AI agents for their products. AI Assistant Builder is optimized for Intel's platforms. The tool helps our customers identify which models run best on our platform, take advantage of our CPU, GPU, or NPU, and create custom workflows and custom agents for the individual verticals they want to target. And it's very easy to use.
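To make the CPU/GPU/NPU targeting concrete, here is a minimal sketch of running a local model on a chosen Intel device. It assumes the OpenVINO GenAI Python package and a model already exported to OpenVINO format; the model path is a placeholder, and none of this is the AI Assistant Builder's actual internals.

```python
# Minimal sketch: run a local LLM on a chosen Intel device (CPU, GPU, or NPU).
# Assumes `pip install openvino-genai` and a model directory already exported
# to OpenVINO IR format. Illustrative only, not the AI Assistant Builder code.
import openvino_genai as ov_genai

MODEL_DIR = "qwen2-7b-int4-ov"   # hypothetical path to an exported model
DEVICE = "NPU"                   # or "GPU" / "CPU", depending on the platform

pipe = ov_genai.LLMPipeline(MODEL_DIR, DEVICE)
reply = pipe.generate("Summarize the key specs of this workstation.",
                      max_new_tokens=128)
print(reply)
```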

It provides drag and drop simplicity for folks to install these and have AI agents on their platforms. What we're really excited by though is we're releasing this as a public beta this week. And we have several customers who are rolling out solutions using AI Assistant Builder, and we're gonna show you lots of demos of this in action later on in the presentation. Okay, so I want to dig into the workstation segment.

So I'm personally biased on this segment because I lead this segment for the company, but workstations are a very exciting category of platforms. The workstation segment is growing, and with the advent of AI development and data science, we're seeing a lot of demand for these platforms. When you think about workstations, traditionally folks have seen workstations as the big towers with Xeon Ws in them. But one of the fastest growing segments is the entry desktop segment, which is based on our Core platforms, as well as the mobile segment. And mobile workstations make up about 60% of this market.

And why are workstations so important? I kind of see them as the headwaters for a lot of the innovation that really moves society forward. A lot of the AI workloads that we enjoy on AI PCs are developed on workstations. The bridges we drive our cars across, the buildings that we live in and work in, these are all designed with workstations. The games we play, the movies we watch, workstations are used to develop all of these.

And in fact, some industry analysts predict that one in 10 commercial users will, in the next couple of years, be using a workstation of some form, because compute demands continue to increase. We're really excited by the products we're offering in the workstation segment. The first one we'll talk about is the 200S series for the entry desktop workstation. Our 200S series for workstations delivers stunning performance. When you look at the performance numbers, we deliver up to 13% better multi-threaded performance than the competition in this category, as well as up to 11% better performance per watt. And power efficiency is very important in desktops 'cause this allows for lower power costs, lower acoustics, and just cooler systems.

We also have, built into all of our Core Ultra platforms, CPUs, GPUs, and a neural processing unit that can accelerate AI workflows. And when you look at workflows like Adobe After Effects, which has AI capabilities built into it, we're up to 50% better than the competition with our platform. And finally, with ray tracing workloads, which are heavily used within the workstation segment, we're up to 20% faster than our competition. So this is really exciting, and our customers are excited by this too. Also, workstations are designed for professional innovators, and a lot of these systems are deployed in fleets within working environments. So a lot of our commercial-grade capabilities, with support of Intel vPro, are in all of these platforms.

Take our out-of-band management capabilities: this is really important because workstations are often accessed remotely. We support up to 256 gigs of DDR5 ECC memory, and ECC memory is a critical requirement within the workstation segment. We also have more than 30 different security capabilities and features built into our products.

And as the workstation segment grows and AI comes into play here, we are seeing an expanded surface area of security risk. That's why having built-in security capabilities in our platform is critical. We have professional codec support built into our platforms. What this means is that a number of media codecs used in professional workflows, like Sony's XAVC, have hardware-accelerated support, and you can see a 2x, 5x, or 10x improvement in performance when utilizing the software applications that rely on them.
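As one hedged illustration of what hardware-accelerated codec work can look like in practice (not a workflow from the briefing), a tool like FFmpeg can offload H.264 encoding to Intel Quick Sync Video; the file names and quality setting below are placeholders.

```python
# Illustrative sketch only: offload an H.264 encode to Intel Quick Sync Video
# via FFmpeg's *_qsv encoders. Assumes an FFmpeg build with QSV support is on
# PATH; file names are placeholders, not from the briefing.
import subprocess

subprocess.run([
    "ffmpeg", "-y",
    "-hwaccel", "qsv",            # hardware-accelerated decode where available
    "-i", "camera_footage.mp4",   # hypothetical source clip
    "-c:v", "h264_qsv",           # Quick Sync H.264 encoder
    "-global_quality", "23",      # quality target for the QSV encoder
    "proxy_edit.mp4",
], check=True)
```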

We also provide the industry's best connectivity with Wi-Fi 6E, Thunderbolt 5, and built-in Bluetooth capabilities. And finally, the energy-efficient performance of our newest platforms is phenomenal; they're delivering leadership performance per watt in multi-threaded CPU compute workloads. So next I'd like to talk about our mobile workstation segment.

And we have two products that fit into this. There's the 200HX series, which is for absolute performance, and then there's our 200H series for thin and light entry mobile workstations. So starting with our Intel Core Ultra 200HX platform, these are gonna be powering some of the most powerful mobile workstations ever deployed.

We deliver up to 8% better single-threaded performance versus our competition, and up to 42% better multi-threaded performance. As well, these platforms deliver up to 41% better performance per watt than our previous generation. Each of these platforms has a built-in neural processing unit to accelerate AI workloads, and they support up to 256 gigs of DDR5 ECC memory. They also support the world-class connectivity, security, and manageability features I discussed earlier. So when you look at overall performance in workstation workloads, this is a chart that shows a number of workstation-specific benchmarks and individual workloads.

On the left, these are vertical-specific benchmarks, such as AI and machine learning, energy, and life sciences. And on the right, these are individual workloads that many of these benchmarks are composed of. You can see we deliver up to 70% better performance than the current-gen platforms from our competition. And if you look at the lighter blue bars, these represent AI and data science specific workloads.

And here we deliver double digit performance leadership over our competition. And when you compare our current gen platforms versus our prior gen 14th gen platforms, again you see up to 63% better performance over our last generation product. And this is huge. It's a big bump in performance and it's across a number of workloads. You see this double digit performance gain.

So we're really proud of this and very excited by it, and our customers are very excited as well. Now I wanna talk about the 200H series. The 200H series is a product that actually has bigger built-in graphics. It still delivers fantastic CPU performance, but it balances that with power efficiency, enables longer battery life, and delivers great graphics performance. So we're up to 22% faster than the current-gen competitive systems out there in multi-threaded compute workloads.

And you can get up to 21 hours of battery life with one of these systems, which is kind of stunning when you think about a mobile workstation. I can remember years ago when a mobile workstation was a big, thick system that probably got about 30 minutes to an hour of battery life. Twenty-one hours of battery life in sleek designs, for folks who want to do the most intense workloads while they're on the go, is phenomenal. We deliver up to 26% faster creator-specific performance based on both the codec support and the AI acceleration we built into these platforms.

And they also support our Intel Arc Pro graphics. Built into each of these H systems is a bigger, more powerful built-in graphics solution that powers graphics-intensive workloads, and it's also supported by a certification program. In the workstation segment, it's really important for users to know that the applications they rely on for their livelihoods are supported, validated, stable, and high performance. So we work with a number of ISVs across the ecosystem in the workstation segment to tune their applications for the performance of our platforms and to provide a certification of quality and stability for these applications running on our platform with our built-in graphics.

So when you look at the performance numbers against a current-gen competitive platform, across a number of workloads and benchmarks that are specific to the workstation segment, we're up to 36% better than our competition. A lot of these workloads blend CPU and GPU performance, but some of them, like Cinebench single-core, Cinebench multi-core, and Blender, are very CPU-specific, and you can see a big bump in performance there. And when you combine that with the workloads that use both the CPU and GPU, there's a huge bump in performance and a leadership advantage. As well, when you compare this versus the AMD Strix Point platforms out there, we're up to 26% better than their systems in market.

So we're really proud of this performance, and so are our customers. Now, when you dig in a little bit more and look at the actual graphics performance in specific workloads, take Autodesk Inventor: Autodesk Inventor is a workload that utilizes ray-traced 3D renders for product design and engineering. With our new Intel Core Ultra 200 series platforms, we deliver more than twice the performance we did on our prior gen.

This enables power users, engineers, and designers to run heavily compute-intensive ray-traced workloads on sleek, thin-and-light systems. As well, in applications like Chaos V-Ray for Cinema 4D, we worked with the ISV to recompile the application with our ICX compiler, and we got a 30% performance bump over our prior gen using our current 200H series. Also this week, we announced our new Arc Pro graphics cards.

These are the B60 for AI workstations, which supports up to 24 gigs of built-in memory as well as 197 TOPS of AI performance, and the B50, which is for design- and engineering-class workstations, with 16 gigs of memory and up to 170 TOPS. These products are creating a lot of excitement in the industry. We're working with a number of partners across the ecosystem to roll them out, delivering powerful local inferencing and scalable performance. And they're supported by all the workstation software optimizations and certifications I described earlier.

So to summarize, our Intel Core Ultra Series 2 products are best-in-class platforms for entry desktop and mobile workstation users, delivering double-digit performance leadership versus our competition in single-threaded and multi-threaded workloads, in creator performance, in product design workflows, and in data science. They're supported by a vast array of security capabilities we have built into the platform. As well, they're power efficient, and they support our latest and greatest manageability capabilities for enterprise users. They have our world-class connectivity features and capabilities.

They offer fast memory, high memory capacity, and ECC memory support, plus the software ecosystem work that we do to certify these applications and performance-optimize them for our platforms. So that concludes the workstation segment of the presentation. What I'd like to do now is hand it over to the demo team, and we'll show you some of these systems in action running actual workstation workflows. - Hi everyone, Craig Raymond here, and we're gonna show you some amazing new workload capabilities that we have in our workstation class. So let's go ahead and take a look. First of all, let's talk a little bit about hardware.

On this white-box system, we're showing you a first look at an actual B50 Pro card, a brand new Arc offering for performance workstations. We're using an application here called Twinmotion, which allows us to do all these amazing ray-traced environments and simulations. You can see all of our buildings and all of the different materials and patterns, so we can make a really lifelike world and make it pop with all of the beautiful simulation and math that goes into this usage model and workload.

And as you can see, we're getting about 15 frames per second actually in the live render that's going on. And we're also comparing this to our competition. And you know, a little bit of a spoiler alert, we beat 'em by a couple of frames.

So, happy to tell you more about those amazing numbers, not only for the B50 but also for the B60 24-gig memory card that was just announced here at the Computex event. While this is amazing, and we'll have you check out all those new numbers and new hardware capabilities we have from Arc, now let's go into brand new workloads. This one is Chaos V-Ray, a plugin for Cinema 4D, which allows us to do something in a mobile workstation form factor that we haven't done before: the incredible ray tracing performance you're seeing here.

By using our ICX compiler to recompile that code and make it perfect for our new AI PC workstation capabilities, we're getting a 30% increase over the previous generation. And with that, I'd like to turn it over to Mike Bartz, who's gonna talk to us about some amazing Adobe usage models with Substance 3D. - Thanks Craig. What we're showcasing here is Adobe Substance 3D running on HP's ZBook Fury 18 with our Intel Core Ultra 200HX series platform. We're taking a physical material, bringing it into the virtual world, and using the power of AI on the 200HX platform to turn it into a physically based rendered material.

So what I'm doing is taking this photo screenshot of the material we had, and Adobe is going to identify that it's a material, image to material. When we create it, it starts building all of these layers to make it a physically based material, adding surface properties like roughness. We can also add additional layers to make it more detailed. Then we scale it the way we want it, and we can export it over to Adobe Substance 3D Stager.

We already did that ahead of time with the gray material on this couch, but now we're gonna apply this wool material to this pillow and just quickly scale it. And now we have a nice design set up in our Stager here. What this brings together is the simplicity of using the power of AI to quickly take something that was originally in the real world and bring it in virtually, all with the power of our HX platform. - So now let's talk about gaming.

So gaming is near and dear to all of our hearts at Intel. We love the gaming segment and we have a few updates we wanna provide here. The first one is our new Intel 200S Boost feature. So with this we are providing an overclocking profile within the BIOS for gamers out there who have bought an Intel Core Ultra 200 series desktop product.

This overclocking profile overclocks the die-to-die interconnect, the fabric, and the memory to provide more free performance for users who like to game. And it's also supported, for the first time ever, by a three-year warranty. Traditionally, overclocking has kind of been a wild west: if someone overclocks their platform, it kind of voids the warranty. Not with this.

By working with our partners, there's a simple-to-use profile. You just go into the BIOS and turn it on, and you can overclock your platform and it's still under warranty, which is really exciting. Overall it provides 5 to 10% better performance in certain games. Some games you might see more performance; some games you might not see a big boost. But generally speaking, it's free to the user, and it's an opportunity for us to continue providing more performance with our products over time.

And we've worked with a lot of our board partners across the ecosystem as well as memory vendors to basically enable this in the BIOS as well as to qualify all the memory out there for this. So all in all, the ecosystem has come together. We really appreciate the partnership we've had with them to deliver this and all of our partners are excited to offer this to their users and our mutual customers. So now I want to talk about our Intel Core Ultra 285HX platform in gaming systems.

So when you compare a 285HX with an NVIDIA 5090-class product versus a 14th gen product with an NVIDIA 4090, you can see double-digit performance gains across the board in a number of games. Now, we've tried to isolate the CPU performance here, so we've turned off DLSS and ray tracing, because the 5090 is different from a 4090. But in general, with these new platforms, the CPU is helping you get double-digit performance gains, and you're also getting all the goodness that comes with the latest and greatest from NVIDIA.

But gamers like to do more. And we actually know that, you know, there are quite a few people who buy gaming systems out there that never play a game. They actually buy these systems 'cause they like to create. Or also there are a lot of gamers out there that also like to create. They like to stream their games.

So when you look at the actual creator performance of our platform, baselined against the best from Qualcomm, and we show the performance of AMD versus Intel, you see more than four times the performance on our systems versus Qualcomm and a significant lead over AMD systems. For gamers and creators, the 285HX is a phenomenal platform. Okay, to summarize, our Intel Core Ultra 285HX platforms are fantastic for gamers and creators, delivering up to 8% better single-threaded performance versus our competition, up to 42% better multi-threaded performance, double-digit frame rate improvements in gaming, and 50% better performance per watt. So we're really proud of these products, and we're really excited that they're shipping in gaming systems today.

So now we'll take a moment and show you some of the gaming demos we have here in the press briefing. - Hi, my name is Alex Rodriguez from the Performance Marketing Lab. I'm a technical marketing engineer. And today, I have a couple of cool gaming demos here at Computex. On my left-hand side, I have a beautiful MSI Titan 18 with the new Intel Core Ultra 285HX, playing Black Myth: Wukong. Now, as a gamer myself, I always like to have an edge, and this is where AI coaching comes into play.

So I'll start playing the game, and you'll start seeing a couple of different things happening on the screen. As I move into this zone, take notice of the left-hand side, in this little corner right here. You'll start seeing a little map show up.

Now, this map is not part of the game; this is where the AI is actually happening. The iGPU and the NPU are running inference, and it's giving me a location, saying, hey, there are a couple of chests and items here that you might want to look at before entering the boss area. So I'll skip this part and go directly into the boss fight.

And this is also what's really cool. The AI is asking me, hey, would you like to watch a strategy video for this boss encounter? So before I go in, I can look at the video, and it's telling me, oh hey, this is an ability that's going to come up that you might want to watch out for before you enter. So I'll go in and start the fight. We've already seen the video, so let's see if I can dodge it, hopefully.

Nice, I got that dodge right there. And now the AI is actually giving me some tips: hey, watch out for different abilities that the elder boss is going to use. As I keep going here, I'll let it play out, but you'll see the AI giving me tips that hopefully will help me in the battle.

Now, as this is happening, on the right-hand side I also want you to notice the little chat box, which actually has local RAG functionality for the game. And right here we're getting a couple of words of encouragement. My character has, you know, died, and now it's saying, keep going, you're almost there. So the AI is also trying to give you words of encouragement so that you can keep going even at times when you might not succeed.

On the right-hand side, we also have another form of AI coaching. This is from a company called GGQ, and it's also local, so it utilizes the NPU, but in this case we're doing it on more esports titles, like League of Legends, for example.

In this case, GGQ will actually be launching on May 26th, but they've already launched their client in Korea. So right now, if you're in Korea, you have access to it; otherwise, in North America and Europe, you'll have to wait for access until May 26th. But it's great AI coaching for League of Legends.

It gets you that extra advantage you might need, with information about your skills, your champions, and where you want to go in lane, so hopefully you keep improving up to the next rank. Now, if we keep moving, I have some other really cool gaming demos, right? We're talking about gaming, so what about handheld gaming? Here I have the MSI Claw with Lunar Lake. Beautiful system, amazing display, up to 120 hertz, faster than the previous generation, and it's incredible for handheld gaming. And then we'll keep moving on to desktop. We've mentioned 200S Boost, and we really want to take advantage of the extra performance we can get here. On this system we're playing Assassin's Creed Shadows.

With 200S Boost, you're getting anywhere from 5 to 10% performance improvement across different titles. But the best thing about this is that you're getting up to three years of warranty even with the increased clock speeds you get with 200S Boost. It's an easy performance profile that you can select: going into the BIOS is as simple as enabling 200S Boost, and the system already has the pre-configured speeds it can take advantage of for that extra level of performance. So we love the fact that gamers can now go in and easily select this profile to hopefully get that next level of performance on their systems.

- Okay, now let's talk about our premium mobile segment. In premium mobile, these are thin and light systems for consumers. Now, we've talked about the Intel Core Ultra 200H series in workstations. A lot of that goodness is fantastic for consumers and commercial client buyers.

As well, our 200V series, which is based on the Lunar Lake platform, delivers stunning power efficiency as well as best-in-class AI compute capabilities. Our 200H series for consumer-class products delivers up to 22% faster CPU performance, up to 26% faster creator performance than the competition, and up to 58% better gaming performance, 'cause one of the things we did with the 200H is really improve the built-in graphics.

So people who like to play AAA games can do so on a 200H-based system that doesn't have a discrete card. And finally, we're delivering up to 21% better performance per watt. When you look at the 200V series based on Lunar Lake, one of our big goals with this platform was to destroy the myth that x86-based systems cannot be power efficient, and we feel like we have accomplished that mission. I'll show you some of the numbers. We also deliver unmatched AI capabilities and support from the ecosystem, the best built-in graphics for this class of product, and across-the-board app compatibility based on all the work we do with the global software ecosystem.

So when you look at battery life, we wanted to do a very even comparison between two systems. We took two Microsoft Surface laptops, one of which has a 268V processor in it, which is based on Lunar Lake, and the other a Qualcomm X Elite platform. When you run the Microsoft Teams 3x3 benchmark, which basically tests how long your battery will last while you're doing Teams calls, our Intel platform delivers 10.4 hours of battery life, whereas the competitive system delivers 8.7 hours. And when you run the UL Procyon Battery Life Office Productivity benchmark, we deliver almost 20 hours of battery life versus the competition's 16.3 hours. So this platform is the industry leader in battery life and power efficiency.

We're really proud of this, and we've destroyed the myth that x86 cannot be power efficient. Now let's talk about the AI capabilities. This slide is pretty complicated, but we have a number of systems represented here. The first column represents our Core Ultra 7 258V, which is our Lunar Lake-based platform.

The second column represents our Core Ultra 265H Arrow Lake platform. The third column is AMD's current systems, and the fourth column is Qualcomm's X Elite. This is showing performance and compatibility across a number of the AI benchmarks used by the industry today: Procyon AI Computer Vision, Procyon AI Image Generation, Geekbench AI, and the Procyon AI text benchmark, showing the performance of the GPU or the NPU in each. So when you look at the numbers, our Core Ultra 258V, our Lunar Lake platform, has absolute performance leadership.

As well, our Arrow Lake platform, the 265H, also delivers stunning performance. But more importantly, if you look at the red cells for our competition, those are the workloads that don't run. So there's a compatibility issue, and that's a problem for users. This is a testament to the work that we do with the software ecosystem around the world, and the work we do on the software side at Intel, to ensure that application compatibility is there for our users. And finally, this leads to Panther Lake.

So Panther Lake is our upcoming platform. We will be delivering Panther Lake on our 18A process technology at scale. Our vision for Panther Lake is to deliver stunning power efficiency, leading the way for power efficiency in x86-based systems, similar to what we did with Lunar Lake, while also delivering the high performance we got in platforms like Arrow Lake. We want to bring together the best of both: the incredible leadership we have in power efficiency as well as that performance. It's also gonna have next-gen built-in graphics.

So you should expect better graphics performance and more AI compute performance from our GPU. And we're gonna be shipping this broadly across consumer, gaming, commercial, and workstation platforms. We're currently on track for production in the second half of this year, and you should expect to see high-volume systems based on Panther Lake shipping in early 2026. So in summary, we talked about our Intel Core Ultra 200S series for workstations and gaming. This is a fantastic, killer platform for desktop users.

Our 200HX systems for absolute performance in gaming, creator, and workstation use. Our 200H systems for performance thin-and-lights with big built-in graphics. And then finally, our 200V series, which delivers stunning power efficiency and AI compute capabilities. So now we'd like to turn it back over to the demo team, and we'll walk you through some demos of the AI Assistant Builder and show you Panther Lake in action. - Okay, so now let's jump into a new application that we have here at Intel, which is the AI...

Actually, one quick second. Before we do that, I wanted to show you something. Now that we're on site here at Computex, we've actually seen a bunch of responses about our current generation having some performance differences on battery versus off battery. So we want to dive into this.

We see that there may be some variation from system to system, but we wanted to test that for ourselves. So here we have two identical systems, Intel Core Ultra Series 2, both the same Lenovo. The only difference is that this one is completely unplugged while this one is on wall power. Here is what we just ran: a Cinebench run, a standard benchmark that you all can run, where we scored 120 points. And, oh, the unplugged system actually beat the on-power system by a point.

You're gonna see some variance in benchmark performance here, but as you'll see, the results are pretty much identical in these cases. So we're just not seeing huge gaps or tons of lost performance while running on battery. Let me give you another example. Here on this Surface, again an Intel Core Ultra Series 2, we're at 122, 123 frames per second.

And while we do that, I'm just gonna unplug it. So now we're at 134; we actually went up a little bit on this side of the track, 131, 129. Excellent. So we're seeing the same performance running this F1 benchmark whether I have it plugged in or not. I would say do the test yourself: go ahead and make the comparison, and try those workloads both on battery and on wall power.
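If you want to script that comparison yourself, here is a rough sketch, assuming Python with the psutil package; the benchmark command below is a placeholder, not something from the briefing.

```python
# Rough sketch: log whether the machine is on AC or battery, run a workload,
# and time it. Requires `pip install psutil`; the benchmark command is a
# placeholder, so substitute the workload you actually want to compare.
import subprocess
import time

import psutil

def run_once(cmd):
    battery = psutil.sensors_battery()
    plugged = battery.power_plugged if battery else True
    start = time.perf_counter()
    subprocess.run(cmd, check=True)
    elapsed = time.perf_counter() - start
    print(f"plugged_in={plugged} elapsed_seconds={elapsed:.1f}")

# Run the same (hypothetical) benchmark twice, once on wall power and once
# unplugged, then compare the reported times.
run_once(["my_benchmark", "--preset", "default"])
```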

You'll see some variance across a lot of systems, but I don't think we're coming anywhere close to a lot of the reports that are out there in the market. So please try it yourself. But let's move on from that; I want to talk to you about AI Assistant Builder.

AI Assistant Builder is a brand new release that you can download today, available both on GitHub and on the AI Assistant Builder website, where you can download your own LLM chatbot and implement it for an individual purpose. Let me give you some examples of how we do that. Over here on the end, we have the AI Assistant Builder website, which I can go to immediately. As long as I have an AI PC, I can pick from specialized assistants for sales, for HR, for finance. And when I choose one of these, it immediately downloads the AI assistant, packages it up inside an EXE, and drops it right on my desktop.

So now I have an LLM that I can use for specialty chatbot functions, and it's all local on my PC. And did I tell you it was free? This is something that everyone can go out, download, and try today. Let me give you a few examples of what this experience looks like. Over here on our first PC, as you can see in the interface, you have a standard chatbot interface where we can ask any question we want, as well as a RAG interface, so we can bring in our documents and chat with them as a data source.

But what I've done here is use a new capability called MCP, or Model Context Protocol, that we're taking a look at, which allows our agentic AI to go out and grab new sources of information. Think of it as an advanced API that carries all of that AI context and capability and can access backend data sources for whatever services we want. So I've entered a prompt that says, well, I want to do a few things. First, I want to find a bunch of flights from Taipei all the way back to Seattle, Washington.

I want to give it a couple of dates and a basic budget for what I want those flights to cost, do the same thing with hotels for those exact same dates, and then have my LLM package that all up into a nice summarized email and send it off to my colleagues. Think of all the things that would have to happen here. The small agents involved have actually gone out and found the flight info, a couple of options based on our timeframes, as well as the hotel booking. So while all of the LLM processing is done locally on the PC with agents, I can now have them go out onto the web, get information I don't have locally, access services, run other queries, and bring that information back into the LLM chatbot assistant I have here.
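To give a feel for what an MCP tool can look like under the hood, here is a minimal sketch of a tool server, assuming the official MCP Python SDK's FastMCP helper; the flight-search tool and its canned results are hypothetical and are not part of AI Assistant Builder.

```python
# Minimal sketch of an MCP tool server, assuming the official MCP Python SDK
# (`pip install mcp`). The flight-search tool and its canned results are
# hypothetical; a real agent would call an actual travel service here.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("travel-tools")

@mcp.tool()
def search_flights(origin: str, destination: str, date: str,
                   max_price_usd: int) -> list[dict]:
    """Return candidate flights under the given budget (placeholder data)."""
    return [
        {"flight": "TPE-SEA-001", "date": date, "price_usd": 780},
        {"flight": "TPE-SEA-002", "date": date, "price_usd": 845},
    ]

if __name__ == "__main__":
    # An MCP-capable assistant connects to this server and can then call
    # search_flights() as one step in an agentic workflow.
    mcp.run()
```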

So with AI Assistant Builder, we're kind of just scratching the surface. Let me show you one more advanced feature. With our RAG capabilities, we're able to do some pretty advanced workflows, like this one that we're calling resume match.

This is an HR assistant that we've put together with some LLM models. The problem with today's RAG is that you can chat with your docs, but you're not really able to analyze, pick apart, or recombine that data. Well, those days are gone. With this feature, let me show you, we can take all of these resumes that I've received for a particular job.

So let's highlight these, fantastic, and attach them as RAG documents that I'm accessing. Second, I'm gonna add a job description that I have right here in the prompt. What this is doing is taking all of those documents I put into RAG and analyzing them against the job description I entered. It's going to score them based on how well they match the criteria for this AI job search.

And it's going to summarize the top candidates for me based on the resumes I've submitted. So as a human, I can now get some great data back and still make the decisions. Do I want to throw away everyone under a certain score? No, it gives me a great opportunity to apply some human intelligence on top of being able to sift through thousands of resumes to find that perfect match.
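As a hedged illustration of the kind of scoring step described here (not the actual AI Assistant Builder implementation), resumes can be ranked against a job description with embedding similarity, for example with the sentence-transformers package; the model name, file names, and text are placeholders.

```python
# Illustrative sketch of ranking resumes against a job description using
# embedding similarity. Assumes `pip install sentence-transformers`; this is
# not the AI Assistant Builder implementation, just one way to do the scoring.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")

job_description = "ML engineer with Python, RAG pipelines, and NPU deployment experience."
resumes = {
    "candidate_a.txt": "5 years building RAG chatbots in Python, shipped local NPU demos.",
    "candidate_b.txt": "Front-end developer focused on React and design systems.",
}

job_vec = model.encode(job_description, convert_to_tensor=True)
for name, text in resumes.items():
    score = util.cos_sim(job_vec, model.encode(text, convert_to_tensor=True)).item()
    print(f"{name}: match score {score:.2f}")  # higher means a closer match
```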

So it really enables me as an HR assistant: now, with my new chatbot, I can cut through a lot of that red tape. Those are just a few examples of what AI Assistant Builder is ready to do. Did I mention it was free? So go out there and try it for yourself.

A brand new product and a brand new way to use local AI right on your brand new AI PC. Okay, now let's get into some Panther Lake. I know it's what you've been waiting to see. So here it is, running on these beautiful blue boxes that we have.

These are Intel's RVP systems, which are reference validation platforms. It's where we take the latest and greatest silicon directly off of our lines and run it through its first rounds of application testing. And here we have Mr. Mike Tilke, who's gonna run through a couple of new demonstrations and features running on Panther Lake in DaVinci Resolve Studio.

So here we have a piece of video with our skater, and we're gonna manipulate it using some new AI features in the new DaVinci Resolve Studio. First of all, let's color that sky a little bit. We're doing all of these effects using Magic Mask version 2, which is an amazing way to do fast masks within a video editing suite. The second thing we want to do is move this text behind our skater, to give it a little more dynamism there.

There we go, that's perfect. And then one last thing: let's recolor that sweatshirt, maybe make it a little bit more on brand.

There's that Intel blue I was looking for. So all of those advanced features we're able to show you make for an incredible content experience. But if there's one thing I want you to take away from this demonstration, it's that for a brand new piece of silicon that we're just now testing in applications on RVP, this is amazing. Normally I wouldn't have the guts to show such a robust and demanding workload as a full video editing suite. So amazing features and amazing stability that we're currently seeing here on the RVP for Panther Lake.

Let me give you one more example. Do you guys remember Clippy at all? Here we have Microsoft's Clippy from way back when, now reinvented as an LLM AI frontend. Let's take a look. With a quick click, I can bring up this great chatbot interface, and we're actually interfacing with Qwen2, which we're running on this RVP system. Let's kick off some code.

So I'm gonna have it write a quick game for us. The second I press Enter, without giving away speeds and feeds or tokens per second, you can see that it's pretty responsive. Just by asking for a simple game in Python, we get all of this output back way faster than I can read. So amazing responsiveness, and also compatibility right out of the box on the RVP here in development. So, so far so good.

Let me show you something else. Now, moving form factors, this is our Panther Lake development kit. For the brand new capabilities we have in the silicon, this gives our 300-plus ISV developers an inexpensive, small form factor that they can develop on. So when we come to launch Panther Lake, they'll be launching along with us with these amazing applications.

Let me give you an example of something we have running on this dev kit. Currently we're running Topaz AI to do some photo editing: some color balancing as well as upscaling the picture to add more detail. Just like this, I have our two puppies here, I switch it to a high-fidelity setting, we're enhancing the picture now, and it's already done. Look at the beautiful difference our recoloring and upscaling makes.

All of that detail is in there. If we zoom right down into it, we see a pretty great effect with the sharpened-up and recolored dogs. So again, another full AI workload that we're showing here, and a pretty robust one.

So we're hoping this gives you a great view of our latest Panther Lake testing as well as its health and performance. And one last thing: let's talk about the ecosystem. You want to talk about the health of Panther Lake? Here it is, in form factor, from all of our ODM partners.

These new development platforms are basically the trial vehicles that will become the OEM products you will see at the Panther Lake launch. So they're already operational, in form factor. And these design cues are gonna find their way into the products that you will hold in your hands when we bring Panther Lake to market. We have so much more to share about the amazing capabilities of our new product, especially on our new 18A process. You know: new process, new products, new Intel.

So we can't wait to tell you more. Thank you so much. (upbeat quirky music)
