How is AI Transforming the PC Landscape? | InTechnology | Intel


(gentle music) - [Announcer] You're watching InTechnology, a videocast where you can get smarter about cybersecurity, sustainability, and technology. - Hi, I'm Camille Morhardt, host of InTechnology podcast. And today I have with me Rob Bruckner, Chief Technology Officer of the Client Computing Group. Welcome to the show. - Thank you, great to be here. - So I wonder if you could actually take a second and define client.

We said you're Chief Technology Officer of the Client Computing Group, so what is "client"? - "Client" is a term meant to describe what's with you. Our client products are there to help you do your best work. On the mobile side, it's with you for your productivity and your communications with people. On the desktop side, it's with you as a creator for all your productive work at work, school, and home. It is a client because it is with you, a partner in the essential and creative work you're doing.

So that's why we call it a client. - It sounds like you're also making a distinction from internet of things devices, in the sense that it's a partner to the human, is that right? - It's a partner to the human, and it has its own class of form factor. I have worked in the cell phone industry, at Intel and also at Apple, and the phone is a type of device that's very personal and great for consuming content. It's a great form factor for that.

But the personal computer, which has been rumored to be dead again and again and again, is a really great form factor for productive and creative work: the way it works, the way you interface with it through keyboards, mice, and other types of accessories, and having camera and audio processing as well. It brings out the best in people's productivity and creativity across all the customers we work with. So yeah, it's a unique form factor, it continues to have staying power, and people can innovate off of it. - What kind of evolution comes next? What are the next form factors, and are we moving toward off-device gesture recognition anytime soon? - Wow, gesture recognition.

Yes, I mean, anything's really possible within the constraints we consider, and we wanna make sure new experiences grow off of already great experiences. Sometimes we do want to transform things completely. Everybody gets excited about that.

A lot of people like their personal computer. And when you bring in a new usage like that, we want to make sure you can still do it within the established power envelopes for a mobile-class device, and make sure you're not bringing too much cost into the device itself either. People love to see these things evolve, new models of user experience or of how a device might work, but they always want a really great product at a good price that can do a lot of different functions. There's a lot of new innovation we've seen recently in screen technology. If you look at some of the complications of radical form factors, thickness is an issue because of the battery itself. We want to continue to have great battery technology, but it's not like batteries are doubling in capability every year.

So to keep the thickness at a certain level, you have a lot of constraints to work around in where the batteries get located inside the device, and in the display itself. Some of our partners in the industry are creating things like a pure screen that you can fold; foldable displays are pretty interesting. I think this is the first quarter where we're gonna see foldable displays make an impact on form factors themselves. All-in-ones are awesome. I have an all-in-one at my work office in Arizona. I love that it's just there, I have a keyboard and mouse, everything works wonderfully, and I'm not trying to find the box and plug stuff in; it's all in one spot with a great built-in dock.

Docking technology has been great. Thunderbolt brings really, really high bandwidth as an interface to connect into docks, and multiple-display support continues to evolve. But yeah, innovation continues to happen. It may be surprising for an old device like the PC, but it continues to happen. I think what'll be really interesting in the coming years is seeing how AI plays out with the PC. We're just in the first stages of bringing AI to market with our Meteor Lake project at Intel, and there are other providers out there as well, with Microsoft bringing out its effects package first.

You can see this happening when you do things like a Teams call. But other providers are also using these AI engines to start cleaning up and improving your media: your audio, your imaging content. And beyond that, we're also looking at how it infuses all the things you experience on a PC. The OS and software stacks, application developers, and our OEMs are super eager to innovate with these types of new technologies.

Possibly that brings in some new ways to look at the user experience itself, and how that user experience evolves the form factor into something that may be novel and interesting. So it's really about what people do, where they find extreme value, how they become as productive as possible, or how they really bring the creator forward.

You start to see that evolution, and AI is gonna kickstart it. Once again, we're gonna see another big innovation kicking in with AI. - Yeah, can you actually talk a little bit more about that? I'm interested because I think for a lot of AI evolution to date, we think of the cloud, right? Giant servers and large language models and central models. If you're saying that it's coming to the client, then what kind of possibilities does that open up? Are we talking about distributed inferencing, or what kinds of things are we looking at moving forward? - Well, there's gonna be all kinds of things. I mean, I can tell you, first of all, AI has been on the client for a while. It's been in the CPU.

So when you look at matrix math, which is at the core of how AI functions, these multiply-and-accumulate operations, our CPUs and other providers' CPUs have the ability to do matrix math and handle basic, or sometimes complicated, levels of AI functions. It's just that the total operations per second are not at the same level as a GPU or a dedicated neural processing unit. What's happened now, if you look at the evolution I've seen personally, is that cell phones have had built-in dedicated neural processing accelerators for some time. And you would see things like different ways to deal with security, for example, face recognition, camera image processing, real-time pipelines that bring the pixels through and process the image in real time versus storing things off and getting to them later: you do photo editing on 200 photos that you bring in and process afterward. So cell phones and mobile devices really started this big push toward accelerators in a product.
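To make the "matrix math" point concrete, here is a minimal sketch in plain Python/NumPy of why neural-network inference reduces to multiply-accumulate operations. The layer sizes are made up for illustration, and this is not Intel's NPU programming model, just the underlying arithmetic.

```python
import numpy as np

# One fully connected layer is nothing but multiply-accumulate (MAC) operations:
# every output element is a dot product, i.e. multiplies summed into an accumulator.
def dense_layer(x, weights, bias):
    # x: (batch, in_features), weights: (in_features, out_features)
    return np.maximum(x @ weights + bias, 0.0)  # matmul + bias, then ReLU

batch, n_in, n_out = 1, 1024, 1024   # illustrative sizes
x = np.random.randn(batch, n_in).astype(np.float32)
w = np.random.randn(n_in, n_out).astype(np.float32)
b = np.zeros(n_out, dtype=np.float32)

y = dense_layer(x, w, b)

# Each output needs n_in MACs; count 2 ops (multiply + add) per MAC.
ops = 2 * batch * n_in * n_out
print(f"{ops / 1e6:.1f} million ops for a single layer pass")
```

A CPU can run this, but a GPU or NPU sustains far more of these operations per second (the TOPS figure), which is the gap being described.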

And they really did focus on, like I mentioned earlier, speech processing, listening for a "Hey Siri" kind of moment on an Apple-type device, but also imaging. So that positioned it as more of a media booster agent. And AI, being deep-learning-type networks, was perfectly suited to do this very efficiently: efficient from a power perspective, doing lots of matrix math quickly.

Part of the initial push on the client itself is to bring that into all the client products across the industry. I like to think of it as bringing a new level of communications to the PC as well. We saw during COVID, and even post-COVID, the PC being the video conferencing device of choice. It has a great user experience for bringing in the video itself while also letting you productively do other things while people are talking, let's say.

It's a PC, so you're busy, you're multitasking. - Multitasking. - Of course. So sometimes, I know it never happens with people listening to me, of course, but you might be reading email, working on a doc or a presentation, doing some image editing, or setting up your next Spotify queue while you're on a video conference.

So what's cool about bringing in these media effects is that the PC becomes a much more advanced communication device. You kind of take this for granted on your phone; it just works flawlessly now. So we're bringing those kinds of capabilities into the client itself, because it's now such a prominent communication device for everybody in the video conferencing era.

Now you evolve that forward, and a whole other level of acceleration happening across the CPU, GPUs, DSPs, and neural processing units on a client device becomes really interesting, because it is that personal device you have with you. It has a different level of privacy versus what's going up and down to the cloud. The security can be better contained, and the latency is lower. So on a client device, all of these are better optimized for personal usage models.

This, I think, is gonna bring forward something unique versus the data center. On the client we're very heavy on inference: taking all those months and months and large amounts of money spent training and optimizing your models, bringing them into user space on the devices you use every single day, and reducing them to the point where application developers and OS partners can build new user experiences on top of them. So the world of inferencing is really about users and how we make our lives better and more productive, whereas training is more about creating the models you wanna bring forward. There's a pretty interesting distinguishing difference. Now, the question we're all evaluating in the industry right now is how much can you put on a client? How much can you afford to put on a piece of silicon and still handle battery and things like memory bandwidth? We typically have two channels of memory on a high-volume class of PC products.

Some of the more advanced language models, transformers, generative AI, require a lot of memory bandwidth, and bringing more memory bandwidth into a client requires spending a lot more money on the memory itself. Is it worth it? Yeah, there are gonna be products we have that bring more memory bandwidth, but there are some distinguishing things you can and can't do super well on a client, and we're continuing to evolve the IP and technology to see how far we can take that. Now, in other places, on desktops, if you look at developers and creators, they may do some actual training on the desktop. You have a large number of pTOPS, potential TOPS, as opposed to the efficient TOPS you can actually use, on a graphics card. NVIDIA's been doing this for a long time. They are really in the lead here with what they do with CUDA, for example.
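As an editorial aside on the memory-bandwidth point above, here is a rough back-of-the-envelope sketch of why token-by-token generative inference tends to be bandwidth-bound on a client. All the numbers are illustrative assumptions (a 7B-parameter model with 8-bit weights, roughly 100 GB/s for dual-channel client memory, roughly 1 TB/s for a high-end discrete GPU), not figures for any specific Intel product.

```python
# Token generation must stream essentially all model weights from memory per token,
# so memory bandwidth, not raw TOPS, often sets the ceiling on a client device.
params = 7e9                      # assumed model size: 7B parameters
bytes_per_weight = 1              # assumed int8 quantization
weight_bytes = params * bytes_per_weight   # ~7 GB read per generated token

client_bw = 100e9                 # ~100 GB/s, assumed dual-channel client memory
gpu_bw = 1000e9                   # ~1 TB/s, assumed high-end discrete GPU

print(f"Client upper bound: ~{client_bw / weight_bytes:.0f} tokens/s")
print(f"GPU upper bound:    ~{gpu_bw / weight_bytes:.0f} tokens/s")
```

Under those assumptions the client tops out around 14 tokens per second before compute even enters the picture, which is why adding memory bandwidth (and its cost) is the trade-off being weighed.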

And other graphics companies, including Intel and AMD, are also bringing forward very big AI capability utilizing these graphics solutions. So there's no reason why you can't do some of that training at a smaller scale. What Intel, for example, is looking at, along with all of the other PC and client providers, is how to look at the stack a little more holistically. So the user or developer thinks it through: okay, if I wanna train something, initially I'm gonna put it on the cloud, I'm gonna spend the money on that, and then I've gotta reduce it. How do I bring that forward in a common way on the software side, the software frameworks and tools, and hopefully make it more seamless to utilize all these resources from the server and cloud down to the edge, and have a more seamless experience?
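As a concrete illustration of that "train in the cloud, then reduce it for client inference" flow, here is a generic PyTorch sketch using dynamic quantization. The model here is a made-up stand-in, and this is one common reduction technique rather than the specific toolchain Intel or its partners use, which the conversation doesn't name.

```python
import torch
import torch.nn as nn

# Stand-in for a model trained (expensively) in the cloud.
model = nn.Sequential(
    nn.Linear(512, 512), nn.ReLU(),
    nn.Linear(512, 10),
).eval()

# "Reduce it": dynamic quantization stores Linear weights as int8,
# shrinking the model and speeding up CPU inference on a client device.
quantized = torch.quantization.quantize_dynamic(
    model, {nn.Linear}, dtype=torch.qint8
)

x = torch.randn(1, 512)
with torch.no_grad():
    print(quantized(x).shape)  # same interface, lighter-weight model
```

The point of common frameworks and tools is that a developer writes this kind of reduction once and deploys the result across server, cloud, and edge targets, which is the seamlessness being described.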

You can predict this instead of using fragmented tools and frameworks. AI is so new and exciting that fragmentation has happened already, right? So there's a lot of "Which tool should I use?", and we and others are trying to bring some sanity to that as well, so we can get those user experiences built in. - Can you talk a little bit about privacy and security, and how that's different, or how you think it might be changing as AI comes to the client a little more enthusiastically? - It's definitely changing a good bit.

And I think we're all pioneering what to do here, and users vary. My own bar for privacy is obviously very high, given that I don't have a social media presence, for example. For others, privacy is just, yeah, they like to have a very public persona, right? And if you look at AI on a client-class device, there are different aspects of privacy and security.

One might be the security of a model. If you have a developer or an application provider that spends lots and lots of money renting servers for many, many months to create a model that is real intellectual property, and you start to reduce it and bring it onto a client for inference-type work, how do you secure the model so that the IP doesn't get exposed on the actual device? That is money you spent to develop it; it's your right to keep it. But there's also a big open-source market for models. You saw this happen with ChatGPT and Llama: something emerges as a new, novel usage, and then you've got this awesome open ecosystem of partners that start to mimic it and try to reduce it in a different way.

So I think you're gonna see both of these dynamics happen, but we're always gonna have a model where somebody's investment in something unique and special can be locked down, using hardware-level security for that class of model protection. That's difficult; models can be very big. So we're looking at different technologies right now to do this.

Another level of security is actually utilizing the advanced capabilities of AI to bring more security to your product, with intrusion detection or some of the other things we're doing with our security partners in the application world or with the OEMs themselves. So you can use more advanced techniques to determine whether or not you have a safe and secure system, using the neural engine we have in your product. Privacy is gonna be really foundationally interesting. What I mean by this is that there's now plenty of discussion, papers, interest at universities, and work in the software field on digital twinning.

So if you look at inference itself, as you're creating, producing, doing schoolwork, whatever you're doing, you're utilizing something that's helping you through inference. Well, one interesting model is trying to understand how it might infer you. What are you doing? What is your next action, potentially? How can you show the user what might be more productive for them, versus hunting around for where things are, like having to search my PC for files? What if the OS or the software developers brought in something that started to learn more about me? In essence, they're basically twinning me to some degree. And if they're twinning me, how we lock down the privacy of you as a twin is kind of interesting. - Who owns the twin?

My company, or me, ultimately, right? - You, or your company. I made a joke with some partners a couple of weeks ago that there are also people who would probably like a marketplace for their persona. There's a variety of people who want to lock things down for different reasons, and there are other people who'd say, hey, I'm Rob, I'm a super productive worker, maybe someone wants to use Rob's twin to help train how to be more productive. So I think there's just gonna be a pretty wide range in how people actually absorb this.

But we have to be prepared and ready that, as something is being developed to learn more about you, it's considered kind of your digital DNA or footprint. And we have to make sure we're aware of privacy laws throughout the world. So it's another wide-open field. I love this part about AI, because on the technology curve this is all new and it's happening at a super accelerated pace. A lot of invention and innovation is happening right now in this field, and security and privacy are very big focus areas for all of us. - Yeah, okay, can you talk to me about sustainability and the PC? I've heard different elements, from modularity to second life to battery optimization.

Where's the industry headed, or what is it worrying about or thinking about? - Sure, you picked a number of them already. For the many decades I've been involved in it, the PC industry has continued to work on the materials themselves and the energy used to create the products. Certainly battery life is useful for all of us, but reducing the power of the products is also great for the planet. And it's not just mobile products.

Desktop products are also very important for keeping power in check. So there's a very big initiative on the power side, both for client and for server. For servers right now, total cost of ownership and energy costs are so high that we see a complete pivot in where the optimization points are: power efficiency, bringing in as many cores as possible within a power envelope, versus more uncontrolled, very high-power single-thread performance. So it's really changed how we think about our IP design points to deliver battery life, not just because it's better for you since you don't like to charge, but because it actually reduces the energy consumed on each charge cycle and throughout the life cycle of the product. We're also partnering with our OEMs on modular designs; of course, we have concepts we do at Intel, and I'm sure partners are doing this with OEMs as well, and I'm sure my competitors are too.

Modular designs are interesting because you have the dynamic of sustainability, and all of us are looking at refresh cycles, 'cause people do like a new PC because it does get better. When I first entered the PC industry way back in... you'd get a new PC and it was like twice as good. Everything was just twice as good, so the pace of turnover was extremely fast. Now we have things like AI emerging in the PC.

How would you bring AI into an older product without having somebody throw that product away, or, hopefully, have them return it so somebody who needs that type of product can use it as well? So there are programs that we have, and that I know OEMs are doing... where they can recycle a PC, not just recycling the materials, but also bringing another user onto something at a lower price point while the original user brings a new product in. Modularity could give you the ability to upgrade a system partially, but not fully.

And there are some pretty good concepts I've seen from some of my OEM partners that are really interesting and compelling. They are tricky to make, and there are sacrifices you make as a user in things like the thinness of the product. You don't get all the new technology, but it's good enough for you.

So it's also the mindset of the user, what they're buying, and the importance they put on the sustainability aspect. One other thing we're looking at as well, at least at Intel, and I know partners are looking at this too, is how do you keep the user experience alive? Typically, at a lot of hardware-focused companies like Intel, and in some of the business work I do, people are very hardware-centric. They wanna upgrade, upgrade, upgrade and bring new hardware into a PC, but software is such an important factor, and the user experience itself is dominated by the software. So across the client, the SoC providers, the OEMs, and the operating system are starting to see the interplay of how these all work together to keep the user experience continually improving, instead of these big update cycles. And many PC users, including myself, can get frustrated by how a system just slowly decays: its capability, its stability, it gets sluggish, the battery life gets worse.

There are just these different things that happen as you continue to load things over and over again. How you keep the user experience maintained throughout the life cycle of that product is something we're all looking at pretty carefully, and that'll help sustainability as well. Certainly businesses like refreshes because they get new business from them, but keeping the user with you because they enjoy the experience, valuing that, and finding a way to monetize that versus new hardware is something we're all trying to figure out how to do better. If I'm just updating you forever and giving you free software forever, I can't run a business.

So I think there are, hopefully, some new business models and economics that will emerge from this. - So, two final questions. Do you have a book you've read recently that you'd recommend? You said you're an avid reader. - Well, besides the AI books I'm reading, which might bore a lot of people, I don't know.

I tend to go back and read some of the same things over and over again. "A Sand County Almanac" is one I really like to read. It's about a person on a farm up in the Midwest and some of the aspects of living close to and with the land. It's one of my favorite books, and I read it pretty often. So I would say, most recently, I just finished that one again.

- And I was also wondering, you mentioned you garden. So what's your favorite vegetable or flower to grow? - Tomatoes, by far. - Okay, fruit -- or fruit! - Yeah, so during COVID I was in Florida. I grew up in Florida near the Space Center. This is all information that's now becoming public, I guess, right? Somebody could do a virtual persona of me online or something.

So I grew up near the Space Center. My parents are still there, near the Space Center. And during COVID, I spent time there. I grew up on a two-and-a-half-acre lot with them. They always had a garden. They grew up in Mississippi.

So we knew farming, and we were always involved in a garden growing up. So I just fell in love with it as well.

So I started to get back into the garden, got all the weeds out of there, and reprovisioned the plot. And so yeah, I've been growing heirloom tomatoes, beets, carrots; okra is another favorite. I grew some Brussels sprouts; that was really cool to see that plant. It's just amazing to see little seeds sprout and grow, and then to harvest and recycle that plant back into the soil.

Beans, all the different types of beans. So I try things, I experiment. I like the large stuff. Learning things is, to me, one of the most important things I find in life. And gardening brought another nuance to that; it was just trying things. Something wouldn't work, try it again.

Or adjust the soil, take measurements of the soil, figure out what's going on with the soil, figure out watering schemes. And so I'm a very inquisitive person. And so gardening was something that I enjoyed doing. And you get to enjoy the fruits of that through great food, great, healthy, fresh food. So yeah, that's one of my hobbies I really enjoy. - Well, thank you for taking a few moments to talk with us, Rob.

I appreciate it. - You're most welcome. - [Announcer] Never miss an episode of InTechnology by following us here on YouTube or wherever you get your audio podcasts. - [Announcer] The views and opinions expressed are those of the guests and author and do not necessarily reflect the official policy or position of Intel Corporation. (gentle music)
