Intel MWC Barcelona 2022 Keynote (Replay)


We, as the stewards of Moore's Law, will be relentless in our path to innovate. Moore's Law is alive and very well. I'm just really thrilled that we have the opportunity to take this great icon of a company forward as never before. Our best days are in front of us. IDM 2.0 is our evolution. A historic investment for Ohio: one of the largest investments in semiconductor manufacturing in American history. Every aspect of human existence is becoming more digital, and demand for semiconductors is truly unprecedented. Intel Foundry Services will open up the Intel fabs for the industry, with leading process technology and a wide range of our own and third-party IP. We have a clear path for the next decade of innovation and well beyond. We will be relentless in our path to innovate with the magic of silicon. The Intel Xeon platform is the foundation; it is the most pervasive platform on the market, delivering everything customers need. We continue to build on our leadership position from the network to the edge with our comprehensive portfolio of hardware and software solutions. We are leading the transformation of the network. You know, the future lies in a fully programmable network that is truly open, where developers have the freedom to move at the speed of software. So we're really on a mission to let those who build networks take control, to be able to program them for themselves, and we're going to give them the hardware and the software tools to do that.

[Music]

Hello, and thanks for joining us. I'm Nick McKeown, the senior vice president of the Network and Edge Group here at Intel, and I'm here today with my colleagues Dan Rodriguez, Sachin Katti, and Adam Burns. We're going to be sharing with you one of the biggest transformations that has been taking place in our industry, and that is the move towards software-defined, programmable infrastructure, from the cloud, through the internet and 5G networks, all the way out to the intelligent edge. And we're going to be sharing with you today how Intel, and you and your customers, can share in this too.

[Music] [Applause] [Music]

So, to kick things off, let me tell you how the big turnaround of Intel is going so far. As you know, Pat Gelsinger took over the reins as CEO just over a year ago, declaring "the geek is back," and in the past year Intel has announced the largest investments in semiconductor manufacturing ever, including new fabs in Arizona and Ohio, and expansion in Europe too. There are more announcements to come, and by the end of this decade Intel will have invested as much as a hundred billion dollars in semiconductor manufacturing. So what does this mean for you and for your customers? Today I am committing to delivering what you expect and need from us the most: a complete portfolio of products with a common software platform. This lets you build solutions today, but more importantly it gives you the ability to leverage today's software investments for your future generations of solutions. We're going to be more open: just like we opened our fabs to the world earlier this year, we're unlocking the garden gates for the developers who make our products shine. We're going to offer greater choice and design for multi-vendor, multi-cloud across silicon, platform, and software solutions, so that you get a ubiquitous, stable, and reliable platform for running modern applications. And we're going to build greater trust, so that you can count on Intel to have your best interest at heart and to deliver solutions with world-leading security features. I joined Intel to lead this transformation.
For the past 15 years I've passionately evangelized this movement of networks to software. In the past, network behavior was locked inside fixed-function hardware and slow-moving standards. My goal is to unlock the network and hand over the keys to you, the only people who know how to build and operate the biggest networks in the world, so that you can continuously improve them as only you know how. I've done this as a professor at Stanford University, by creating the software-defined networking movement, and by starting several companies, including Nicira, which pioneered network virtualization, and then Barefoot, which introduced the first fully programmable switches. My goal has always been to challenge the networking industry to think more in terms of a software infrastructure. So Pat reached out to me and said, why don't you bring it all together at Intel, where we have the broadest portfolio of software targets in the world? And that was a challenge I couldn't turn down.

Okay, so let's get started. Today we'll be making some exciting announcements: new architectural enhancements for Xeon, our roadmap for vRAN, our first SoC built from the ground up for the edge, and finally a new software release to improve inferencing. We've also invited industry leaders to talk with us about the successes they have driven with support from Intel. As we work through all the news, I would like you to come away from today with a few key takeaways. First, networks are moving towards software. They're becoming more programmable and more flexible, which means that those of you who build and operate network infrastructure can more rapidly improve and optimize it, to make it more reliable, more secure, and to simply differentiate it from your competition. The second takeaway is that, increasingly, computation is happening at the edge, and inference at the edge is transforming and automating every sector, such as manufacturing, public infrastructure, healthcare, retail, and more. In fact, one of the biggest growth areas we are seeing is in applications using our very successful OpenVINO inference platform. And the third takeaway: with this in mind, you can trust Intel to deliver programmable hardware and open software through the broadest ecosystem, in a way that best serves our customers and our partners, now and into the future.

Each time processing moves from fixed-function hardware to software, there's natural skepticism. I can remember raging debates about whether fixed-function signal processors would ever be replaced by digital signal processors, or DSPs, or whether GPUs would ever replace those fixed-function graphics boards we had in our workstations, or, more recently, whether network functions would migrate to VMs and containers running on CPUs. Time and again, our industry has shown that programmable solutions can compete on power and performance, and I want to say today, without equivocation, that this is true all across networking, from the cloud to vRAN to the internet core. The biggest driving force is the developers: tens of thousands of developers, armed with the right software platform, can transform how infrastructure is built and what we do with it, and they've done it time and again. That is the real story of the programmable network, and it's a blueprint for what's to come. At Intel, our goal is to provide a common software foundation that extends your existing software investments across generations of infrastructure. We're also applying our deep knowledge of networking and cloud workloads, which has been garnered over decades of leading network transformation.
We'll use that knowledge to determine where the compute is best served. This extends beyond CPUs to encompass XPUs: the existing CPUs and GPUs, as well as innovations like the new IPU, or infrastructure processing unit, which we co-developed with Google to support an open ecosystem for more programmable cloud data centers. In short, we're taking the entire system into account to determine what enhancements to build into our CPU instruction sets and where to place the acceleration. We intend to give you a choice of hardware under a common software foundation.

A shining example of this approach is software-based virtualized RAN, or vRAN as you know it. In the march towards a software-defined everything, vRAN is inevitable, but ten years ago it was unthinkable. Understandably, many of us were skeptical, and frankly I've been a bit surprised to see just how much of the radio access network has shifted into software. The prevailing thought was that surely a software-based infrastructure would run slower and consume more power, but as I mentioned earlier, we've seen the same theme time and again across industries: fixed-function approaches eventually rise up into software, where they're easier to improve and evolve. Software is eating the infrastructure, and the RAN is no exception. So today we can say that vRAN is here to stay, and can be delivered with uncompromised performance and uncompromised key metrics. We're going to see it scale from hundreds of thousands to millions of vRAN base stations globally. At Intel, we saw this move to software coming, so we helped create this market several years ago. We invested in hardware and in the FlexRAN software running on Xeon, and we built an ecosystem, and that's why today nearly all commercial vRAN deployments are running on Intel. We're not stopping there; we're doubling down to keep growing this vibrant vRAN ecosystem, and we're focused on deploying it quickly across the globe with a broad set of partners. There's no denying it: vRAN is ready for prime time. But don't take my word for it. Verizon is deploying it already, because Carl Mullady sees the criticality and readiness of vRAN.

"vRAN is the future of how we're going to operate our wireless networks going forward, because it gives us a lot more flexibility and the ability to deliver new products, services, and functions that our customers can use to further their businesses and their lives. So it's critically important for us to partner with somebody like Intel, who can bring all the technology, the ecosystem knowledge, and the heft to help us do something this ambitious. Intel's been a great partner with us as we're going down this road, co-developing it, and now in the United States we have a large vRAN deployment that we're just starting off with. We're seeing great benefit, and it's given us the ability to really do new things quicker for our customers. So we really like the platform, but the platform needs to be top-notch, stable, and able to deliver the brand promise for Verizon, and that's really where Intel has been a great partner: working through the issues as you virtualize your network and making sure it's robust enough to carry the Verizon brand."

Verizon's perspective on the importance and readiness of virtualized RAN is absolutely spot on, and they can certainly trust our commitment, because we're going to continue delivering the open platforms that the industry needs for vRAN deployments.
So, Dan, I hear you're seeing a lot of interest and momentum in these vRAN deployments.

You're absolutely right, Nick. We're seeing interest from operators around the globe. Vodafone recently switched on the UK's first 5G Open RAN site using our Xeon processors, along with workload acceleration and connectivity technologies. There's a reason why Intel is leading the transformation of the RAN: we have a deep understanding of the workloads, and we will continue to deliver a family of solutions to support vRAN deployments on open and programmable platforms. We are committed to delivering products with the right combination of flexibility, performance, and power efficiency, and we believe flexibility is non-negotiable; it's at the heart of what operators really want. So we're bringing cloud-like scalability and agility all the way through the network, including the RAN. With FlexRAN running on Xeon servers, our customers can get stats and analytics from AI and machine learning. This enables closed-loop feedback, real-time decision making, improved power efficiency, reduced TCO, and the ability to leverage RAN data to drive new revenue streams. And of course, you can change and upgrade vRAN software on the fly, at every layer of the stack.

With our roadmap of products, flexibility does not come at the cost of performance and power. This is why, once again, we're delivering gen-on-gen performance in our platforms to double cell-site RAN capacity. You will see this enabled in our next-gen Xeon Scalable processor, known as Sapphire Rapids. It's packed with a 5G punch, and you'll hear more about it in a minute. Following the launch of this next-gen processor, we will continue to enhance and build out the product family. I'm excited to share today that we will be delivering new chips optimized for vRAN workloads. These future CPUs will be part of this next-gen Xeon family, and they feature integrated acceleration. This will give customers more flexibility and optimization points as they build out their public and private networks. We're working with Samsung, Ericsson, Rakuten, and other leading providers to bring new solutions to market that build on our next-gen Xeon Scalable platform. Now Sachin Katti will discuss how we are architecting our products to meet the demanding needs of vRAN.

Thank you, Dan. Over the last decade we have shown the world that virtualizing the RAN is not just feasible on our general-purpose Xeon processors, but also practical, with several real-world commercial deployments. This is a team effort. We drove the vRAN revolution by working closely with our customers and partners, evidenced by the fact that we have more than 140 licensees of our FlexRAN software. These partnerships and real-world deployments have enabled us to build up a deep understanding of vRAN workloads and of the practical challenges our customers have to solve when running critical 5G networks on cloud-native platforms. But we are not resting on our laurels; we are leveraging our experience to architect our vRAN roadmap and continue building a sustainable, leading platform. One of the fundamental learnings from our journey is the realization that vRAN workloads are not monolithic. They are uniquely heterogeneous, in that they operate on signals, bits, and packets, all combined in a single application, and the components have different tolerances for latency and quality of service depending on network traffic and application requirements. We are leveraging this in our roadmap to deliver a heterogeneous SoC architecture for vRAN workloads, which features our error-correction accelerators alongside Xeon cores that pack acceleration for signal and bit processing, in a very flexible, high-performing, and efficient package. We are announcing, for the very first time, unique 5G-specific signal processing instruction enhancements on Xeon cores, built from the ground up to support RAN-specific signal processing. These capabilities deliver up to a 2x capacity gain for vRAN, continuing our gen-on-gen performance gains, and they support advanced capabilities such as high cell density for 64T64R massive MIMO in the most demanding RAN environments in the world.
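As a practical aside: the keynote does not name the specific instruction-set extensions behind these claims, so the following is only a minimal, Linux-only sketch (my assumption) for checking whether a host advertises the AVX-512 flags most commonly associated with this kind of signal processing, such as avx512_fp16 on newer Xeon parts.

```python
# Minimal sketch: report whether the host CPU advertises AVX-512 flags that
# vRAN-style signal processing typically leans on. Linux-only; the specific
# flag names below are my assumption, not something named in the keynote.
def cpu_flags(path="/proc/cpuinfo"):
    with open(path) as f:
        for line in f:
            if line.startswith("flags"):
                return set(line.split(":", 1)[1].split())
    return set()

flags = cpu_flags()
for wanted in ("avx512f", "avx512_vnni", "avx512_fp16"):
    print(f"{wanted:12} {'present' if wanted in flags else 'absent'}")
```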
Across our products, we will have options for both discrete FEC acceleration and integrated FEC acceleration with shared memory, to give our customers flexibility and choice to support different needs. This enables our partners and customers to map the right acceleration hardware to the right part of the vRAN workload, rather than what others are promising in their proprietary solutions, which is to shove the entire Layer 1, or physical layer, into an inflexible hardware accelerator. Our approach therefore preserves long-term flexibility and continues to deliver the best combination of performance and power efficiency in the market. Our partner Ericsson is leveraging these innovations to build out their vRAN products.

"I'm Per Narvinger, Vice President and Head of Product Area Networks at Ericsson. At Ericsson we constantly push the boundaries of technology. Our leadership position in mobile systems is based on our ability to innovate, not just in-house but also with partners. Intel is a long-time partner of ours, and over the last few years we have seen major progress in the area of Cloud RAN. I'm here today in the Ericsson studio. Behind me I have radio units that we are deploying globally, carrying a lot of traffic today. They are mainly connected to our purpose-built compute solutions, but we are now starting to deploy Cloud RAN as well, connecting these radio units. Especially now, leveraging Intel's third-generation Xeon Scalable processors, we are able to support massive MIMO products as well, in the mid-band C-band spectrum, the spectrum that really gives extraordinary performance and 5G experience for users. And we will continue to leverage the advancements of the Xeon family and its architecture improvements, in particular the acceleration technologies; they are really crucial to making 5G successful. We are excited about the progress we have made, and we are really looking forward to seeing where this journey will take us."

It's great to hear what Ericsson is planning for the RAN, and it doesn't stop there. One of the key benefits of having a fully programmable platform to handle the vRAN workload, all the way down to Layer 1, is the ability to invent system-level optimizations and software that significantly improve efficiency and overall TCO. Our partner AT&T is leveraging this flexibility to develop with us the industry's first dynamic Class 2 pooling technology for vRAN workloads. This enables energy consumption to be proportional to the useful amount of traffic a vRAN base station is serving, allowing operators to build green, sustainable network infrastructure.

"The wireless industry's shift to virtualized and containerized network architectures will be a powerful enabler for both RAN and core, allowing us to do things that weren't possible before. AT&T believes in taking advantage of the principles of decoupling, horizontal scaling, and cloud native to improve user experience, network efficiency, and resilience.
A key way we can do this is with distributed unit pooling running on general-purpose virtualized servers. By virtualizing all the way down to Layer 1, this helps unleash substantial improvements in RAN capacity and efficiency. It gives us the flexibility to match infrastructure sizing to actual demand, and it allows dynamic bypassing of hardware failures and hot spots without traffic disruption. Today we are pleased to share some exciting developments on a new cloud-native technology called RAN Class 2 distributed unit pooling, which AT&T is co-developing with Intel. We're showing this in action with Intel at Mobile World Congress. Building on the deep vRAN expertise that both of our companies bring, we'll demonstrate Class 2 pooling running on general-purpose Xeon servers and FlexRAN, which will allow us to achieve significant capacity gains."

Just like Gordon, I'm looking forward to seeing how vRAN Class 2 pooling will benefit the entire industry.

We've talked a lot about the RAN today, but what about the rest of the edge? Like vRAN, edge applications are heterogeneous, and there are many different types of edge applications across the networking and IoT sectors. One of the hottest workloads is inference over video streams. Think about all the examples: factories inspecting products, retail stores monitoring inventory and foot traffic, communities creating smart and safe spaces for their residents, and networking applications that need crypto and packet processing, such as SASE and SD-WAN. The applications are practically endless. We're building chips to support these various use cases efficiently and flexibly. That's why we're committed to taking a modular SoC approach in our roadmap, to give you the utmost flexibility. With a general-purpose platform and the right amount of edge acceleration, you can build a diverse set of applications on top, much faster, to better meet the performance, power, sustainability, security, and AI-based inferencing needs across the network and the edge.

To deliver on this, I'm happy to share that today we're launching the next-generation Intel Xeon D processor, the first Intel SoC designed from the ground up for the software-defined network and edge. This SoC is packed with network- and edge-specific capabilities, including integrated AI acceleration, integrated crypto acceleration, integrated Ethernet, support for time-coordinated computing and time-sensitive networking, and industrial-class reliability. And the gen-over-gen performance improvements are compelling: for visual inferencing, 2.4x gen over gen; for complex networking workloads like the 5G UPF, 1.7x gen over gen; for SD-WAN, SASE, and edge use cases with IPsec, more than 1.5x gen over gen.

That's why so many leading technology partners are working with us on the new Xeon D, partners like Cisco, Juniper Networks, Rakuten, and many more, on designs like security appliances, enterprise routers and switches, cloud storage, wireless networks, and AI inferencing. As an example, Rakuten will be using the newest Xeon D in their cloud-native infrastructure. Let's hear what Tareq Amin has to say about the future of this product.

"At Rakuten, creating an open, fully virtualized, cloud-native infrastructure was foundational to our strategy to deliver a reliable, flexible, secure, and resilient mobile network for our partners and end customers. Today we see mobile operators around the world adopting a similar Open RAN architecture approach, to reap the proven benefits when constructing their next-generation networks, including the radio access network. Whether greenfield or brownfield, operators face a host of challenges, including interoperability with legacy hardware and software, balancing cost versus performance, and ensuring agility for future services. Along with Intel and Juniper, we are streamlining the development and deployment effort and making this transition easier and turnkey through Symware. Symware incorporates Intel's latest Xeon D processor; it provides the optimal performance while meeting our specification to design Symware as a compact, lightweight, self-cooling, and weatherproof containerized radio access network solution. That is why it's going to be deployed in one of the densest urban environments in the world: Tokyo. So who says that you cannot run vRAN in a dense urban environment? As we roll this out, operators can stop worrying about validation, power consumption, durability, security, or when to upgrade hardware and software. Instead, they get a plug-and-play, proven solution that addresses any use case they need, optimized for 4G and 5G brownfield, 5G SA, private 5G networks, network slicing, multi-operator services, and many more."

Tareq, your passion for transforming the network is inspiring, and we're all going to be eagerly following your progress.

So today we've talked about how networks are moving towards software and how our programmable SoC platforms provide an ideal combination of flexibility, performance, and power efficiency. But how do developers take advantage of this? At Intel, our goal is to deliver a robust edge developer experience by ensuring our hardware capabilities are accessible to developers in a cohesive, easy-to-program manner, through open, modern, cloud-native platforms. Let's take a look at our edge cloud software platform, Smart Edge. Smart Edge helps enterprises and developers build, deploy, and manage multi-access and private network solutions across a diverse set of use cases with cloud-like simplicity. For example, Lenovo's PC research and manufacturing factory is deploying an end-to-end 5G private network using Smart Edge Open's private wireless experience kit. This kit offers an optimized cloud-native platform to converge multiple IoT and networking workloads. The private network will enable AI-based defect detection, augmented reality headsets for workers, and large-scale rapid downloads of operating systems for production-line laptops. You can just imagine how this will drive efficiencies. Smart Edge is just one example of how Intel is enabling developers to innovate faster, by delivering software that abstracts away the complexity of developing on multiple hardware platforms.

Now let's consider the network edge. The 5G user plane, known as the 5G UPF, often gets pulled from the network core into the network edge, or sometimes right onto the customer premises, to deliver the right latency and bandwidth out at the edge. We see more and more developers wanting to build new services at the edge, including for the 5G UPF. These developers want to innovate quickly and take fullest advantage of the underlying hardware infrastructure, and we owe it to them to make that simple. That's why we're announcing a new set of software ingredients: modules in our Smart Edge portfolio that are fully optimized on Intel Xeon processors to accelerate 5G UPF workloads at the edge. These licensed modules consist of source code that is easily consumable by developers. What's really great about these modules is that they abstract away the complexities of the hardware, so developers can write applications on top that take full advantage of the packet processing capabilities in Intel CPUs. This offers an easier path to enhance UPF performance and bring these capabilities to market more quickly.
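The Smart Edge UPF modules mentioned above are licensed source code rather than a public API, so as a stand-in, here is a minimal sketch of the kind of packet handling a 5G UPF performs: wrapping a user packet in a GTP-U tunnel header with Scapy. The TEID and addresses are made up for illustration.

```python
# Minimal illustration of UPF-style packet handling: encapsulate a user IP
# packet in a GTP-U tunnel (UDP port 2152), the framing a 5G UPF adds and
# removes on its N3 interface. TEID and addresses are made-up examples.
from scapy.layers.inet import IP, UDP, ICMP
from scapy.contrib.gtp import GTP_U_Header

inner = IP(src="10.45.0.2", dst="8.8.8.8") / ICMP()      # the UE's own packet
outer = (
    IP(src="192.0.2.1", dst="192.0.2.2")                 # gNB -> UPF tunnel endpoints
    / UDP(sport=2152, dport=2152)
    / GTP_U_Header(teid=0x1234)                          # tunnel endpoint identifier
    / inner
)

outer.show()                                             # dump the layered packet
print(len(bytes(outer)), "bytes on the wire")
```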
The shift towards software-defined networks also expands the field of opportunity at the far reaches of the edge. In fact, AI inferencing stands out as a critical workload for many edge applications, and it's driving transformation across a whole range of industries. Inference enables incredibly flexible and accurate pattern matching, like detecting objects in videos or images, identifying anomalies, or recognizing speech. AI inference is critical at the edge, where data is largely generated and real-time response is paramount. In fact, one of the biggest growth areas we're seeing is in applications that use our OpenVINO inference platform. Adam Burns, who drives our AI inferencing developer tools, is here to share major news coming out of our IoT and AI teams.

Thank you, Nick. That's right, we created OpenVINO to empower developers and to solve two critical problems. First, models developed in the cloud are complex and large, and we have to make them more efficient and high performance at the edge. Secondly, we know developers at the edge have to deal with diverse environments and many different types of hardware, and OpenVINO allows them to leverage the compute they have available. Now we're taking three years of learnings from those hundreds of thousands of developers and creating a new generation of OpenVINO: OpenVINO 2022.1, our biggest update since the initial launch. OpenVINO 2022.1 is easier to use with frameworks, it has broader model coverage for popular emerging models, and it automates optimization. Millions of developers use frameworks including TensorFlow, PyTorch, and PaddlePaddle, among many others. We aligned the OpenVINO API to those frameworks to simplify migration and require fewer code changes, so when developers use OpenVINO 2022.1 they can get access to performance and efficiency more quickly. We've also expanded coverage for popular models, such as natural language processing and audio, in addition to the computer vision we started with.

OpenVINO 2022.1 means more performance across a broader range of models and use cases. Lastly, we've automated optimization across diverse hardware. Customers, especially at the edge, use a variety of platforms; systems can have many different combinations of CPU cores, integrated GPUs, or even discrete accelerators. 2022.1 automatically understands the hardware and parallelizes the application based on all available compute and memory resources. As a developer, you don't need to hand-tune or understand the hardware nuances anymore; applications are high-performing, flexible, and easy to deploy right out of the box. OpenVINO 2022.1 is our biggest upgrade ever, and it's all about making life easier for developers to create applications that deliver value: it's easier to optimize models from frameworks, there's greater model coverage and performance across a broader range, and we've automated optimization. While it was built for the edge, OpenVINO has also gained traction in a variety of applications in the data center, in the network, and on the client. 2022.1 will make it even easier for developers of any application to optimize and deploy on any hardware platform.
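To make the 2022.1 changes concrete, here is a minimal sketch of the new API flow: read a model, let the AUTO device plugin pick and configure the available hardware, and run one inference. The model path, FP32 input, and single-output assumption are placeholders of mine, not details from the keynote.

```python
# Minimal OpenVINO 2022.1 (API 2.0) sketch: read a model, compile it for the
# AUTO device (the runtime picks the best available hardware), run inference.
import numpy as np
from openvino.runtime import Core

core = Core()
model = core.read_model("model.xml")                          # placeholder path to an IR/ONNX model
compiled = core.compile_model(model, device_name="AUTO")

input_port = compiled.input(0)
dummy = np.zeros(tuple(input_port.shape), dtype=np.float32)   # placeholder input data

request = compiled.create_infer_request()
request.infer({0: dummy})                                     # feed the first input by index
print(request.get_output_tensor().data.shape)                 # assumes a single output
```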
Our partners American Tower and Zeblok are innovating at the edge and leveraging OpenVINO for AI development. Let's see how they're developing on OpenVINO and Intel.

"There is an amazing opportunity at the edge, and it starts with connectivity. 5G provides the foundation for low-latency capabilities that enable innovation never seen before, such as artificial intelligence and the Internet of Things. Let's talk numbers: one trillion edge devices being deployed over the next decade are going to need low-latency AI inferencing, but edge deployments are a real challenge. That is why we built the AI MicroCloud, a cloud-to-edge ML DevOps platform that allows our customers to mix and match AI ISVs and vendors at scale to deliver edge AI applications, while supporting a full deployment life cycle from design to support. Intel is an ideal partner here; its relationship with a broad ecosystem of edge hardware manufacturers and AI ISVs enables us to power what we call the third generation of digital transformation, that is, delivering AI as API assets. OpenVINO software is integrated into Zeblok's AI MicroCloud, allowing dramatically enhanced AI inferencing performance at the best cost per insight. We are excited about this new version and the broad use cases and increased performance it promises."

"American Tower currently has six edge data centers in the US, but we're not stopping there. Building on our existing distributed real estate, our recent acquisition of CoreSite added 25 metro data centers, giving our edge facilities access to the cloud. We are excited for the opportunity to support Intel and Zeblok, and combined with our infrastructure, their solutions are ready to deploy. Both are a crucial part of the edge ecosystem that is needed to meet distributed network challenges and provide the foundation to enable artificial intelligence at the edge."

We are thrilled to be a part of what Zeblok and American Tower are doing, enabling developers across a variety of applications to harness edge compute. In the coming months, Intel will be releasing a stream of technologies to democratize AI, making it easier to build, optimize, and deploy models. Our goal is to make AI more accessible to a broader range of users, so they can create new solutions, help businesses thrive, and reinvent experiences. AI, and more specifically inference, is transforming many applications at the edge, and the majority of the world's inference workloads run on Intel. In 2021 alone, we saw a greater than 40 percent increase in OpenVINO downloads by the community versus the prior year. Combined with this growth, innovations by partners like Zeblok and American Tower, among many others, are revolutionizing business and automating operations in our factories, restaurants, cities, hospitals, and more. There are no limits to the experiences being transformed by AI.

Adam's exactly right: AI inferencing at the edge, on open software, built on programmable hardware. It embodies the themes of our discussion today. Our ecosystem partners are paving the way for new, innovative services that are going to change how business is conducted. This is most apparent in how private 5G connectivity and inference are being combined to drive innovations across various sectors, and we're working with a broad ecosystem of partners to help enable private network solutions with these capabilities. I'd like to walk you through one example. The company Sturm SFS in Switzerland collaborated with Nokia, Intel, and the IndustryFusion Foundation to develop a solution based on private 5G. This solution lets them connect, monitor, and control all of the relevant assets, tools, and even the workers in the factory, to do things like monitor and control the factory's cutting systems in near real time, and to quickly know the locations of their cranes, forklifts, and so on. This all helps optimize and improve their production logistics. In addition to this deployment, Nokia is also working to integrate OpenVINO into their mission-critical industrial edge solution, a solution that's already leveraging Intel technology. This will bring AI capabilities for use cases like identifying defects in automated welding processes in real time.

There's tremendous momentum for the edge and the transformation that's coming. The core of the network is already well on its way to being delivered via software, on a cloud-native platform that runs on general-purpose processors, and as 5G takes off, the need for higher speeds and more agile services only accelerates this. This is why a software-based infrastructure with hardware-accelerated Kubernetes is so important: it provides the agility and automation benefits of cloud-native Kubernetes without compromising performance. When we think about what's needed for a performant and agile wireless core, we're not just talking about the number of processor cores; it takes more than delivering processor cores to meet the unique workloads of a telecommunications network. Our customers need the right platform performance, the right security, the right service assurance, and the right power management, all optimized to deliver networks with cloud-like scale and agility, today and well into the future. We worked hand in hand with all of you to lead the transition to NFV over the last decade, so we understand the pain points like no other technology partner. We know the workloads, we have the most proven ecosystem in the industry, and we're delivering TCO optimizations for those I/O-intensive workloads. For performance, NEC achieved 640 gigabits per second of 5G UPF throughput per server, running in a cloud-native, containerized environment. For power management, KDDI and Intercom used Intel's power management capability to scale power consumption according to demand. For latency, SK Telecom reduced latency and jitter by more than 75 percent for low-latency traffic in a 5G UPF. And for security, our improved vector AES instructions deliver 2x better performance for compute-intensive crypto operations in VPN applications such as IPsec and TLS.
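The 2x figure is Intel's own benchmark, but if you want a rough feel for bulk AES-GCM throughput on a given host, here is an unscientific sketch using the Python cryptography package. Whether the underlying OpenSSL build actually exercises the vector AES instructions depends on the CPU and library version, and the buffer size and iteration count below are arbitrary choices of mine.

```python
# Rough AES-256-GCM throughput probe. Treat the number as illustrative only:
# whether VAES/AVX-512 code paths are used depends on the CPU and OpenSSL build.
import os
import time
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=256)
aead = AESGCM(key)
payload = os.urandom(64 * 1024)        # 64 KiB buffer, roughly a large record
nonce = os.urandom(12)                 # reused only because this is a benchmark

iterations = 2000
start = time.perf_counter()
for _ in range(iterations):
    aead.encrypt(nonce, payload, None)
elapsed = time.perf_counter() - start

gbps = iterations * len(payload) * 8 / elapsed / 1e9
print(f"AES-256-GCM encrypt: {gbps:.2f} Gbit/s")
```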
You can rely on us to continue to deliver, and to support all of you in building out the best networks in the world. British Telecom is using these optimizations from Intel as they move to cloud-native infrastructure.

"Over the past few months we've been working with Intel, Ericsson, and Canonical on building BT's cloud-native 5G core. This is so important for us because it allows us to build for the future workloads that the network needs to support: being able to automate, being able to scale up and scale down. Partnering with Intel, Ericsson, and Canonical has been crucially important to the success of this program, so much so that we're actually going to go live earlier than we planned and start migrating customers to this platform over the next 12 months. What's been fantastic is using the Intel Network Builders University; that's allowed us to upskill our engineers and learn the best and boldest ways of building this platform, and then to start thinking about the future: how can we take this platform to our consumers for virtual reality gaming at the edge? How do we take it into businesses for automated manufacturing, or 3D design when you're working from home at the edge, and really use the full power of the 5G cloud-native core? And we're doing all of that while running a platform where customers don't even notice what we're working on, because it has cloud capability: we can scale it up, we can scale it down, we can upgrade it, we can change things, and customers don't even know. That's the real difference of a cloud-native platform, and I'm really excited that over the next 12 months we'll be able to build many more services that really excite our customers and, crucially, make their lives easier."

As Neil shared, the shift to cloud native is already underway, and it's vital to many operators. Xeon is going to play a critical role in network applications, for sure, but we're taking the vision of hardware-accelerated Kubernetes one step further. We recently announced the infrastructure processing unit, or IPU. With IPUs we can accelerate the entire Kubernetes control, network, and storage layers in hardware, freeing up CPUs for customer applications. The IPU doesn't just speed things up; it runs infrastructure software on a separate, secure, and isolated set of physical cores that provide additional security and reliability. Taking a step back, you can begin to see our end-to-end view of acceleration in our edge compute platforms. We're taking a systematic view of which parts of your workloads are going to run where across the compute substrate, whether it's the CPUs, the SoCs, FPGAs, switches, or NICs, and carefully placing accelerators for the most used and intensive components. This reflects our philosophy: we want to provide you with the flexibility to use the right compute and acceleration for your workloads, while providing you with the best combination of performance and cost. And this isn't going to come at the cost of complexity, because we're investing in software frameworks to abstract away those complexities and shield your developers from the inherent complexity of using a rich, heterogeneous blend of SoCs, CPUs, IPUs, NICs, switches, and FPGAs. The best example of this is the work we're contributing to the IPDK project, to rally the community around a common, open-source way to develop networking and storage applications across our NICs, IPUs, and switches. We want developers to build once and then run anywhere.
Let's take a look at how far we've come. As I mentioned earlier, 15 years ago the network, from cloud to edge, was built on proprietary, fixed-function hardware. Since then, we saw the transition to SDN, followed by the movement to NFV, and now we're seeing the shift to cloud native as we head into the era of 5G and edge. These industry shifts are setting the stage for a new era of innovation in the network and at the edge. Now more than ever, this evolution is being driven by the need for more control, adaptability, and scalability, to give those who build and operate infrastructure the ability to quickly introduce new capabilities. More computation is, and will continue to be, taking place at the edge, and inference is transforming and automating every industry. We talked today about how networking in the cloud, the core, 5G, and private networks will move to software and programmable platforms, and how customers will increasingly deploy AI inference applications at the edge. At Intel, my team and I take very seriously our responsibility to make it as easy as possible for you, our customers, to develop your beautiful new ideas on our hardware. You can count on us for the building blocks you need across the entire system. Here at Intel, we're building on our historic core strengths of building programmable systems and delivering open software through a broad ecosystem. We've been at your side creating new markets and leading major transitions, and no one else can deliver at scale like Intel. What you've heard today is just the beginning.

[Music]

2022-02-25 17:37

