What's New in OpenShift 4.15 (Product Update)
Hello and welcome to What's New in OpenShift 4.15. My name is Stephen Gordon.
I'm a Senior Director of Product Management in the Hybrid Platforms business here at Red Hat. Today we'll be hearing directly from our Product Management team about what's new in the upcoming latest and greatest release of Red Hat OpenShift. Before we get into that, just a quick level set. OpenShift is Red Hat's Kubernetes-based open hybrid cloud platform. Leveraging Red Hat Enterprise Linux CoreOS as the underlying operating system, it is supported on a wide range of physical, virtual, private cloud, public cloud, and edge footprints, allowing you to deploy with confidence anywhere you need to across the hybrid cloud. The core of OpenShift, encapsulated by OpenShift Kubernetes Engine, which you can see on the left here, is Kubernetes-based, but Red Hat also offers a wide range of services to provide additional enterprise capabilities as and when you need them: OpenShift Container Platform adds integrated DevOps services like Service Mesh, Serverless, and Pipelines, while OpenShift Platform Plus adds advanced management and security capabilities and an enterprise-class registry.
Offerings like Red Hat Application Foundations, Red Hat OpenShift Developer Services, and Red Hat OpenShift AI further expand what is possible with this powerful platform. Finally, as shown on the right of the slide, while OpenShift is of course available as a self-managed offering, there are also managed cloud services offerings available on several major public clouds. With that said, focusing purely on OpenShift 4.15, we have a lot to cover today.
Let's get going. First of all, I'm going to hand over to Karina, who's going to give an overview of the Kubernetes upstream release that 4.15 is based on, and also some of the key customer requests we've been able to action in this release. Thank you, Steve. Hi.
So OpenShift 4.15 at its very core is based on Kubernetes 1.28 and CRI-O 1.28. This Kubernetes release has a number of enhancements that help increase the stability, performance, maintainability, and consistency of the core platform while also augmenting workload innovation for virtual machines. We and the KubeVirt community are especially interested in node system memory swap support, and for AI-driven and intelligent workloads, the Kubernetes Job API now allows for more choices in AI model training and retraining. Other key production enhancements are the preconditions that make webhooks less risky, as well as the ability to read caches more consistently. Also, large and complicated jobs can now fail faster and more accurately, which contributes to better overall performance of Kubernetes in production. Next slide, please. So that's what's happening upstream. Let's talk downstream.
Each and every release, our teams work very hard to bring you your top requested enhancements. Some of the key improvements in this OpenShift release include enhancing OVN IPsec to support encrypting all data between OpenShift and any external provider; the Ingress operator dashboard in the OpenShift console now includes HAProxy metrics visualization; and you can now deploy compute nodes in AWS Wavelength Zones, which are specifically built for ultra-low-latency applications for 5G devices. For console improvements, you can now view the first thousand lines or the full pod logs in the web console.
Node uptime is no longer a mystery, and your users can easily see if a VPA, that's your Vertical Pod Autoscaler, is attached to a workload, plus the recommended values generated from the VPA. And now back to you, Steve. Thanks, Karina. Now we're going to take a look at some of our spotlight features in this release. I'll provide a quick summary view of these, and in a moment one of my colleagues will introduce each feature.
OpenShift 4.15 has a focus on core platform enablement alongside improvements for edge and virtualization use cases, while continuing to accelerate modern application development and delivery across the hybrid cloud. In this spotlight section, we'll highlight key features in Red Hat OpenShift 4.15 before we broaden the aperture to look at the rest of the release. We're going to start our walkthrough of the spotlight features by looking at the edge with my colleague Daniel Frohlich. Take it away, Daniel. Thanks, Steve.
Let's start with our smallest edge offering, Red Hat Device Edge and MicroShift. Friendly reminder: MicroShift is not the same as OpenShift, but derived from OpenShift. It is our Kubernetes distribution targeting small-form-factor edge devices.
I'd like to highlight two awesome new features. First, we now support MicroShift on RHEL 9.3, which has new hardware enablement, especially for NVIDIA Jetson Orin devices. OpenShift AI has the capability to embed models into containers, and together this results in a powerful model-serving stack for the edge. The second highlight feature is support for the Operator Lifecycle Manager as an optional add-on to MicroShift, to simplify the use of operators. We recommend building your own catalog with just the operators you need, to save resources.
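As an illustration, once OLM is enabled on MicroShift, a pruned catalog can be wired in with a standard OLM CatalogSource object. This is a minimal sketch; the image reference and name below are placeholders for your own pruned catalog, not real registry paths.

```yaml
apiVersion: operators.coreos.com/v1alpha1
kind: CatalogSource
metadata:
  name: custom-catalog               # hypothetical name
  namespace: openshift-marketplace
spec:
  sourceType: grpc
  # Placeholder: point this at a catalog you pruned down to just the operators you need
  image: registry.example.com/acme/pruned-catalog:latest
  displayName: Pruned Operator Catalog
```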
Let's see what's happening in other edge areas. Mark, please. Thank you, Daniel. We are making a great effort to extend OpenShift support to cloud providers' edge locations. Along these lines, we added support for AWS Local Zones in previous releases, and we are really happy to announce that we are now extending that support to another two AWS offerings in 4.15: AWS Wavelength and AWS Outposts. AWS Outposts will be supported as a day-two operation, meaning you can add your Outposts after your cluster has been installed into the public region. For AWS Wavelength, the installer will fully automate the deployment of OpenShift in the public region with compute nodes in AWS Wavelength Zones. This automates VPC creation in the public region and subnet creation in the AWS Wavelength Zone. There is also an option to use existing VPCs, with compute nodes in Wavelength Zones placed into an existing subnet.
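As a sketch of what the automated flow consumes, an install-config.yaml along these lines places an edge compute pool into a Wavelength Zone. The domain, cluster name, and zone are example values I've assumed; check the installer documentation for the exact fields.

```yaml
apiVersion: v1
baseDomain: example.com              # placeholder
metadata:
  name: wavelength-demo              # placeholder
compute:
- name: edge                         # edge pool used for Wavelength/Local Zone nodes
  replicas: 1
  platform:
    aws:
      zones:
      - us-east-1-wl1-bos-wlz-1      # example Wavelength Zone name
controlPlane:
  name: master
  replicas: 3
platform:
  aws:
    region: us-east-1
```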
For day two, an existing OpenShift cluster in an AWS public region can be extended as well by adding additional compute nodes, which can also be automatically scaled, into the AWS Wavelength Zone. And with this, over to Erwan. Hello. Thanks, Mark. The NVIDIA Grace Hopper Superchip combines a power-efficient Arm NVIDIA Grace CPU with an NVIDIA H100 GPU.
This new architecture enables giant-scale generative AI platforms. Red Hat and NVIDIA started collaborating on the Grace Superchip enablement after NVIDIA GTC 2021. Grace Hopper MGX systems are available now from multiple OEM server vendors, such as Supermicro or Quanta Cloud Technology. Compared to two-socket x86 datacenter systems, the Grace CPU delivers 2x throughput at the same power, so you can reduce your datacenter total cost of ownership and run more scale-out AI inference or scale-up training.
We are pleased to announce the enablement of Grace Hopper systems with Red Hat OpenShift and the support of the 64k Linux page size kernel with Red Hat OpenShift 4.15. The Grace CPU supports both 4k and 64k page sizes; for applications with large memory footprints, the 64k page size is recommended to get the best performance. The CPU-GPU coherent memory model over the NVIDIA NVLink-C2C chip interconnect increases the amount of GPU-accessible memory for large language models.
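On OpenShift, switching a pool of Arm nodes to the 64k-pages kernel is done declaratively. A minimal sketch, assuming the standard MachineConfig mechanism applies to your Grace nodes:

```yaml
apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfig
metadata:
  name: 99-worker-64k-pages          # hypothetical name
  labels:
    machineconfiguration.openshift.io/role: worker   # targets the worker pool
spec:
  kernelType: 64k-pages              # 64k page size kernel for aarch64 nodes
```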
The NVIDIA GPU Operator has been enabled for NVIDIA Grace systems. I'm handing over to Deepti for OpenShift networking. Thank you. So up until now, our IPsec support has primarily been focused on East-West traffic. This helps you facilitate secure communication within your internal network infrastructure, be it pod-to-pod cluster traffic across the nodes. In our latest release, we're delighted to announce the general availability of IPsec for North-South traffic.
This means our IPsec capabilities now extend to securing the communication flow between internal users or clients and external resources over untrusted networks. By encrypting the data exchanged in these interactions, we uphold the confidentiality and integrity of your communications, shielding them from potential tampering or eavesdropping. This enhancement represents a significant milestone in bolstering the security of our network infrastructure and ensuring the privacy of your data exchanges. This is, however, available only with OVN-Kubernetes. Next slide, please.
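As a rough sketch, north-south (external) IPsec is configured on the cluster's Network operator object. The mode value shown is my reading of the 4.15 API, so verify it against the networking documentation before use.

```yaml
apiVersion: operator.openshift.io/v1
kind: Network
metadata:
  name: cluster
spec:
  defaultNetwork:
    type: OVNKubernetes
    ovnKubernetesConfig:
      ipsecConfig:
        mode: Full   # Full = encrypt both east-west and north-south traffic
```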
We're thrilled to announce the launch of our latest feature: networking dashboards. Up until now, we had a lot of related metrics that we were reporting, and we recognized the need for a consolidated view that presents a summary of these metrics, providing quick insights into your network traffic, facilitating easy monitoring, and helping you troubleshoot your network systems. Our networking dashboards offer dedicated views of various aspects of your network infrastructure. We have the Ingress dashboard, a one-stop shop to monitor your HAProxy error rates, cluster-wide statistics, charts, routes, and so on. There is an infrastructure dashboard, covering all kinds of metrics around your OVN-Kubernetes control plane: control plane resource utilization, and latency on your pod deletion and creation.
And of course, we have the Linux networking subsystem dashboard, which gives you a comprehensive view of your network performance. With these dashboards, you now have a centralized hub to aggregate and visualize all kinds of critical network metrics, empowering you to make informed decisions and streamline your troubleshooting process. Next slide, please. Over to Roger. Thanks, Deepti. In the observability area, we have two major updates that we are very excited to share.
First of all, the Red Hat build of OpenTelemetry is now generally available, and this is a great advance in our journey of providing the most open observability platform. That's not all: we are enabling many features in tech preview, and we want to help developers build awesome and observable code and foster integration. This goes from automatically instrumenting your code, to aggregating metrics from spans, or even creating alerts or sending data between clusters, and all of this is made available with OpenTelemetry. We're also expanding the collection capabilities step by step. Prometheus metrics can now be scraped, converted, and sent to multiple backends, easily managed and scaled with the Target Allocator. Observability signals can be consumed from or sent to Kafka. And for the first time, filelog and journald receivers have been added as developer preview to gather early feedback. So we can't wait to hear your feedback on how you use OpenTelemetry. Moving on, the second milestone is that we are thrilled to announce the technology preview of power monitoring for Red Hat OpenShift. This is our build of Kepler, which we already announced earlier, and it's fully integrated in the OCP console.
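Tying the OpenTelemetry pieces above together, a collector using the Target Allocator might be declared roughly like this. It's a sketch: the exporter choice is illustrative, and the exact CR apiVersion may differ in your operator release.

```yaml
apiVersion: opentelemetry.io/v1alpha1
kind: OpenTelemetryCollector
metadata:
  name: otel                 # hypothetical name
spec:
  mode: statefulset          # stateful mode is used with the Target Allocator
  targetAllocator:
    enabled: true            # distributes Prometheus scrape targets across replicas
  config: |
    receivers:
      prometheus:
        config:
          scrape_configs: []   # targets get assigned by the allocator
    exporters:
      debug: {}                # illustrative; swap in your real backend
    service:
      pipelines:
        metrics:
          receivers: [prometheus]
          exporters: [debug]
```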
There, you can find the total energy consumed by your clusters during the last 24 hours, including many more insights, like the CPU architecture, the number of monitored nodes, the source of these power metrics, or even a breakdown of the top power-consuming namespaces. As you can imagine, this opens the door to understanding which containers and pods are consuming the most power, and this enables your first steps into a sustainable computing journey.
So we can't wait to hear your feedback. Handing over to Peter for virtualization. Thank you. As you can imagine, running virtual machines on a platform like OpenShift has been very exciting, and if you've been paying attention to what's going on in the industry, we've got some exciting capabilities to show here today in terms of virtual machine instance types. This is a step away from the traditional "I've got a whole bunch of templates that I have to manage." Instance types are much more purpose-built for the workloads that you have. You pick a type, then you pick a size: small, medium, large.
And a boot source. With those three pieces of information you can spin up a VM in any cluster. You can also customize instance types for the things you may want to do differently within your organization. The other thing that we're doing is extending data protection and disaster recovery for virtual machines, with ACM and OpenShift Data Foundation. This is an incremental release where virtual machines created with GitOps workflows will be able to be protected across different sites from disasters.
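The instance-type idea described above reduces a VM definition to roughly three choices: an instance type (size), a preference, and a boot source. A hedged sketch follows; the names u1.medium, rhel.9, and the container image are examples I've chosen, not guaranteed defaults.

```yaml
apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: demo-vm
spec:
  running: true
  instancetype:
    name: u1.medium          # size choice from a small/medium/large family (example)
  preference:
    name: rhel.9             # workload-tuned preference (example)
  template:
    spec:
      domain:
        devices:
          disks:
          - name: rootdisk
            disk:
              bus: virtio
      volumes:
      - name: rootdisk
        containerDisk:
          image: registry.example.com/rhel9-guest:latest   # boot source placeholder
```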
The other thing that we're also doing is extending some capabilities that we have on other virtualization platforms, such as hot-plug NICs. We're also tightening up the integration with OVN-Kubernetes, which is now the default networking in OpenShift, allowing you to use it on secondary networks and get network isolation features like IP block filtering. Now I'm handing it over to Adele to talk about hosted control planes. Thank you, Peter. So, hey everyone: in 4.14, we GA'd hosted control planes for self-managed OpenShift on bare metal using the Agent provider, as well as the OpenShift Virtualization provider, which allows you to provision clusters using VMs provisioned by OpenShift Virtualization. This time around in 4.15, we're tech previewing the ability for you to add any node type to your hosted control plane using the same flow and the same provider, the Agent provider. In other words, if you have a discovery image, you can use it to bring a host that is running on vSphere, or on any infrastructure, into your hosted control plane and form an OpenShift cluster.
Yes, this can be a little bit manual, but it's a step in the direction of bridging the gap and allowing you to deploy hosted control planes on more infrastructure providers. This is currently tech preview. Have a look, try it out, and let us know. Give us feedback.
I will hand over to Brad to talk about ACM and management at scale. Thank you, Adele. And hello, OpenShift world.
I'm proud to share our team's work on Advanced Cluster Management for Kubernetes 2.10; a lot of hard work went into it, so we'll jump right in. ACM is your better-together, multicluster-enhancing operator. We just heard from Peter and Adele, where they had some ACM features tucked in. I'm going to start off on our first slide with our cornerstone pillar: governance, risk, and compliance, which helps you get to that desired state with our inform and enforce capabilities. We'll start with compliance history, a tech preview capability where we can track the compliance history for policies across the fleet.
Next, we have our Operator Policy API, which provides a more native integration for installing and managing operators at scale. Then there's the Gatekeeper operator: we're matching the community version, 3.14, which gives us some additional configurability options. And lastly on this slide, we have improved debugging of policy violations through the diff capability, so we can see the desired state versus the actual state and understand why a cluster is not compliant. Next slide, please.
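The Operator Policy API mentioned above is declarative. A minimal sketch follows, with the caveat that this API is tech preview and field names may shift; the policy and package names are placeholders.

```yaml
apiVersion: policy.open-cluster-management.io/v1beta1
kind: OperatorPolicy
metadata:
  name: install-example-operator     # hypothetical name
spec:
  remediationAction: enforce         # or inform
  complianceType: musthave
  subscription:
    channel: stable
    name: example-operator           # placeholder operator package name
    source: redhat-operators
    sourceNamespace: openshift-marketplace
```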
Now, here we have the remaining core pillars, with some items we'd like to spotlight. We'll start with multicluster networking: for the Submariner options, we have new capability for bare metal and for Red Hat OpenShift on IBM Cloud Kubernetes Service, or ROKS. That's also tech preview; for any of these tech preview items we would welcome feedback coming in through the support system, and we'd like to hear how we can further shape them to get them to GA. Next, our ApplicationSet pull model, which complements the push model we already have, has reached GA. And next we have our cluster lifecycle enhancements.
You'll see some RFEs in here, where we've added authentication for HTTPS OS image content with the Assisted Installer. Again, you'll notice the better-together theme: we're enhancing these for the multicluster story with other components like hosted control planes, allowing managed cluster updates to use non-recommended versions. We've heard from customers that it's helpful to be able to set the particular versions you need. We allow managed OpenShift cluster versions to be updated as well. And then, to complement our CLI capabilities, we've added console support for hosted control planes with the OpenShift Virtualization platform. So that's two items coming together in the ACM console: OpenShift Virtualization and hosted control planes. We heard Peter and Adele talk about some of those items as well.
So we continue to enhance that multicluster story by bringing together other operators' capabilities in a multicluster fashion. Lastly, we have observability-at-scale enhancements: the ability to do customizations of the data shown from your search results, and a new hosted control planes hosting-cluster capacity monitoring dashboard to give you that visualization. Next slide, please.
I'd like to hand it off to Daniel. Awesome, thanks, Brad. Continuing with scale: Red Hat Quay is our central, scalable container registry that serves a fleet of clusters, and in the upcoming 3.11 we are adding the capability to apply more nuanced, granular pruning policies by allowing you to define them at the level of each individual repository, versus at the organization level as before. This is going to allow you to make more informed decisions about when it's safe to delete an old image. At scale, customers also often use a central identity provider, like OIDC-based providers. Quay already supports the OIDC protocol, but so far it lacked the capability to map a group definition inside the OIDC provider to a team definition inside Quay. 3.11 is going to give that feature to users, so they can give permissions to users in Quay based on their role inside the OIDC provider, like Azure Active Directory services. Last but not least, the new UI is also progressing towards parity with the old user interface.
You can now use the new UI to define your image build triggers, view a history of your older image builds, view all the audit event logs inside your organization, and search through vast amounts of image tags and repositories using regular expressions. The new UI will now also respect your browser's dark mode setting. And with that, I'm going to hand it over to Boris for ACS. Thanks, Daniel.
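For orientation, features like these are typically toggled in Quay's config.yaml. The flags below are illustrative of the kind of switches involved; confirm the exact names in the 3.11 configuration guide before relying on them.

```yaml
# Fragment of Quay's config.yaml (illustrative, not a complete config)
FEATURE_AUTO_PRUNE: true    # enable auto-pruning policies (now definable per repository)
FEATURE_UI_V2: true         # opt in to the new UI
```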
So, Red Hat Advanced Cluster Security for Kubernetes is a mouthful; we like to just call it ACS for short. Version 4.4 is going GA in early March, the same time frame as OpenShift 4.15, and of course it's part of OpenShift Platform Plus. A few of the highlights for ACS 4.4: we extend platform support, so ACS is now available for ROSA with hosted control planes, both for Central and for secured clusters. And CO-RE eBPF is now the default collection method.
It's much simpler than the kernel-driver method we used before, and in fact we are deprecating the older method; it's going to be removed, probably in 4.5. So this now becomes the default.
Another capability we've added is for people to use their own Postgres database for Central DB. This has been tech preview in previous releases; it has now matured and is becoming generally available. Next slide, please. Following a significant effort to consolidate Clair v4 with ACS's original StackRox scanner, we are now introducing Scanner v4. The goal is to provide accurate and consistent scan results across both Red Hat Quay, which of course includes Clair v4, and ACS, and also to expand our scanning support to include additional operating systems and languages. As you can see in the table, both technologies have benefited from this consolidation.
Scanner v4 is now taking the place of the StackRox scanner in ACS. As we roll it out, people will still have the option to use the StackRox scanner, but we highly encourage people to start using Scanner v4, as pretty soon it will become the only scanner available for ACS. We're also using osv.dev now for security data, which really improves the accuracy for language vulnerabilities.
Next slide, please. Another maturing technology that we've shared as tech preview in the past is our build-time network policy tools, which are now going GA. They allow DevOps teams and developers to develop Kubernetes network policies, which is a very tough problem.
These tools analyze the Kubernetes resources of the application in a project folder and automatically generate network policies for you. Of course, you need to test them and compare, and for that we also offer additional tools, like rendering the connectivity map and being able to compare between two project versions. All of these tasks are really hard to do without tooling.
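The output of such build-time analysis is ordinary Kubernetes NetworkPolicy objects, which is what makes them testable and comparable. For example (the app labels, namespace, and port here are hypothetical):

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: backend-allow-frontend
  namespace: demo
spec:
  podSelector:
    matchLabels:
      app: backend           # policy applies to backend pods
  policyTypes:
  - Ingress
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app: frontend      # only frontend pods may connect
    ports:
    - protocol: TCP
      port: 8080
```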
We encourage people to start using that and give us feedback. Next slide, please. Speaking for Anjali, who couldn't make it today, I want to talk about the cert-manager operator version 1.13. Honestly, this is already included in OpenShift 4.14, but we didn't talk about it then.
So I'm taking the opportunity to mention it now. The cert-manager operator is a service that provides certificate lifecycle management; it helps users integrate with their own external certificate authorities and manage certificate provisioning, renewal, and retirement. With this release, we are adding new capabilities that users have asked for. Very importantly, in addition to managing their application certificates, users can now manage OpenShift's own certificates, specifically the Ingress Controller and the API server. With that, they can really take control and own certificate lifecycle management for OpenShift as well. Next, in line with OpenShift's multi-architecture design, which allows the simultaneous use of nodes of different architectures,
we now have support in cert-manager for ARM64, as well as IBM Z and IBM Power. And finally, customers can now use DNS over HTTPS inside cert-manager. This is especially useful for clusters running in a proxy environment where DNS resolution over traditional DNS name servers is not available. It is also useful for setting a DNS-over-HTTPS server with cert-manager in order to specify a different DNS for the certificate issuer self-check. Yes, it's a mouthful, but it's an important feature that people have asked for.
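A hedged sketch of wiring a DNS-over-HTTPS resolver into the operator-managed cert-manager controller: the override-argument mechanism and flag names follow upstream cert-manager, but double-check them against the operator's documentation, and the resolver URL is just an example.

```yaml
apiVersion: operator.openshift.io/v1alpha1
kind: CertManager
metadata:
  name: cluster
spec:
  controllerConfig:
    overrideArgs:
    # Resolve DNS-01 self-checks over DoH instead of the cluster's name servers
    - "--dns01-recursive-nameservers=https://1.1.1.1/dns-query"
    - "--dns01-recursive-nameservers-only"
```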
You can do that without having to change any DNS configuration. And over to Roger for observability. Thank you both. We heard from Jose in the spotlight about OpenTelemetry and power monitoring, so let's continue and discover the possibilities of OpenShift observability. We continue our mission to transform data into insights.
As we dive into the latest and greatest of the OpenShift 4.15 release, we will explore the enhancements across our five pillars of OpenShift observability and help you turn data into answers. Next slide, please. Looking at the monitoring section, we have continued the release of the Cluster Observability Operator in tech preview, which introduces the MonitoringStack custom resource definition as an additional feature set. That will allow you to run highly available monitoring stacks consisting of Prometheus and Alertmanager, and some additional observability components will probably be added in a future release. We are also switching to Metrics Server in tech preview, which is part of making the built-in monitoring somewhat optional.
We are adding kubelet staleness handling, which allows you to use staleness markers for low-latency staleness detection. And for user workload monitoring, we now allow users to benefit from exemplar storage. Exemplars are references to data outside of a metric set; a common use case is attaching IDs of program traces, for example. Another great feature is that Prometheus now tolerates scrape timestamp jitter. When Prometheus does its chunk compression, it relies on scrape times being accurately aligned; because of its delta encoding, jitter in scrape times can cause the time series database to occupy significantly more space. By tolerating this jitter, we have observed up to a 50 percent difference in some cases in the disk storage for a replicated HA pair. Some other improvements: you can now query alerts for application namespaces in user workload monitoring.
The PTP operator now supports an alert if the node clock is not synchronizing, and external labels that you configure will now be visible in the alerts triggered in the console. Next slide, please. If we look at logging, the OpenTelemetry data model is now supported for Vector and Loki in tech preview. We also added log forwarding integration with Azure, object storage federation for Loki with AWS STS, and Vector can now receive logs from syslog. So, a lot of convenience updates and features.
We also continue our roadmap of delivering UI features in the web console: you can now look at log-based metrics in the UI, and you can also search across multiple namespaces. Next slide, please.
In distributed tracing, the Tempo operator is now generally available as the preferred backend to store traces, and it comes with support for the ARM architecture as well. Tempo is the scalable distributed tracing solution that we support, and in order to focus on this new stack, Jaeger will now be deprecated. We will continue providing critical CVE and bug fixes for these components for at least two releases. Tempo also ships the Jaeger user interface for its traces, and in the latest release we now have support for span request count, duration, and other RED metrics. This constitutes a great leap for our customers, who can visualize performance monitoring for their applications on the fly in the monitoring tab, without investing in any third-party platform. And last but not least, we have just enabled, in developer preview, the Tempo monolithic deployment. This is a drop-in replacement for the Jaeger all-in-one flavor, allowing for easy local deployments, but not for production.
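For local experiments, the monolithic deployment mentioned above is driven by a single small custom resource. A minimal sketch, with the caveat that this is a dev-preview API and the kind, defaults, and namespace below are assumptions subject to change:

```yaml
apiVersion: tempo.grafana.com/v1alpha1
kind: TempoMonolithic
metadata:
  name: sample
  namespace: tracing-demo    # hypothetical namespace
spec: {}                     # defaults to in-memory storage; fine for testing, not production
```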
I'll now hand over to Tomasz for the updates on Insights. Thank you. Hi, everybody. This release of Red Hat OpenShift Insights brings deployment validation for on-premise clusters. We previously introduced this technology for our managed offerings, and now we're bringing it to the general population as well. You can get answers on whether your developers are actually creating deployments with improper resource limits, wrong disruption definitions, wrong networking policies, and more, to give you the power to check how developers are influencing the behavior of the core platform. Additional features include conditional data gathering, which reduces the footprint of Insights data being sent to Red Hat to only the data we need when we see an issue with your cluster. Fleet insights is a new integration in Red Hat ACM, which allows you to display a summary of the most critical and important information from failing operator conditions, alerts, and Insights recommendations.
With that, I'll hand over to Ali with console news. Thank you. I'll be covering for Ali. Next slide, please. On the dynamic plugin framework, we introduce enhancements incrementally with every release of OpenShift. In 4.15, we have added a new details page extension, so when customers or partners are creating a dynamic plugin for their CRs, they don't have to create a details page from scratch: there is a template and a starting point they can use, and we have added examples of that in the CronTab example repo for dynamic plugins, which you can find on GitHub. The other enhancement in this release is that we now support both PatternFly 4 and 5, so customers with existing plugins can plan their upgrade to PatternFly 5 on their own time. Thank you so much.
So let's talk about what's new in the developer tools area for OpenShift. We have some exciting announcements for the developer perspective: you'll see that we have taken advantage of the dynamic plugin framework we were just talking about, and we have now released a new dynamic plugin dashboard for OpenShift Pipelines. There's also a brand new experience for creating serverless functions in the developer perspective. Podman Desktop continues its journey of enabling better integration with OpenShift.
Now you can actually create clusters locally on the desktop if you have OpenShift Local, so you can go from Podman Desktop into OpenShift Local; once you start deploying applications, it also shows you the contexts and allows you to manage them directly from Podman Desktop. For developers who prefer to use the IDE to interact with OpenShift, the IDE extensions that we have, OpenShift Toolkit and OpenShift Serverless, have both been enhanced with new capabilities, including Helm charts, allowing you to do remote deployments directly from the plugins.
That's been exciting for developers who prefer to use IDEs. And last but not least, Developer Hub is now GA, and it offers plugins and templates that you can use to do OpenShift deployments and monitor the application as it's running, including the topology view plugin that comes with Developer Hub, accessing pipeline runs via the Tekton plugin, monitoring the health of the container image directly from Developer Hub, and viewing the clusters from OCM as well. Some really exciting updates for developer tools. I'll hand over to James Faulkner to talk about runtimes. All right.
Thank you. So hello, everyone. OpenShift customers have access to a wide range of runtimes as part of their subscription, so I'd like to highlight some of the new updates to several of these runtimes. First of all, getting apps onto the platform is sort of step one, and the new version of Migration Toolkit for Applications, version 7.0, can really accelerate that onboarding. It brings new support for more than just Java, including tech preview support for Golang, with plans in the future for .NET, TypeScript, and Python as a sneak peek into the future. Quarkus 3.8 will bring new Dev Services support for the major new Redis 7.2 release, as well as support for Java 21 and virtual threads, and finally the ability to build native executables targeting the ARM platform will be supported. Also, we have Node 20 and Java 21 images available in the container catalog, including a stripped-down version for Java runtime-only images that minimizes footprint and attack surface. And finally, we've added regular testing and verification of the upstream Spring Boot versions 3.1 and 3.2 and beyond, as they evolve, to try to catch issues that may come up in this popular framework on the platform. On the next slide, I want to highlight one other major update in the runtime space, which is the Red Hat build of Keycloak version 22. This is the next version of our web single sign-on and identity and access management solution, and along with a bunch of functional updates, it's also the first version to be built on Quarkus itself,
This brings the performance improvements and developer productivity capabilities of Quarkus, things like live coding and continuous testing. If you're already a Red Hat Single Sign-On user, you'll be familiar with the enterprise capabilities it delivers, and this version is really no different in that respect. You can find container images for the new release available through the Container Catalog, and it can also be easily provisioned and managed through its operator, available through OperatorHub.
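As a rough sketch of what operator-based provisioning can look like, here is a minimal Keycloak custom resource. The CRD group/version follows the upstream Keycloak operator; the hostname, secret names, and database host are illustrative assumptions, so verify the exact schema against the Red Hat build of Keycloak documentation for your release.

```yaml
# Hedged sketch: a minimal Keycloak instance managed by the operator.
apiVersion: k8s.keycloak.org/v2alpha1
kind: Keycloak
metadata:
  name: example-keycloak
  namespace: keycloak
spec:
  instances: 1
  hostname:
    hostname: keycloak.apps.example.com   # illustrative hostname
  http:
    tlsSecret: example-tls-secret         # pre-created TLS secret (assumption)
  db:
    vendor: postgres
    host: postgres-db                     # illustrative database service name
    usernameSecret:
      name: keycloak-db-secret
      key: username
    passwordSecret:
      name: keycloak-db-secret
      key: password
```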
And finally, for existing users of Red Hat SSO, we've published a very comprehensive migration guide for moving to this new release. So that's it for Keycloak. I'll pass it on to Jamie for platform services. Thanks, James.
Alright, so Service Mesh helps you create reliable microservices with automated mTLS, zero trust policies, and visibility with metrics and traces. The Service Mesh 2.5 release uses Istio 1.18 and Kiali 1.73. The release brings full GA support for Service Mesh on ARM clusters and makes the Kiali OpenShift console plugin GA. We've also added extension providers for Zipkin and OpenTelemetry to provide greater choice for observability integrations, such as for the new OpenTelemetry Collector tracer. This release includes a developer preview of IPv4/IPv6 dual stack, as well as the Kiali Backstage plugin for Red Hat Developer Hub.
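To make the extension provider idea concrete, here is a hedged sketch of registering an OpenTelemetry provider on a ServiceMeshControlPlane. The `extensionProviders` layout follows Istio's `meshConfig`; the collector service name and port are assumptions, so check the Service Mesh 2.5 documentation for the exact supported shape.

```yaml
# Hedged sketch: an OpenTelemetry extension provider on an SMCP.
apiVersion: maistra.io/v2
kind: ServiceMeshControlPlane
metadata:
  name: basic
  namespace: istio-system
spec:
  version: v2.5
  tracing:
    type: None        # rely on the extension provider below instead of the legacy tracer
  meshConfig:
    extensionProviders:
    - name: otel
      opentelemetry:
        service: otel-collector.istio-system.svc.cluster.local  # assumed collector address
        port: 4317                                              # OTLP gRPC port
```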
We've also made significant updates to the Sail operator, which is our dev preview for OpenShift Service Mesh 3. There's a new blog post on our progress, which you should check out. We're working towards technology preview in the first half of this year, with general availability planned late in the year. I'll now hand off to Harriet to talk about GitOps. Thanks, Jamie.
Hi, so there'll be two GitOps releases of interest to those of you upgrading to OCP 4.15: OpenShift GitOps 1.11, which we released in December, and 1.12, which is coming out in the middle of March. A couple of standout features. In 1.11 we introduced dynamic shard rebalancing as tech preview. So when you're managing a lot of applications with Argo CD, you're probably already scaling your application controller, but it can be hard to manage the replicas and get a good distribution of cluster resources per shard. With dynamic rebalancing, you can set your min and max shards and let the round-robin algorithm take care of the rest. In 1.12, we've got notifications going GA, as well as three new tech preview features for you. We'll have official support for the upstream Argo CD CLI.
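A rough sketch of what enabling the dynamic shard rebalancing tech preview might look like on the ArgoCD resource. The field names below follow the Argo CD operator CRD as I understand it and should be treated as assumptions; verify them against the GitOps 1.11 release notes before use.

```yaml
# Hedged sketch: dynamic application-controller sharding (tech preview).
apiVersion: argoproj.io/v1beta1
kind: ArgoCD
metadata:
  name: example-argocd
spec:
  controller:
    sharding:
      dynamicScalingEnabled: true   # let the operator scale shards
      minShards: 1
      maxShards: 4
      clustersPerShard: 10          # illustrative target distribution
```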
We've added a Rollouts traffic management plugin for OpenShift routes. And we are very excited to launch a tech preview of OpenShift GitOps for MicroShift. We'll have a manifest-based install that only includes the core Argo CD components, which enables you to optionally bundle a much smaller footprint version of GitOps in with your MicroShift image. I'll hand over to Kusta to talk about pipelines.
Thanks, Harriet. So, OpenShift Pipelines is basically your cloud-native CI/CD solution based on Tekton. With OpenShift 4.15, you'll get OpenShift Pipelines 1.14.
We have focused on observability and performance of Tekton at scale. On the observability front, in the last quarter we released Tekton Results as tech preview, which helps you archive your Tekton objects. It's still in tech preview, but there are certain new features we have introduced in 1.14, which include, for example, that users can bring their own external database or storage. We have also introduced a new API for summarizing the logs with various kinds of filters. As Parag mentioned, Tekton Results now has a deep integration with the OCP console, so even if your pipeline runs or task runs are archived, you can see their logs inside the OCP console itself.
We have also shipped, as Parag mentioned, a CI-centric dashboard for OpenShift Pipelines. The other area, as I mentioned, was performance. We did a good amount of Tekton controller performance testing, and some of our customers reported that, for example, Tekton pod startup time was getting slow at scale. So in this 1.14, we recommend enabling HA for the pipeline controllers, which will certainly help improve performance. Pipelines as Code also gets new features, like multiple GitHub apps support, remote pipeline support, and resolver improvements. We also validated the Secrets Store CSI driver integration; the specific use case we had in mind was making RHEL entitlements available to the pipelines we ship with OpenShift Pipelines. Finally, the other console improvements include a vulnerability column.
So basically, for example, Boaz talked about ACS scanners. You can integrate ACS scanners with Tekton, and once you do, you'll see all the vulnerabilities reported in the details page as well as the listing page of pipeline runs. We have also introduced indicators for whether a pipeline run is signed or not. Over to Nina.
Thank you, Gustav. OpenShift Serverless is an add-on that elevates the OpenShift platform experience by offering better autoscaling and networking for containerized microservices and functions. It is based on the upstream project Knative, and with the 1.32 release, we update it to Knative 1.11. Serverless is now platform agnostic, that is, tier 2 support, and has fewer releases per year to support you better. Serverless Functions is a programming model that increases your developer velocity by providing templates for jumpstarting your app and doing container creation for you, and here we have now added PVC support and a prominent dev console presence. One of the most asked-for features for Serverless has been multi-tenancy, and we now have multi-tenancy support through Service Mesh, as tech preview, for both serving and eventing needs. For edge use cases, Serverless is now supported on single node OpenShift, and we have also added more configuration options for enhanced security and performance.
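For context, the basic unit that Serverless scales is a Knative Service. A minimal one looks roughly like this; the image and environment variable are placeholders.

```yaml
# Hedged sketch: a minimal Knative Service that Serverless will
# scale to zero when idle and back up on request.
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: hello
spec:
  template:
    spec:
      containers:
      - image: quay.io/example/hello:latest   # placeholder image
        env:
        - name: TARGET
          value: "OpenShift Serverless"
```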
For more details on this and more, please check out our release notes. And with that, I will hand it over to Ju to learn what's new in installer flexibility. Thank you, Nina. For installer flexibility, we offer diverse onboarding experiences that provide varying levels of control and automation: automated, or installer-provisioned infrastructure, where the installer controls the full installation experience, including infrastructure provisioning, with an opinionated best-practices deployment of OpenShift; then full control, or user-provisioned infrastructure, where you are responsible for provisioning and managing your own infrastructure, allowing you greater customization and operational flexibility and control.
The third one is an interactive, or connected, experience with the Assisted Installer, which provides a web-based user interface that guides you to create a cluster. And finally, local, for disconnected experiences with the agent-based installer, which provides a streamlined method to deploy OpenShift in restricted network environments, like fully disconnected or air-gapped environments. So let's dig in and look at some of the new installer features for this new release. Let's focus first on the highlights for installation on cloud providers. For AWS, alongside the support for AWS Outposts and AWS Wavelength, as I mentioned at the beginning of this session, we have also added support for a new AWS region in Tel Aviv.
We are also making MTU configurable at install time, which is specifically useful for customers deploying workloads on AWS edge locations like Wavelength, Local Zones, or Outposts, where the network maximum transmission unit is limited between the edge locations and the AWS public regions. For GCP OpenShift deployments, a user can now choose their own DNS while deploying OpenShift, instead of the GCP Cloud DNS service.
CCM has graduated to GA, so OpenShift deployments that are running on the legacy in-tree CCMs will now use the external Cloud Controller Manager. Deploying OpenShift on Oracle Cloud Infrastructure with VMs is now available as a tech preview; installs on Oracle Cloud Infrastructure can be performed with the Assisted Installer or agent-based methods I mentioned before. For IBM Cloud VPC, we have added support for disconnected and air-gapped installations, as well as specifying a user-managed key.
So the cluster infrastructure objects can use this user-managed key to encrypt information. And finally, for Azure, we are extending the user-managed key support to also encrypt the Azure storage accounts used for OpenShift. With this, I'll hand over to Ramon, who will share what we are doing for on-premises platforms.
Thank you, Mac. So we start with the agent-based installer, where we have added four features. First, in 4.15 you can now configure bare metal hosts on day 1. Essentially, you introduce in the install-config.yaml file, as you see on the right, the username, password, and address of your BMC to have the nodes managed. Keeping with improvements on day 1, we are also adding install-config features like device hints, to be able to specify the device you want to install OpenShift on in your bare metal nodes, host networking configurations, and other configurations, directly in this file, to simplify and unify what we are doing with the agent-based installer. We can also now configure the credentials on day 1, right along with the installation, when you create the image. And finally, we are introducing GA for platform external support, just like we've done for Oracle Cloud Infrastructure, which uses platform external, so that you can install from the agent-based installer. Other integrations with new platforms will benefit from this. Next slide, please.
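The day-1 BMC configuration described above can be sketched as an install-config.yaml fragment like the following. The MAC address, BMC address, credentials, and device hint are all placeholders; see the 4.15 documentation for the full host schema.

```yaml
# Hedged sketch: day-1 bare metal host and BMC details in
# install-config.yaml for the agent-based installer.
platform:
  baremetal:
    hosts:
    - name: master-0
      bootMACAddress: "aa:bb:cc:dd:ee:ff"   # illustrative MAC address
      bmc:
        address: redfish-virtualmedia://10.0.0.10/redfish/v1/Systems/1
        username: admin                      # placeholder credentials
        password: changeme
      rootDeviceHints:
        deviceName: /dev/sda                 # device hint: install target disk
```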
Now, OpenShift on vSphere. Two main things we wanted to highlight. First, we are documenting, and this is ready to see in the 4.15 documentation, the minimum privileges you need for vSphere. This will allow you to set granular permissions while staying secure and functional at the same time, in both IPI and UPI. Here's a screenshot of the new documentation that you are going to see in 4.15 for this part. And secondly, control plane machine sets are now technology preview for vSphere, to give you simpler management of your control plane. Next slide, please.
Now, Nutanix. Nutanix is one of the platforms that's growing the most in terms of new clusters, and we are adding features with every release; this time, zones in Nutanix. We have the concept of Nutanix Prism Elements; these are basically what you would call a cluster.
With this new feature, you can now distribute your control plane and also your compute nodes across multiple Prism Elements, that is, multiple clusters, which essentially will also allow you to configure separate subnets.
On the right, you see an example of this for the different zones, or failure domains, that you want to distribute your cluster across, and this is available in IPI-deployed clusters.
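A hedged sketch of how such failure domains might be declared in install-config.yaml; the names, UUIDs, and subnet values are illustrative, so verify the field layout against the 4.15 Nutanix installation docs.

```yaml
# Hedged sketch: Nutanix failure domains (zones) in install-config.yaml,
# each mapping to a Prism Element (cluster) and its subnets.
platform:
  nutanix:
    failureDomains:
    - name: fd-1
      prismElement:
        name: pe-1                                     # illustrative Prism Element
        uuid: 00000000-0000-0000-0000-000000000001     # placeholder UUID
      subnetUUIDs:
      - 11111111-1111-1111-1111-111111111111           # placeholder subnet UUID
    - name: fd-2
      prismElement:
        name: pe-2
        uuid: 00000000-0000-0000-0000-000000000002
      subnetUUIDs:
      - 22222222-2222-2222-2222-222222222222
```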
Next slide, please. Now, OpenShift on bare metal. In this release, we are adding support for hardware RAID configuration on day 1 via Redfish on Dell nodes. Remember that hardware RAID configurations are specific to hardware vendors, right? So up until now we supported Fujitsu through their BMCs, called iRMC; now Dell, with iDRAC; and in the future, keep an eye on this, we are also adding HPE and Supermicro, beyond 4.16.
Next slide, please. Now, oc-mirror. I recently started covering oc-mirror, so if you have any feedback or anything you need from oc-mirror, I'm your product manager.
In this release, we are introducing a developer preview of what we call enclave support. This means that with oc-mirror, you can now have a centralized, disconnected image registry that you update against the registries Red Hat publishes, and from it, you can maintain multiple other disconnected registries, called enclaves, at the same time. With this, you will save time and bandwidth by mirroring only the images that you need to each of your enclaves to support different environments. And with this, I'll pass it over to Gil. So, what's new for OpenShift on OpenStack in 4.15? First of all, we are debuting dual stack support on OpenStack, which is basically OpenShift deployed with dual stack, and this is for UPI and IPI clusters alike.
In addition, we're introducing an improvement to how we consume fast datapath instances. In tech preview, we're introducing an improvement to how the machine sets are deployed and catered for, specifically for telco customers, and that's basically OVS-DPDK, so DPDK virtio, and also SR-IOV with DPDK. This is something that our telco customers running OpenShift have been asking for for quite some time. And last but not least, this is the end of the road for Kuryr. We introduced documentation in 4.14 to migrate off Kuryr towards OVN-Kubernetes, but starting from 4.15, we will not entertain any new installations with Kuryr, and upgrades will also be blocked.
So keep that in mind. And with that, I'm going to turn it over to Mark. Hey, thanks Gil, much appreciated. For CoreOS land: first up, the ARM64 64k page size kernel is available.
As Erwan mentioned, this is recommended in most scenarios for better performance. Next up, we have tech preview support for installing RHCOS to the primary disk over iSCSI, for devices that support the iSCSI Boot Firmware Table driver (iscsi_ibft). There are some known issues in some cases, but if you have the hardware, please give it a try. It'll be tech preview.
Last on CoreOS, custom image installs are coming. This is going to come via RHEL image builder. The initial release is only going to support raw format, so it won't be immediately useful for out-of-tree driver scenarios.
But custom live ISO support is a top priority. On the Machine Config Operator, we've tweaked the config merging logic so that custom pool settings always take priority over the worker pool. Our prior logic depended on the alphanumeric order of the pool names, and that sometimes led to undelightful results. Finally, 4.15 brings a tech preview of enhanced MCO state reporting, which will bring more correct and detailed information about the phases of configuration rollouts and updates.
This should aid everybody immensely in troubleshooting, giving administrators an API they can rely on to understand the state of their machines. Next up, Gaurav with control plane news. Thank you, Mark. Enabling FUSE. So, conceptually, containers are very secure, right? Nothing can seep in or seep out.
But in certain use cases, the container needs to mount FUSE filesystems. So in 4.15, we are enabling /dev/fuse in containers, and the benefit is that you will see better and faster builds in pods. Next slide. Deprecating ICSP.
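As a hedged sketch of the successor resource type, here is an ImageDigestMirrorSet (IDMS), which replaces the older ImageContentSourcePolicy; the mirror and source values are illustrative.

```yaml
# Hedged sketch: an ImageDigestMirrorSet redirecting digest-based
# image pulls to a disconnected mirror (values are placeholders).
apiVersion: config.openshift.io/v1
kind: ImageDigestMirrorSet
metadata:
  name: example-idms
spec:
  imageDigestMirrors:
  - mirrors:
    - mirror.example.com/redhat          # your disconnected mirror registry
    source: registry.redhat.io/openshift4
```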
Very important: deprecation is not removal, but we are encouraging customers to start using IDMS. In 4.15, we will support both in the same cluster, and we have a migration procedure documented, where you can see how to migrate from ICSP to IDMS. Next slide. Must-gather — you might have used it.
You use it to gather the logs and send them to Red Hat. It runs on the master node. Sometimes, when your cluster is too big, it has too many logs, and the logs get into gigs and gigs; it tends to fill up the master node. So now, in 4.15, we are going to cap it out of the box at 30 percent of the total volume, so that it will not fill up your master node. Next is selective workload monitoring with VPA. When you start VPA, it is set on a namespace, and for all the applications deployed in that namespace, it monitors everything, right? And the problem that we're seeing is that it takes too much memory and CPU to monitor everything. Now, with 4.15, we're giving you the ability to select just the few applications that you want to monitor, and the benefit you'll get is that it will save CPU and memory resources. That's all from my side. Next presenter.
Thank you. So now, let's look at some of the other networking highlights for this release. First up is the announcement of the removal of the OpenShift SDN CNI option for all newly installed clusters at 4.15. So, you heard it right: starting 4.15, the OpenShift SDN CNI plugin will no longer be an install-time option for new clusters, across all installation options and all platforms.
Of course, we have a caveat: IBM Power Systems are exempt for this release; this will, however, apply to Power Systems at OCP 4.16. Note that customers currently using OpenShift SDN can upgrade to 4.15 or 4.16, and that will continue to remain fully supported. Next up: multi-network on Kubernetes, or OpenShift, is enabled by Multus, right? You can have multiple interfaces on your pods. What does this provide? Enhanced tenant isolation, regulatory compliance, and support for advanced network topologies. But you ask, what about security? So with the support of the Kubernetes upstream multi-network policy in OpenShift, we aim to extend the existing network policies to non-primary interfaces on the pod.
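A hedged sketch of what such a policy looks like: a MultiNetworkPolicy targets a secondary network attachment via the `policy-for` annotation. The network name and pod labels below are illustrative.

```yaml
# Hedged sketch: restrict ingress on a secondary (Multus) network
# attachment; only frontend pods may reach backend pods on tenant-a-net.
apiVersion: k8s.cni.cncf.io/v1beta1
kind: MultiNetworkPolicy
metadata:
  name: allow-frontend-only
  annotations:
    k8s.v1.cni.cncf.io/policy-for: tenant-a-net   # target NetworkAttachmentDefinition
spec:
  podSelector:
    matchLabels:
      app: backend
  policyTypes:
  - Ingress
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app: frontend
```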
In this release, we announce the support of dual stack, as well as the SR-IOV kernel CNI. Next slide, please. We have the latest version of the Network Observability operator, 1.5, releasing very soon, and we have a bunch of exciting features that help you not only understand and monitor, but even navigate your networks better. As you can see on the slide, you can observe your traffic across zones: we're introducing reporting of traffic on a per-cluster and per-zone basis. We are also introducing reporting of round-trip time (RTT) on a per-flow basis, which helps you with latency analysis.
We also have DNS tracking enhancements, as we now support TCP as well as UDP. And along with this, many more features have gone into this release to make install easier, configuring the operator easier, and observing network flows better. This is an add-on operator; please give it a try and let us know how you feel about it.
Next slide, please. Thanks. Over to Tony for operator updates. Thank you. This release focuses on making things more secure when installing operators that leverage short-lived token authentication.
Think of it like locking your house door with a special code that changes every few minutes: even if someone gets the code, they only have a tiny window to unlock your door before it becomes useless. That's the kind of security we are bringing to operator installations. More operators are getting on board with this standardized approach, so users can benefit from this extra layer of protection. I won't go over everything here, but in general, OperatorHub will walk you through entering the necessary information during installation for those operators that support short-lived token authentication.
Then the CCO (Cloud Credential Operator) will automatically manage the tokens, giving operators secure access to resources in your cloud account. So if you are looking for a more secure and streamlined way to install operators that talk to cloud APIs, keep an eye out for OCP 4.15. Next slide, please. Here are other key improvements in this release. First off, let's talk about making your life easier.
Imagine you have dozens of operators running across hundreds of clusters. Wouldn't it be great to easily tell which operators are out of support? For that, we've added easy access to deprecation information in the OLM APIs, so you can stay on top of your operators and make sure you are always staying within the support boundary. We are also working with the internal build pipeline and operator teams, so more operators are getting on board with these new features. We are also peeking into the future with the OLM 1.0 tech preview, which has two key features. Z-stream auto-updates automatically apply all the security fixes without you manually applying them, and the best part is there are no worries about breaking changes from the auto-updates. We've also revamped the way catalogs work, to provide greater visibility into the contents with fewer resources and faster updates.
We'd love to hear your feedback on these tech preview features, and please stay tuned for more to come. Next, I'll hand it over to Gregory for storage. Hello everyone, and thanks, Tony. Let's start the storage update with what's new on the CSI drivers. For GCP Filestore, we are now adding support for shared VPC deployments.
That's a common deployment model that facilitates network management. On the IBM side, we added a bring-your-own-key capability, which enables encryption automatically with a key provided by the user during the installation. Finally, we are introducing an optional wipe parameter to LSO (the Local Storage Operator), which removes all the existing partition table metadata; that allows environments to be easily redeployed without manual intervention. On the CSI migration side, since OCP 4.14, all drivers that are shipped as part of OCP have migration enabled, so we recommend switching the default storage class to CSI on upgraded clusters, as this is not done automatically. Last but not least, we are graduating the retroactive storage class assignment feature to full support. This allows PVCs with no storage class defined to be retroactively assigned to the default storage class as soon as it's set by an administrator, which avoids them remaining stuck in a pending state. Next slide, please.
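Switching the default storage class comes down to the standard default-class annotation on the CSI class. A hedged sketch, using an AWS EBS CSI class as an illustrative example:

```yaml
# Hedged sketch: marking a CSI storage class as the cluster default
# after migration (class name and provisioner are illustrative; remember
# to remove the annotation from the old default class).
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: gp3-csi
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
provisioner: ebs.csi.aws.com
parameters:
  type: gp3
```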
Next up, we are adding full support for a new feature that improves OpenShift's behavior when a node is shut down ungracefully. Indeed, when a node is shut down and that is not detected by the kubelet, the pod and volume attachments are not properly deleted, and that requires manual intervention. By tagging the faulty node with an out-of-service taint, the volume attachments are automatically released and workloads can be rescheduled elsewhere. But there is not always an admin available to manually taint the nodes.
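The out-of-service taint in question is the standard Kubernetes non-graceful node shutdown taint; as it appears on the Node object (node name is illustrative):

```yaml
# Hedged sketch: the out-of-service taint on a node that was shut down
# ungracefully, letting the control plane detach volumes and reschedule
# the affected pods.
apiVersion: v1
kind: Node
metadata:
  name: worker-2          # the failed node (illustrative name)
spec:
  taints:
  - key: node.kubernetes.io/out-of-service
    value: nodeshutdown
    effect: NoExecute
```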
So we added support for this feature in the Self Node Remediation operator: by choosing the out-of-service taint remediation strategy, the operator will automatically add the taint as part of the remediation process. Next slide, please. Right, good news on the SELinux front, with a very much expected feature that improves how we apply security contexts to volumes.
We are introducing SELinux context mounts as tech preview, which massively improves the time it takes to apply an SELinux context, especially for volumes with a lot of files. The bottom line is that instead of doing recursive labeling, we are adding the context at mount time. This applies only to the RWOP access mode for now, and we are actively working upstream to extend that to RWO and RWX. Please note that this is driver dependent; the capability needs to be enabled in the driver, and we've done so on all OCP-shipped block CSI drivers.
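A hedged sketch of the shape of workload this helps: a pod with an explicit SELinux level whose volume uses the ReadWriteOncePod access mode, so the context can be applied as a mount option rather than by relabeling every file. Names, image, and level are illustrative.

```yaml
# Hedged sketch: a pod eligible for mount-time SELinux labeling
# (its PVC must be created with accessModes: [ReadWriteOncePod]).
apiVersion: v1
kind: Pod
metadata:
  name: selinux-mount-demo
spec:
  securityContext:
    seLinuxOptions:
      level: "s0:c123,c456"   # context applied at mount time, not via recursive relabel
  containers:
  - name: app
    image: registry.access.redhat.com/ubi9/ubi-minimal
    command: ["sleep", "infinity"]
    volumeMounts:
    - name: data
      mountPath: /data
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: data-rwop    # illustrative RWOP-mode PVC
```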
RWOP is also tech preview at the moment. Next slide, please. Alright, on the LVM Storage side, we have a couple of very interesting new additions. First off, it's now designed for and supported with FIPS. Also, we are adding multi-node support: up until now, LVMS was only supported on single node deployments, and with 4.15 there are no restrictions on that front anymore. That being said, just to be clear, the same storage approach remains: data is stored on the local node's disk, therefore there is no failover mechanism.
If a worker goes down, the pods relying on those PVs can't be rescheduled elsewhere, so we recommend using this solution only for workloads that provide their own resiliency. Other than that, we now support software RAID, and we added the same wipe option we mentioned earlier for LSO. Next slide, please. All right, let's finish with the ODF updates.
In this release, we expand Regional-DR for block and file, using the ACM console, to existing deployments. We are also adding full support for the non-resilient storage class, also known as replica 1. This is intended to support applications that manage resiliency at the application level, or when using resilient underlying storage. What else? We also added performance profiles that can be set during deployment and post-deployment, with the option to change them later on. We also added an option for customers that are using internal mode to also connect to an external Ceph cluster on top of that, so clusters can now be expanded, and that improves the overall capacity. You can also define data locations or configure multiple storage tiers. And that is it for the storage updates.
Handing over to Michael for Telco 5G. Thank you. Thanks a lot. Some pods, which are in telco core use cases, require dedicated CPUs, because they need to handle a DPDK-based data plane. Such pods usually consume a full CPU if they need, for example, one of the
2024-02-28 07:04