The future of generative AI: What enterprises need to know | Oracle CloudWorld 2023
[MUSIC PLAYING] ANNOUNCER: Please welcome Greg Pavlik. [LIVELY MUSIC, APPLAUSE] GREG PAVLIK: Hello and welcome to CloudWorld. Delighted to have you here.
And this is one of my favorite topics to talk about. We're going to talk about generative AI which is getting a lot of excitement both here at CloudWorld and just in the industry at large. We'll start out by giving you a little view of what we mean when we say generative AI. So we had this early first generation of machine learning capabilities. And they were focused on working with structured data, often working with numbers, and giving us the ability to do things like predictions, classifications, and finding ways to not only recognize patterns but operationalize against them.
Very powerful techniques, still very relevant today. But generative AI is new, and it's different. And what it brings to the table now is models that can actually create content that looks and feels as if a human being had created the content. That could be something like an email. It could be even writing some snippets of code or creating videos. These models are getting larger.
They're getting more powerful. They're able to do more interesting things. There are two distinct characteristics that you should be aware of with generative AI. One is the generative function.
It's able to create things based on patterns that it's seen. You train it on large sets of data, and it leverages observations or learnings from those patterns. And it's able to create content based on those.
And the other aspect, which is significant as well, is generalization. Based on those patterns, it can be asked to do things or to create content that's not exactly like what it's seen before. So it's able to generalize as well.
Very powerful stuff. The thing about generative AI is there's a ton of misconceptions out there today. And I think one of the reasons there are so many misconceptions is that a lot of what's made the news is really oriented toward the consumer. These are really interesting, neat examples where we have models that are trained on a vast corpus of internet data.
And they can do things like pass a bar exam. They can write a term paper. They can do relatively complex tasks, like for example, compare two philosophers and their thought.
Or they can just simply give you a bit of information that you would maybe have used Google to look up in the past. So you're kind of accessing information from the vast corpus of internet data. The challenge here is that's not what most businesses need. Most businesses aren't trying to write term papers about philosophers. They're not trying to ask questions of the internet in order to make business decisions.
What businesses need are models that reflect business specificity. These are models that can be fine tuned or adapted to a specific organization's data, and they're designed to deliver outputs that look and feel like they're relevant to the business. So a lot different than what you see with the consumer space. So we're going to focus most of the discussion today on, what does it mean to take generative AI into the business? And we're going to start out here with a really simple example.
This is a custom application. It's using generative AI to solve for a number of problems that a common knowledge worker is going to encounter in their day-to-day job, and to show how generative AI can make that work more efficient, faster, and more accurate. ANNOUNCER: Meet Sarah, who works as a sales account executive at FoxBox.
As she checks her calendar, her manager forwards her a meeting with Vision Corp. She needs to make the best first impression and deliver a pitch with as much information as she can gather before the meeting. To get up to speed quickly, she leverages the power of generative AI.
First, she'll get her bearings and ask a general question. The large language model here understands and addresses the multiple questions that are embedded in this single run-on sentence. At a glance, Sarah can see what industry Vision Corp is in and the fact that they are a long-time customer. Even more interesting, Sarah can now search over her company's proprietary sales data to find out what Vision Corp has mentioned as potential use cases and get ideas for valuable scenarios to pitch in her meeting with them soon. In this response from the generative AI system, we learn that Vision Corp has had multiple conversations related to the Fox 360 products about a call center analytics use case.
As a sales executive, this is exactly the type of signal that she wants to dig into further. Now, Sarah can pivot into searching over another internal database. This time, the generative AI system is able to understand a complex query and summarize information from an internal knowledge center with thousands of how-to articles and requests for proposals. So she learns that there are four particular products that are required for this call center analytics use case.
They are the four that are in bold on the screen here. To personalize the pitch, she'd like to know if this customer is using any of those four products based on their procurement contract. So here, you're seeing that large language models can analyze and summarize large documents, like legal contracts or procurement contracts, to find the most important words and ideas, pulling out the most essential data and presenting the information clearly and concisely here within a really nice table. Sarah now knows exactly which two missing products she can upsell to the customer to complete their desired use case.
So you can see here on the right-hand side, it could take a half hour or more to find this information if she had to read through tens, if not hundreds, of pages herself. But with this generative AI system, she was able to find it in a few seconds. With the information she got from the generative AI system in a few minutes, Sarah tailors her presentation to the call center-specific use case, including a SKU price sheet and a proof-of-concept trial at the executive briefing with the customer. Not surprisingly, the Vision Corp team was blown away and wants to follow up with a deeper discussion.
Sarah is now supposed to meet with their chief architect to give a deeper dive on the product that they could adopt. So she must learn a little bit more about this chief architect Gavin before scheduling the meeting. So this time, she's getting a bit of information summarized from LinkedIn. Now, she can actually offload that work of reaching out with automation built into this generative AI system. Instead of just returning a text response, what you'll see here is that the generative AI system acts on the query from Sarah, cross-referencing her calendar with the time zone listed in Gavin's LinkedIn, finding three potential slots as requested and generating an email response.
In just a few seconds, she's able to follow up without even having to open her email or type a single word. Now, imagine how productive the next hour, the next day, or the next week of work could be for her if she has the power of these generative AI technologies at her fingertips. GREG PAVLIK: OK, so quick demo. Really brief look at a workflow that intersects with a typical day in the life of an account exec at a fictitious company.
We saw that in a matter of two to three minutes. But in reality, if you were to do all the tasks that she tried to accomplish, that could have taken her half a day, maybe even the full day to accomplish. So you can immediately see the kind of productivity gains that generative AI can bring to the table for just a typical job in a typical business.
Let's take a step back a second because I want to talk a little bit about what's going on under the covers. Let's look a little bit at how this magic works, so you get a little bit deeper understanding of what generative AI is and what it can accomplish. I talked earlier about this classical type of machine learning. In that case, models were really looking at patterns and using them to make predictions or classifications and recognitions. So we have an example here where you have a computer vision model.
It's trained on a series of pictures of a dog. And it's able to recognize when it sees a new picture that contains a dog. So it's all about pattern recognition, which is a powerful thing. This is the basis for things like autonomous cars and autonomous driving. But generative AI goes a step further. Here, the models are able to use the learnings that they have from being trained on these patterns in order to create something, almost like a child.
If you show a child a series of pictures of a dog and point out that this is a dog, at some point in their development, they're going to be able to go and create a picture of a new dog. And that's kind of the way that generative AI works. The area that's gotten, I think, the most attention, and where we've been putting the most investment from a business perspective, is large language models, so doing natural language processing.
So these models work by making statistical inferences about what the next desired word in a sentence will be. It's a very simple concept, but it turns out to be very powerful. One way you could think about these models is like they're a sophisticated form of autocomplete. So by writing strategic instructions and text in human language, you're giving it a prompt.
And the model uses that autocomplete functionality to solve the problem that you're asking it to solve for. Now, autocomplete models in the past were pretty primitive. They really couldn't do a lot. There was something missing.
What was really missing was the context, the ability to put together the contextualization and then lead the model to create more than just the completion of a single word. Here, we're asking the models to create sentences, paragraphs, structures. A breakthrough came not too long ago. In 2017, there was a paper published called "Attention Is All You Need," and it was the basis for what we call modern transformer models. Using this attention mechanism, the models are able to track the context, and they're able to do much more sophisticated autocomplete than we've ever seen in the past.
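The next-word idea can be sketched with a toy example. This is not a transformer (there is no attention mechanism here), just the underlying statistical-autocomplete principle: count which word tends to follow which, then repeatedly pick the most likely continuation.

```python
from collections import Counter, defaultdict

# Toy bigram "autocomplete": count which word follows which in a tiny corpus.
corpus = ("the model predicts the next word and "
          "the next word completes the sentence").split()

counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

def complete(prompt_word, steps=2):
    """Greedily extend a prompt by choosing the most likely next word each step."""
    out = [prompt_word]
    for _ in range(steps):
        candidates = counts[out[-1]].most_common(1)
        if not candidates:
            break
        out.append(candidates[0][0])
    return " ".join(out)

print(complete("the"))  # "the next word"
```

A large language model makes the same kind of next-token inference, but conditioned on the entire preceding context rather than a single word, which is exactly what the attention mechanism makes tractable.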
So they are statistically inferring the next best word. But it turns out, this is a very powerful technique to complete complex tasks. Now, I should say the models don't actually work with words. They work with something called an embedding. And an embedding is really a numerical representation of a word. It's encoded in a multi-dimensional vector.
This allows us to represent the words as concepts and for this model to reason about the relationship between concepts. Now, at the end of the day, the generative models, they're looking at these patterns, and they're making these inferences. And it may sound very complicated, but it's actually quite simple.
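A minimal sketch of that idea, with hand-made 3-dimensional vectors standing in for the hundreds or thousands of dimensions a real embedding model produces: similar concepts end up as nearby vectors, and cosine similarity measures how closely two vectors point the same way.

```python
import math

# Hand-made toy "embeddings"; real models learn these vectors from data.
embeddings = {
    "dog":   [0.9, 0.8, 0.1],
    "puppy": [0.85, 0.9, 0.15],
    "car":   [0.1, 0.2, 0.95],
}

def cosine(a, b):
    """Cosine similarity: how closely two concept vectors point the same way."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

# Related concepts sit close together; unrelated ones do not.
print(cosine(embeddings["dog"], embeddings["puppy"]) >
      cosine(embeddings["dog"], embeddings["car"]))  # True
```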
And it turns out, the most important thing to be aware of is that the better the embedding model you have, the better the natural language processing will be for generative AI. Here are some of the tasks you can complete. This is not a complete list, just some of the things you can do. And we'll look at some specific examples. Content creation, creating an email like we saw before. Things like copy editing, if you get a sentence or a text and you want to improve the way it's written, or you want to change the style.
Being able to do chat, ask questions, and get answers. Summarization is a very powerful technique, especially for long documents. Semantic search, something many people haven't thought deeply about when they hear the term generative AI. But embeddings are used to match on the concepts that you're asking about.
So if you ask a search engine a query, we can do a better job of understanding what you're asking for and a better job of finding relevant content. We do things like sentiment analysis, toxicity detection, et cetera. We'll go through a couple of examples here just to highlight what this looks like. This is kind of a neat one. There's a prompt here where we ask the large language model to generate some ad copy for a cashmere sweater. So we put the prompt in.
We hit the Generate button. And all of a sudden, it comes out with something that a marketer could actually use in practice. The thing that's really interesting about this is that the marketer can generate many proposed pieces of ad copy, 10 or 12, in a matter of seconds, compare them, see what the reaction is with consumers, and really start to optimize the marketing function. People do that in real life. It takes hours, sometimes days. Here, you can do it literally in tens of seconds with generative AI.
Another neat example is summarization. You have complex documents. Maybe they're contracts. Here, we took the transcript from the Oracle earnings call.
And we asked the model in the example to go ahead and do a summarization of the earnings call. Not only does it give you a clear summary of what our earnings look like at Oracle, but it's accurate and easy to understand. And I happened to use it in communicating: I sent an email to my wife with a snippet from the model itself to say, here's how the company's results went last quarter. So it's practical, easy to use, and doesn't require any special scientific expertise to really summarize complex documents.
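Under the hood, a summarization request like this is just a prompt string handed to the model. A sketch of the shape; the wording and the `build_summary_prompt` helper are illustrative, not any specific product's API:

```python
def build_summary_prompt(document, max_sentences=3):
    """Wrap a document in a summarization instruction for a language model."""
    return (
        f"Summarize the following document in at most {max_sentences} sentences, "
        "keeping concrete figures intact:\n\n"
        f"{document}"
    )

transcript = "Revenue grew 8% year over year, driven by cloud services demand."
prompt = build_summary_prompt(transcript)
print(prompt.splitlines()[0])
```

The same pattern covers the copy-editing example that follows: only the instruction sentence changes.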
Last quick example I'm going to give here is what I mentioned before on copy editing. So here's a note. This is what I got from one of my product managers. She deliberately made it look a little bit casual and so forth. But we asked the model to go ahead and take the casual note and just rewrite it in a professional style.
What's interesting about this is it can be applied across a broad range of domains. Oracle does global support. We do support in Romania. Sometimes Romanians are answering questions in English, and English isn't their first language.
The models can actually go and work on the text and make it appear as if it's written by a native English speaker. So there are a lot of examples where you can use this in a practical business environment just to solve basic problems that almost every large enterprise is going to encounter. Now, as I said before, when we've been seeing the news and reading about what's happening with generative AI, it's almost always about these consumer cases. And to date, we have not seen a company focused on making sure that this technology is built for and adapted specifically for the enterprise.
So rather than focusing on leveraging the raw information set of the vast internet corpus, we are taking a very different approach, which is looking at the requirements of businesses to apply this successfully today in the immediate context of their real-world problems. At Oracle, there are three areas we're investing in. It's a very focused investment, and we've been doing this now for well over a year with the purpose of bringing generative AI directly into the business context. So one of those areas is our GPU infrastructure. We have something called an OCI SuperCluster. This is a way to bring together a large number of GPUs and then use them to train large language models effectively at scale.
Remember, nobody just uses one GPU. You use GPUs together in networked clusters, sometimes hundreds, sometimes thousands of GPUs at a time, when you're building these models.
Our SuperCluster technology has become something of a standard in the industry. And we have companies like Cohere, who you'll hear from today, MosaicML, Adept, Character.ai, and NVIDIA building models on top of this infrastructure. So for training models, OCI's GPU infrastructure is becoming a standard.
And we think it's the best option in the industry today. Moving beyond that, we're also focused now on providing inferencing-quality execution of the models in the enterprise context. And we're doing that at the platform-as-a-service layer, as a part of Oracle Cloud Infrastructure, with something we call the OCI Generative AI Service that we've been working on. And then we're applying this technology across our portfolio of applications. Those include things like our Fusion application suite, NetSuite, and all of our vertical business units. The full portfolio of SaaS offerings is embedding generative AI into it, making it immediately useful and immediately applicable for accelerating the work that you do in the context of those applications.
So this week, we're really excited to announce the availability in beta of our Generative AI Service. It's a very simple-to-use service. It has a graphical user interface where you can cut and paste in prompts, the natural language instructions for the model, and see the kinds of results you get and the kinds of workloads you can run. It also has APIs that allow you to go and integrate these capabilities into your workflows and applications.
It has some really interesting characteristics that make it different, so I'm going to talk about those a bit. But it is a part of the OCI portfolio and a part of how we're building our Oracle applications across the board. So what makes it interesting, what makes it different, what makes it advantageous to work with Oracle in this context? Well, one is we really are focused on how we do this for the enterprise. One example is selecting models that are actually trained on business data, trained to tackle specific business problems.
And we're making sure that these models can be fine-tuned or adapted to the data sets and to the problem domains that you have, both as industries and as individual businesses. That's the tailoring to the data. The OCI Generative AI Service allows you to do fine-tuning of models: you take a base model,
you work with the service through simple APIs, and you apply your data to make a model that's better for your business. So you can have a customized model that's specific to your business. One of the principles we've had from the get-go, right from the start, has been being absolutist about security and privacy. So we don't look at your data. If you're submitting a prompt, that data is private to you. It's still your data.
We don't touch it. Our partners that we're working with on the model side do not touch it. We also allow you to publish the models to private network endpoints, so they can be completely protected and completely private to you. The last thing that we're offering is dedicated deployment for these custom models. And this allows you to have a single-tenant AI cluster, so that your models are not only private to you,
but you pay a single price. You don't have to pay based on usage or based on the number of words or tokens that are processed. And there are no hidden charges.
You have predictable performance. So we're really making sure that this is a reliable, safe-to-use technology for real enterprise businesses today. Now, one of the strategies we've taken in terms of launching this service is we've developed a very close partnership with a company called Cohere. And we're going to hear from them a bit today. We've focused on them because they have been one of the leading providers of large language models, one of the leading creators of a lot of the core technologies within the generative AI space.
And from the start, their company has been oriented toward solving business problems, not toward the consumer space. So we look at Oracle plus Cohere as the generative AI answer for the enterprise. So I'll take a little bit more of a look here at the Cohere models. This is kind of an interesting set of data points. This is from a HELM study, which is done at Stanford University. It's kind of the standard for objectively evaluating the performance of large language models.
And what you can see here is that Cohere has consistently performed at the top of the rankings in the HELM studies. They've outperformed the Davinci models from OpenAI in terms of the base models, and they're on par when it comes to the command models, the models that actually accept instructions and understand the kinds of work that you want the model to perform on your behalf. One thing that's important to note here is the model sizes.
Here, you'll see Cohere has a 52-billion-parameter model. That 52-billion-parameter model is about a third the size of the comparable models from OpenAI and from Meta, which is notable because smaller models are a bit different when it comes to three fundamental characteristics. One is efficiency. They're able to process the data quicker and get you answers faster than larger models. The second, and this goes back to a point I was making earlier, is that smaller models are more adaptable. When you do fine-tuning and you add your data sets to the training of the model, the smaller models are more heavily influenced by your data.
So if we think about adopting these models into specific industries and verticals, or adopting them into your business, the smaller models are more influenceable. So we can customize them more easily as a part of the service and the API set that we offer. And then the last thing I'd note here is that the smaller models are also cheaper to run. Fewer GPUs are required, and so you're able to do it at a lower price point as well.
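Fine-tuning with your own data, as described above, generally starts from a dataset of example prompt and completion pairs. A sketch of the common JSONL shape such services accept; the field names here are illustrative, not the actual OCI Generative AI schema:

```python
import json
import os
import tempfile

# Fine-tuning services generally take example pairs of prompts and desired
# completions. This writes such a dataset in the common JSONL format.
examples = [
    {"prompt": "Summarize: Q3 revenue rose 8% on cloud growth.",
     "completion": "Cloud growth drove an 8% revenue increase in Q3."},
    {"prompt": "Summarize: Support tickets fell 12% after the update.",
     "completion": "The update cut support tickets by 12%."},
]

path = os.path.join(tempfile.gettempdir(), "finetune_train.jsonl")
with open(path, "w") as f:
    for ex in examples:
        f.write(json.dumps(ex) + "\n")

# Each line is one standalone JSON record, ready to upload for fine-tuning.
with open(path) as f:
    records = [json.loads(line) for line in f]
print(len(records))  # 2
```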
So you put it all together, and we think this is a great partnership when it comes to applying generative AI to the enterprise: effective models, efficient models, and models that can be adapted to your business. I'd like to invite Aidan Gomez, the CEO and co-founder of Cohere, to come out. And let's talk a little bit about Cohere and the work that they've been doing. [LIVELY MUSIC, APPLAUSE] All right. Have a seat.
All right, welcome. AIDAN GOMEZ: Thank you. Thank you. Happy to be here. GREG PAVLIK: Yeah, it's great to have you here. So tell us about Cohere.
Tell us about the focus. And what's the goals for the company? AIDAN GOMEZ: Yeah. So we're a large language model developer that is explicitly focused on enterprise. Like you introduced earlier, we build two different types of models. So the first type, which folks are familiar with when using chat bots, are generative models.
And these, like you say, complete a sentence for you. You can give them instructions, and they'll follow those instructions. The second type are the embeddings models that you introduced. These transform text into a numerical vector, which you can then feed into downstream systems to do stuff like search or classify content. GREG PAVLIK: Great. So you've been a company for three years now? AIDAN GOMEZ: Four years now.
GREG PAVLIK: Four years, OK, four years. And you're starting to look at partnerships with companies like Oracle. What's motivating that? How are you thinking about the relationship between the partner and the enterprise customer base and then, of course, the science itself? AIDAN GOMEZ: Yeah. So we're super excited to partner with Oracle to build the Gen AI Service on OCI.
In terms of partnerships, Cohere is a very compute hungry company. We use a lot of supercomputers. And Oracle builds the biggest and best supercomputers on the face of the planet. And so that's a huge piece of our ability to build extremely high quality models. The second piece is going to market and actually giving these models to enterprises in a completely trustworthy way. So crucial to that is data privacy.
We've seen in the past that when using generative AI, there have been data leakages. And so what we do together is we deploy completely privately within the customer's VPC. And so not even Cohere can see the data.
It's truly your IP within your environment. GREG PAVLIK: So that's great. There's been a bunch of innovation in this space that's happened very, very rapidly. I think one of the things that's come up recently has been this new idea of Retrieval Augmented Generation. A lot of people here probably haven't heard about that. So I thought it might make sense to introduce the idea.
And then talk about why it might be particularly relevant for enterprise users. AIDAN GOMEZ: Yeah. So Retrieval Augmented Generation, it was coined by a guy called Patrick Lewis while he was at Meta. He's now at Cohere leading our RAG efforts alongside Sebastian Hofstadter.
So we're super fortunate to have him. The general principle is generative AI is fantastic, but it still has limitations. One of the key limitations that we hear about is hallucination. So these models can make stuff up. And that hurts trust.
It reduces reliability. And so RAG is a very promising method to resolve that. What you do is you sit the model down next to a database or some source of knowledge. And you let the model query that knowledge, pull back the retrieved documents, and then use that as part of its response. That gives you a few things.
The first thing is that now the model is citing knowledge sources. So when you get a response, you get citations indicating why it's answering that way. That boosts trust and reliability because now the humans who are receiving that information can click in and verify that what the model is saying is true. The other thing is relevance. So these models, like you described, are trained on the open web. And so they know what is publicly accessible.
But internal information, intellectual knowledge, they don't have access to that. And so there's no way they're going to know it. And so this is the way to close that gap and put these models in a system where they have access to proprietary information, for instance, internal emails and documents.
So they can become much more useful and much more relevant to the user. GREG PAVLIK: So in a sense, the models are dynamically querying that information and using it. AIDAN GOMEZ: Yeah, exactly.
It's up to the-- up to the millisecond. Typically, with these models there's a cutoff point, and anything that happens after that, they have no idea about. And so this lets them have access to the current state of the world as well as internal private knowledge. GREG PAVLIK: Yeah. And I think the key term there is private as well because the models aren't being trained on the data. They're using it dynamically.
They're not remembering it. The information can be dynamically updated. I mean that's an extremely powerful thing.
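The retrieval-augmented loop they describe can be sketched in a few lines. Retrieval here is simple word overlap rather than embeddings, and `generate` is a hypothetical stand-in for a real model call; the point is the shape: retrieve a document, then answer grounded in it, with a citation.

```python
# Minimal sketch of Retrieval Augmented Generation (RAG).
knowledge_base = {
    "doc-17": "Vision Corp renewed their Fox 360 license in March 2023.",
    "doc-42": "The cafeteria is closed on public holidays.",
}

def retrieve(question):
    """Pick the document sharing the most words with the question."""
    q = set(question.lower().split())
    return max(knowledge_base.items(),
               key=lambda kv: len(q & set(kv[1].lower().split())))

def generate(question, doc_id, passage):
    """Stand-in for an LLM call: echo a grounded, cited answer."""
    return f"{question} -> {passage} [source: {doc_id}]"

question = "When did Vision Corp renew their license?"
doc_id, passage = retrieve(question)
answer = generate(question, doc_id, passage)
print(answer)
```

Because the knowledge base is queried at answer time rather than baked into the model's weights, updating a document immediately updates the answers, which is the dynamic-update property discussed above.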
I guess you guys, now that you're in your fourth year of business, are seeing all kinds of interesting pressure points, all kinds of interesting use cases. What are some of the most interesting ones that you've seen to date? AIDAN GOMEZ: Yeah. One of my favorites is the idea of knowledge assistants. And so we have tons of knowledge workers out there who have to painstakingly do research. And that can take weeks. It can take months.
And I think there's a future where we'll be able to outsource a lot of that research process to models which can read documents, understand them, distill them, summarize them in an instant. So we can turn months of research into milliseconds of querying for these models. So you can imagine a knowledge worker within an enterprise being able to write a query. And the model has access to that enterprise's entire internal knowledge base, the web, the latest reports out in the public domain.
And it can go out, read, and distill all of that information into an answer. And we're seeing it already start to emerge. Organizations like McKinsey are building their own knowledge assistants, as is Morgan Stanley. GREG PAVLIK: All right. So that sounds like a lot of powerful functionality. Sounds a little bit like what I do on some of my days, where I'm kind of trying to figure out where the AI industry is going to move.
And I spend a lot of time there. Do you think I'm going to have a job in a few years? Is that like-- [CHUCKLES] AIDAN GOMEZ: I don't think your job is going anywhere. I think that this technology-- there is a lot of fear about automation and replacement. I think it's going to be augmentative. It's going to be something that lets us spend our work, our jobs on the stuff that we enjoy and that we thrive at.
And it's going to level us up as opposed to displace us. There's some evidence for this. There was a study that came out about six months ago from MIT, which showed-- really interesting-- that both the quantity of output and the quality of output benefited massively from sitting knowledge workers next to these models. And so it's not just full end-to-end automation. What it lets you do is just do way more work way faster. It becomes a tool that sits next to you and makes you dramatically better at your job.
GREG PAVLIK: Yeah. I mean, just as an observation, we are seeing people use this already inside of Oracle. As soon as we started to stage early versions of the service, we allowed people to do testing and evaluate what it would look like for their jobs.
And we got immediate feedback that they were spending less time doing better work, which was pretty exciting. So-- AIDAN GOMEZ: Really exciting. GREG PAVLIK: --I'm anticipating that as we apply this more broadly, we'll see more and more of that feedback loop happening. So we talked a little bit about, I think, the latest developments with RAG and some of the ways this is impacting businesses that you're seeing now.
If you look out three years, four years, five years, I know this is like infinite time in this space, but how do things start to evolve? What do you anticipate will happen? What is the future going to hold for us? AIDAN GOMEZ: Yeah. It's important to remember that we're still super early in this process. We're like on month nine of computers being able to talk back to us. GREG PAVLIK: Yes. AIDAN GOMEZ: So it's still super young, and there's a lot to do.
I'm really excited to see the product landscape change. We now exist in a reality where we can have conversations with products. We can guide them with language, which is the most intuitive way for us to interface. It's our intellectual modality. And so I'm super excited to see the OCI Gen AI Service put this into production within enterprises. The other thing Cohere is doing: I'm really excited today to preannounce our new embeddings models.
And so embeddings are crucial for search. They're crucial for RAG: when the model makes that query to a database, the response is only going to be as good as the quality of the results that come back. And so embeddings dictate that quality. We're releasing new embeddings models, which perform something like twice as well as the competition on data sets that are heterogeneous and noisy. And so this is a total step change in the embedding world and, very importantly, in scalability.
They were trained with compression in mind. And so with something like 32-fold compression, you still retain 95% of the accuracy. So I'm very, very excited to get that in the hands of customers and get people to start building with it.
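The compression point can be illustrated with the simplest case: quantizing a float vector to 8-bit integers shrinks it roughly 4x while barely changing its direction, which is what cosine-similarity search cares about. (The 32-fold figure quoted above involves more aggressive techniques than this sketch.)

```python
import math

# Quantize a float vector to the int8 range and check how little the
# direction of the vector changes after dequantizing.
vec = [0.12, -0.45, 0.83, 0.07, -0.66, 0.31]

scale = max(abs(v) for v in vec) / 127
quantized = [round(v / scale) for v in vec]   # integers in [-127, 127]
restored = [q * scale for q in quantized]     # dequantized approximation

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) *
                  math.sqrt(sum(y * y for y in b)))

print(round(cosine(vec, restored), 4))  # very close to 1.0
```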
And then of course, we're also deploying on Oracle Fusion Cloud apps, which I'm very excited to see. GREG PAVLIK: Well, it's very exciting. I appreciate your time, and-- AIDAN GOMEZ: Thank you so much. GREG PAVLIK: --I look forward to talking to you more at the conference. Thank you.
[APPLAUSE] All right. We're going to take a little bit more of a look at the way Oracle is applying this technology for the enterprise today. I did want to add one comment on Retrieval Augmented Generation. Aidan mentioned it as an important area that Cohere is pioneering. We are also working with them closely to make sure that we can bring this into a fully functional, turnkey environment.
But you can just imagine something simple, a customer service representative at a manufacturing company. He gets a question in, comes in on the phone. There's maybe information about instructions on repairs.
What kind of parts are required? What is the availability of parts? Imagine being able to just ask the model a simple question, like you would another human being. The model is able to dynamically go out, find the information that's required to come up with an answer, and then produce a fully formed, human-understandable response dynamically with accurate information. That can be a game changer for almost any business. So we're expecting this to be the next step in a series of steps to seriously progress the applicability of generative AI in the business context.
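The flow described here — take a question, retrieve the relevant documents, then have the model answer from them — is the standard RAG pattern. A minimal sketch, assuming a tiny in-memory document store; the word-overlap scoring below stands in for real embedding similarity, and none of this is the OCI API itself:

```python
import re

# Tiny illustrative knowledge base for the customer-service example.
documents = [
    "Part A-113 is the replacement valve for the M4 pump; 12 in stock.",
    "Repair instructions: drain the M4 pump before removing the housing.",
    "Shipping policy: parts ship within two business days.",
]

def tokens(text):
    return set(re.findall(r"[a-z0-9-]+", text.lower()))

def score(question, doc):
    # Jaccard word overlap as a stand-in for embedding similarity; a real
    # RAG system would rank with vector search over embeddings instead.
    q, d = tokens(question), tokens(doc)
    return len(q & d) / len(q | d)

def retrieve(question, k=2):
    ranked = sorted(documents, key=lambda d: score(question, d), reverse=True)
    return ranked[:k]

def build_prompt(question):
    # Retrieved passages are stuffed into the prompt so the model answers
    # from accurate, current information rather than from memory alone.
    context = "\n".join(retrieve(question))
    return f"Answer using only this context:\n{context}\n\nQuestion: {question}"

print(build_prompt("What parts are required to repair the M4 pump?"))
```

The assembled prompt is what would then be sent to the generative model, which is what keeps the final answer grounded in accurate information.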
And I think this will be one of the most important use cases where we will be delivering value to customers as we move forward in the near term with generative AI. So I wanted to pivot a little bit. We talked maybe more fundamentally about the technology and the service we're delivering, and why we're using the kinds of models we're using with Cohere. Oracle's also adding this technology across its entire SaaS portfolio, which is an enormous set of applications. We have our horizontal applications from Fusion apps and NetSuite. We have over 20 business units that are focused on vertical industries.
I wanted to look here at one example. This is from Oracle Primavera Cloud. It's a project management planning software. It's used to administer and plan out and track very complex project deliveries.
And it's used across many industries. They've been working with us and with Cohere to develop a new sense of how to accelerate efficiencies across industries using generative AI. So I wanted to bring out Josh Kanner, who is the senior director of product strategy and engineering for our Oracle Primavera Cloud offering. Josh, good to see you.
JOSH KANNER: Good to see you too, Greg. GREG PAVLIK: I'm going to give you the clicker-- JOSH KANNER: Thank you. GREG PAVLIK: --and have you take us through how we're applying this technology today in the context of your products.
JOSH KANNER: First off, I appreciate the opportunity to show the prototype that we've been working on with our customers. It's really exciting to show a Gen AI prototype that's gone through some customer vetting.
And I'd also like to say thank you to you and the team at OCI. It's been great to work with them as they've been working on the Gen AI beta. And it's been great to work with them to include our requirements as that's been moving towards production. It's really exciting to see-- GREG PAVLIK: Well, they've been working hard. So thank you. JOSH KANNER: Yeah, I know.
We can tell. So at the end of the day, as an AI builder myself and a product guy, what gets me excited about that is the ability to deliver features to our customers faster by being able to partner with OCI and the Gen AI service. But let me start by explaining who those customers are first. So I'm in Oracle Construction and Engineering. A lot of people don't know it, but construction and engineering is one of the largest industries in the world. It's about 10% of global GDP, over $13 trillion.
It's everywhere, from hospitals to railroads to bridges, nuclear power plants, even strike fighters, as our products are used by the DOD and a variety of agencies to actually manage those complex projects. Within Oracle Construction and Engineering, we deliver a bunch of different SaaS products to help deliver on complex, and also pretty simple, capital projects all over the world. What I'm going to talk about is a product called Oracle Primavera, or Oracle Primavera Cloud because it's now offered through the cloud. You may hear me call it OPC for short. It's a bit of a habit. Schedules are a key part of capital project delivery.
Oracle Primavera Cloud is the leading provider of scheduling software for the capital projects industry. At the heart of every project is a schedule. This is a really simplified version. I'm sure we've all managed schedules, seen schedules. It's not just activities and starts and finishes. There's resources.
There's dependencies. There's task management. There's compliance. When you get deep into the world of scheduling, it's amazing how deep you can go. I actually just spoke at a national defense contractor conference last week, very deep in scheduling.
What we're going to do in the prototype that we're going to show in a second is show how schedules, which have a lot of detailed and confidential content as well as a lot of prior knowledge, could potentially be automated, because the question our customers were asking us is, what if we could take the process to build a schedule from days or hours to minutes? And this isn't just a time-saving benefit. Schedules are so important that to even be able to bid on a job, you often have to provide a draft schedule along with your bid. So it's on the critical path, to make a scheduling pun, to actually get work out the door and win business. So what we've done is build out a Gen AI prototype embedded into Oracle Primavera Cloud. Let's go over to the prototype video and see the demo.
So the first thing you'll see, and you're seeing it, is you're in OPC. What I can do as a scheduler is now start the workflow to actually create my schedule. Let's say I'm here, it's Las Vegas, and I'm being asked to build a 33-story high-rise. In this case, I actually have an RFP.
In the RFP document, there's a bunch of information that I need to build this schedule. And let's pause here for a second. You can see the dates, start and end dates. You can see the type of construction. You can see some of the overall description of, for example, the pricing and delivery method. This is the first call to Cohere and the OCI Gen AI service.
What we're doing, and we learned this through some of our customer testing, is we don't have to rely on the customer to write a prompt. If we have an RFP, we can summarize it for them, pull out critical data, and actually present it back to them.
And then build the prompt for them on the back end. What our customers told us is, hey, wow, this is actually pretty cool. This is a product in and of itself. It takes us hours to be able to analyze RFPs and create summaries for the rest of the project participants.
So we actually added this Generate Detailed Summary button to the UI. But we're not done here. The goal here is to actually build a schedule. The Gen AI is then looking at the summary it created. And it realizes that it might benefit from some additional information.
So now, in a conversational way, it's actually prompting me, as a user, to add more data. So what I'm able to do is in this case, because I know what kind of materials we're going to use to build this 33-story building here in Vegas, I can add that as detail. And it will then iterate on the prompt in the background.
GREG PAVLIK: So you're blending together, then, the traditional prompts with the summary-driven information and a whole feedback loop. Is that correct? JOSH KANNER: Yeah. GREG PAVLIK: OK, that sounds pretty powerful.
JOSH KANNER: Yeah. It's really interesting when you start chaining together these different capabilities. It adds to the overall value proposition. So at this point, I'm realizing that I'm pretty much done with the information that I can provide the AI Assist.
I mean, it's nice to have an assistant, but I actually don't know how much time I have with you on stage here, Greg. So I need to move it along. So I'm now going to click the button and generate the schedule at this point. What it's now going to do is call Cohere and the Gen AI interface in OCI a second time.
And it's going to run the generate function. So in the generate function, what it's doing is actually assembling the prompt in the background. And if you've attended some of the other sessions from Fusion or NetSuite or some of the other folks, we're all moving towards this paradigm of embedding prompts to facilitate the user's experience and also manage some of the risks of AI. It's now generated, by pushing and driving the OPC API, a schedule in native OPC that I, as a scheduler, can now go in and edit, with details, with activities, with relationships and dependencies that are built off of a fine-tuned model that we've built from our prior schedules.
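The assembly described here — critical fields extracted from the RFP, plus the details the user adds conversationally, folded into an embedded prompt behind the scenes — can be sketched roughly as follows. The field names and prompt wording are illustrative assumptions, not the OPC implementation:

```python
# Fields a first model call might extract from the RFP (illustrative values).
rfp_summary = {
    "project": "33-story high-rise, Las Vegas",
    "start_date": "2024-03-01",
    "end_date": "2026-09-30",
    "delivery_method": "design-build, fixed price",
}

user_details = []  # details the scheduler adds conversationally

def add_detail(detail):
    # Each answer the user gives is folded into the next prompt iteration.
    user_details.append(detail)

def build_schedule_prompt():
    # The user never writes this prompt; it is assembled on the back end
    # from the RFP summary plus any conversationally supplied details.
    lines = [f"{k.replace('_', ' ')}: {v}" for k, v in rfp_summary.items()]
    lines += [f"additional detail: {d}" for d in user_details]
    return ("Generate a construction schedule (activities, durations, "
            "dependencies) for the following project:\n" + "\n".join(lines))

add_detail("structural system is cast-in-place concrete")
print(build_schedule_prompt())
```

Embedding the prompt this way is also how the product manages AI risk: the user supplies facts, while the prompt structure stays under the application's control.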
With the customers we have, and with how, let's just say, organized schedulers are, there are thousands of old schedules that can be used to actually fine-tune these models. GREG PAVLIK: So this is all done with the OCI Generative AI Service through the APIs, using the customer data. So then this is where you really do care about the security, the privacy, all the things we talked about: not just the models being enterprise ready but the service being enterprise quality as well.
JOSH KANNER: 100%. And that message, I'm finding, is resonating really strongly with our customers: being able to deliver a secure and private Gen AI experience that then allows for fine tuning on their data not only gets a better outcome but also addresses a lot of the security questions and concerns that we see about Gen AI in the marketplace. So now that the schedule has been created, I can go on with my day as a user. So as I mentioned, we've been showing this to some of our customers.
The response has been pretty amazing. So if we go back to the customer response slide, one of the first customers we showed it to, and if you work with schedulers, you'll know they're not typically very effusive people, said, and I quote, "it's like you've seen my dreams and made them come true," which was a little awkward, frankly. I'm like, I'm not sure I want to be in your dreams. But if they have to do with scheduling, OK.
GREG PAVLIK: It's hard to beat that though. JOSH KANNER: Yeah. It was good.
Another customer said, you've taken a process that usually is at least five hours, and now it's five minutes. GREG PAVLIK: Yeah. OK, that's what we want to hear. JOSH KANNER: Last but not least, the head of the overall bidding and management process for one of our customers said, because you've taken schedulers out of the critical path for us being able to bid new work, we'll be able to actually bid on more projects, because typically there's a delay, since it's a high-knowledge, high-bandwidth activity.
GREG PAVLIK: Excellent. JOSH KANNER: So with that, I'll turn it back to you, Greg. GREG PAVLIK: OK, all right.
Thank you very much, Josh. [APPLAUSE] This is a great example of how we're taking generative AI and we're bringing it directly into our product suite to solve core business problems. And we continue to invest. At the infrastructure level, we invest in data. We have verticalized data. And we're opening these models up to work with your data.
We're going to deliver AI services, like the OCI Generative AI Service, so you can safely use these models directly in your business. And we'll be plumbing this technology throughout our SaaS portfolio to get new experiences and new efficiencies across the board. So for Oracle, we're going to be continually focused on the enterprise. We're going to be offering you complete solutions to use generative AI. And we'll be ensuring that your consumption of this technology is safe, private, and secure. I do want to note that the OCI Generative AI Service is a part of a complete portfolio of AI capabilities and data management capabilities.
So you can use this together with your data. You can use generative AI for language together with other powerful AI services. So for example, we often see people do things like transcription using the OCI Speech AI service to get a speech-to-text rendering of a recording or an interaction of some sort, and then feed that into the generative AI to get summaries, to get insights, or to answer questions. So oftentimes, many of these services are used together. The portfolio is designed to be integrated, to allow you to work with your data sets within the Oracle data warehouses and the Oracle data lakes as part of an integrated whole. And we have our beta available today.
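The speech-to-text-then-summarize example above is a simple service chain: the output of one AI service becomes the input of the next. A minimal sketch, where both calls are placeholder stubs rather than the actual OCI SDK (whose client classes and signatures differ):

```python
def transcribe(audio_path):
    # Placeholder for a speech-to-text call (e.g., a speech AI service);
    # a real implementation would upload the audio and return the transcript.
    return "Customer reports the M4 pump leaks after replacing the valve."

def summarize(text):
    # Placeholder for a generative-model summarization call; truncation
    # here stands in for the model's actual summary.
    return "Summary: " + text[:60] + ("..." if len(text) > 60 else "")

def call_summary(audio_path):
    # Chain the services: the transcription output becomes the model input.
    return summarize(transcribe(audio_path))

print(call_summary("support_call.wav"))
```

The same composition pattern extends to insights or question answering over the transcript, which is why the portfolio is designed so the services plug together.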
So if you're interested in joining the OCI Generative AI beta program, there's a QR code here. You can take the information down, and we'd like to hear from you. Other than that, thank you very much. And have a great CloudWorld. [APPLAUSE]