AI Day: Communications, Media & Technology | Salesforce

Good morning, and welcome to AI Day for the communications, media and technology industries. Welcome to everyone who's joining us online. Thank you for joining us on Salesforce Plus.

You are all trailblazers, leaders and pioneers in your organizations, bringing innovation and transformation to your industry. And today, we're here to motivate you, educate you, inspire you. But before that, we want to thank you.

Thank you to our customers, our partners, our trailblazers and our communities. When we founded this company, we brought in a new technology model, a new business model, and a new philanthropic model. That's what our 1-1-1 model is all about: dedicating 1% of our equity, 1% of our time, and 1% of our products to giving back. Globally, we've given more than $621 million in grants to nonprofits and NGOs.

Our employees have given 8.1 million hours of their time to volunteer and 54,000 nonprofits and NGOs use Salesforce for free. And probably the greatest thing is that we've inspired 17,000 other companies to join us in our 1-1-1 model.

We are proof that you can do well and also do good. Our leaders and our employees are doing so much to take our customer relationships forward, especially with this game changing technology that you're going to learn more about today. 24 years ago, we never anticipated that we'd be here and have this incredible growth. We just had a very strong first quarter where we delivered $8.25 billion in revenue, and we're on track to deliver nearly $35 billion in revenue this year with a 28% margin. And that's incredible.

We're a leader in philanthropy, a leader in innovation, and a leader in culture, all while being one of the fastest growing software companies in the world. We're also proud to be 100% net zero, and we want to help others on their journey to net zero too. At Salesforce, it always comes back to our core values and how we're living those values within our company and out into our communities. Our values guide everything that we do, from trust to customer success and innovation to equality and sustainability. And today, trust and innovation are critical parts of what we're doing with generative A.I.

Our vision at Salesforce is that A.I. plus data plus CRM will help you connect with your customers in a whole new way. And all of that has to be built on a foundation of trust. So what do we mean by that? Well, we want to help you harness and trust the power of new technologies to navigate new business challenges and drive your success. For our customers in the communications, media, entertainment, and high tech industries, we know that means things like growing subscribers, reducing churn, or delivering deeply personalized experiences. Those are all examples of things that can be enhanced with A.I.

So let's look at where we are in our A.I. capabilities today. Now, we're not new to A.I. We've been pioneering A.I. for CRM with Salesforce Einstein for almost a decade. Our native A.I. is baked into every product, delivering more than a trillion predictions a week to our customers. There is no other company that comes close to what we're doing in customer relationship management with A.I. This is Einstein. This is A.I. for CRM. To spur all of this innovation, we start by investing in world-class talent: researchers, data scientists, and data engineers.

In fact, our in-house A.I. research team has published 227 A.I. research papers and secured 300 A.I. patents. We've also acquired great A.I. innovators, companies like RelateIQ and PredictionIO, and we've gone from building Einstein to inventing new technologies like auto feature engineering, auto feature selection, and auto model selection. Years ago, we saw where the world was heading, and we knew that we needed to invest deeply in large language models, or LLMs for short. And for all of you developers here today, part of that investment is our development of five LLMs and two transformer libraries. That's incredible. We're proud to offer not only the world's number one CRM, but also the world's number one A.I. CRM.

We've brought A.I. into every app, every layer of our platform, every workflow, all with trust, security, privacy, scale, and ethics built into the very core. Now, the communications, media, and high tech industries have been leveraging various generations of A.I. for years, decades in fact. We started with intelligent rules-based systems for workflows that evolved into machine learning algorithms used to optimize things like network or inventory usage. We automated and personalized your customer interactions, making your marketing and your sales teams more efficient.

And we're transforming customer service by intelligently scheduling field techs to balance skills, inventory, and even things like road traffic. And now, we all know generative A.I. is changing everything. It is the technology of this era, and it's coming at a pace that we've never seen before. It took ChatGPT two months to reach 100 million users.

That pace is incredible. This is new. This is exciting. Everyone wants to talk about this. But why? Well, there are a few reasons. First, expectations around customer experiences are changing.

Your customers expect options. They expect great service. Someone is going to see a new opportunity to treat your customers the way they want to be treated.

And we're here to make sure that that's you. But there's a trust gap with this powerful technology. We know industry leaders across the board want to embrace it.

We hear it all the time. It's the number one priority. But as leaders, you need to know where your data is being used and what the risks are. And there are real risks in the communications, media, and high tech industries.

Customers expect more, but they trust less. Less than half of customers trust companies with their data. And that's a challenge, because data is the fuel for A.I. systems. There are real concerns around privacy, hallucinations, data control, bias, and toxicity. This technology is so powerful, but it's also complex, and there are legitimate challenges that we have to overcome. We are working hard to understand, control, and put in place guardrails around these very real challenges.

And we all want to move forward rapidly and increase productivity. But we also need trust. And today, we hope to really close that gap. We're in a new era of trust. That's why we're bringing you Einstein, your trusted enterprise A.I., built for CRM. It starts with a new Einstein Trust Layer, allowing all of our applications to harness generative A.I. while protecting your sensitive customer and business data. We talked earlier about data as the fuel for A.I. systems, and it's really hard to get your data together to create compelling customer experiences. And that's the power of our Data Cloud.

It's our fastest growing cloud ever, because data exists across the enterprise and it can enrich your customer experiences. But collecting that data, organizing that data, and mapping that data to Salesforce can be difficult. So we do the hard work for you, so that you can focus on delivering incredible experiences for your customers. And as this market continues to evolve, the power of our A.I. CRM is our open ecosystem. We will be able to open up an ecosystem of many LLMs for our customers, allowing you to solve the problems you need to solve, the way you want and need to. To tell us more, please welcome Patrick Stokes.

Thanks so much. Hi, everyone. I'm going to walk you through exactly how we do this at Salesforce. Now, to do that, we're going to get started with the question I hear from all of you, all of the time: how do I trust generative A.I.? And perhaps more importantly, how do I achieve the productivity gains from generative A.I. while protecting my most important asset, my data and my customers' data? Today, we know how to do this, with a foundation of trust and privacy, because we know where all of our data is. It's stored in files, it's stored in spreadsheets.

Now, what all of these things have in common is that they have an inherent sense of location. We put data in a particular database, and within that database we choose the table and we choose the field. If it's going to a file, we're choosing where the file is.

There's always this sense of location to it. And on top of that location, we're able to install access controls or permissions. So I can say that one particular person in my organization can have access to this piece of information in this field, but this other person cannot.
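
To make that "location plus permissions" idea concrete, here is a minimal sketch in Python of a field-level access check, assuming made-up table, field, and user names; it illustrates the concept only, not how Salesforce implements its sharing model.

```python
# A tiny, illustrative model of field-level access control: the data has an
# address (table, field), so a permission rule can be attached to that address.
# Table, field, and user names here are made up for illustration.
FIELD_PERMISSIONS = {
    ("Contact", "AnnualIncome"): {"patrick"},           # only this user may read the field
    ("Contact", "Email"): {"patrick", "elisabeth"},     # both users may read this one
}

def can_read(user: str, table: str, field: str) -> bool:
    """Return True if the user is allowed to read the given field on the given table."""
    return user in FIELD_PERMISSIONS.get((table, field), set())

print(can_read("patrick", "Contact", "AnnualIncome"))    # True
print(can_read("elisabeth", "Contact", "AnnualIncome"))  # False -- no rule grants access
```

The check only works because the data has an exact address to attach the rule to, which is the contrast Patrick draws next.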

But in A.I., it's completely different. In large language models, it's different because data isn't really stored. It's learned.

And that's a very different concept. Let me give you an example. If I asked all of you what an apple is, you'd probably be able to tell me. But if I asked you to point to the location in your brain where the data about an apple is stored, you wouldn't be able to do that. And that's because you don't really store data about an apple in particular. Instead, your brain learns properties about an apple, and you collectively take all of those properties and use them to identify an apple. So, for example, you know that an apple is round.

You know that an apple grows on trees. You know that an apple is red, but also sometimes green. It's this collection of different properties and different patterns that you bring together that defines what an apple is. But because of that, there's no one location we can point to in order to create access controls on top of it. So what are we going to do? How are we going to solve that problem? Well, the good news is Salesforce has been solving problems just like this since 1999, helping enterprises use their data while also protecting it.

More than 20 years ago, you came to us and said, "I can't possibly move my entire business into the cloud. I need to protect my data. And the only way to do that is to keep everything on premises." We said, "No, we can help you." We introduced multi-tenancy and our core sharing model, and we showed you how we can put all of the permissions and access controls on your data to protect it.

Not long after, in 2016, when we made our first foray into A.I. with Einstein, we started building predictive models. We showed you how we could train those models without ever blending customer data across customers, because your data is your product, it's not ours. And today we're doing the exact same thing with generative A.I. But how are we doing it? Well, it all starts with a prompt.

Now, a prompt is just a question. It's the question that we're going to ask the LLM, or ask the A.I., and you've probably all done this before. If you've opened up ChatGPT, you've had this experience: you ask it a question. Well, the quality of the response you're going to get is directly correlated to the quality of the question. So let's pretend I'm an investment manager and I'm inviting my client, Lauren, to discuss some of our investment services.

If I'm a salesperson, this might be something that I have to do dozens or maybe a couple hundred times a day. That's going to take a lot of time. So I'd like to be able to improve that workflow by getting an email generated for me.

So I'm going to ask a large language model for some help. Now I'm going to get a generation, or a response, back: a generated email. And this email is two things at the exact same time. First, it's pretty darn amazing what a large language model can return from very little: a simple prompt, "I'd like you to write me an email," and it kind of understands some context from that, and I'm able to get this pretty impressive email.

But the second thing that this email is, is it's entirely unusable. It's unusable because there's very little context here. It doesn't understand my business, it doesn't understand my products or my customer. This kind of looks like those LinkedIn recruiting messages we all get. It's just not very useful, not something I'm likely to reply to. So how do we fix this? Well, we fix it by adding more context, by writing a better prompt.

The good news is Salesforce has a ton of context about your business across your CRM, across sales, service, commerce, marketing, and then Data Cloud. We know how your business runs. We know your products, we know your customers.

We know your workflows and all of your metadata, and all of that engagement data is streaming in. Today, Data Cloud is processing over 30 trillion transactions per month. So what if we could connect all of that data to the large language model? Now, I know what you're thinking. You're probably thinking, "Great, I need to train the model to understand my business so I can get a better response." But that's the trick: you don't actually need to train the model. You just need to go back to that prompt. So instead of a simple prompt like "I'm an investment manager," what if we started adding more context? Let's add some context about me.

I'm Patrick Stokes. I'm an investment manager at Cumulus Bank. Let's add a little bit of information about my business, about my customer, Lauren Bailey. She's been a customer for seven years. Let's add some real time data.

I'm sure you've all seen how large language models are only trained on data that's up to a year old. But we can add real-time data to this. Now, if we add all of this into our prompt, let's take a look at what we're going to get this time. Now we get a much, much better response.

But we have two problems, two things that we still need to solve. First, we have PII data in here, so we need to mask that. We don't want that personally identifiable data about Lauren going back into the model. We don't want that to be learned, so we're going to mask it.
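
As a rough illustration of what "grounding the prompt with context and masking PII before it leaves your systems" could look like, here is a minimal Python sketch. The field names, template wording, and masking rule are all hypothetical, not the actual Einstein implementation.

```python
import re

# Hypothetical CRM context pulled together for grounding -- not a real Salesforce schema.
crm_context = {
    "rep_name": "Patrick Stokes",
    "rep_role": "investment manager",
    "company": "Cumulus Bank",
    "customer_name": "Lauren Bailey",
    "customer_tenure_years": 7,
}

PROMPT_TEMPLATE = (
    "You are {rep_name}, an {rep_role} at {company}. "
    "Write a short, friendly email inviting {customer_name}, a customer of "
    "{customer_tenure_years} years, to a meeting about our investment services."
)

def mask_pii(text: str, pii_values: list[str]) -> str:
    """Replace known PII values with placeholder tokens so they are never sent to the model."""
    for i, value in enumerate(pii_values):
        text = re.sub(re.escape(value), f"<PII_{i}>", text)
    return text

grounded_prompt = PROMPT_TEMPLATE.format(**crm_context)
safe_prompt = mask_pii(grounded_prompt, pii_values=[crm_context["customer_name"]])
print(safe_prompt)  # this masked, grounded prompt is what would go out to the LLM
```

The sketch shows only the outbound half of the flow; the full round trip, including what happens to the response, is described next.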

Now let's take a look at our generated email. I can take this and immediately start using it. And I could do this a couple dozen or a couple of hundred times a day. This would dramatically improve my workflow as a salesperson. But remember, we had two problems. You see, I've got a bunch of information about my business here and I want to protect this.

I need to know where it's going. And that is precisely why I'm so excited to announce the launch of the Einstein Trust Layer. The Einstein Trust Layer creates separation. It separates all of your CRM data from the large language model.

It enables you to securely blend all of the context found in your CRM and Data Cloud into a prompt in order to get a generative response in a safe way. And it does that using a number of methods, like secure data retrieval, dynamic grounding, masking (which we just saw), toxicity detection, auditing, and zero retention. So let's take a look at exactly how that works. Follow the flow of data that you see on the screen and you'll notice we start in our CRM apps. And you may have noticed before that the prompt we wrote was pretty long.

In fact, it was longer than the email itself. Now, you don't want your users' prompts to take longer to write than the emails they want generated. But the good news is you don't have to. Salesforce creates those prompts for you. That's the magic of how this works.

And so it starts from our CRM applications. Imagine we were in Sales Cloud looking at an opportunity and we wanted to generate an email by just clicking a button. That button is going to take the prompt and it's going to securely retrieve data. It's going to retrieve data from your CRM, from Data Cloud, or from external systems via MuleSoft, and it securely blends all of that data in a process called "dynamic grounding." Then we're going to mask all of the PII data, just like you saw before. And finally, we're going to send the fully compiled prompt out to our large language model via our secure gateway.

Now, hold on to that secure gateway thought for a few moments because we're going to come back to it. Okay. So we're going to hit a large language model and we're going to get a response.

Our prompt has now turned from a prompt into a full generation, and we're going to start our path back to the applications. The first thing we're going to do is check the response for toxicity, and then we're going to create an audit trail. So we're going to audit it: take some metadata about what the prompt was, who the user was, what the context was, and store all of that in an audit trail that we can go back and look at later. And then finally we're going to take that generated response and hand it back to our application. So the user experience isn't writing a prompt, it's clicking a button. This all happens transparently, and the user simply gets a perfectly usable email that's ready to go.
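
The round trip Patrick just walked through (secure retrieval, dynamic grounding, masking, the gateway call, toxicity checking, auditing, and handing the result back) can be sketched as one pipeline function. Everything below, including the names, the placeholder toxicity check, and the demasking step, is an illustrative assumption about the described flow, not Salesforce code.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import Callable

@dataclass
class AuditRecord:
    # Hypothetical audit-trail entry: who asked, what was sent, what came back, and when.
    user: str
    prompt: str
    response: str
    toxicity: str
    timestamp: str = field(default_factory=lambda: datetime.now(timezone.utc).isoformat())

def generate_with_trust_layer(
    user: str,
    context: dict,                      # 1. securely retrieved from CRM / Data Cloud / MuleSoft (stubbed)
    template: str,
    pii_fields: list[str],
    call_llm: Callable[[str], str],     # the secure gateway to whichever LLM is configured
    audit_log: list,
) -> str:
    prompt = template.format(**context)                                   # 2. dynamic grounding
    pii_map = {str(context[f]): f"<PII_{i}>" for i, f in enumerate(pii_fields)}
    for value, token in pii_map.items():                                  # 3. mask PII before it leaves
        prompt = prompt.replace(value, token)
    response = call_llm(prompt)                                           # 4. zero-retention gateway call
    toxicity = "flagged" if "hate" in response.lower() else "harmless"    # 5. placeholder toxicity check
    audit_log.append(AuditRecord(user, prompt, response, toxicity))       # 6. audit trail
    for value, token in pii_map.items():                                  # 7. restore values, hand back to the app
        response = response.replace(token, value)
    return response
```

A caller would supply its own call_llm function, so switching model providers is just passing a different callable, which is the point of the secure gateway Patrick returns to next.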

Now let's think back to the secure gateway I mentioned before. We need that so we can build all of this in an open way, because we may need to use different large language models depending on the use case, and because many customers have different data residency requirements for their data. So there are three ways to do that. The first is with our shared trust architecture, which is a way to access large language models across the Internet. We pioneered this with our incredible partner OpenAI, where we're hitting their secure gateway without ever having any of your customer data stored outside of Salesforce.

You can also host models inside of Salesforce. So if you want to bring models directly into Salesforce and host them in our private VPC, you can do that as well. And finally, if you're already making investments in your own models, you can host those in your own infrastructure via our B.Y.O.M., or bring your own model, capability, and connect them via Amazon SageMaker and Vertex AI. So that's three different deployment strategies. Now let's dive into a demo and show you how to build these prompts, how Salesforce is building these prompts, and how you all can build these prompts as well. To show us how, please welcome Elisabeth Markey.
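
Before the demo, one way to picture those three deployment strategies is as a routing choice behind the same gateway: the masked prompt is identical, only the destination changes. The enum values and URLs below are hypothetical illustrations, not a documented Salesforce API.

```python
from enum import Enum

class ModelDeployment(Enum):
    SHARED_TRUST = "shared_trust"      # partner-hosted LLM (e.g. OpenAI) reached over a zero-retention gateway
    SALESFORCE_HOSTED = "salesforce"   # model hosted inside Salesforce's private VPC
    BYOM = "byom"                      # customer-hosted model, e.g. behind a SageMaker or Vertex AI endpoint

# Hypothetical endpoints -- placeholders chosen to show that the deployment
# decision is configuration rather than a different code path.
ENDPOINTS = {
    ModelDeployment.SHARED_TRUST: "https://partner-llm.example/invoke",
    ModelDeployment.SALESFORCE_HOSTED: "https://vpc-hosted-llm.example/invoke",
    ModelDeployment.BYOM: "https://customer-model.example/invoke",
}

def route_prompt(masked_prompt: str, deployment: ModelDeployment) -> str:
    """Pretend to dispatch the already-masked prompt to the configured deployment target."""
    endpoint = ENDPOINTS[deployment]
    return f"POST {endpoint} ({len(masked_prompt)} chars)"

print(route_prompt("Write an email ...", ModelDeployment.BYOM))
```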

Over to you, Lis. Thanks, Patrick. Patrick just showed us the power of prompts to generate highly relevant content. But creating these prompts is time consuming. Now, rather than having users write those prompts, what if the prompts were created for them? What if every admin were a prompt engineer? Well, now they can be, with the new Prompt Studio.

From here you can select the type of prompt you want to create and the place where you would like your prompt to appear. These prompt templates come with a lot of foundational work already complete, so they're easy to use. We have email, to help you generate personalized emails to your customers; field population, to help you generate data fields like descriptions; and the sidebar, which gives you suggested actions like getting important leads. But don't forget, we're building this to be extensible so that you can use these prompts from anywhere, like in Flow and Apex. All right, let's get started with an email.

We give the template a name and add a description. This takes us right into Prompt Studio, where you can create a prompt to generate emails to fit the specific needs of your organization and industry. Here we can set the language, the style, and even the LLM provider we would like to use with the Einstein Trust Layer. It's easy to create the perfect prompt, and you can focus your time on customizing. Let's do it.

What makes Prompt Builder special is that it has access to all of your metadata and allows you to ground the prompts directly with your CRM data. Here we have User and Contact, and it integrates with Data Cloud. This allows you to personalize the offer delivery method to your customer's preferred channel.

Once you're ready to test your prompt, you can select a record from the dropdown and then generate a response. First, you get to preview the prompt with all those fields populated from Andrea's record. This is how we verify the data that is used to create this prompt, and when you click generate, boom, just like that, the Einstein Trust Layer has generated a response. You can see that this response is personalized with the contact and user info. The Einstein Trust Layer has given this a toxicity rating of harmless.
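
The grounding shown in the demo, where merge fields are resolved against a selected record before generation, can be pictured as a small template-resolution step. The placeholder syntax and field names below are simplified stand-ins for illustration, not the exact Prompt Studio merge-field syntax.

```python
import re

# Hypothetical template with simplified merge-field placeholders.
TEMPLATE = (
    "Write a friendly email to {!Contact.FirstName} about our new offer. "
    "Deliver it via their preferred channel, {!Contact.PreferredChannel}, "
    "and sign it from {!User.Name}."
)

# Example values standing in for the CRM and Data Cloud fields the template is grounded on.
record = {
    "Contact.FirstName": "Andrea",
    "Contact.PreferredChannel": "email",
    "User.Name": "Elisabeth Markey",
}

def resolve_template(template: str, data: dict) -> str:
    """Replace each {!Object.Field} placeholder with the matching value from the record data."""
    return re.sub(r"\{!([\w.]+)\}", lambda m: str(data.get(m.group(1), m.group(0))), template)

print(resolve_template(TEMPLATE, record))  # the resolved prompt is what gets previewed and then generated
```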

This is the powerful capability that enables you to become a pro at prompts in no time. Now that it's ready, let's activate this prompt. It's that easy to build out these experiences with Prompt Studio. And with that, back to you, Patrick.

Thanks, Elisabeth. It is amazing to see the power of a prompt. So you might be thinking, where do the prompts go? Well, where they go is into our applications. And that's because Einstein drives productivity across your entire company, with productivity for any workflow: creating prospecting emails for sales reps in Sales GPT, knowledge responses for service reps in Service GPT, landing pages for marketers in Marketing GPT, product descriptions for commerce pros in Commerce GPT, helping everyone explore data in Tableau GPT, and summarizing across your entire company in Slack GPT.

So let's dive into another demo and show you exactly how all of this comes to life and how these prompts are used across the entire platform. To show us how, please welcome Sanjna Parulekar. Over to you, Sanjna. Thank you, Patrick. Hi, I'm Sanjna.

And every day I talk to customers who are excited about generative A.I. But you know what makes them totally fall out of their seat? Generative A.I. in the workflows they use and rely on. I'll take you through three of my favorites: sales, service, and productivity. Sales reps spend day in and day out communicating with customers and prospects over email. But crafting personalized outreach emails is time consuming and tedious.

The Einstein Trust Layer is built into the sales user experience, so every rep can get assistance in writing these emails and automatically bring in the right context from their CRM data. The prompt template surfaces as a simple button for a sales rep to write an outreach email. It's so simple for the sales rep, and so powerful behind the scenes. Each prompt passes through the Trust Layer, so you can ensure your data is safe and no data is ever retained outside of Salesforce. Now, this is especially important when dealing with sensitive data, like medical information or credit card numbers, or with strict privacy regulations.

Sales reps also spend their days on the phone with customers. And you know what's even more tedious? Writing call summaries. The Einstein Trust Layer is in this daily workflow, too, and after a call is done and transcribed, it can be automatically summarized into the key purpose, action items, and the moments that were discussed. So sales reps can now spend more time talking to customers and closing business, and worry a whole lot less about managing all those mundane tasks.

So you've seen what a game changer the Einstein Trust Layer is for sales. What about service? Let's take a look. Every service rep cares about improving their resolution time, and every minute spent searching for knowledge articles, finding product recommendations, and writing summaries is a minute that's taken from helping another customer. The Einstein Trust Layer is built directly into the flow of work for every single service rep. So as they have conversations with customers, replies are recommended to them that are personalized to their end customer. And as that conversation goes on, the service rep also gets assistance with the next best offers for the customer, so they don't need to search for what's new in their product catalog.

And the rep is always in control, so they can accept, edit, or decline these recommendations. Now, Next Best Action has been a hit with customers for the last several years, but pairing generative replies with predictive product recommendations saves so much time and makes for happier customers. Now, once the case is closed, reps also need to summarize their cases, and the Einstein Trust Layer can help here as well, with the click of a button. This is a simple step, but an important one. With this new knowledge article now in our knowledge base, similar cases in the future can be automatically deflected, saving our reps even more time.

The service agents can now spend more time helping customers, whether that's a patient or an account holder, and worry a whole lot less about managing the outcome. So we've seen what a game changer the Einstein Trust Layer is for sales and for service. But what if we need to leverage the collective expertise of an entire organization to drive better collaboration and productivity? That's where Slack comes in. This is a case swarming channel where we've brought in a variety of people across the organization to solve a particular case, and this channel is equipped with rich detail from Service Cloud about the case itself, the account, and the source of the issue. But it also has an automated workflow to update this case in other systems like Jira.

Great service isn't just about finding knowledge. It's about creating it. So as the chat goes on, various folks can jump in to provide important context that should be documented and shared to easily handle common questions, spanning everything from loyalty programs for shoppers to public health program eligibility for residents. So Einstein is at the ready, and with the click of a button, it can summarize the case swarm and then post it as a knowledge article for others to benefit from in the future.

Now, that's how the Einstein Trust Layer is changing the game for sales, service, and productivity. Back to you, Patrick. Thanks, Sanjna. It's awesome to see how all of these use cases are coming to life. Now, there's one last concept I want to leave you with.

These LLMs, and A.I. in general: how do they get better? How do they learn? And this is where Salesforce is different. You see, most LLMs learn from usage. When you get those responses back from a chat, it might ask you, "Was this useful?" And you give a thumbs up or a thumbs down.

But with Salesforce, we do things a little bit differently. Salesforce is where you entrust not only your customer data, but also your customer outcomes. And Einstein learns not just from usage, but from your outcomes, like your deals closing in Sales Cloud, your service cases being resolved, your marketing email open rates, and your company conversations in Slack. So for usage: when a sales email is generated or a service agent response is suggested, did they use it, or did they edit it? But for outcomes: did the generated sales email close the deal? Did the marketing message reach the open rate goal? Did the service response close the case? Signals like these are unique to every company, and this ensures that every customer will develop the best models for every use case that's specific to their industry, their organization, and their task. Now, as you can see, at Salesforce we have tons of innovation happening, and we're excited to bring you even more of it in the coming months. Wow. Who's ready to take all of this innovation to their businesses?
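
One way to picture the difference between usage signals and outcome signals is as two kinds of feedback attached to each generation. The record shape and toy metric below are hypothetical illustrations of that idea, not Salesforce's telemetry.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class GenerationFeedback:
    # Hypothetical feedback record pairing immediate usage signals with downstream outcomes.
    generation_id: str
    used_as_is: bool                 # usage: did the rep send / accept the generation unchanged?
    edited: bool                     # usage: did they edit it before using it?
    outcome: Optional[str] = None    # outcome: e.g. "deal_closed", "case_resolved", "open_rate_met"

def outcome_rate(events: list) -> float:
    """Toy metric: share of generations later tied to a positive business outcome."""
    return sum(1 for e in events if e.outcome) / len(events) if events else 0.0

feedback = [
    GenerationFeedback("gen-001", used_as_is=True, edited=False, outcome="deal_closed"),
    GenerationFeedback("gen-002", used_as_is=False, edited=True),
]
print(outcome_rate(feedback))  # 0.5 -- half the generations were linked to a measured outcome
```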

Because we have hit the hyperspace button on trusted generative A.I. Salesforce A.I. is coming to you. Generative A.I. is changing fast, and we're helping our customers meet the moment with 16 A.I.-first releases out now. We've mobilized the entire company around data plus A.I. plus CRM to get this incredible technology into the hands of you, our customers. And with all of this innovation, we need everyone to be AI ready.

That's where Trailhead comes in: our free online learning platform that lets anyone, anywhere, skill up on Salesforce. We have 35 AI badges on the way, plus a generative AI certification that's launching at Dreamforce next month. Any trailblazer anywhere can learn to skill up for the future. And don't forget to register for Dreamforce, the largest AI event in the world, coming to San Francisco September 12th through the 14th, and also streaming for free right here on Salesforce Plus. This is just the beginning of where we're going together with Salesforce and Einstein, and from all of us here: thank you.
