The Marketing Singularity: How Large Language Models Are Changing Marketing

Yeah, right. All right. Should we get started? All right. Well, good morning. My name is Chris. We're going to be talking about the one thing everyone can't seem to stop talking about, which is a technology service and application called ChatGPT. You have likely heard of this software.

It has been in the news literally at every possible turn, from graphic design to iPhone moments to how Philadelphia-area Realtors are using the software. If you're unfamiliar with it, we'll talk a bit about it in a second. But just to give you a sense of how panicked and/or in awe people are of this thing: one of the things we're going to talk about today is how this thing even works, because for a lot of people, it comes across as magic.

And it is most certainly not magic. Nothing about machine learning and artificial intelligence is magic. It is all mathematics. And unfortunately, if you got into, say, marketing or media because you wanted to avoid math, I have bad news for you.

This is the interface, if you're unfamiliar with it. ChatGPT is really a chatbot interface to what is called a large language model. These are pieces of software that are constructed to be able to do things like generate text, summarize text, rewrite text, pretty much anything with text. So what is a large language model? Large language models are the latest evolution of what's called natural language processing, trying to teach computers how to work with words.

They are all based on one fundamental principle from 1957: "You shall know a word by the company it keeps." This is the basis for all natural language processing. Computers can't read.

Computers have no comprehension skills whatsoever. All they know is mathematics. And so when we try to teach a computer how to work with language, we're really trying to teach it statistics and probabilities. What are the words around a word? What do they mean? For example, if I said "I'm brewing the tea," what am I talking about? I'm probably talking about a beverage made from a certain type of plant and the process by which I work with it. If I say "I'm spilling the tea," this is different.
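That principle, knowing a word by the company it keeps, can be sketched in a few lines of code. This is a toy illustration with a made-up four-sentence corpus (every sentence here is invented for the example); real systems count the same kind of co-occurrences across billions of pages.

```python
from collections import Counter

# A tiny, invented corpus. Real models train on billions of pages,
# but the principle is the same: count the company each word keeps.
corpus = [
    "i am brewing the tea in the kettle with fresh leaves",
    "i am brewing the tea and it smells like jasmine",
    "she is spilling the tea about the office gossip",
    "stop spilling the tea about your coworkers",
]

def company(word, window=2):
    """Count the words that appear within `window` positions of `word`."""
    counts = Counter()
    for sentence in corpus:
        tokens = sentence.split()
        for i, t in enumerate(tokens):
            if t == word:
                lo, hi = max(0, i - window), min(len(tokens), i + window + 1)
                counts.update(tokens[lo:hi])
                counts[word] -= 1  # don't count the word itself
    return counts

# "tea" near "brewing" keeps different company than "tea" near "spilling"
print(company("brewing").most_common(3))
print(company("spilling").most_common(3))
```

Same word, "tea," but the surrounding counts differ, which is exactly the statistical signal the talk describes.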

Right. This is jargon; it means gossip. It's almost the same set of words, but because of the way we use language, the meaning changes. So if we understand that, we can understand how large language models work, because we can start to understand that words mean different things based on their context.

And that is the most important thing to know about large language models: it's all just probability. Word order matters, too. If I say "I'm brewing the tea," it's pretty straightforward who the subject is, what the object is, and what's happening. If I say "The tea, I'm brewing." Right?

This is a different sentence, right? It means functionally about the same thing, but in the English language, this changes what the focus of the sentence is. It's more about the tea and not me, the person, and what it is that I'm doing. If I say "Brewing I'm the tea," that doesn't make a lick of sense in English.

Same words; probabilistically it's the same words, but they're different because of the order in which they occur. That sentence does make sense in Arabic and in Gaelic and Irish, which have a verb-subject-object format, but it doesn't make any sense in English. So part of the challenge of large language models is also understanding what language you're working in and how that language functions. So how do these things work? Well, the short answer is: like this. This is probably the least helpful answer, however, because it is just a system architecture diagram. What large language models do, and what the companies who make them do, is take an enormous amount of text from the internet.

EleutherAI has scraped about 800 gigabytes of text from clinical research papers, news outlets, and the like; OpenAI's ChatGPT scraped even more. They have scraped billions of pages of text out there, everything they can get their hands on that they're not going to get sued for immediately. So reviews, fan fiction, all sorts of crazy stuff online. And all they're trying to do is gather up all the text they can get access to, so that they can start assigning probabilities to it. These are some reviews.

These are from Amazon and a few other sites, and you can see "I'm brewing the tea" occurs a fairly frequent number of times, right? It's in this review here, and in that review there. And as part of constructing the model, you can also see the words that go with this phrase, right?

When we talk about brewing the tea, we talk about how it tastes, how it smells; certain types of words and phrases, styles of tea, herbs. Again, this is all just probability. We're trying to build out associations between words. You do this all the time naturally, right? If you were a person from England, you would say, "God save the..." and until last year, the last word in that phrase would have been "Queen." That's no longer the case, but you have known probabilities based on frequency.

If you're an American, you say, "I pledge allegiance to the..." Right? It's not a rutabaga. Probably not. But that's how probability works.

We do this all the time. Where this changes is in what the machines do. The machines, again, can't read. So what they do instead is assign values to each of the words that occur in all these texts they've been scraping, millions and millions and millions of pages. Here's a very short example: "I'm brewing the tea."

You assign a numeric value to each of these words, and then when you see them in other cases, the number stays the same; it's the sequence that changes. Again, the machines can't comprehend. What they do is take these reviews, for example, break them into all these different numbers, and then start looking for probabilities, right? So that when somebody says "I'm brewing the...," based on all the language the model has consumed so far, what is the logical next word in the sentence? Well, there's a bunch, right, based on the data you've scraped together.
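A minimal sketch of both ideas, fixed numeric IDs per word and next-word probabilities derived from frequency, might look like this. The four-sentence "corpus" is invented for illustration; real models use tens of thousands of subword tokens and condition on far more than the single previous word.

```python
from collections import Counter, defaultdict

sentences = [
    "i am brewing the tea",
    "i am brewing the coffee",
    "i am brewing the tea",
    "i am spilling the tea",
]

# Each distinct word gets a fixed numeric ID, just as the talk describes.
vocab = {}
for s in sentences:
    for w in s.split():
        vocab.setdefault(w, len(vocab))

# Count which word follows which, then turn counts into probabilities.
follows = defaultdict(Counter)
for s in sentences:
    tokens = s.split()
    for a, b in zip(tokens, tokens[1:]):
        follows[a][b] += 1

def next_word_probs(word):
    counts = follows[word]
    total = sum(counts.values())
    return {w: c / total for w, c in counts.items()}

print(vocab["tea"])            # a stable numeric ID (4 in this tiny vocabulary)
print(next_word_probs("the"))  # {'tea': 0.75, 'coffee': 0.25}
```

Given "I'm brewing the...," the model's "logical next word" is just the highest-probability continuation in a table like this, only at vastly larger scale.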

And each of those candidates has a probability, a probability that is based on context. So part of working with large language models is understanding which probabilities you want to invoke. It's a technique called prompt engineering. When you're working in these systems... let's go ahead and pull one up here and skip out of PowerPoint.

All these things are doing is regurgitating based on the information they've learned. This is pretty good, all right? This is based on a lot of publicly available information. The more frequent and common a phrase is, the easier it is for the machine to put something together about it; the more obscure or arcane something is, the harder it is for these machines to do what we want them to do, because they don't have nearly as much knowledge to associate with those words.

So all the blogging that we've been doing since 2006 has basically been feeding ChatGPT, among other things. I mean, we've been feeding Google for that long, but these language models are now starting to consume a lot of that information as well. So, prompt engineering. Again: you know a word by the company it keeps. When you write a prompt like "write a paragraph about B2B marketing strategy," it's going to spit out something fairly generic, because I didn't give it much to work with.

Remember, it's association. So it comes up with a, you know, reasonably okay, kind of bland paragraph. If I lengthen that to say "write a paragraph about B2B marketing strategy with an emphasis on email marketing and lead generation," what it comes up with now is more specific, because it has more words to work with in the initial prompt, and those narrow the probabilities.

Just like if I say "I'm brewing the tea" and my previous sentences were about a tea house, it's going to come up with completions that make a lot more sense, because it knows what guardrails I want. So in writing these prompts, we are putting guardrails on the machine, and the guardrails are the words we use in the prompts. For example: "write a paragraph about B2B marketing strategy with an emphasis on email marketing and lead generation, focus on the reduction of churn and increased audience loyalty, include details about marketing automation and lead scoring, and write in a professional tone of voice."

Now there are a lot more guardrails, and the more guardrails you put on these prompts, the better the output gets, right? The text it's coming up with now is a lot more useful and focused, because I gave it a lot more to work with. I said, these are the things I want you to focus on. You can also tell these things what not to do, what not to focus on. Again, making these things work is all about knowing the company of the words you want to keep. So let's take a look at a couple of abridged use cases.
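One way to think about guardrails mechanically is as a prompt template you can tighten or loosen. This helper function is not part of any tool mentioned in the talk; it is just a sketch showing how each extra argument adds constraining words, the "guardrails," to the prompt text you send:

```python
def build_prompt(task, focus=None, avoid=None, tone=None):
    """Assemble a prompt whose extra words act as guardrails.

    Every argument beyond `task` adds words that narrow the
    probabilities the model will draw on."""
    parts = [task]
    if focus:
        parts.append("Focus on: " + ", ".join(focus) + ".")
    if avoid:
        parts.append("Do not discuss: " + ", ".join(avoid) + ".")
    if tone:
        parts.append(f"Write in a {tone} tone of voice.")
    return " ".join(parts)

# A bare prompt yields generic output; the guarded one narrows it.
bare = build_prompt("Write a paragraph about B2B marketing strategy.")
guarded = build_prompt(
    "Write a paragraph about B2B marketing strategy.",
    focus=["email marketing", "lead generation", "reduction of churn"],
    avoid=["social media advertising"],
    tone="professional",
)
print(guarded)
```

The point is that the guarded prompt carries many more context words than the bare one, which is exactly why its output is more specific.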

This is one: I said, you will act as a blogger. You have expertise in blogging, content creation, long-form content, content marketing, and content for SEO. Again, guardrails, trying to tell it what to focus on. You have subject matter expertise in statistics, data science, machine learning, and supervised learning. Your first task is to draft a four-part outline for the following question: how do you build a predictive analytics algorithm? And what it spits out is credible. This style of prompt is what we call the role prompt.

Like, you're going to be this type of person. And the reason this prompt works is, again, those guardrails. The more I say you're going to be focusing on content marketing and content for SEO, the better it knows what words to associate with it. If I had said you're going to write a biography, or you're going to write some fan fiction, that's going to change what words get associated with it and what it spits out. You're an expert social media manager.

You're skilled at crafting social media posts that garner high engagement on services like Twitter, TikTok, Instagram, and LinkedIn; in your capacity, so on and so forth. Write some summaries for this text.

So again, summarization is one of the most powerful things this software can do. If you take information that already exists, but maybe is not in a great format, you can have it summarize and condense it down. One of my favorite things to do with these tools: I use, you know, voice recorders and voice recording apps, and as I'm driving around, I'll just sort of foam at the mouth and rant. The automated transcripts I get back, which you can see here at the bottom of the screen, are legible and understandable, but they're not exactly what you would call publication ready.

There's a lot of "um" and, you know, filler and occasional distraction. But I can take the automated text that these things create and feed it into a system like ChatGPT and say, I want you to revise this, I want you to rewrite this. I want you to make it less "angry man yelling at cloud" and more something ready for publication.

And that's what these tools are really good at. The phrase GPT stands for Generative Pre-trained Transformer, and what that means is that these tools are good at creating, but they're really good at revising. They're really good at rewriting content, because in a lot of ways they don't have to work nearly as hard to rephrase things as they do to come up with something new. Another example: I told it, given a piece of text, give me the Big Five personality scores of the author, based on, you know, several thousand words of text.

And so it came up with an explanation of these psychological scores. If you believe in the Big Five methodology, this is a handy way to evaluate text and understand the personality of the author at the time. Here's another example: I said, you are an expert programmer in the programming language R, and I gave it this long use case of what I want the software to do, and it writes the code for me, which is an enormous time saver if anyone here codes. It will get your code about 90-ish, 95% correct.

The last 5 to 10% still doesn't work.

[Audience] Does it matter if you say "expert" or not? I was curious, because you say "you are an expert social media manager"; if you said "novice," is it going to give you crap output?

No, it doesn't matter. I use that just for my own natural-language instruction, because I want those keywords, you know, tidyverse and R and Shiny apps and stuff like that. You could actually just list those words out without any proper grammar at all, and it would still function just about the same.

It's more for our use as humans, so we can look at the prompt and go, okay, this is what you're trying to get the machine to do. It's all word frequency. So as long as you get the right words in... I mean, prompt engineering, if you think about it, is kind of like SEO in a lot of ways.

It's just prompt optimization: what are the words and phrases I need to have that are relevant, that will put guardrails on the machine's output? So yeah, you don't need to use the word "expert." I just like it because it looks nice. It will write code, and it writes about 98% of the code correctly. This is one of the challenges with the software.

It gets it mostly right, but mostly right does not mean it runs, particularly with computer code, right? It has to be 100% correct. So you still need some subject matter expertise to be able to get it to do what you want 100% of the time. You have to go look at the code and go, okay, well, clearly you missed something here, or this makes no sense.

But conversely, it will also do things in ways that you normally wouldn't, so you will learn from it as you work with it. How does it compare to Stack Overflow? So Stack Overflow and GitHub Copilot: against the previous version of ChatGPT, they were better. The current version, 3.5 Turbo, is better; it's gotten better, which is alarming. And part of the reason is that it has what's called reinforcement learning. Every time it creates an output and you don't make changes to that output, it essentially interprets that as, okay, I got it right.

And so over time, the model is basically learning along the way and improving very, very rapidly.

[Audience] Chris, does this retrieve relevant quotes for you, for case studies, for example? Will it do examples and templates?

It will not. It doesn't retrieve anything. Part of the challenge of a large language model is that it's not like a search engine that goes out and finds new stuff. It is constructing stuff from what it knows.

So will it be able to create something that is a solid template? Absolutely. In fact, I've had it do that. I think one of my examples in here is that I gave it a really badly written NDA that somebody sent me, and I said, I'm not signing this, because whoever wrote this was an idiot. And the guy was like, oh, I kind of wrote it by copying and pasting from the Internet.

I said, I can tell.

[Audience] A follow-up. You said that it only works based on what it knows. How often does it update what it knows? Because if it's constantly doing that, it functions somewhat like a search engine.

It is somewhat like a search engine, but the core of GPT-3.5 Turbo is a knowledge base that ends at the end of 2021.

There have been seven updates to the model since then; they're smaller updates based on the feedback people give. It is due for a factual update sometime in Q1 to Q2 of this year, when GPT-4 gets released, but it is a fairly slow cycle, so it's about a year out of date. A huge undertaking to make? Absolutely. I mean, you're talking about hundreds of millions of pages of scraped text, right? So, I had it revise this NDA and come up with a better version of it.

I said, you know, tell me the things that are missing from this NDA, and rewrite it. And so it rewrote it and saved some time. Writing mission statements; doing search engine optimization keyword lists. I gave it some background and said, give me some suggested keywords, and then you feed those into your SEO tool of choice and get your keyword scores to make improvements. Let's see.

Oh yes: writing privacy policies, GDPR-compliant policies. So again, you should have some subject matter expertise. The tool gets 90 to 95% correct, which, with law, is probably not good enough; there's always that one thing that catches you. But it gets you 95% of the way there. This affects any industry that uses language, right? So if you use words, which is pretty much every industry, this software will have some impact on what you do.

There are risks inherent to this. Again, these models are trained and built on the language that is available publicly, and this is true across the spectrum, because OpenAI is not the only company that has one of these; there are many. Now, the old joke is: think of the average person, and realize 50% of the population is dumber than that. Right? Which is mathematically true.

OpenAI has said, we found evidence of bias in our models: our models more strongly associate European-American names with positive sentiment when compared to African-American names, and associate negative stereotypes with Black women. That's because of the text that went in. As these models learn, they are picking up our biases; they are picking up our flaws as human beings. It's one of the reasons why some groups, like EleutherAI, restrict the text they scrape. They said, we are not going to go out to the broader Internet, because the broad Internet is basically a sewer.

So let's stick with the Library of Congress, perhaps, and things like that. But you have to be aware that these models are not something you just kind of fire and forget. You have to be constantly checking their output to make sure they're not doing something you would not want them doing. Now, here's where it gets interesting.

Everything that people have done so far has been in this lovely chat interface. And there are a gazillion and a half experts, who all ran out of money doing crypto, now talking about ChatGPT. Everyone's talking about, here's how you use these prompts to shorten your work cycle, make yourself more efficient, etc. This is not the end game. This is the playground for humans to toy with the model.

The end game is building software with the underlying model. If you're familiar with the software development lifecycle: when you're writing these prompts to make the machine do something interesting, you are more or less doing the first three stages of software development. And now, two weeks ago, OpenAI opened up its API, its application programming interface, to ChatGPT, to say, okay, you can now write software that will connect to this and let your software do the work. So for example, this is sort of the programming interface for testing these things; it's very similar. You will act as a blogger.

There's the predictive analytics outline, the same prompt as earlier, but now I can run this inside the R programming environment, or Python, and things like that. Instead of creating one blog post, I can just rerun the software a thousand times or a million times and generate millions and millions of pieces of content. Once you figure out a prompt that works for you, once you figure out a prompt that does something you think is valuable and important to your job, you can now build software that does it at scale, and that's where the value of what's happening lies.
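As a sketch of what "run the prompt at scale" looks like in code: the loop below builds one chat payload per question and, only if an API key is configured, sends each one. The model name and the pre-1.0 `openai` package interface (`openai.ChatCompletion.create`) are assumptions from the era of the talk; substitute whatever client library and model you actually use.

```python
import os

# The role prompt from the talk, parameterized so it can run at scale.
ROLE = ("You will act as a blogger. You have expertise in blogging, "
        "content creation, long-form content, content marketing, and SEO.")

def make_messages(question):
    """Build one chat payload: a system role plus one user task."""
    return [
        {"role": "system", "content": ROLE},
        {"role": "user",
         "content": f"Draft a four-part outline for the following question: {question}"},
    ]

questions = [
    "How do you build a predictive analytics algorithm?",
    "How do you reduce churn with email marketing?",
    # ...or a thousand more rows pulled from a spreadsheet
]

payloads = [make_messages(q) for q in questions]

# Only call the API if a key is present; each payload becomes one completion.
if os.environ.get("OPENAI_API_KEY"):
    import openai  # pip install openai (pre-1.0 interface assumed here)
    for msgs in payloads:
        reply = openai.ChatCompletion.create(model="gpt-3.5-turbo", messages=msgs)
        print(reply.choices[0].message.content)
```

The design point is the separation: the prompt is data, the loop is the scale, and swapping the question list for a database query is what turns a playground prompt into production software.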

That's the way you're going to make your money. If you're good at prompts and building a good prompt, you are a developer now, right? If you've got a working prompt, like, hey, take this YouTube subtitles file and turn it into an article, or summarize this text, or rewrite this for comedy, or, you know, take any of the prompts people have played with: now you can turn it into software production at a much, much bigger scale. Which means that one of the things people keep asking about, is this stuff going to take jobs? Yes, it will. There's really no debate about that.

For those of you who are marketers, what is the impact going to be on marketing? Well, we've already seen Microsoft roll out its ChatGPT integration into Bing, and it was a bit of a rocky rollout, but it is functional and in production. This is changing how people search, because now, instead of having to sift through page after page of information, it just summarizes the information for you. You can learn more by clicking on any of those results; most people don't. Most people are okay with the answer they get, and they move on with their day. What that means is that for things like organic search, if you do SEO, a big chunk of your unbranded organic search is going to go away. Here, we're asking to learn more about B2B marketing.

This is the ChatGPT native interface. Here are all the industry publications and blogs, and there's not a single clickable link in here.

So if you are relying on people finding your website through search and visiting it: when they're using these native models, it very rarely makes any kind of link-based recommendation. Occasionally it will do that if it's about something that is branded, right? Branded search, even in the ChatGPT native model, still does have clickable links. So your first task as a marketer, if you're doing anything with search, is to go into Search Console or Bing Webmaster Tools or whatever your engine of choice is, and look at the percentage of unbranded organic search that comes to your website. Whatever that percentage is, whether it's 5% or 95%, realize that as search engines start integrating this technology, you're going to lose that traffic, right? That traffic is going to go away.

Branded search is probably going to be okay, but unbranded search, not so much. There are three things in general you should be focusing on as marketers if you care about that impact on your business. Number one is your brand. Your brand, and the ability for people to remember who you are and find you by hook or by crook, is the most important thing you can build, right? We've been saying this for years, and we've been doing, you know, podcasts and stuff as part of that for 17 years now. But it's at a point now where the machines, and particularly the large technology companies, have decided they would like to send us less traffic and keep more of it for themselves. And the only way around that is to get people to search for us specifically.

And that means our brand. Second is to have some kind of publication that you own, something that is yours, that others can't take away from you. Email newsletters are one of my favorites; text messaging lists are another.

Maybe, you know, some other form of publication; there are companies that do very well with boutique print. But you need to have some method of reaching people that is not intermediated by a machine. Email is one of the last things available where there's still very little between you and the reader. Compare that to social media; compare that now to search, etc. And the third thing you need to have is some kind of community, ideally one that is hybrid: in real life, like we're all sitting here today, and also in virtual space. So this is a Slack group that we run called Analytics for Marketers.

There are a gazillion and a half different Discord servers that you can join that have these communities. These communities are important because they're disintermediated: there's no machine arbitrating what you see.

You just jump into them, and it's kind of like 2007 Twitter; it's just what's happening as it's happening. It's a bit of a firehose in some of these communities, but it's essential. So finally, who's going to lose their jobs? Well, the Brookings Institution said, hey, AI is going to take away tasks, but not jobs.

That's kind of true. They said that mainly to avert panic.

But the reality is that if you are a marketer who is skilled with AI, or any professional who is skilled with AI, you will replace a marketer who is not. Any professional who is not competent at AI is an endangered species, because of the speed at which these tools are proliferating. How is this going to cost jobs? If you think about someone's workday, right, it's a bunch of tasks broken out over time, and you say, well, how much of this can I automate? How many of these individual tasks can be turned over to machines? What's left, then, are the chunks that were not automated. A progressive company will say, let's retrain people and fill up those time slots. A regressive company is going to say, let's add up that block of free time, get rid of a seat, and save some money. So the kind of company you work for will determine whether AI consumes jobs or not. But that's kind of where this stuff is going.

So that is the end of this time slot. Any questions?

[Audience] What does being trained for AI really look like, if I'm going to be an AI-enabled marketer?

For a marketer who is AI-enabled, it is learning how to use the various tools, like ChatGPT or GoCharlie or Jasper, any of these tools that can do content creation. It is learning how to use generative image tools like DALL-E 2 or Stable Diffusion to create imagery. It is learning the landscape of what tools are out there and what capabilities they have, things like transcription and translation, and then figuring out how you apply those tools within your work.

So, you know, the obvious things like text generation: okay, I can write content a lot faster now, I can have machines generate first drafts. But I can also use it to do summarization and access a lot more information. I can have the machine summarize and identify key points, you know, boil down enormous amounts of text into very, very small chunks that I can make better use of.

A big portion will be in customer service and customer care, where you can have these models, which can now have intelligible conversations with people, and deploy them so that customers not only get the assistance they're looking for, but also don't feel like they're talking to a moron.

[Audience] Two questions. First, have you checked out Snipd, the podcast app, yet? It's got some AI built into it that will gather things. And the second question is, how has this impacted your own workflow between Marketing Over Coffee and the Almost Timely newsletter? We've been subscribers for a long time.

Well, thank you.

I have heard of Snipd. I've tried out Descript; Descript is also a fantastic tool that does a lot of stuff with voice synthesis, where you can have it trim out stop words and filler and sound more like NPR and less like you're drunk, which is good. And in terms of how I use this: a lot of it is for generation, but much more is summarization.

So, when I write a newsletter, very often I'm in the car, or I'm, you know, walking the dog or something like that, and I have a bunch of stuff that I want to get out of my head, but it's not coherent. So I feed in the first draft, which is, you know, eight or nine pages of rambling, and say, okay, summarize this, find the key points. Then I can go back and refine that and write from it, which improves the output.

It doesn't just speed me up; it improves the quality of the output. And again, that's where I think these tools, particularly the text tools, are really powerful: improving the quality of output. Transcripts are a goldmine. Bill Marriott was famous for this years and years ago. He was the CEO of Marriott Corporation, and he used to blog fairly regularly on the Marriott blog. He never set pen to paper.

What he did for over a decade was leave voicemails for his marketing department about whatever was on his mind, and then someone ghostwrote that. Well, now you don't need to have a staff of people to do that; you can have the machines do a lot of it. When you look at the more prolific content creators out there, the folks who have several million subscribers on their YouTube channels and such, a lot of their team's work is just admin, right? It's just people following somebody around with a camera, editing, making all these short clips and stuff.

And a lot of the newer content engines can now do that automatically, where you can say, okay, take this video, make a square video format, make a clip of this, and so on. And with the text-based understanding of the models, they can say, okay, these are the five snippets that have the most relevant points; let's clip those out and turn them into something. I believe Descript allows you to do that now.

[Audience] With Snipd, from the user's perspective, if you're listening to a podcast: the app itself may not be free, but you get three shows per day where you can ask it to generate, and it will come back with podcast chapters as well as a transcript. And you can make clips based on that transcript and then share them.

Yep, exactly right.

[Audience] My question was kind of covered, but I noticed when you talked about having your own properties, you mentioned email newsletters; you didn't mention a blog or a website.

I didn't, and I didn't mention a blog or website because I'm still figuring out how we can avoid feeding the models all of our stuff and giving all of our content away for free, which they then use to not send us any traffic. A large language model kind of breaks the agreement we have implicitly with a service like Google: we make a ton of free content for Google, and Google in exchange sends us new customers. That's no longer true with these language models.

So the question is, where do you put your best stuff? I would say your best stuff goes in publications that directly benefit people who've agreed to sign up to hear from you, and then your less good stuff maybe goes on a blog or something like that. But I also want to mention that there are now tools that can summarize a YouTube video, which is one of my favorites. You can give it, like, a two-hour video and it'll spit out a nice synopsis. Really handy. Other questions?

[Audience] How are they determining what they want to feed it? Because it's only as good as what it's actually being fed. How are they determining that something on, say, Wikipedia or wherever is actually factual versus, you know, an opinion on something?

They're not. They're not.

And that's why these things are somewhat risky. That's why there are more curated models, where you can say, I only want certain types of information going into the corpus. And even then... I mean, if you go onto bioRxiv, the number of preprints released in the last three years that have since been retracted is higher than in many years prior; there's been a lot of retraction, you know, with the pandemic and such. So there is no guarantee of quality with these large language models. Now, for a lot of uses, like summarization, that's probably not going to impact your work substantially. But if you're doing generative stuff, particularly around, say, political topics or any kind of inflammatory topic, it's a minefield, and it's not pretty.

Yeah. When you were talking about building community, you mentioned Twitter. Where's that going? Ha! Ask a doctor with a flashlight, they'll show you. Is it a wormhole? Yeah, exactly.

And I'm like, sure, I'd love to do that. And I have been told that for a long time, but I would not be confident, not confident at all, in the platform.

In fact, there are some very good tools coming online to scrape and extract all of your data out of Twitter so that you can take it with you. I would not build anything on Twitter unless I had no other choice, and I would absolutely feel free to... here's the thing. There's sort of an implicit, I guess, type of manners we use on social media, right? Not to spam and harass people and things like that.

When Twitter had a little change in management back in October, all those rules went out the window, along with the entire trust and safety team. They're completely gone. Almost all the moderation is gone; it's all automated now. So pretty much you can do whatever you want on Twitter. So I would say, if you've got an audience on there, feel free to direct message them all.

Say, hey, I'm leaving, I'm going over here, follow me over here, things like that. Use bots. There's no one minding the store about automated programs anymore, to the point now where there are some very amusing political games being played on Twitter between various factions and gazillions of bots all yelling at each other. So no, Twitter would not be my first choice of a place to build. I would build on Discord.

Yes, Discord. For anybody whose audience is under 35, Discord should be the place that you build your community, because (a) that's where those people are, and (b) their revenue model has no advertising. Their revenue model is solely a variety of subscriptions, which means that, you know, individual users can upgrade to Nitro, etc., and get more emoji in their profiles and things.

And then companies or entities who run servers can get these things called Boosts: you pay a monthly fee to have better video quality for streaming and such. It is by far, I think, the most community-friendly platform, and it's a private space. Search engines can't see it, large language models can't index it. And so it is your space to do with as you please, within the terms of service. So I would spend my time there. And then we'll go on.

Well, it seems to me like they say it won't affect jobs, but if you get good and you learn how to use it, you won't need an editor or an assistant. You do it yourself. Like, let's say you have three people working for you. Yes. Once you're set up, those people won't be needed, right? So they're out of work? Yes.

Until they upgrade their skills to be able to work faster with the help of machines. It's very strange to me how people are just talking about this all the time. I've never seen any product or event that everybody is talking about like this. They all think they're experts.

Are they saying this is good or this is bad? And it's just weird. It's weird, but it's not surprising. It is not surprising because the last three years have really kind of warped our ability to communicate just in general, and now we're able to communicate with a machine that can simulate consciousness. It is not conscious, it is not sentient, we are far from that, but it can simulate the kinds of conversations you would have with somebody who is more empathetic than perhaps the friend group that you have.

That's very appealing to a lot of people, right? There's a lot of value in having someone to be able to talk to. If you are a solopreneur, for example, and you're used to working in an office, having something to bounce opinions off of in a very nonjudgmental way is very popular. So there's a lot of value to that from a sociological perspective. It happened to me once: you mentioned, Dave mentioned, Siri or Alexa on the television, and then my Alexa and I started talking.

Yep, I've got a question. Okay. Without trying to drag you into Discord here: I'm not familiar with Discord, but if it's private and not searchable, how does that change the marketing between that and Twitter, where you're so used to using the common engines like Google to actually expand your reach? So this is the great challenge now for folks who have Discord communities: they typically have some kind of publicly facing community where they can essentially recruit and bring in new folks.

Twitch, for example: a lot of very popular Twitch and YouTube personalities have Discord communities, and they use those essentially as feeders into their community. So as a marketer, you have to be thinking a lot about influencer marketing, identifying who else has your community that you don't yet, and working with those folks, hopefully in a noncompetitive way, to bring in new audiences. And a lot of it's word of mouth. You know, if you've got an audience that is not completely hermits, then they themselves have friends and such, and so you can grow community that way. Communities typically have these odd benchmarks where you see phases of growth: around 500 users, around a thousand users, 2,000, 5,000, etc.

We see these big steps as you hit critical mass along the way that bring in a lot more folks. So it's actually much harder to get to your first thousand users than it is to get to your second thousand on Discord, because of the power of word of mouth. So a lot of it is building those relationships with folks and finding people who are good at influencer marketing to grow a community. Yeah. Two comments.

One, on your comment that it's not surprising: it's because we've been living with artificial intelligence, from HAL to C-3PO and all of these things. It's been part of our culture, and now it's for real. Yes. And the second is a question: how do you see mashups, like the one we have with OpenAI and Wolfram Alpha, using these plugin types of frameworks? A game changer? Do you think that's a key part of where we'll be moving, so we get both conversation and accuracy? There's a ton of startups working on trying to essentially build fact-checking layers onto a GPT-style model, and they are of greater or lesser skill right now. Microsoft's is the big one, and it's actually pretty decent, because what they've done is they've basically made a summarization layer that can take their search results and then essentially feed them through a language model to create the language. But it's built on the search engine data that they have.

So they've done a very neat twist on it. I think that, for anything where facts actually matter, which unfortunately seems to be less and less these days, that is going to continue to be the gold standard for a while. The large language model itself, because of the infrequency with which it updates, is unlikely to ever be a gold standard of correct knowledge, because it's so prone to hallucination, which is when it's asked a question there is no logical answer to, so it just makes something up. Imagine something like this: if you ask it who was president of the United States in 1492, it will not know what to do, so it will just make something up that's factually wrong but plausible.
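The Microsoft approach he describes, grounding generation in search results, is what's now commonly called retrieval-augmented generation. A toy sketch of the pattern, with a naive keyword-overlap retriever standing in for a real search index; all names and documents here are made up for illustration:

```python
def retrieve(query, documents, k=2):
    """Rank documents by naive keyword overlap with the query and
    return the top k (a stand-in for a real search engine)."""
    q = set(query.lower().split())
    scored = sorted(documents,
                    key=lambda d: len(q & set(d.lower().split())),
                    reverse=True)
    return scored[:k]

def grounded_prompt(query, documents):
    """Build a prompt that asks the model to answer ONLY from the
    retrieved passages, reducing (not eliminating) hallucination."""
    passages = "\n".join(f"- {d}" for d in retrieve(query, documents))
    return ("Answer using only the passages below. "
            "If they don't contain the answer, say you don't know.\n"
            f"Passages:\n{passages}\n"
            f"Question: {query}")

docs = [
    "Bing Chat feeds web search results through a language model.",
    "Mick Jagger was born in Dartford, Kent, England.",
    "New Jersey has a county named Sussex.",
]
print(grounded_prompt("Where was Mick Jagger born?", docs))
```

The instruction to answer only from the passages, and to say "I don't know" otherwise, is the main lever against hallucinations like the 1492 example above; the model is steered toward the retrieved facts instead of its training-data guesses.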

And that's the danger with these tools: unless you have subject matter expertise, you don't know when it's hallucinating in a subject you have no expertise in. Like if you were to ask it about the S1 spike protein in SARS-CoV-2, it's probably not going to know the answer about how that impacts, say, microclots in your vasculature. If you know that, you can assess its output and say, okay, again, 95% correct. But as with many things, it's one of those areas where 95% correct is not good enough. Right. So, an anecdote about the accuracy: I asked it to produce a playlist of songs for a radio show that my friend does.

It's all Jersey artists and music, a two-hour radio show. And again, you know, Jon Bon Jovi and Bruce Springsteen, predictable stuff. And then it said Mick Jagger of The Rolling Stones spent his childhood in New Jersey. And it turns out the reason it thought that was because at some point he was arrested for drug possession in West Sussex in England.

And Sussex is a county in New Jersey, so it was absolutely confident: yes, Mick Jagger grew up in Sussex County, New Jersey. Yeah. I mean, if you think about it, it's a lot like humans. There are a whole bunch of humans who are constantly wrong about a whole bunch of things.

Does that mean we're approaching consciousness now? I would say we're going the opposite direction; humans are getting less conscious. So, at the risk of sounding negative, and I don't mean to be, I'm just trying to get my head around this.

If we all start to use ChatGPT for our public-facing content, whether it's blog posts or marketing, instead of one blog post a week or every two weeks, now we're doing two a day, because we can do that now. And then in 2024 the models ingest all of that again. Are we going to end up with some kind of singular voice, where we all write to kind of a bland level? Do we all get either dumbed down or homogenized together? So that's an interesting question.

No, no, it's a valid question. The short answer is we already write bland content. I think if you read what most marketers write, it's pretty awful.

And so, yeah, I'm sure for the folks in this room that's not the case. But there's an awful lot of marketing out there which is just not great, and so it's already learning from that. The amount of text online that is marketing text is actually relatively small. You know, we think of ourselves as being prolific content creators, but we're really not. When you look at how much text flows through, say, Reddit on a day-to-day basis, it dwarfs anything marketing can create. So conversationally, the tools will continue to evolve toward whatever the standard of language is online as a whole.

There are a gazillion and a half new academic papers being published all the time, and those, at least for now, have managed to retain a certain level of linguistic quality. But there is the possibility that what the tool spits out is average, or a bit above average. I would say it's what you would expect a sober college student to create. And that's better than most people, right? So yes, there will be some homogenization, particularly of marketing content, because marketers, you know, are overworked and will just take the shortest possible path to an outcome.

But again, that's where things like summarization and rewriting I think are really handy, where you can take the content that you've created with your voice and have it rewrite it, but it preserves your ideas and your language. It just fixes the fact that you forget the Oxford comma all the time. Then the next question: we were chatting a little bit about what happens with SEO. Is it still pulling out those key phrases? Is it still good to have those skills, because you can optimize the prompts better? Or where is that going to be relevant, because when things go in and it's not a branded search, you may not get the juice that you used to get out of ranking well.

The big question we don't know is how Bard will function, which is Google's entry in the large language model space. We've seen, obviously, 100 million people sign up for ChatGPT in less than 60 days. There's obviously an appetite for the application of what it can do.

Google's first attempt at demonstrating Bard was not particularly successful; it hallucinated some things that were untrue. But they've clearly demonstrated that they have the capabilities. How that plays out in their version of search, because they still control like 98% of the search market in most domains, we don't know; we don't know what form it will take. And we also don't know how well adopted it will be as the primary method of search. Early indications, again from ChatGPT's success, are that there's not a small amount of the audience that would prefer to interact with knowledge bases in a more conversational format.

So how Google chooses to implement it will dictate how much of an impact it has on SEO, along with what OpenAI continues to do to evolve the product. You know, it's now available in Snapchat. You can have a conversation with a machine within Snapchat when you're bored of talking to your human friends.

So, to piggyback on what you were saying, because you're talking about how it affects people: I work in education, and they're out of their minds with this. They're worried about, yep, students using it, yep, and any kind of website blocks on their Chromebooks, which don't do anything.

Yeah, no harm. So overall, what are your thoughts on how this affects that generation? Because most of the teachers are like, you know, they're lazy already, this is going to create even lower standards for them. So this raises a very interesting question: what is the value of regurgitating information into an essay? Right? Because the machine can do that. The machine can do that really well. So what is the value of that particular work product? It's like, what's the value of a blog post? If it's a mediocre blog post that you've put up about something, does it add any value to the company? Maybe. Does it add any value to the people who are consuming it? Again, maybe. Education has had a problem for the last 30 years, and education's major problem is that it has failed to recognize it is no longer the gatekeeper of information but is now the arbiter of quality.

Right. So getting a degree does not mean you know anything. My entire graduating class from F&M is a testament to the fact that you can be drunk for four years and still get a college degree. I apologize to anyone else from F&M in here. But it certifies that you are at least minimally competent at getting through something in life that takes a fair bit of effort and money, which sets you apart from the rest of the crowd. All right. So if you're in education and you've got someone who is fluent at using machines, those are the skills today that people need to have.

Not being able to write an essay about Columbus's, you know, genocidal tendencies; any machine can do that. It is: can you fact-check it? Can you prove it's true? Can you advance and add knowledge where knowledge did not exist prior? And that's where the machines won't help. Right. The machines are incapable of doing that, because they can only work with what they've been given.

One of the most interesting things, I think, in clinical papers when you read them, particularly NIH papers, is that there's a whole little box that says what is already known and what value is added by this new research. And in all these papers they talk about, here's what we've learned from our new research. Education's role is to help people become contributors, right, to help people advance their fields. And so if education is just teaching regurgitation, then it's doing students a disservice, because they're not creating anything new. Can a machine do that? To some degree, but it still can't synthesize new things, and the language model is going to take a while to get there. I think we'll probably get artificial general intelligence before then.

But is there a good answer for educators? No. I mean, the existing models of education are so badly broken that a student can use ChatGPT for them. I helped a student with their 16th-century art paper the other day, a friend of mine in Discord. They're like, I need to write this paper. I said, here, let's do the first draft with ChatGPT.

About 5 minutes later they were done, and they got a 96 on the paper, because the teacher was unable to tell the difference between what a machine had written and what they had written. What was the actual skill? Was it the knowledge about 16th-century art? Or was it the ability to use AI intelligently? Yes, the latter. Right. So then, I'm curious about this as a product.

So you made the point around bias. Yes. That there's a problem that needs to be solved. So people are going to create products like this that are going to fill that need.

The issue of how you get language that's less biased, or not biased, addressing things like indigenous language and culture and whatnot. Right.

So I'm curious: as a product, is this something that you see becoming a competitive space? Yes, absolutely. There are entire groups of people who strongly object to some of the ways that ChatGPT behaves. There's one conservative group that says, we believe that racist content should be permitted, that freedom of speech should be absolute.

You should be able to have a machine say whatever you want it to say, without restriction. And that is a group that will have some technical skills somewhere in it, and they will craft their own version of it.

And I think there is space for those systems. What is missing from the AI field as a whole is that there is no standard of ethics yet. Right.

Right now it's still very much the Wild West, and how that plays out, we don't know. The one area where I think we have a substantial, very high risk is in machines that are not large language models: the technology exists right now for machines to autonomously make kill decisions. This is generally a bad idea. You know, at the end of the day, a human being should still decide whether or not to take another human being's life.

I think leaving that up to machines is probably not a great idea, but that is the direction a lot of militaries are going, and I think that's probably very problematic. So you said that the bias influences sentiment. Yes. If I'm a marketer and I look at something like focus or sentiment, I can see all of that. But at the same time, I'm wondering: when I'm writing, does it pick up my voice? Does it understand who I am through the words, to sound like me, or does it come through in this relatively vanilla format? The out-of-the-box model is still mostly generic, right? With the previous versions of this model, you could do what's called fine-tuning, where you would give it a lot of your content and it would essentially create its own biases and say, I want to use the words and phrases and patterns that you use.

Now, fine-tuning is not available for this model yet. It probably will be relatively soon, and at that point you can train it and essentially build your own model around your lexicon, the way you speak, the language you use. That said, the more detailed your prompts are, the more likely it is to sound closer to you.
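For reference, fine-tuning on the earlier models worked by uploading prompt/completion pairs as JSONL. A rough sketch of preparing your own posts in that shape; the separator and end-token conventions follow OpenAI's older fine-tuning guidance, and the helper function itself is hypothetical:

```python
import json

def make_finetune_records(posts):
    """Turn (title, body) pairs from your own writing into
    prompt/completion training examples, one JSON object per line
    (JSONL), so a fine-tuned model can learn your voice."""
    records = []
    for title, body in posts:
        records.append({
            # "###" marks the end of the prompt; " END" is a stop token
            "prompt": f"Write a blog post titled: {title}\n\n###\n\n",
            "completion": " " + body.strip() + " END",
        })
    return "\n".join(json.dumps(r) for r in records)

posts = [("Why the Oxford comma matters", "Commas, clarity, and craft...")]
print(make_finetune_records(posts))
```

Each line is one training example; with enough examples in your voice, the fine-tuned model biases toward your vocabulary and phrasing instead of the generic house style.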

And if you are having it do a rewrite, where you're providing the raw text that is your voice, it will preserve much more of your voice that way. All right. Well, thank you, everyone. Have a good rest of the day.

2023-03-21 16:57
