NYU Law Forum—AI in Law Practice: What You Need to Know

NYU Law Forum—AI in Law Practice: What You Need to Know

Show Video

- Welcome, welcome to the Latham and Watkins Forum. My name is Chris Sprigman. Today's topic is the use of AI in your lives as lawyers.

So, this is a topic that I think is developing really rapidly. We're at the beginning of it, but we gotta a few people here today who are incredibly well qualified to tell you a little bit about this and to give you some things to think about. So, let me talk a little bit about me. First, introduce me if you don't know me. I'm the Murray and Kathleen Bring Professor of Law here at NYU. I teach a variety of courses, IP, Antitrust and others.

I also practice law somewhat unusually for tenured faculty here at the law school. I, along with some of my IP colleagues, have a small kind of specialized law firm that does IP work. I spent a long time as a lawyer, both in private practice and at the Justice Department and Antitrust division before I became an academic. And so, you know, I didn't use these tools because they weren't there for me to use. But I definitely learned a lot about legal technology as a lawyer and have been following because I am still a lawyer and because I'm very interested in what lawyers do have been following the development of these technologies. So let me introduce the people who've come here to join us today.

First is our graduate Ghaith Mahmood, who's a partner at the Los Angeles office of Latham and Watkins. He's global vice chair of the firm's data Technology and Transactions practice. And he also leads the firm's artificial intelligence task force. And when he has busy with those jobs, Ghaith advises clients on all aspects of IP and technology transactions. I should mention that Ghaith is in New York this week, as are the others on the panel, not just for us, but also because American Lawyer is hosting its Legal Week conference and he was nominated for the American Lawyer Innovator of the Year Award.

To my left is Anna Gressel, council two to my left is Anna Gressel, counsel in the New York office of Paul Weiss and a member of the firm's Digital Technology Group. Anna advises clients on a wide range of matters related to ai, blockchain, and other innovative technologies. And she's active importantly in industry efforts to establish best practices in the areas of ai risk management, compliance and governance. A topic we're gonna talk a bit about today. Anna's work in legal AI has been publicly recognized. She was recently named by Rework as one of the top 100 women leading AI in 2023.

Now notice I said Rework not WeWork. If she was named that by WeWork, I probably wouldn't be talking about it. Anna is also a leader in the development of women in the profession, currently serving on the American Bar Association Commission on Women in the Profession.

And previously a member of the board of directors of Ms. JD, a national nonprofit organization dedicated to the success of women in law school and in the legal profession. She also co-chairs the National Association of Women Lawyers Next Level Affinity Group, Betny Townsend.

Immediately to my left is the head of product at DISCO a company that provides artificial intelligence, cloud computing, and data analytics tools to lawyers and to law firms. Betny graduated from Harvard Law School in 2012 and practiced at the San Francisco Litigation Boutique, very fine litigation boutique of Keker and Van Nest. She now spends her time at disco thinking about how to incorporate AI into litigation workflows to free up attorneys. And we'll hear a bunch about this for higher value work that they might actually enjoy. Okay, so with that, please join me in welcoming the members of the panel. (audience applauding) So let's get started with some technology just to make sure we're all on the same page.

You've probably heard a couple key terms, artificial intelligence, machine learning, and you're wondering, you know, what are the difference between these things? Exactly what are they and what role do they play in legal practice? So let's just start by defining those terms. I'm gonna ask Dave to get us started on that. - Yeah, hi everyone. It's an honor to be back on campus and lots of nostalgia. So this is the first thing that we get into when we're advising clients or we're thinking internally about risk with using AI tools internally, is what is the AI tool exactly. AI is this really broad category of stuff that basically means any system that allows a computer to mimic human intelligence.

Well, there are variety of ways that you can do that, right? And the two dominant ways that I like to kind of explain it is using games because I'm also a video game lawyer and do a lot of video game work. So two games that we talk about chess and go. Right show of hand, folks who play chess regularly, anybody on chess.com good client

do a lot of cool stuff with them. How about Go, anybody play Go? Okay, Go. Okay, and Go, okay? So chess has probably a set number of permutations for how a game can go, how a game can operate. And in 1997 there was an AI tool Deep Blue that beat the then reigning Gary Casper of chess champion, right? That chess, that AI system was called an expert system. An expert system. One big category of AI are basically where experts come in and they write the rule sets, they write the opening gambits, they say, if Queen is here, probably a really good move is to do this.

And they really fed millions and millions of rules into this AI system. And the AI system really sophisticated to be able to decide when to apply a rule that not to underplay expert systems, highly sophisticated but a little harder to scale. And what do I mean by that? Then you get into Go. So Go has billions of potential permutations for how a game can can flow.

And it's really not practical to just code each and every rule for what to do when a piece is here or when a piece is there. And instead they came up with a tool. What we now really relate to as machine learning, that's where all the action is right now in AI machine learning. And to explain what machine learning is, think about what they did for Go. For Go, they fed into the machine just millions of how games went.

So first turn was this second turn was this, third turn was this. And they might tag the games, they might say in this game somebody played really aggressively in this game that we're playing really defensively. But we're not actually gonna tell you machine what to do, we're just gonna tell you what happened and we're gonna rely on you machine to extrapolate from that rules on your own.

That's called the AI machine learning model that you might hear of. A machine learning model is basically a set of weights and parameters of what the most likely outcome should be. And it is trained on a corpus of data, the corpus of data.

Here are the millions of games that were fed into it about how to play Go. And from that corpus of data is generated a model. And that model is what is used. It's the brain for how to tell an AI what to do. That distinction between expert systems and machine learning is critical. And it's why you often hear right now, and you've heard for years, data is the new oil or data is what everybody needs because it's that data that is fed into systems to generate the ML model and the accuracy, the reliability, the non-bias of that data is what generates a really quality machine learning model.

- So I wanna talk now about how these technologies are in use or will soon be in use by warriors. So, and to tie it to the first question, when I was in practice, we had sophisticated search tools, for example, to go through large piles of documents to pull out terms that we were looking for in documents. So that's definitely a electronic aid to discovery. What can these new technologies do that those technologies couldn't do? So maybe to get an understanding of the use cases in law firms or for lawyers generally wherever they work, it would be, it would be good to understand what does AI and machine learning allow us to do that we couldn't do before.

- I can take that one. So something that you touched on right there, right is discovery. The process of figuring out what documents you have to turn over to the other side and then reviewing those documents that you get from the other side. So this can include hundreds of thousands or even millions and millions of documents in some of the big cases. So lawyers have actually been using artificial intelligence to assist them with this for quite some time.

And that hasn't looked like any of the generative a AI stuff, which we will, which I will talk about in a second, but has essentially relied on like a recommendation algorithm. The kind that you see with Netflix, right? You watch a bunch of movies, you say which ones you like, and in the background Netflix is saying, oh, this person likes action movies maybe with a certain actor. And so that technology has been used much more sophisticated than the algorithm that Netflix is using. But essentially learn from the behavior of lawyers to recommend documents that they might want to review and to suppress documents that they don't want to review. So, lawyers start looking at documents, they start tagging certain ones as relevant, the AI watches.

They say, okay, they want documents with these characteristics. I'm going to push those ones to the top and then ones without those characteristics, I'm going to suppress. Something about that. Well it can make attorneys significantly more efficient, it still requires a lot of human input.

What we are starting to see with the rise of generative AI is an entirely different workflow because generative AI you can kind of interact with like you would another human, right? And so what we're starting to see is the ability to say, hey, I am looking for documents with a certain description, right? Ones that relate to financial records or ones that have relate to corruption or fraud. And you can sort of give a really high level description of what that looks like. And then you can start showing documents to a large language model, right? Like GPT-4, which powers chat GPT and essentially say, Hey, does this document meet this description? And the LLM can give you an answer. LLMs are also really good, right? Generative AI at providing text. And so you can say, can you give me an answer on this? And also give me a reason why. You can imagine if you were gonna have a room full of attorneys do that on a million documents, that would take an outrageous amount of time and also be incredibly expensive.

And LMS are not cheap, right? Like the technology is still quite as expensive and I expect the cost to come down. But you can do that in a fraction of the time and in a fraction of the cost. That's one of the things that I am particularly excited about seeing expand over the next couple of years. LLMs are also really good, right? Generative AI at doing things like summarizing right, lawyers deal with really long documents all of the time. Deposition transcripts, right? Can be hundreds and hundreds of pages.

And one of your tasks is a junior associate, often after you go out and take a deposition is to get the transcript back and then summarize it and send it to the client. That can take hours, right? It was pretty common to bill, I don't know, four, eight hours of associate time, which can be many thousands of dollars. Generative AI is really good at giving those types of summaries.

So those are a couple things that I think we're likely going to start seeing in the next couple of years. - So bey discovery, dealing with deposition transcripts, these are all tasks that are mostly focused on what litigators do. Yeah, so Anna, are there tasks that AI can do that are more tasks for corporate lawyers and what are those and how does AI address 'em? - So I'll answer that, but I also think that you should weigh in on this as well because I am a litigator, but we do. So my practice focuses on advising people within corporations on their adoption of ai.

And I just to give you a little bit of context and where I'm coming from, a really typical type of conversation that we have with our clients these days in addition to like how do we integrate AI into a consumer facing model and what do we need to do from a regulatory perspective to make sure that that or an IP perspective to make sure that's defensible and that is within the risk tolerance of the organization. We also work with our clients on their internal adoption of ai. And so to take a step back for a moment, why are companies really interested in internal adoption of AI and particularly generative ai? It's for all the reasons that Betny mentioned, which is this is a tool that unlike I would say prior iterations of ai and I've been in this space now for like six years prior iterations of AI required very clean data sets. They required often comp structured data that existed in Excel or very clearly tagged data that it may not have been necessarily an expert system, but required a huge amount of human intervention to make sure it was usable and actually could come up with reliable predictions. Generative AI is totally different, as many people here probably know it's trained on most of the internet.

It really works by finding semantic connections between words and using that to either generate a lot of text or understand meaning in text. And so for corporations, why is this super helpful? It's because they sit on a massive amount of internal data, right? They have internal policies, documents, sometimes going back 20, 30, 40 years, some of which has been scanned and who knows what form it's laying around in. I mean this, there have been efforts at within companies to digitize, their records over the past several years and make them more useful.

But now they have tools to actually discover information in those documents. So we see a lot of different use cases that are being explored. I mean, in addition to some of the contractual ones, we'll discuss other types of, I would say corporate use cases are, for example, compliance.

So can we figure out what the right policy is on gifts that we should be telling people about in X jurisdiction for x kind of purpose, right? I mean these are companies that are global. They may have hundreds of policies that could be applicable to a certain situation and how do you find the right policy quickly and get it to someone. They're also, we're looking at, we see a lot of use cases around in or internal investigations actually that's an area where I think people are quite interested in saying, we have this employee that came to us and has a grievance. How do we figure out the chronology of what happened very quickly that used to be given to outside counsel to do? And now I think there are tools that are making it easier to come up with chronology is based on a document set immediately without anyone having to look at the documents.

And it's not going to replace their review of those documents, but it points them in the right direction. And it's that ability to find a document, find the information point back to a source, particularly around some of the newer gen AI iterations that is really creating a lot of efficiencies within companies. - I want to get your views on this, but I did want to ask you something specific about law firms.

It's a big firm like Latham and Watkins. So I've worked at large firms like that and one of the things that's always a little bit tantalizing for lawyers at the firm is on virtually any question that you might be asked to address. Someone at that firm has thought about that question once upon a time. The person who thought about that question may not even remember thinking about that question.

'Cause it might have been months or even years ago, so much water under the bridge, all the documents that the firm produces get dumped into a system, right? They're there, but firms that I worked at, I'm sure Latham has a search facility, but searching is often difficult because different lawyers have different ways of tagging different arguments. There's no set set of standards. And so knowledge management at law firms is potentially a huge way of unlocking value, right? All that capital of the law firm kind of stored away on a hard drive we want to get at it. Yeah, does AI have a role to play in that? - AI is I think already playing a role in a lot of that.

There are already algorithms that are, when you type a search and iManage desk site, which is a tool that a lot of folks use for loading all their documents, it's actually gotten a lot better because algorithms are trying to predict what they think you actually mean rather than what you typed. Sure you typed this, we think you actually meant to type for that. And they are additional layers that are being used to search what they are not doing yet, at least to my knowledge. And if they are, that would be probably an issue that we gotta dig into is actually taking all of the information that we have and generating a new model from that. That is the next unlock that a lot of law firms I think are being very careful and deliberative about.

You gotta look at client engagement letters and rules of ethics. Am I allowed to take a client's materials and generate my own model from it that I'm now using? Or can I generate, kind of an agnostic generic model and then apply that to what I'm doing with my client materials? That question is what a lot of law firms are struggling with. - Yeah, I, this leads exactly to something I wanna talk about, which is the risks of these technologies, right? So we see how they can make discovery quicker, we see how they can make summarizing depositions quicker or they can make knowledge management more efficient within the firm. All of these are big things and there can be more like you can search corpuses of contracts to find what kind of debenture language actually works well in deals where you need that, right? There's lots of things we could figure out to do, but there are risks.

So you just mentioned one, right? Is it possible to use the documents that the firm has, some of which actually may be according to client agreements, may be the property of the client. Ultimately, is it okay to use these to train a model that's specific to a task this law firm Latham or another law firm wants to do? That's one risk, but can we talk a little bit about what risks are generally? - Yeah, I can go and then definitely. - Ghaith and Anna, I know you both had some views on this and Betny please join in if you do as well. - I think so the categories of risk that I would categorize are probably maybe three main categories. The first one is the one that gets all the headlines and that is hallucinations, right? So that is some lawyer didn't realize that what a machine learning model does is it doesn't necessarily memorize information.

It takes information and it extrapolates rules and predictions and based off the information it got, now you could get into a rabbit hole. There is some memorization happening. It's why the New York Times is trying to sue open AI right now. One of the claims is literally just is regurgitating things that came in.

But in general, that's not what the machine learning models are doing. They are predicting what it thinks the next literally next word should be in a sentence. So if you ask it, please give me the top five cases on slip and fall in New York involving whatever a puddle it might give you what it predicts those cases are, in many cases those are hallucitations. They are not true.

And as folks may have seen in some, there was a one in 2023 Meta v. Avencia that involved an individual who got sanctioned by a court for literally putting in cases that they claim. I thought it was like Google, I thought it just pulled information when and using GPT-4 to pull up cases and in fact they were wrong. Now there was a lot of work happening on hallucinations. There's one of the companies that I delved really deep into is a tool called co-counsel. When we helped Thomson Reuters acquire co-counsel this year, or casetext, I'm sure folks have maybe heard of Casetext, co-counsel is a GPT-4 enabled AI tool that is built on that, that is made available.

And a lot of what they'll tell you it's public is they spend a lot of time thinking about hallucinations and having a source of truth or some other filter that the output of whatever the model gives then gets filtered to an additional seed to try to filter out stuff that just for whatever reason doesn't pass the smell test of what should be true and trying to take that out. So hallucinations number one. Number two is confidentiality, as Chris was mentioning, and that's where it really matters.

So for example, at Latham we have our own GPT-4 model. If you go to chatgpt.openai.com, you'll get a little popup that comes up that says, are you sure you want to use this? Why don't you use something that we have internally? It's on our own servers.

It is GPT4, but it lives solely on a dedicated server that only Latham has access to. It's not talking to anything else. And the reason we do that is because we as under the a BA rules, you have a duty of confidentiality to your clients if you are feeding in client information into open AI systems, what's happening, actually what's happening is you are routing information to open AI servers and there may be legal promises that they might make you, depending on which version you're using that says we're not gonna retrain, but in fact your information has been transferred outside of your environment into another server.

And that machine learning models often retrain based off what comes in. That's how they keep growing. It leads into the third risk that I wanna talk about, which is model drift.

So model drift is the phenomenon in which models actually change over time. The more information that comes in and the more that people engage with it, the model might actually think, oh wow, everybody keeps telling me this was bad output. Maybe what the really correct output should be something totally different. And what ends up happening over time is something that worked really well when you first started using it after six months, it's totally changed. It's drifted in a way that we can't predict because remember machine learning models, they're not rules that we wrote like expert systems used to be chess. They are mathematical formulas that were generated based off data that we input.

And in many cases, interpretability is not common sense. We don't know what the rules are that it generated based off that. We just see the output that it comes out and we say, hey, that was pretty good, that was pretty bad. And then it learns and evolves over time. - So I might build on this for all of the folks in the room who are thinking, this is kind of interesting, AI seems like something I might wanna explore when I'm in legal practice. And I would say absolutely you should do that.

It's one of the most fascinating substantive areas of the law because it actually encompasses so many substantive areas of the law. And that comes through really when we're advising our clients on the kinds of risks that they might take on or the kinds of things that they should think about when they're either developing models internally or procuring models from third parties that might have developed them. And that includes like the open AIS or Microsofts of the world.

So I just wanna break down, what are the kind of buckets of risks, but I think you could also think about them as buckets of substantive legal areas that you might touch on if you're advising clients. So the first is confidentiality. I would bucket within that confidentiality, security for example, and privilege. All of these are, and trade secrets actually goes within this.

All of these are kinds of doctrines about what obligations you need to think about when you're disclosing information potentially to a third party. So remember that a lot of these models work because there's a vendor on the other side of the agreement that's actually providing a model. They might be licensing the model, they might be helping you set up a separate instance and the information that is shared with that vendor is critically important for companies. Some of them may have regulatory obligations to keep information private and that might include like confidential supervisory information for banks, there may be, substantive obligations that, or affirmative obligations they have.

Some of it may be privileged information where there could be arguably a waiver if that information is disclosed. Same with trade secret information for example. And so, we're always looking at the confidentiality risk when information is sent to a third party for the purpose of running a model.

Now the good news is a lot of those risks have actually been solved for, as Ghaith mentioned in how the models are architected and set up. The confidentiality risk is much lower with certain types of deployments than it was when all of the AI model like gen AI models hit the scene. But that's certainly, I think probably for our clients, one of the top concerns that they have. And it may be the number one deal breaker for them to actually go in and license model or license a product that incorporates ai.

You know, another set of very important considerations, slightly different are the privacy risks. And so there, what we're really thinking about is the use of personal information or certain kinds of sensitive information that may have obligations attached either under health privacy laws or financial privacy laws or general privacy laws. There may be different kinds of obligations depending on where the servers exist for the model and where the data is coming from. But that's a big bucket. A lot of companies that really think through, what information should be put into a model and should we give specific instructions to our employees to not do that or to only do it in certain circumstances to make sure we're complying with data privacy laws. The third is, consumer protection discrimination for example.

And that comes up in a bunch of different ways. We may have civil rights laws that apply to the use of ai. We may have automated decision making obligations that come up through privacy laws. But when we're advising clients on the deployment of these tools, in particularly in highly regulated context or in higher risk context like housing employment credit for example. Those are all areas where we really want to think very carefully about the potential bias risks of these tools. Because as you know, of course, certain data sets can be biased.

The internet is not an unbiased data source. I would put that out there. And so how you actually architect these tools and the kinds of guardrails you put around them can be really important in terms of thinking about your risk, for example, of an action from a regulator or from a private plaintiff that might have a private right of action to bring a case based on the use of ai.

And then finally, I think, on the hallucination risk, I would just put that under a large, large category of different kinds of obligations we might call regulatory defensibility. So we advise different highly regulated companies in a number of different areas that are always thinking what are our regulators gonna think if we're gonna put AI into this tool and can we defend it? This comes up a lot actually in the legal profession. But I just wanna draw the parallel to how it comes up for companies in the legal profession. There are all these cases now, or not all these ca a handful of cases in which different, non-existent cases have been cited to courts. And the courts always say, I think when they're sanctioning these lawyers, how come you didn't read the case? Because that is part of our obligation set as lawyers is to make sure whatever we're writing to the court is accurate. We actually have an obligation to read the cases we're citing to courts for example, that there's a parallel to that in what our clients are thinking about.

Which is, if I were going before my regulator, how would I defend the standards that we're holding ourselves to as companies in terms of how we're using AI and does that comport with the general standards or specific obligations we're under? And so banks for example, insurers, medical device makers, pharmaceutical companies are always thinking about those kinds of risks and defensibility. So that's my, I would say argument and why it's so interesting to practice in this field. There's so many different legal areas and I'm happy to chat afterwards with anyone. - Betny, I wanna bring you into this discussion about risk because you guys at DISCO are developing very powerful tools and selling them to lawyers and law firms. Do you have at disco a discussion within the company and then with your clients about responsible use? Because you know, like a lot of tools, right? Someone gives you a power tool there, there are certain ways in which it's good to use it and certain ways in which it's dangerous to use it.

It's probably no different for lawyers and AI assisted tools. So would love to hear your thoughts on that. - Yeah, we are having those conversations constantly with every new person who looks at our product sort of no matter whether they fall, whether they're at a corporate legal department or whether they're at a law firm, if they're at attorney, if they're a paralegal. And the way we are thinking about handling sort of the hallucination problem is ensuring that you are putting in the links to the underlying sources constantly. If you're gonna provide a deposition, a summary of a depositions transcript, each piece should be linked back, right? If you're gonna ask a question about documents in an e-discovery database, right? Every single sentence should have a source, right? Like that is the type of stuff that attorneys need and to meet their ethical obligations, right? As I mentioned with when LLMs are reviewing large bodies of documents, they can provide their decision and also an explanation. So there are some ways that you can sort of use the technology to help combat that.

That being said, right, we are putting all sorts of disclaimers around the use of it, right? Verify it in the same way that if you ask 10 different people a like complicated question, you're gonna get 10 different answers. That's what we see with the generative AI models, right? Not even just over six months. But if you ask the same question of the same LLM, two different days or even two different times in a row you might get a different answer, right? We're not dealing with the rule-based deterministic system where you put in an input and you get an output, right? It's a statistical, I think it's stochastic is the technical word my husband's data scientist process where it's looking at what is likely to be and it's sort of tracing different pathways. But if it's not a certain one, it's gonna go in different directions on different days. There's also different things that LLMs are good at and different things that they're not so great at, right? I think with co-counsel, at least sort of what I'm hearing, co-counsel is supposed to help attorneys write essentially their briefs, right? Help them do their case law research and then help them analyze their facts in the context of those cases is that it can do quite a good job of like summarizing case law.

But if you ask it to say, okay now apply the law to my facts, you're gonna get something that feels very mechanical, right? At least as of now, generative AI does not have sort of like the deep thinking critical flare that we expect from attorneys, right? That's why you go to law school for three years. That's why you work so fricking hard and then work so hard, you know, as a junior associate or as a junior attorney. And so I think starting to understand that and hopefully I'm hoping that the legal profession will start to really develop intuition and understanding about when they can trust AI and when it's something that is better for them to do themselves. - So just a quick story about hallucination. A few days ago I did a little experiment in my office.

I was slightly bored and decided that I would, there was a case I was interested in and I wanted to find cases in a particular circuit that had discussed the holding of this case I was interested in. I ran two searches on ChatGPT-4. One was please give me cases within this particular circuit from year to year that are that analyzed in some way the holding of this other case I did second search was the same search except if these, and I said and do not hallucinate, I got very different results. - Interesting. - And when I checked all the cases, there was a lot less hallucination in the search result where I said do not hallucinate.

- Yeah. - Then there was in the one where I didn't say it that. - Yeah.

- Which was a bit of a surprise. And that's actually so right, the people that built these models, they don't know all the tricks, right? To some extent it's a black box for them too. And so what I think people have discovered over the past six months is if you say, if you don't know, say you don't know, gets you much better answers. - Yes. - And that was the only way we are figuring this stuff out is 'cause people are experimenting. - It's a huge. - Millions of times, yeah.

- Huge, huge area focus right now is prompt filtering, which is what can we do even without the user even seeing it. We're thinking about it Latham, I know other firms and other companies are thinking about it. I wanna attach a filter so that when Chris inputs his prompt, it goes through something that just he doesn't even need to see or worry about. And then that prompt gets filtered with don't hallucinate explain like I'm five use analogies using Lego blocks.

Whatever you wanna filter when you can click. And then what you will get will actually be in some cases potentially more accurate. That's the work that data scientists are doing when they're working with companies establishing these models. - It's also interesting, like there's a whole layer of knowledge management that sits on top of how you use these tools.

So we have a whole knowledge and innovation team that's actually working on preserving our best prompts and sharing them with other people so that other people can benefit. It's like you're no longer just sharing the research, you're sharing the actual prompt architecture. But this is, I know we'll get there later, but this is a place where I think that if you're in law school and you have mastery of these tools, you have so much of a potential value add for law firms being able to come in, be really thoughtful about prompts, be ahead of the curve in terms of how these tools are used.

Everyone in firms is still figuring it out and learning and you may have a better prompt than anyone else in a firm. - Yeah, I gotta say that's where I wanna end up in a discussion of what can students do to prepare themselves for the world that AI is helping to make at the moment there just to kind of roadmap where we are. So I wanna leave plenty of room for questions at the end, but there are really two things that I want to get to.

The first was the one I just mentioned. I wanna get to that last, before that I wanna talk about law firms, right? So a number of you will end up in law firms at least for part of your career. Maybe a lot of your career and law firms have, they law firm economics are diverse and changing. But law firms have economic structures that we probably don't think about that much in law school. I don't know how much you know about how law firms actually work as businesses, but I am really interested in your views of how AI assisted tools across the entire spectrum of things that law firms do are going to change how law firms work, the economics of law firms and the experience of the people working in the law firms. So to the extent you have thoughts about that, I'd love to know 'cause these are people who have a stake in that.

- So I think I always tell folks 'cause I get the question a lot and I've seen the ups and downs and the hype cycles of different technologies to know one thing I as a quick example, I was told when I was a summer in picking which law firm I went, I was told you should really come because x, y, Z is on the cutting edge. Pick that firm and then, or pick that practice. Then when I started a year and a half later, totally different. It was wildly not the same. By the time you are done with your 2L summer or whatever it is and you start, things change.

And this space especially is changing so dramatically. I would say it is critical to engage with these tools but with this mentality of keeping curiosity always you could be right now, I'm gonna be the best prompt engineer. I'm going to learn all the best prompts so that when I start at law school, when I start at my law firm, I'm gonna know all the cool prompts and it'll make me this amazing lawyer. Well then what ends up happening is a year and a half later people are like prompting is dumb because we've automated all that internally and you don't really need to know that skillset anymore because a computer can now do it way better than than the skills. But in the course of doing that engagement with the technology, you learned how the technology works, what its deficiencies are, how you can further improve with it.

Hopefully you're researching what are the good sources of truth out there. 'cause it's really hard right now to find good sources of truth unless you really engage with it and you learn, okay this person is credible, this person, just a lot of what they're saying is just not that deep in terms of the things that I need to understand. Engaging is the best thing you can do right now, but not with a filter of I'm loving up my skillset. It is a filter of I just wanna understand how things work so that when they change I'm resilient because I understand I have a skillset of how to adapt to new technologies and how to filter what is good and what is bad about them. That is the number one thing that I think folks can be doing right now and being less focused on, I can't keep track with the new products, the new names, the new next big thing and I gotta stay up on that.

I wouldn't worry so much about the individual products but rather just the themes of how they work and why they are so impactful to law. - Yeah, I mean I think there are a lot of conversations about how AI is gonna challenge the law firm model and we've been internally within our firm really giving that some thought. And I just wanna pull on two threads that I think come up pretty significantly and pretty often with respect to that. One is the billable hour.

There, I think there's a lot of discussion about how and when the use of AI is gonna really put pressure on the billable hour and what that's gonna look like for firms. And just for folks out here who, I assume you all know this, but usually big law firms bill by the hour and if there's a different rate structure for depending on seniority and there's an incentive, you know to have big matters come into big firms because they tend to be revenue generating based on the structure of the firms. But there are questions now what happens if we can't bill for junior associate time because clients think well junior associates are replaceable by ai.

I think it's those kinds of questions that are really causing some angst broadly within the legal profession. But from my perspective, I'll just take it one step back and say I think the bigger question that the legal profession is going to ask me to ask itself is what value do we really add as lawyers? Where do we come in and really deliver impeccable client service? Is it in finding the case? Is it in analyzing the case? Is it being able to defend the case to the court? There are a lot of different ways that we add value as lawyers and some of those ways are going to probably be replaced by AI and some of the ways of adding value are gonna become even more important because of the use of AI defending the use of the case becomes more important when the question of whether the case should be in there is on the table. It's the same reason we defend the use of tort, right? We go into court of tort's predictive analytics used in e-discovery. Our role is not just using it, it's defending it and having the right metrics for thinking about whether we can represent to the court that our use of this tool in the client's interest and it is in the court's interest and in the interest actually probably of our adversary. And so, you know, I think about that as you know, are you always looking for the place where you add value to the client, that is gonna keep you on the edge and it might be knowing the technology the best it might be understanding their pain points and what they're bringing in house in terms of the technology and how you can help them. And it might also just be figuring out like where as a lawyer can we be the best client advocates? And so it's kind of about all of those points where I think we're gonna have to change but that some of that change may be for the good and you guys are gonna be right at the edge of that.

- Can I just, Betny I'm very interested in your views on this, but let me just say one thing about the things that AI can do versus the things that lawyers can do. So it might be that junior associates who used to spend a month in a room, let's say company A is merging with company B and it's a reportable merger to the justice department and there's a huge number of documents that have been requested and a second request that the Justice Department issues and you're just reviewing these documents to try to figure out what it is you're about to hand over to the justice department, right? So once upon a time when I was in my first months of being a lawyer, I literally sat in a room, the merger was explained to me that the competitive sensitivities were explained to me and I sat there with a bunch of other people about my age under the kindly lash of an associate maybe three years older than me. And we looked for the stuff, I mean like by hand, right? So that changed over the course of my practice into, we didn't look for it by hand. We used assisted searches that got much more efficient, right? So instead of having like 15 associates looking for stuff by hand, there were like two associates and some contract lawyers, right? The structure of the job changed, right? Fewer associates doing this kind of stuff. Now maybe it's quicker and less associate hours are needed to perform it. Associate work is diminished to that degree.

It makes associates more efficient. It also means firms need fewer of them. How does this change the way the firm works at the end of the day? Are there just fewer people at the bottom producing more value for the people at the top? So the people at the top stay very wealthy, the people at the bottom are fewer, maybe more valuable. How does it work do you think? - I'm actually very interested to hear Ghaith's take on that as somebody who's yeah, in the leadership at a firm.

- So we talk a lot, anybody ever do software development and hear the term 10x coder, you familiar with that term? 10x coder is this like thing of this developer is so adept at using different tools that they are 10 times more efficient than somebody who's not adept at using tools. And something that I use a lot at law firms is there's probably going to be a very real time when we're gonna evaluate associates, not just based on their ability to just critically think, but on their ability to tap into the tools that are at their disposal at the right time. And they're in fact, here's what is changing right now at law firms, there are a number of attorneys that are coming out of being billing attorneys that are directly facing clients and are moving into the management side of the shop, which are training attorneys on how to effectively use their tools. Because as I said, the the number of tools that are coming out is just such a spike of what we're seeing this this week and what we're seeing at the Hilton right now.

If you go over to the Hilton on 53rd and sixth and you just walk in the lobby, you are gonna see just an incredible amount of new legal tools that are available and that are being made available. I think for associates, you're going to get in there, you're gonna get in the trenches, but then over time you might end up thinking about am I still on the client facing side or am I now more on the technology side? And that's something that we're thinking about. I can't predict for you though what it's gonna mean in terms of retention rates or hiring rates or how we're hiring. I know this 'cause I talk with, I sit on a task force at Latham that thinks about it. I also talk with people at other law firms that are doing the same.

We're not changing our billing and our hiring practices right now. That's where it's everybody's thinking about it and everybody's obviously trying to evaluate what it could be. But that's not changing.

Now what we are changing are our training practices and what we expect, what our core curriculum are. It's no longer just, we're gonna go through all the cases and we're gonna teach you through all the cases. We're gonna do that. But we're also our chief innovation officers or our chief technology specialist, they're like the superstars of the firms now for in the past year.

Everybody knows their name. Whereas I talk with the one at our firm, he's like, I used to walk by, nobody knew who I was. I just kind of hung out and everybody's like, hey I wanna talk to you about this new tool. It's absolutely happening. And that the way we train is changing a lot. - Yeah and something that I just wanna note 'cause I know that we've, people have sort of touched on the anxiety sort of felt by I think probably many of you in the room about like, will there be junior associate jobs for me? - It is possible, it is possible there will be fewer junior associate jobs in the future.

I just want to note though that the parts of the job that are being eliminated were profoundly miserable. Like so I spent five years at two different litigation boutiques and I didn't leave because I had like a passion for legal technology. I didn't actually move into any legal technology right away. I was just so burnt out. I was at firms that staffed cases really leanly. I was on a team of maybe five or six attorneys and we had 2 million documents to get through and you're reviewing things late at night, they're not relevant.

And you wanna tear your hair out and you're like this isn't what I went to law school for, right? Like this isn't fun or you know, with a deposition piece, right? You'll be on a case that has 50 depositions and you'll be flying around the country for two months taking all these depositions and then you're expected to find time to sit back and read a 300 page document so you can get the client their summary. You know, within three days of the deposition finishing. There are so many opportunities to instead get to do things that are fun, right? Like have the summary and then get to talk to the client about what it might mean for their case. And so that is one thing that I would like to know. While I don't know that your anxiety is entirely misplaced, there are some ways that I think this is gonna dramatically improve the quality of life for associates. And if you decide to go on that second track and be somebody who's helping attorneys figure out the technology, I mean that's a big part of what I do and it's actually, it's really fun.

It's really fun to get to sort of straddle this world of your legal expertise and the tech world. There are things, it's so exciting, it's so cool what we're seeing. And so you know, that's gonna be, that's a whole new career path that is going to open up to attorneys.

- I'd also mention like to the extent you can get up that learning curve quickly, it's the process of supervising the technology and supervising the process of using the technology that is super, super important. So if you're implementing a document review, you might not, I mean you might be able to add value by looking at the documents. It's certainly one way in which we add value, but you can add a huge amount of value by understanding how a document review should run, how it should be staffed, what the economics are of staffing it for the client and for the firm. And like how documents can be lost that you should be finding that is actually, you know, when I'm looking for like mid-levels to staff on my cases, that is a core skillset. And so it's not that those skill sets go away, it's just that you actually can probably be at a supervisory level earlier in your career because you have to spend maybe fewer years just looking at the documents because that has to be the way things are staffed.

But when you have more flexibility on staffing, you can actually kind of level up more quickly and take on responsibility earlier. And that's what you wanna be looking for. I think like at least in a litigation role, you wanna be looking for opportunities to deliver value, to take responsibility, to understand how things should be run, how things should be reported out to clients. Like no one will tell you that in law school, but it's hugely, hugely important and it's really builds that trust within teams that things are running the right way and you're not missing things and you're doing everything kind of according to the firm's principles and able to then defend it in court. - Yeah, I gotta say when we look at the economics of this, it just seems there are so many possibilities and we don't know yet. So let me just mention some of the possibilities and get your reactions.

So one is that law firms become more efficient at producing legal work quality, legal work, fewer lawyers can do more work. Most of the cuts will come from the bottom because a lot of the stuff that requires judgment and experience how to convince a judge of a particular point, how to pitch something to a jury, how to structure a deal in the way that's both most efficient and going to last. You know, these are things that the kinds of judgements that AI are really suited to do now and maybe in the future, but you know, we're not there.

And so, it'll come out of the people at the bottom. Fewer people at the bottom being more efficient, generating more revenue. Those people at the bottom are gonna be more valuable.

They probably get paid more. The partners do as well or better the clients get better service for less. Everyone's happy except the people at the bottom who aren't employed, right? A lot of markets work like that when they become efficient they shed employees. Another way of thinking about this is AI will just diversify the way people do this job. So in other words, AI could potentially create enormous opportunities for smaller firms to compete against larger ones.

Why? Because Ghaith mentioned, I think it's absolutely true that most clients right now, if there's a large document intensive litigation, they're not bringing it to a small specialist litigation firm. At least not without a lot of help because the big litigation firm is able to handle just the massive processing work that goes along with discovery, taking lots of depositions, et cetera. If people using AI driven tools become much more efficient, maybe the small firms can start to punch above their weight and can start to put effective competitive pressure on their larger siblings, right? That's a possibility. Don't know which way it's gonna go, but it's a time of a lot of change I think, and AI will be at the root of some of these changes. - Can I just add like one more pressure point? 'Cause I think there's one scenario that you haven't mentioned that's a little bit different. One is that the third scenario is that in-house law departments internalized work that would've otherwise gone to law firms.

I think it's important to recognize that that is actually already happening and is possibly going to continue happening. I don't think that puts the same like pressure on the pyramid of the law firm. I think what it might mean is certain practices end up shifting more to the in-house side to than to the law firm side. So just that the economics don't work to hire outside counsel for that. But that also has been happening and continues to happen as in-house law departments become bigger and more sophisticated and that is as much a function of how law legal departments are run in the economics of corporate legal departments as it is the economics of law firms.

But there is a, I would say demand side piece to this as well. - Okay, so I wanna spend five more minutes before we get into questions. Just to touch again on something that Ghaith started talking about, which is, if you were in law school now, what are the things you would want to be doing to put yourself in a position to understand and to operate in the environment that is emerging in front of us with respect to AI assisted legal practice? Anna, Betny wants to start. - You wanna start? - I can go ahead.

So something that I think, and hopefully you are well poised to do this, is to just start using every single thing out there, right? We need to develop an intuition around what different tools are good at and what they're absolutely not good at. And the only way you develop intuition, right, is by doing things over and over. If you're putting together your resume, even if it doesn't end up helping you, using one of the zillion of tools out there to help you do that, I think would be a great exercise, right? You need to, well, not for your schoolwork of course, but for anything else, you know, you should write a letter.

- Wait, wait, wait. - Are there a, or is that something. It's that simple, but go ahead. - But in terms of figuring out like where do you get good answers? Where do you get bad answers? When can you, how can you sort of use this to help you be more efficient in general? And I think right, it's gonna make you less efficient at first, right? When you're trying to use a new tool. But I think now is the time to put in that extra effort so that you can start developing that intuition.

- I would say I think that like the substantive legal areas are becoming less and less siloed all the time when you're talking about technology law and being able to be a good counselor on technology legal issues requires understanding IP law and understand privacy law and cybersecurity and regulatory obligations. And so just think of yourself as building a well-rounded base for understanding how all of these issues intersect in. I mean, increasingly I would say we are building our practice to mirror the kinds of issues that our in-house product counsel lawyers are dealing with, they have to look around corners on 10 different, 10 different substantive legal areas. So do we, they can't keep their practices siloed. It just isn't practical for them to view things in a vacuum. And so even if you love, I don't know, even if I, and I love copyright law, don't get me wrong.

I love copyright law. Even if you love it, you should also be thinking about all of the different substantive tools to add to your pocket so that you can provide well-rounded guidance. Why is that so important? I just wanna make one particular pitch here.

It's because sometimes there are trade-offs in how you're balancing risks between certain areas. Privacy risk and IP risk are not always aligned. Regulatory risk is not always aligned either. And so, really good judgment means being able to weigh some of those risks and help kind of guide your clients to the right landing point. So the more you really internalize from across those areas, the better.

And to the extent you can take practical, clinics obviously, that's the common reframe, but I think it is very true. That's helpful as well. And then I would just say stay on top of the technology if you like this area there is like nothing more fun than trying things out, going to conferences, thinking about like what is the next ChatGPT moment, like what do you guys think it's gonna be, what do you think the next cool tool is gonna be that's gonna break open this? And I'll say personally I just think AR and VR is super cool right now and like could not be more excited and like interested in that area. Get excited, like let that drive you. - Yeah, it's a big week for VR this week Vision Pro. So for folks who have read the tomb, that is the Biden executive order that came out on October 30th and have seen all the ink that was spilled on that.

One thing that jumps out from that executive order is a phrase that now has become quite known amongst those that are getting more into the space, which is red teaming, right? And red teaming was a large folk part of the emphasis in the Biden executive order. What is red teaming? It's basically just are you employing a team that is dedicated to breaking the model and finding all the vulnerabilities that it has? And the Biden executive order is asking developers of really large, sophisticated models to now report on their red teaming practices. Red teaming is something that law firms are beginning to be engaged to do where the lawyers are sitting with the technologists to evaluate the technologists say, I found a problem. Is this a big deal? Like do you really, is a software developer the right person to make that determination? Oftentimes not. It needs a cross-functional group of folks that are philosophers or economists or lawyers or other, just people that can really weigh in on how do you classify when the thing breaks, does it matter or does it not ma

2024-02-15 06:02

Show Video

Other news