Can The Law Keep Up With Technology? | Josh Fairfield | Wondros Podcast Ep 170

Law is a couple of things, but mostly it is the language, the social technology of how we have decided to live together, of how we've decided to work things out as a group. It's a meta language, right? We work as groups. That's our superpower. And, you know, here's another point of hope. This is E.O. Wilson's point: cooperation in this sphere really does well.
And he says there's an iron law of evolution that selfish individuals beat out altruistic individuals, but that cooperative groups beat the living snot out of selfish groups. So when we've got law, law enables us to work together, essentially because it takes force and puts force in the hands of the state, and it stops people from using it as a private matter. It lets us, it gives us a framework for using our superpower while not being under threat.
And because of that, it means that we can evolve law like we evolve language. And that's the center of the book. What I said was, look, when we deal with technology and law, there's this story that law can't keep up with technology, and it's rampant nonsense. Law can keep up with technology because law evolves as fast as language does. And if you've ever looked around and said, "Wow, I've never heard that word before," or "I've never heard that perspective before," or "What does this particular hashtag mean?" you realize how fast language evolves.
Law can evolve that fast when we see what it is. It is the language of how we decide to cooperate and work together. Hey Josh, how are you? Thank you for coming on. Really appreciate it. Thanks so much for having me. My sort of starting question for you, Josh, is this: you talk about how what we humans have as our superpower is our language and our ability to speak with each other.
And I think, you know, that was sort of a foundation for, I mean, many things: your books and, you know, a lot of the work that you do and have done. But how do we use language now in 2022, and what does language mean when we have all of these other capabilities? We talk about artificial intelligence. We have communication
that's beyond speaking. We have entire elections won on Twitter, we have virtual commu-- you know, we have-- So where is language now? How is language impacting the way that we're communicating, and what are the sort of, not dangers, but--I guess that's a big question. I'll start there. I mean, that's the question, right? I might as well put my cards on the table. I was watching the national language shift on Twitter and twist and become something other than that of the country that I grew up in.
That caused me to really want to look into what was happening in the interchange between law, technology and language, especially since law is language, right? It's just the language of how we decided to work and live together. And so it's exactly those changes: Twitter, things like #MeToo, Black Lives Matter, and other memes and other symbols and language that have come and have really changed our polity, have really changed our body politic. That's one of the reasons why I'm writing a new book on how to have the kinds of communities that can generate language that's going to help us thrive, starting with the examples of what went wrong, which is where you started. You know, what do we do with this, and then how do we get it right? So how do we use language in 2022? Well, not well. The present circumstances are that our national language, our worldwide language, our global language is not shifting towards conversations that cause us to thrive and conversations that let us grapple with the problems of the future. Looking at the planet, looking at civil rights, and civil liberties, pick your issue.
It doesn't seem that we're having a constructive conversation about it. So what do we do? How do we do that? My take is--there are sort of two takes, right? One is straight-up, you know, Luddite: this whole thing was a mistake, roll it back, stop. The other might be to say, wait a minute, a lot of technologies do have a maturing period, and that means merging hard technology with social technology, which I talk about in the book a lot. Things like this conversation
might have happened in one room a couple of years ago. Now it's happening over Zoom. We can develop social technologies, ways of being, that turn this stuff toward our good, that turn this stuff to thriving. And I think that what we have to do is have a measure for how technology implements our ability to have conversations just like this one, right? Where we develop our language and anyone listening is part of that conversation and they're going to develop their own reactions to what we're saying and it'll further develop their language. How to have conversations like this one that tend toward human thriving rather than toward where we're headed now, which I'll say is the opposite of thriving.
Can a nation succeed with groups of people having differing views about the definition of language? You know, the right and the left, or--I don't know how we would define these terms, just call them right and left for now--they think about the same words very differently. So they have, you know, a different myth about what America is, a different understanding of what America is. You know, how do we reconcile these views so that there's a common understanding? Does a nation need a common understanding, a common myth? I mean, a nation is a common myth. That is its existence.
Look around. Wherever you look, you're not going to see the United States of America. I'm in Virginia. It doesn't exist either, right? It, like any other law or any other polity, is a consensual hallucination. We agree on it and we make it true. And when we don't agree on it, it starts to come untrue, which is how nations don't make it.
And, you know, I don't want to be grim, but that's absolutely the path we're presently on, both historically and technologically. I mean, would that be an evolution? Would we be evolving into a different state, which ultimately will end up being a more positive state? Or are we at a place where you feel like we're definitively in a problematic spot? I think that the end of that story hasn't been written. Obviously, I wrote the book to give myself some hope, but it began from a place of deep concern about where we're going and-- Go ahead, Priscilla. Sorry, but something that you just said struck me to my core: it's this hallucination. And I think we've all, at least I was raised to believe in something.
Like this idea that I had was the reality, and now you just pierced a bubble in there, I mean, a hole into that balloon. And so if suddenly we don't know what the myth of our country is, or our city, or our citizens, then it is very... dis-whatever. Dis... combobulating. It's a disorienting moment, yeah? It is. Then again, there's also deep hope in it, right? Once we know what we're dealing with, we can use our superpower to generate visions of a just world. I mean, look, that's what we've been doing ever since we've said-- I say in the book, there's a deeply problematic phrase: all men are created equal.
No part of that phrase is unproblematic, but it's served as the basis for a series of iterations of ideas about justice, ideas about equality, that gave birth to a wide spreading of democracy, and then, as those ideas have come under attack, to the worldwide democratic recession, which is what we're looking at right now. You know, I think this goes back to the question of can we survive if we don't agree on what terms mean? And the answer is, of course we can survive if we don't agree what terms mean. Anybody who's had an argument with, you know, an aunt or an uncle at Thanksgiving will understand that you've never disagreed more furiously with somebody than when you realize they've got a slightly different meaning that they're attaching to words.
The problem is what stops us from using our superpower, from acting together. There are kinds of communities that generate life-giving language, and they tend to have different takes. I mean, that's the first point of hope. They tend to have diverse cognitive toolsets.
It doesn't work to have everybody with the same approach coming in, laying down a definition for language. That's the way you don't get evolution, because you're not getting new information into the system. So they have to have completely diverse perspectives, but they also have to have shared goals. And in my view, they have to have a commitment to something--you could say nonviolence, you could say safety in the creation. We have rules like that around the world.
For example, you can't arrest a legislator while they're legislating. That doesn't mean we don't need police. It just means you can't bring force to the table when people are trying to come up with ideas about how we live together, because if you do that, then the force just wins, right? I mean, we've all seen legislatures that legislate at gunpoint. That kills the entire process. So I do think there's a lot of hope in our superpower, but we're going to have to build communities, and do it intentionally, so that they can engage in this diverse, humble, repeated experimental process where we can develop language that then gives us the shorthand to grip the problems of the future.
Because we just don't have that language right now. What is the law and what is its purpose in society? Right. Right. Which is, of course, you know, what makes a book about law at all.
So I posit that law is a couple of things, but mostly, it is the language, the social technology of how we have decided to live together, of how we've decided to work things out as a group. It's a meta language, right? We work as groups, that's our superpower. And, you know, here's another point of hope. This is E.O. Wilson's point: cooperation in this sphere really does well. And he says there's an iron law of evolution that selfish individuals beat out altruistic individuals, but that cooperative groups beat the living snot out of selfish groups. So when we've got law, law enables us to work together, essentially because it takes force and puts force in the hands of the state, and it stops people from using it as a private matter.
It lets us work together. It gives us a framework for using our superpower while not being under threat. And because of that, it means that we can evolve law like we evolve language. And that's the center of the book. What I said was, look, when we deal with technology and law, there's this story that law can't keep up with technology, and it's rampant nonsense.
Law can keep up with technology because law evolves as fast as language does. And if you've ever looked around and said, "Wow, I've never heard that word before," or "I've never heard that perspective before," or "What does this particular hashtag mean?" you realize how fast language evolves. Law can evolve that fast when we see what it is. It is the language of how we decide to cooperate and work together.
Now, what it's not is what a lot of people think it is, right? Which is dusty words written in tomes that no one reads, that have no practical impact on anyone's life. Often when you hear this story that law can't keep up with technology, people will point to laws that are on the books.
You know, there are a couple of favorites. There are still laws on the books in some towns that you have to go ahead of your, you know, horseless carriage and, you know, sort of cry out as you're going through, or carry a lantern, so you don't scare the horses of the other carriages. There are all kinds of laws technically, quote, "on the books" that don't have any impact on our lives anymore. And that's where I offer a definition of what law is not. For a law to exist, it's got to impact humans, it's got to change human behavior, it has to be a living part of the conversation between us.
Which means that all of these old laws that don't impact anybody's behavior anymore--and we see a lot of them; for example, one of the biggest ones, the Stored Communications Act, does still, unfortunately, impact behavior, it's just a terrible law--it deals with when and how the police can access people's emails, usually without a warrant, unfortunately, based on bulletin board technology, you know, from like the 80s and 90s, right? It's antiquated. Those kinds of laws are the subject of the critique that law can't keep up with technology. The answer to that is no. Law is the living conversation between us as to how we're going to deal with the problems of the future.
That meta conversation is what we can adapt at speed. And here you can really see it. You know, I often say to my students, the oldest cases often make the newest law. So, for example, my kid brother was being recruited. He works for Waymo, the autonomous car group, but he was being recruited for a planetary mining initiative where they were trying to create, essentially, drones to mine asteroids.
And the question was, who owns the asteroid? Like, who owns space mining? And the answer is, I said to him, look, the first class I teach in law school, it's a case about a fox. And the question was, between the person who was chasing the fox and the person who caught the fox, who gets the fox? The answer: it's the person who caught the fox. It's the law of first possession. These are rules about how humans act. It doesn't matter what technology we use. And when you hear people say, you know, how can we evolve the law to keep up with changing technology, almost always the answer is: humans don't change.
Right? What do you think people are going to do with Bitcoin? Defraud people, buy drugs, also invest, also start companies, also build empires--they're going to do all of the range of beautiful and horrible things that humans do. None of this is a surprise. And so when people come into these circumstances saying, oh, well, this is a new technology, we expect not to be regulated, the answer is no, actually, you already are. Most of the law is already in place; we've had this conversation, is another way of putting it.
We've had this conversation. And if you try to start your company, for example, with an initial coin offering, the SEC will knock on your door and very bad things will happen to you. And if you follow the rules, then you're going to be able to take advantage of a new technology and you're going to go vertical.
So the answer here has to do with changing our definition of what law is, moving it away from this idea of old statutes that nobody reads and nobody cares about, moving it into the living conversation about how we're going to recognize that humans are doing the same thing they've always done, but with new technologies. So, artificial intelligence: you know, there may come a point, there may already be a point where, if we look at certain forms of artificial intelligence, they really have their own decision-making process. The thing is, in certain forms of it, we can't even tell what the decision-making process is. That's right. How do you assign liability to that? Is it the person who's, you know, hitting the "do the query" button, or is it--you know, how do we figure that out? Right. How do we figure that out? Hard question of new law.
Surely we haven't thought about this before. Right. Wait a minute. Wait a minute. Actually, we have two major frameworks. Either one will work fine. We just have to think about it.
So oftentimes we talk about tools, and other times we talk about agents. So if we talk about a tool--if I use a chainsaw, or a patent-generating algorithm, or what have you, and it does damage, or maybe, you know, a stock-buying algorithm and I crash the market and it does damage--then under the tool analogy, because that's how law operates, it operates by analogy, the nearest human to the AI is responsible. And we see that a lot in bleeding-edge questions, things like distributed autonomous organizations, which is a blockchain-based organizational structure that is trying to sort of free itself from human influence.
And it never quite works. Yeah, it hasn't worked. Those also aren't based in America, and they seem like if you do it in America, you're going to possibly go to jail.
So it'd be... to do a DAO that way, you know? I mean, there are U.S. DAOs, but then they really rapidly acquire, you know, quotation marks, because there are humans involved and there is human oversight. Humans are how an entity like that gets access to the legal system, right? Without any humans involved, if it's just purely code floating out there, it can't have any access to law.
The other model would be an agency model, right? Saying things like, if I hire someone and they do something in the scope of their employment, or the scope of their purpose, then, again, I'm liable for it. And if they do something that's wildly out of scope, so wildly out of scope that no one could ever have thought of it--these are my favorite words in law--they commit a "frolic and detour," right? You can imagine one of your employees just kind of frolicking off. Frolic and detour: if they're just doing something you couldn't possibly be accused of not paying attention to, right? That's where the law cuts off. The law cuts off when humans can't adapt their conduct to follow the rules because they just couldn't predict it.
Fine. Then we let you off the hook. But we have other ways of trying to manage that, which is to say that behavior itself is beyond the pale and we'll impose liability directly on the agent. So we've got models for this. And what do we do with AI? We do a real mixture of these.
I mean, in a lot of senses, AIs are tools. In some senses, they are so complex, this is the point you raised, they're so complex we don't know why they're doing what they're doing. That might make them like an agent, except there's a third step to it, which is these AIs are not entirely operating on their own.
They learn what we teach them. Neural networks learn on colossal datasets of human behavior. Remember, like in 2010 through 2012, when, like, Google voice to text suddenly really got good, like it just wasn't before.
And then, boom, right? What was going on there was they were using neural nets and a bunch of other stuff to parse gargantuan datasets of how people spoke. And they were learning and learning and learning from human behavior.
So one thing we've got to be aware of when we talk about liability is, if we're going to try to displace liability, try to displace responsibility onto the AI, we also have to recognize that the AI learns what it's fed. We see this problem all the time, for example, in racist redlining for loans. Yeah. Right? Because if you feed racist data into the AI, that's what it learns. Right.
I mean, that's been a problem with sentencing guidelines, yeah? It's a huge problem with sentencing guidelines. Yeah, and I think that this is really important and a key point, because also, who has access? Who are the builders of AI technology? Again, is it an elitist crowd? You know, who is able to go in there and start, you know, writing the code or asking the questions or whatever that is? It's self-perpetuating, as you said in the beginning of this conversation: humans do what humans do, have been doing forever. And so these same problems are applied in this new space. The question is, is it stoppable? And then can there be a law to say that is racist, you know, kind of behavior, and how does the law then defend and protect? Tough to make a racist law, because it's like--
No, no. What I'm saying is, the AI behavior might be, you know, racist. How does the law then protect the citizens? How do we deal with that then? Well, I think the sort of fundamental question there is, how do we come up with these laws? You know, they're pieces of the Torah, they're pieces our grandfather taught us--like, what are the underlying things that laws are built upon at their very core, you know? And these two questions are linked. So, first of all, I love the question, how do we come up with these rules? Because we often think of laws, again, as the dusty book model, as something that happens when legislatures come together or when courts decide.
But that's not actually how law is generated. Law is generated every time humans get together. We're using rules right now.
We're using rules about who is going to talk over whom. Now, these are rules we've learned culturally, right? Yeah, we don't quite understand that between me and Priscilla, but with you-- [laughing] Well, yeah. Yeah, me neither. Yeah. We're working it out. Me neither. We've been working it out for 20 years; we're no closer to an understanding yet, but we're trying.
Right? So, I mean, like anybody who's got kids, anybody who's, for example, in Chicago, which is where I went to law school and had my first kid: if you dig a space out from the snow in Chicago, it's your space. You know, God help you if you take somebody else's space. There's a great paper on this--I didn't write it--whereas in other cities, you know, you dig a space and it's kind of your contribution to the common good, and other people can park in the space. We work out these rules every time we go to the theater and say, hey, can I step past you? Or we go to a restaurant and say, is this seat taken? Can I borrow it and move it to that table? We work out rules for living together all the time. Now, every once in a while, there's a dispute and we need to begin to elevate these questions.
Here I'm using Bob Cover's method, for anybody who's interested in the jurisprudence. But the idea is that judges don't generate law. What happens is two viewpoints--taking my shoveled-out space was okay, taking my shoveled-out space was not okay--two viewpoints with backed-up cultural reasons come in front of a decision maker, and the judge doesn't generate those rules. The judge just decides which one of those rules gets elevated, becomes the rule of decision for this case. And there are other cases and other judges, and sometimes they decide quite differently. But where law comes from is that basic conversation we have.
Is this seat taken? Can I talk now? Is Joshua finally done talking? Can I jump in? That is where rules come from. That's where our language of working together comes from. It comes from us; it doesn't come from anywhere else. And so then the question is, how do we stop it? The same thing goes with AI: the rules that the AI follows come from us. And if we can detect it--there are whole piles of initiatives right now on how to develop non-racist AI. It turns out to be a quite difficult problem, because data, in a sense, heals itself.
The AI will learn what you did. If there is a part of town that has gotten worse mortgage rates, the AI is going to learn that almost no matter what. You can take any data field out you want--you can certainly take race and zip code out of the data--and the AI will just go back over the remaining data and begin to find it. So it does take some doing, but there's a lot of work on that, on how to keep our AI children from learning our sins, because it all comes from us, which was my answer to the question.
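A minimal sketch of the dynamic Josh is describing, with invented data and numbers (nothing here comes from the book): even after race and zip code are dropped, a model trained on historically biased lending decisions re-learns the disparity from a facially neutral feature that correlates with the group.

```python
# Sketch: why dropping the protected attribute doesn't stop a model
# from re-learning historical redlining. All values are made up.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

# Hypothetical protected attribute (never shown to the model).
group = rng.integers(0, 2, n)                     # 0 = group A, 1 = group B

# A "neutral" feature strongly correlated with the group,
# e.g. a neighborhood income score shaped by past segregation.
income_score = rng.normal(60 - 15 * group, 10, n)

# Historical loan decisions were biased against group B directly.
past_approval = (income_score + rng.normal(0, 5, n) - 10 * group) > 45

# Train only on the "neutral" feature -- race and zip code are excluded.
model = LogisticRegression().fit(income_score.reshape(-1, 1), past_approval)
pred = model.predict(income_score.reshape(-1, 1))

# The bias survives: the proxy feature carries the group signal.
print("approval rate, group A:", pred[group == 0].mean())
print("approval rate, group B:", pred[group == 1].mean())
```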
You know, how will we extend the law into smart contracts? You know, NFTs, virtual communities, all these new spaces? We're always talking about the potential of the smart contract; so far we don't see many real extensions, but eventually we're going to see them for insurance, or your health statistics, or all sorts of different ways that you might want to use your information online. So how do we extend it and make sure it's just? Right.
Love it. Love it. So I mean, the first answer that I've got is we need to not do what we often do, which is go to technologists and say, "what is this technology really?" And if you go to a technologist and ask them, "what is an NFT really?" they'll say it's an entry on a distributed database like an Excel spreadsheet, but not kept by any one person. Maybe tied loosely through a link or through direct upload to that database, tied loosely to some asset like a JPEG or like a, you know, like a car or maybe a token to buy somebody's house. Right? We can tokenize almost anything.
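For anyone who wants the technologist's answer made concrete, here is a rough toy sketch (all names and identifiers are hypothetical, not any real platform): an NFT reduced to a row in a shared ledger, with a loose pointer to the asset it claims to represent.

```python
# Toy sketch: an "NFT" as nothing more than a ledger row.
ledger = []   # in a real system this is replicated across many nodes, not one list

def mint(token_id, owner, asset_uri):
    ledger.append({"token_id": token_id, "owner": owner, "asset": asset_uri})

def transfer(token_id, new_owner):
    for entry in ledger:
        if entry["token_id"] == token_id:
            entry["owner"] = new_owner   # the JPEG itself never moves; only this row changes
            return
    raise KeyError(token_id)

mint("token-42", "alice", "ipfs://example-jpeg-hash")
transfer("token-42", "bob")
print(ledger)
```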
They'll tell you what that is, and then you're left confused as to what to do about it, as to how to apply the law, because this is something we haven't had before. We haven't had a way of keeping track of who owns what that is both distributed and really hard to hack. But if we flip all that around and say, wait a minute, whoa, hang on a second here, I'll just ask you a few questions. Let's say that people are buying and selling NFTs as if they're personal property, as if they're, you know, any other piece of property, any other painting on your wall or anything else.
Then let me ask you a couple of questions that have come up in court. Should your NFT pass to your children after you die? Yup. Courts had no problem with that. If you steal it, is it theft? Yep. Courts had no problem with that. If it goes up in value and you sell it and you don't pay taxes on it,
do you have a problem? Yep. Courts had no problem with that one either. So if we focus on how humans use it, many of these questions of, like, what's an NFT, what is it really, let's look under the hood--because if you take a look at anything, take a look at, for example, what is really your ownership interest in a house? It's an entry in a database down at the county courthouse. There's no yellow line around your property that says you own it.
It's a consensual hallucination, like all of this is. I think when we move our theorizing about law away from looking at what the technology really is to looking at how it works in human systems, then we go: law has this tremendously rich tradition. We are the custodians of endless narratives of how humans have dealt with this stuff, and there's almost nothing new under the sun that we can't deal with. So then we say, well, how do we deal with smart contracts? And here we have to go the other way. I said, how do we deal with NFTs? The answer: we deal with them like any other kind of personal property.
The law of NFTs is the law of the headset I'm wearing. That's it. It's personal property, like anything else. But sometimes technologists, unfortunately, name things incorrectly. A smart contract is neither smart nor a contract.
So for example, if I choose to sell you my house and I say, oh, I'll sell it to you for seven, and you say, great, and you buy it and you send me $7, and I say no, I meant $700,000--no court in the world would say that's a straight-up valid transaction. They would say that's a scrivener's error.
Somebody made a mistake, and the intention of the humans in the agreement is what makes a contract, not the technological execution of that agreement. And so, for example, a couple of months ago, you know, a person listed one of their NFTs for sale under a smart contract, and they listed it for way less than it was worth. And sniping software immediately nailed it, picked it up, and bought it. The question was: was the code the contract, or was the intention of the humans the contract? And lawyers have said with one voice, yeah, the code is just not the contract. It's the intentions of the humans in entering into the agreement.
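A toy illustration of that incident, not any real smart-contract platform (the token name, prices, and functions are invented): the code executes whatever was listed, which is exactly why lawyers say the code is not the contract.

```python
# Toy marketplace: the code records and executes listings with no notion of intent.
listings = {}   # token_id -> asking price in ETH

def list_for_sale(token_id, price_eth):
    listings[token_id] = price_eth          # no sanity check: the code just records it

def snipe(buyer, max_price_eth):
    """An automated bot that instantly buys anything listed at or below max_price_eth."""
    for token_id, price in list(listings.items()):
        if price <= max_price_eth:
            del listings[token_id]
            return f"{buyer} bought {token_id} for {price} ETH"
    return None

# Seller meant to list for 75 ETH but fat-fingered 0.75.
list_for_sale("bored-token-1234", 0.75)
print(snipe("sniper-bot", 1.0))   # executes immediately; the seller's intent never mattered
```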
And so sometimes, again, the answer is not, what is the technology really doing? It's just, does the language of contract apply here? And the answer in most smart contracts is no, it doesn't--no more than when I buy gas. Is that a smart contract? It's an automated technological execution of an agreement, but it's the agreement that's the issue. If I put money in and don't get gas, we have an issue. Never mind that the technology said, oh, you don't get gas. I paid. I didn't get gas.
Our intention was for that exchange. You can't just ignore that intention, go wandering off, and ask what the computer did. And there really is this dangerous trend. I just wrote a series of articles--one in the UCLA Journal of Law and Technology--on this, saying that it's really problematic.
There's this push to say, in these spaces, the law of smart contracts is just whatever the code does: that was the smart contract, that was the agreement between the parties. And it just can't be true. I don't want to bore people to death, but basically, code without bugs is impossible. We know it's impossible. It's mathematically provable that it's impossible.
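He is most likely gesturing at classic undecidability results here, the halting problem and Rice's theorem, which show there is no general procedure that can verify that arbitrary code does what its authors intended. A compressed Python sketch of the diagonal argument, for the curious:

```python
# Sketch of the halting-problem argument (illustrative only; not meant to run to completion).

def halts(program, argument) -> bool:
    """Suppose this were a perfect checker that predicts whether
    program(argument) ever finishes. The construction below shows it can't exist."""
    raise NotImplementedError("no such checker can exist")

def paradox(program):
    # Do the opposite of whatever the checker predicts about us.
    if halts(program, program):
        while True:       # loop forever if the checker says we halt
            pass
    return "halted"       # halt if the checker says we loop

# Asking halts(paradox, paradox) contradicts any answer it could give,
# so a perfect halts() -- and with it a perfect "this code has no bugs" verifier -- is impossible.
```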
And so there's no such thing as knowing that the code will do exactly what the humans intended. It's always going to do something squirrely. It's always going to do something nobody expected. And people are always going to go to court and say, that's a breach of contract. So those two examples are perfect because one of them says, What do we do with an NFT? We treat it using the law of property. The analogy, the legal analogy is just to the same stuff that we do with this wedding ring or this shirt or these headphones or any paintings on any wall that you're looking at in the room you're in.
What do we do with smart contracts? We recognize they're not smart or contracts. They simply don't answer to the point of human intention, and that's what contracts are. They're a way of creating private law between two parties. And we let people create private law between two parties because they intend to.
There's no other basis for doing that. And so we say, nope, that just doesn't meet the standard of a contract. You're going to need to dig into--
If we invest in the DAO--this was a big scandal a number of years ago; people invested in The DAO, the distributed autonomous organization on the Ethereum blockchain--somebody promptly hacked it, because software cannot be free of bugs.
They drained the money out of it. And the community, like this is the blockchain community, right? They're the ones most dedicated to the strange idea that the code is everything. And yet what happened when the DAO fell apart? They got the community together. They said, what do we want this to look like? They talked it out and then they forked the blockchain. They created a new community that had a new consensus that undid that transaction. And that's inescapable.
So, you know, the Jim Crow laws, just switching topics for a second--they invalidated, you know, a whole generation of lawyers. Are we in danger of that at the current time, where we have a Supreme Court so weighted in one direction that we are, you know, going to see some decisions here that are not the collective view of the United States, but sort of an extreme position? You know what I mean? It shouldn't be political, but it appears that it's political, you know? It is. It is political. Respect for the rule of law is, unfortunately, yeah, just not equally distributed. There is a real present threat--I think most lawyers see it, and a lot of us are agonized about it--to the legitimacy of the courts, to the legitimacy of the rule of law, to the legitimacy of elections, which have been inequitably and unfairly attacked, to the legitimacy of voting. And I think that there's a rising tide of whataboutism there that is going to attempt to do what authoritarians usually do when they're trying to subvert a democratic system, which is to make people shrug their shoulders and say, I don't know, like, I don't know what really the election result was.
Maybe it was this, maybe it was that. I'm just going to go with my tribe. Once you do that, that's the tried and true historical solution for undermining a democracy from within. And that's one of the reasons why I say in the book that no democracy that doesn't come to grips with these issues is going to make it into the 21st century very far.
Are you optimistic for the future? Do you think we'll sort these problems out? You know, as America always does, reinvents itself in a new era? I'm pretty grim about the United States in particular. I think that we've got a rough road ahead of us, and I still have a lot of hope. I hope people can come together, create new words. There are moments when we can see that we've really shifted the paradigm.
Then again, we're standing at a moment where for the first time in American history, the United States Supreme Court stands poised to strip major rights from half the population. And that is a new paradigm for law. And if those rights can be stripped, then there is no civil liberty that can't be. And that is a new paradigm. That is a new approach. That is a new thing. As to whether there's hope worldwide, there always is. One thing that I cling to, I've got a book on my coffee table here
that just has statistics for things worldwide. Look, if you chose to be a citizen of the world, the question is, did you want to do that now or in the halcyon days of American power? The answer is you want to be alive now. If you're just broadly a citizen, somewhere in the world, people are healthier.
People are being lifted out of poverty. There are many, many trends that are headed in the right direction. And I hope that we can use the massive parallelism of our ability to talk to each other, especially as augmented by information technologies, to do just amazing things that are going to crack the code to individual medicine. Right?
We do medicine now as if we're buying jeans from the Gap, whereas we actually need medicine that's going to be individually tailored to each one of us. We're just beginning that. We're just beginning to get into actually understanding the chemistry of the mind.
We're just beginning to understand nanotechnology. There's so much that we're just beginning to get into. If we can keep our heads about it and say that this is a fundamentally human process oriented toward human thriving and not, for example, wild and unequal wealth generation, then we can survive this stuff. And I remain confident that we will. But in the United States, it's becoming a harder thing to be just sort of blankly optimistic about.
This isn't the same conversation I would have had, you know, in the 1990s. Not by a long shot. Josh, I want to make sure that--first of all, thank you for, even in a gloomy sort of world, calling out what is possible, which we are really committed to here.
We have worked on the idea of precision medicine as a fundamental human right forever. You're talking to people that really believe in what is possible. For those watching, please give us the name of your book so that people can go and learn more about all of the things that-- We'll also put it in the... And we'll put it in there, but just so people know. Sure. Yeah. I mean, the book's from Cambridge University Press, it's called Runaway Technology: Can Law Keep Up? The answer: Yes. Yes, it can.
Yeah. Lovely book, Josh. You know, congratulations on that. And thank you so much for spending some time with us.
My pleasure. Thanks so much for having me. Yeah. Thank you. Thank you. Bye, you guys. Thank you very much.