PANEL | Brave New World or Better New World?
New technologies have given rise to unforeseen societal changes affecting the way we communicate, connect, and spend time in our day-to-day lives. This panel will take a look at the newest influential technologies while discussing the ethics of generative AI, virtual beings, and metaverses. As our moderator we have Jesse Damiani, Founder of Post Reality Labs, along with Dr. Brandy McNeill, Deputy Director of the New York Public Library's Branch Programs and Services; Mahrinah Shije, founding General Partner of Endemic Venture Capital; Rico Kenneth Norwood, Lead Researcher of Tales of Us; Thea Anderson, Director on the Responsible Technology team at Omidyar Network; plus a very special guest. So welcome to the stage, panel.
Hello, everyone. Thank you for being here. Thank you to Games for Change. Thrilled to be here on the 20th anniversary of a really wonderful organization. Today, we have a really great panel for you called Brave New World or Better New World, which is sort of a fun way of getting at some ethics and first principles when we're looking at these emerging technologies that manifest as buzzwords and takes and all these different competing opinions.
And so we have four really esteemed humans and one much, much discussed large language model who are going to be weighing in on these issues. My name is Jesse Damiani. I'm a Writer and Curator, and an Arts and Culture Advisor for an organization called Protocol Labs, which, if you've seen Silicon Valley, the whole build-a-better-Internet thing is kind of based on Protocol Labs.
So that's kind of more or less what Protocol Labs does. And I also work as a Curator at Nxt Museum, which is a new media museum in Amsterdam. I'm going to ask each of you to give a little bit of an introduction to who you are and a little bit about how your work intersects with these topics. And Rico. Me first? Yeah,
Jesus Christ. My name is Rico Kenneth Norwood. I am a Ph.D. recipient from the University of Southampton. I'm the Lead Researcher for TALES, an international nonprofit storytelling initiative.
We are currently building a video game, an interactive experience for ages eight and up, that centers ecology through mythology. So we work in places like Brazil, Congo, Romania; India is our new locality. And I do a lot of stuff in terms of reading through our database, going through paperwork; I'm kind of integrated in the back end in that way. But we also recently put together a library of myths, which takes over 200 tales from around the world, and AI definitely helped us extrapolate some of these things, deities and things like that. But humans were helping us as well.
So we use it as a tool to help us sift through all this stuff. That's how AI helps me, in a major way. Good morning. My name is Mahrinah Shije. My family comes from the Pueblo of San Ildefonso, and I'm married into the Pueblos in northern New Mexico. I've been in tech for about 20 years in all sorts of capacities.
Right now I serve as a General Partner at Endemic Venture Capital, which is a venture studio building Indigenous-led climate tech and tribal gaming future products. I also serve as a special advisor to President John Sharp for the World Economic Forum, looking at the future for Indigenous peoples, as well as doing lots of work with the UN in tech policy: what happens when AI goes rogue, what do we do with killer drones as humanity, and looking at that also through our sovereign tribal nations. So thank you. And on to the next. Okay, I'm Dr.
Brandy McNeill. I am the Deputy Director for the New York Public Library. I've also been on the Public Library Association's board; I just finished doing that. I'm also a writer. The way in which libraries show up in this space is through lifelong learning. That has been a mission of most libraries across the U.S.,
and so we do that in a variety of different ways. When I think of what we do at the New York Public Library, we have programs such as Project Code. Project Code has been able to partner with large giants such as Google and Microsoft and MIT, and so we have been able to create ways to ensure that marginalized communities are not left behind in this tech space. We do things like tech talks, where we bring in people who are creating NFTs that look like the people who come into our libraries.
We also have partnered with Apple to create app development programs, and we just recently had a hackathon that was about really helping people who have disabilities. People were creating apps that would help people with disabilities. So that's some of the work we do. Hi, I'm Thea Anderson.
I am on the Responsible Technology team at Omidyar Network. Omidyar Network is a social impact firm funded by Pierre Omidyar, the founder of eBay. We do grantmaking and investments. I primarily focus on ethics and technology, though it evolves and changes depending on the different issues in the technology sector, but always focusing a lot on ethics in the space. Right now quite a bit is focused on the concentration of power in the technology space.
AI has always been an issue, but it seems to be evolving much more as interest grows quite a bit as well. Another issue we're focusing on now is disinformation online, especially around content moderation, which is especially growing in the gaming space. We haven't really been working in that space, but I've been pushing it very, very hard, and now we're starting to move more into it. I think there's a recognition that's been so wonderful to hear over the last couple of days.
It's just that tech obviously is being tested on the gaming infrastructure, but this is where culture is also being tested to some extent. So it's been really empowering to hear this over the last couple of days. And just a kind of housekeeping note: some of these questions are going to be a little bit more scripted than I normally like to be, in part because I'm trying to be fair and machine-readable for our large language model friend.
So if you see me typing on the computer at some point while we're talking, I'm not intending to be rude to anybody; it's just the nature of me acting as a sort of machine surrogate. So in the first part of the conversation we're really going to be looking at this kind of buzzy and contentious term, the metaverse, which has been on top of people's minds since 2021, when it first started reappearing in Clubhouse rooms, and then ultimately by the fall Facebook changed its name, obviously, to Meta. We all know this at this point.
But even going back, thinking back to Rebecca's keynote, even going back to 1992 with Snow Crash, the metaverse was always tied to gaming and gamification. So I want to get perspectives from each of you as you're thinking about the metaverse becoming a more pervasive aspect of daily life in society. I have different questions that I'm going to be directing to each of you. So, Dr. McNeill, I'd love to start with you. As someone whose work is directly connected with the public, how are you thinking about access to the metaverse and other emerging technologies? What steps do you think we should be considering to ensure that the metaverse is as inclusive as possible? Are there factors interfering with some folks being able to access them? How might we better use the educational capacities of emerging technologies? That was a lot of questions.
I don't do well when I get a lot of questions at one time, so what I will say is how libraries are looking at the metaverse. We're using it, but we're also learning, and learning ways in which we can explain it to people who are coming into our libraries, especially because it's not necessarily the first thing that people are thinking about, especially in New York City. A lot of people are struggling to pay their rent. A lot of people are struggling to actually feed their families.
So they're not thinking about, oh, let me jump into this metaverse and do this thing. However, we also have the flip side where we see it helping people. So a good example is we have been doing gaming in a variety of different ways.
And when you have a vet who comes in, who was in a particular war, who is now introduced to gaming because of us and now doesn't feel as isolated at home, then we feel like that's a win, right? That's a mental health win for us. And so we're doing a lot of things in terms of how do we educate people about it, how do we show them what the uses of it are. But we also need to show them the pros and cons, right? Misinformation is a huge part of being able to explain to people: what is it? How do you stay safe in this environment? And so that's some of what we've been doing in that space. Amazing. Thank you. Mahrinah,
And by the way, with these questions, answer the parts that you want; we'll see how ChatGPT responds. You've referenced the importance of cyber sovereignty and the Indigenous metaverse. Can you share a bit about those and your work on both? How are you thinking about relationships to, and with, technologies in ways that avoid digital colonization, preserve Indigenous IP, and foster non-Western perspectives, values, and stories? Gosh, that's a broad question. A series of questions.
So first of all, I do have an afternoon talk where I'm going to go a little bit deeper into cyber sovereignty. But the idea is that Indigenous nations here in the US and in Canada and in many other countries have independent sovereignty; we're able to assert those rights into cyberspace and necessitate government-to-government relationships, negotiate cyber treaties, create our own regulatory environments, and look at things from our own people-centered and healthy perspective, as opposed to the somewhat corporate- and consumer-centered perspective we have here in the US. And so that is really critical for us, to develop technologies that, you know, reflect our worldview and our perspective. The easiest way that I like to talk about what Indigenous culture is, is that we're relational people: to each other, to time and space and place.
And it's very difficult to synthesize that with a Western worldview. So right now technology looks one way and it interacts one way, but as that grows in adoption and development from indigenous peoples, we're going to see very radically different ways to interact with that. And one of the things that I always like to reference is how language shapes the world around us and our perceptions of it.
An example of this in my Tewa language is the way that we talk about our younger siblings. The word is "to you," which means seed. It's the same word as seed. And we have this beautiful relationship with farming and how we interact with our foodways, and also an understanding that we have that responsibility to each other, to nourish and grow and help. And that's not very English or, you know, some of these other languages.
So when we're crafting that metaverse, when we're interacting with culture, there are ways to help bring people who weren't immersed in that worldview into perceiving the world the way that we do, and also having access to some of the more public layers. There's something called the cultural iceberg theory, where you see the top of the iceberg, but it goes very deep. And, you know, we can share some of that without being very explicit, and I think that's really beautiful. We can also use this to power our tribal nations economically, and do so in a way that's remote, that keeps our people close. Everything at my venture firm is very proof-of-concept right now, and I'm very lucky to live at a time where I can do that. One of the things that we're working on is making sure that anything that's cultural intellectual property is known to be cultural ownership, right? It's owned by our past generations and our future generations.
And that's not something that is salable. It's not something to be taken. And that's really concerning when we talk about ChatGPT, you know, the evolutions of AI scraping data, because even if we are protective of that cultural ownership, and even if we create guidelines around it, are we making sure that other people are abiding by those guidelines in an ethical way? Absolutely. Rico, you've been a gamer since you were young.
How does that impact your thinking about the metaverse, being both an academic and a gamer? What perspectives do you think are important now that folks who weren't gamers before are taking an interest in the metaverse? Are there lessons or insights you could share, particularly regarding ethics and first principles? You've spoken about not falling into techno-determinism, for instance. Why is that important to you? So I always start with the tagline that gamers have been in the metaverse before the metaverse was a thing. And I think we live in this time where the metaverse has been used as this bridge between those who didn't game, or didn't understand gaming, or were afraid of it, to sell it in a package that's digestible to the masses. We have these talks where we think about the metaverse as this place that's new, and, like we were talking about, people have different definitions of what the metaverse is. But it's a video game, and they just won't say it, and that's wild to me. Like, when kids were in Club Penguin, they were in the metaverse. When we were in Grand Theft Auto 5, we were in the metaverse.
Like, when I'm owning my nightclub and running my cocaine business in Grand Theft Auto 5, I'm in the metaverse. All these things are happening; there's actual exchange of finances there and stuff like that as well. And I think it's a way to bridge the gap, but it's a very nefarious way to talk about something that's already existed, because, as you said, people disappear in this co-opting of these spaces. And then they forget about what was already there, and they repackage it as if it's something new.
But in terms of the ethics and stuff like that: video gaming was a space for me to find not only safety but myself, my gender journey, my sexuality journey, and just to learn about the world. And when you're talking about the library, that's such a powerful thing to have in a modern institution, because when we went to the library, all we had was, like, Paint 2.0, and that was about it. But to be able to go to the library and then connect with other people in the virtual realm: maybe we might not have the finances to have a swimming pool in our house or something like that, but hey, I can swim with you in this virtual place.
And I think the connective aspect of that, like with our game, connecting children from the Congo or connecting children from Brazil: in our world it's a value. I always say it's like a decolonial Roblox. I literally say that sometimes. But think about it: Roblox as this decolonial, anti-racist, Indigenous-centered space where people really went in and got to understand and learn structural things about the world and cultural heritage and sharing. This is actually the benefit of creating spaces like this in the digital realm, versus, you know, us doing Zoom meetings in 3D with Mark Zuckerberg and stuff like that. But I mean, that's at least what I got from gaming in terms of the metaverse conversation, and what I think is also missing from it: this need to make a common space for everybody to come to, to have access, to trade, and just to restart over from what we have right now. So yeah.
I love that your work spans a range of different areas under the umbrella of responsible technology, from encrypted messaging to the fair data economy to new creative economies. Some people cite the metaverse as an evolution of the web, as a kind of shorthand.
If there's truth to that, what do you see as priorities we should be considering across policy, enterprise, and investment? Yeah, just building on what Rico was saying: even where I work, some people use the word metaverse, some people don't, and I don't even really bother, because you spend a lot of time just discussing it. So taking a step back, no matter if we use that term or not, I would say, working for a philanthropy: what role should philanthropy even play at all in technology, ethically? The way I look at it is there's real power in ideas, right? So where should philanthropy potentially engage? In the sense of rules, thinking about checks and balances, thinking about governance and power, who makes the rules; and then ideas, like who's actually thinking 20 and 50 years in advance. So when I think about issues around the metaverse, using that term exactly, where I'm directly supporting is a lot of, like, badass gaming attorneys who are focusing on issues around IP. And they're ones that potentially would not have been involved in what the web looks like today. People that wouldn't necessarily even have, quite honestly, the cash flow to be at conferences. So some of it's at the very, very basic level.
But again, I think for me it's more about ethics: what role should philanthropy even be playing in a lot of these spaces, other than making sure the right people are where they need to be, versus me sitting at that table speaking for people? I think, you know, if we could get the screen up. I started prompting, and I think I might have to prompt it to be a little bit more brief in its responses. I started with the overall framing of the metaverse and asked what ethical considerations we should consider. And what we can do is make these available somewhere else; I'll talk to the Games for Change team so that we're not spending all of our time reading.
But I'll just kind of blow through these numbered pieces: privacy and data protection, accessibility and inclusivity, digital ownership and intellectual property, digital addiction and mental health, online harassment and safety, digital divide and economic inequality, virtual crime and security, algorithmic bias and AI ethics, regulation and governance, and environmental impacts. And I'll read its conclusion: "In conclusion, the metaverse presents exciting possibilities for human interaction and experience, but it also brings forth a host of ethical challenges. By considering these first principles, we can work towards building an inclusive, secure and responsible metaverse that enhances human life without compromising our values and well-being."
How do we think about that? That was nice. Diplomatic, yeah. I mean, I didn't see any gaming, anything like that, in there. And the diversity talk is kind of weird: inclusion, or equality, is often conflated with equity. So yeah, they might be talking about equality in there, but sometimes we need to be talking about equity.
So you need to do some more work. Yeah, do some more homework. Yeah, do some more homework, ChatGPT. So I'm going to let us sit with all of that stuff, but not do more reading for the moment. I also wanted to spend a little bit of time talking about this burst of activity around generative AI, and just for some context from sort of my position: I'm using the term.
I don't necessarily agree that it's the term we should be using, and I know that there are others that feel that way, but it's the term that we have to discuss these topics; that's why we're using it. But we've seen this rise of text-to-image diffusion models like Midjourney and DALL-E, and we've also seen the rise of large language models like GPT-3 and GPT-4, which power the chat interface ChatGPT.
And it's easy to get whiplash when we're seeing these different kinds of takes. It's either going to be this crazy existential risk, or it's going to solve every problem in health care and medicine. So I wanted to take a moment for us to cut through a lot of that noise and really get to the signal. So I guess, as a broad open question that we can kind of popcorn: when you think about the rise of publicly accessible generative tools, what ethical considerations come to mind? So, these tools have become available to the public, even though we've been interacting with AI in various ways, whether we knew it or didn't know it. I think that there is regulation that is needed when things go to extremes, like deepfakes and stuff like that.
In terms of, like, taking somebody's image through collecting stuff off the Internet and literally posing as them in online spaces. But that's going to the extreme of humanity, versus the non-extreme, where literally ChatGPT is used to spell-check emails sometimes for people who may have dyslexia, like me, or something like that. And with the balance between the extreme and the non-extreme, we need to find the sweet spot when we talk about it. I don't like to do fear mongering. Not to say this is a fear mongering conversation, but I don't like to start fear mongering conversations about generative AI,
because they did the same thing with video games in the nineties, and they did the same thing with media as well. Like, they're coming for our children. The Hays Code is nothing but them fear mongering people against films and stuff like that. You know, if you're in bed with a woman, make sure you have one foot on the ground.
You know, these are things that history has repeated over and over and over again. And for some reason, they always weaponize children in the middle of it. Like, AI is coming for your kids. Well, no, not all of it. Hopefully none of it, as y'all know from that presentation yesterday. But I think, you know, we need to find a balance between regulations but also allowing people to be in creative spaces like this as well. But there's also the fear of it taking creative people's jobs.
You know, like this AI that people are relying upon to do work or do CGI graphics in the film industry and stuff like that. And that's a very scary thing. I met one dude who used to carve, and this was back when I was living in New York, working at Michaels Framing and Art in Chelsea on 23rd. Literally the hustle: he used to carve cars for companies out of clay, but eventually he didn't have a job anymore because stuff like 3D printing and 3D design came in.
So how do we not use these tools to replace human beings, but instead use these tools to, you know, help and aid us? Because there's nothing replacing humans. As much as workforce leaders want to do that, there's nothing that can literally replace us. We're so dynamic in so many ways.
And a model, as you just kind of showed, can't be as dynamic as a human being, if that makes sense. So just that balance between regulation and freedom, that sweet spot, is what I look for. So I agree.
I think I'm caught in the middle of it being great and it being horrible, right? And so when I think of the services that we offer at the library: one of the things is we have a studio where people can come in and express themselves creatively, and we have some people who come in trying to get employment as voiceover actors. And then I look at certain apps, such as a specific company that's creating books people can write, which is great for the authors, because you're able to now create books and have them out there.
But then they're also embedding this audio model, which once again is great. It means that more people will have access to books that they've never had. On the flip side, that voiceover actor whose voice is amazing, that they can just replicate: what happens to his employment? What happens to the path he's going down? So I think about some of that. I also think about that whole equity lens, and I think that's part of what we are trying to instill and make sure does not happen to a lot of the communities that come into our library systems, because we know we have to explain to people, with all that fear mongering that's happening:
Hold on, calm down. Let's kind of go through what this all means. Let's give you a little bit of understanding of what it can do, how you can play with it, give you a safe environment to play with it, and then maybe, you know, expound on that in another way.
But I think it doesn't mean that there aren't things happening. When I think of the deepfakes: literally, me and my son have a password so that if he ever gets that call and it sounds like Mom, who is like, oh my gosh, I need you to send me X, Y, and Z amount of money, we can know whether or not it is us, because we've had it happen. And I think sometimes it's happening to people. We see a bunch of older adults who come into our libraries who have had all types of scams happen to them because they're just not able to keep up with all the different ways that things are happening to them.
And so, you know, that's why I think I'm caught on both sides, because I'm seeing what happens and what comes through. And so with the libraries, we're really trying to make sure that we can inform people. We're trying to make sure that, with the equity lens, people aren't left out as creators. You don't know what an NFT is? We'll tell you what it is. We'll tell you how you can create it.
We'll give you digital art classes that help you figure out how to create that thing that you can put out there. And if you wanted to buy real estate and you're like, you know what, I can't get the house over here because that's unattainable, well, maybe I can get it in the metaverse. But if you don't know how to do that, that's why we have the public libraries. So, building on this, not only is there radical potential to exacerbate inequalities, but to reinforce stereotyping and perception bias.
Like, I've generated images of Pueblo women, and it spits out some just nonsensical thing. The things that we wear are very culturally informed. They're very specific, they're very meaningful, and it's spitting out random things. And if you're not in that culture or community, you might not know what those things are, you might not know the difference, but it can create really challenging perception biases and also inappropriate things in that way. But also looking at, you know, reviewing job applications or college applications or philanthropy, right?
Like, Native Americans get 0.2% of all philanthropic money, right? So what is AI going to do when it gets an application from a Native American organization? It's going to automatically exclude it, because historically they don't give money to that, right? And so we run into a lot of really big challenges in exacerbating social inequality, you know, financial inequality, all of these things.
But in addition to that, there's just an incredibly dark potential for increasing radicalization. And we've seen that online in the U.S., in our political systems.
We've seen people swarm our government buildings. And, you know, that is going to continue to get bigger and deeper. And we collectively need to not only invest in media literacy, but in policy guidelines that are, you know, healthier for all of us and for our young people who are maybe going into the metaverse just to play and are meeting somebody who's teaching them, you know, whatever perspective. So, things like that. Can I just add one thing onto that? Yes, I agree.
I tested the AR filter that literally would not keep my skin the shade I needed my skin to be. So there's that, I guess. Yeah, I'd just say a few things, and again, I think I'm excited about lots of pieces. But again, it's that tension, I guess, in the regulatory environment.
I think my question is, there's a huge rush to regulate, and my question is: are some of the laws that we already have sufficient? Because some of them aren't. So just adding more,
I don't think, is necessarily what we should be doing, you know? So that's one piece of it. Another thing, too, going back to concentration of power: there are a lot of huge companies that are just growing and growing and growing. So again, one of the things we also do, because we're very lucky to work at a foundation that can actually fund strategic litigation: with Clearview AI, for example, we're actually funding a lawsuit against them in California, using the California privacy law, consumer protection.
I don't know if they'll win. The plaintiffs are actually some Black Lives Matter activists and a couple of other grassroots activists, linked to Clearview AI's use of surveillance and biometrics. I don't know if they'll win, but the point of it is setting that precedent.
And it's really also to test those laws. And I think more and more it's going to be specific companies using AI. So again, part of it is also that it's very early days. So I'm excited that we're able to do that. We're also starting to do that in the EU, and it's very much linked to issues, building on what you were saying, around algorithmic wage discrimination as well.
One other thing I look at, too, is that things are moving so fast, and there's a lot of focus on the output, which is the data, and not enough focus, for lots of reasons, on the process, which is the algorithm. So again, it's not just strategic litigation, which is not always a solution. If we can also ask how we stop some of that data being collected and take a step back, I think that can be very useful. But again, that's very difficult to do. Do any of you have examples? We've referenced some examples, the Barbie app, Clearview AI, where maybe things have gone in unethical or frustrating directions.
Are there any examples that come to mind that you're currently seeing or have seen where generative AI has been used in a way that feels right and feels ethical and feels productive? I can talk about it in terms of some of the work that we're doing at TALES in the background, especially with the Library of Myths. Like I was saying, it's like 200 myths from all these localities that we're sifting through. We tag different themes, mythologies, deities, all these characters and stuff like that. And of course, people, a part of our team, manually read through all of these in their native tongue, and then we use translation models as well.
But literally, the database that they have put together allows us to go in and filter through all of these stories. So if I want to find a story based in the Congo that deals with the making of the world, or how this one thing was born, I can literally put it into the filtration system and it brings it to me, along with other stories that are like it. Maybe I want something about gender empowerment in a native story; literally, it will put stories together from all these different localities so you can see the bonds they actually have between them. And these are great ways that it's been used, even in my personal work, right? Being an academic, people always just read the first line and keep going, and just read the first line and keep going. Like, no, read it though. But literally going over all my own work and then using it to sum up some of the points, and other things as well.
But to me it's like an extension tool. I think it's great, because we do so much nowadays and it's like a one-person journey. Especially, I meet some people who need to get grants written and stuff like that. And there's a formula for writing a good grant. If you don't know that formula, or if you don't have the money to pay somebody to do that formula for you, why wouldn't you go to ChatGPT? I told somebody, like, girl, why don't you pull up ChatGPT? That's literally a tool to empower you to get the grant money so you can go publish this graphic novel you're talking to me about, because she can't afford to pay somebody to write her grants for her. And the knowledge to do that is also hearsay, he say, she say, they say; it sits in circles of nepotism that people of color or marginalized identities sometimes cannot access.
I wish I had had one that could put the proposal together for me back when I was applying; I was literally just shopping it around, getting it back, and I did it in a day. These very formulaic things are, to me, where the divide really gets leveraged: people who didn't have access to pay somebody to do it for them. That admissions scandal, things like that. So I see the power in generative AI for things like that. To that question, I would even say that within libraries, one of our biggest challenges is that we have people coming from all over who speak various different languages.
Our social programs alone can have over 51 countries represented in one class, with thirty-something languages, right? And people have to navigate getting into the library to get the services they need. That means if you don't have someone at, say, that service desk who speaks that language, who can start to help and guide people, how do you help them? And so this is where A.I. comes in. This is where we ask: how can we use A.I. for good in order to help with things like that? One of the biggest things I think about, when I think of some of the ways in which we are not doing things right, is that everything is automatic; you're already opted in. We need to be able to opt out. First, I want to decide that I want to do this thing, or be in this thing, or have you take my information or scrape my information for the use of whatever service. And I think that's not happening, and that causes a bunch of issues. But for the greater good, I mean, I think there are tons of ways that A.I.
is truly helping us. You know, we're looking at it for cataloging. So I think, you know, with our collections, there's a lot of different ways that we can use A.I.
in libraries, but really, for us, it's the connection that it's able to provide. Obviously, things like being able to help people who are looking for work. We have tons of people. We have a career services department,
we have tons of people who come in who need resumé help. Yeah, we literally have to sit with every single person coming in, so if there's something we can use to help them, to guide them, to at least get them to that point... Because a lot of people come in at the last minute: I need that resumé right now, I'm about to go to the interview. And we're like, I don't even know you or your background; how am I going to help you?
Well, this is a way that it can help us, right? But we've got to be cautious. Libraries are very sensitive about privacy. Very.
So that means even when we're thinking about whose technology or whose platform we're going to use to do some of this type of stuff: what are they doing? Where are they getting the information from? Right. Is there copyright infringement? Who's going to be held liable? Because we don't want to be. So the examples that are very prominent for me: one, emphasizing what Rico had said about making up for the fact that we're consistently under-resourced in organizations and, you know, tribal governments. To me, this has the potential to make tribal governments ten times more effective, because they just don't have the capital, right? Like, we're still recovering from the human capital loss of colonization.
And we probably will be forever. But in terms of that, we just don't have enough people to produce the amount of work we have to do just to exist. So that's really incredible, to have tools like this that help us do that better, because that helps our cultural survival.
It helps us fight back against cultural genocide, and it allows us to preserve our lifeways, which benefits everybody, because we do retain some of that continuous knowledge. But in addition to that, when we have people who are from very closed cultures and don't always interact outside, it's really difficult to do that code-switching. And so these tools can take a general idea and put it in terms that they just might not have access to otherwise, not having had the personal and professional coaching that doesn't always come into a community. So these are incredible to me in bridging some of those social divides.
On a personal level, my mom is blind, and I see a lot of opportunities there. She's not particularly tech savvy, so I do hope this becomes an easier way for her to just be able to speak and move forward with a lot of different pieces that right now would be too difficult. And then, on a personal level as well, I'll say one more thing: if you live in a foreign country, it's great, because you can translate.
So I'm always in the store in Germany just looking at the labels, like, what is this, butter or something else? I'm like, okay. Or when you get a scary German letter and you don't understand it, and it's just about signing up for a cell phone bill. Yeah, the translation helps too, like you said. So, I had wanted to ask our pal here, but the interface to access this tragic beauty has sadly died, so it's just us for the last five minutes. And with that, I just wanted to extend this last line of conversation. The title of the panel is Brave New World or Better New World. Brave New World,
as I'm sure all of you know, is the book that, about 90 years ago, imagined a dystopia where people opted in to surveillance, and distractions kept us from leading meaningful lives. And it's kind of eerie, some of the ways in which the predictions in that book proved prescient. But maybe to leave us thinking about the examples and ideas that are bringing us toward this better new world; and I know that "better" can be a fraught term and "new world" can be a fraught term, but for the sake of brevity in the discussion: what do you see as things that could steer us in the direction of a better new world versus a brave new world? I mean, the first thing I think about is digital literacy.
That's got to be the main thing, because I think a lot of people feel like everybody is on the Internet, everybody has access, everybody's doing the thing, and it's not so. We still have people in our classes to whom we're explaining some of the basics. And I think it's important that digital literacy play a huge role so that equity can happen. That is what will help us get to a better world, because when everybody can be involved in what happens, then we have a better world. So, I've had some really incredible conversations over the course of this conference, and I really believe the cultivation of empathy, for those of us who are in the business of developing these technologies, is the most important thing.
I met a gentleman who developed a game called Never Alone, which probably many of you know, and it was just absolutely incredible to me how thoughtfully, driven by indigenous protocols and collaboration, this game was created. It was really moving just to know that somebody cared enough to do that. And so for people in this room who are in that industry, or watching wherever this is distributed, it's really important for us to be listening to each other and learning from each other and hearing these different experiences, because that can guide generations of people interacting with these technologies to learn how to interact with each other and the world around us in much better ways. It was the first game we played on our Twitch show. It's amazing. It's amazing.
Like, we literally reference it all the time in our process. What can get us to a better world? To that question: not centering white folk in every conversation that we have, as a start; leveling the playing field; starting over; not having a seat at the table, but breaking the table apart and starting a new table. Literally, all these things, digital literacy, empathy, access, are about unlearning so many processes of structural damage that have happened to us over years and years and years. And I do believe it's a resetting that we really have to do. And how we get to that reset is through learning new processes and unlearning old ones. Like me and us doing the video game: we were doing this one thing where we were talking about putting in a collection system and an inventory.
And some of our junior researchers were like, well, you know, inventories perpetuate capitalism. And I was like, well, no, we're not selling anything to anybody, you know, just collecting. But no, Rico, they perpetuate capitalism, because you have to collect things for yourself.
And so it was an interdisciplinary approach where we all listened to each other, and it was like, well, what if we collect things to progress the world together, not just the player, right? And it was these ingenious things that I would never have thought about, because I was in my own world. Bringing these other perspectives in, de-centering these dominant ideas and conversations, and starting over with something that could be revolutionary, new, impactful, and radical. That's most definitely it. Well, that's a perfect segue, because I was going to talk about worker power, really. To me, one huge piece of it is supporting unions, and really supporting youth-led change, but truly youth-led, not 32-year-old "youth-led" change: really youth-led movements, youth-led building of a new economy. I think those are the big pieces, and tech can be a part of that. Obviously a lot of it's going to happen on tech platforms, but to me it has to be worker power. Love it. What an incredible conversation. Thank you all so much.
And thank you all. Oh, thanks.