Spring Symposium: Creativity in the Age of AI. AI Impacting Arts, Arts Impacting AI

(Show video.)

≫MICHELE ELAM: Hi, everybody. Welcome. So excited we're starting. Oh, I love it. Already a noisy group. I'm Michele Elam, and paired with my wonderful colleague James Landay, I'm cochair of this symposium on creativity in the age of AI: AI impacting arts, arts impacting AI.

James and I are going to do a little preamble here. We call it HAI here: a multidisciplinary institute for research, education, policy, and practice to improve the human condition. I'm in the English department, I'm a former director of African and African American Studies, and I'm affiliated with the Clayman Institute for Gender Research. I mention all of this only because all of us, everyone in this room and participating virtually, bring so many dimensions to these conversations; we're not just artists and creators, and those categories are culturally shifting as we speak. The speakers here are not categorized by position or profession. James and I did this symposium a year ago.

We have seen the outsized impact of transformative technology, most recently generative AI. It has been responded to with joy and fury, with strikes and lawsuits, and with hallelujahs that tech will save us. You will find today true believers, atheists, and agnostics among us, and the people here don't fall into either the AI-apocalypse type or the cult-of-AI-salvation, click-here-to-save-the-world type. Let me first give thanks to our cosponsor Deborah --, the Vice President for interdisciplinary arts there, and a special gratitude to the extremely dedicated, hard-working, and impossibly good-natured HAI team, including Kaci --. Thanks lastly to my utterly amazing students; some are here, and many will be in the afternoon. All my classes, and more recently the undergrad and grad classes on artificial intelligence, really inspired this event, and they continue to inspire me. I'm so thrilled you're here.

We're going to begin first with the land acknowledgment, as we are on unceded land, and we're going to watch a video on that. I don't think I cued that up. (Plays video.) This land was and continues to be of great importance to the Ohlone people, and we have a responsibility to honor that.

Just a couple quick notes on this complex topic, and on one word in particular: "creativity." There is no settled definition of creativity, even though it has often been cast as the defining feature of humanity. It opens a debate going back to Aristotle and Plato, with undue emotions and distrust that may sound familiar. In this symposium we're not taking for granted what creativity is; despite all the ink spilled on this topic, there is no shared understanding of creativity, despite the neuroscientists, your barber, your neighbor. What can be automated with AI and what remains human should not be taken for granted by either scholars or Silicon Valley, nor should which "human" sits at the center of human-centered AI. Some of the panels that follow will explore who and what we center in the worlds that we imagine.

A note on the arc of the day. As "AI impacting arts, arts impacting AI" suggests, the morning sessions largely address generative AI impacting livelihoods, with issues of privacy and copyright as real-life flash points. The second half of the day suggests that the arts are not simply reacting to artificial intelligence but are imagining more expansive creative and ethical models and worlds past, present, and future, with implications for the design and deployment of policy, as well as for the field of aesthetics: how and what we value as "art," and who decides, including from decolonial and indigenous perspectives.

We also spotlight a few artists in particular, most notably our first HAI artist-in-residence, Rashaad Newsome, who developed, with HAI Stanford support, his work BEING as part of a decolonial therapeutics that you're invited to participate in. Just before lunch we're going to show a seven-minute film on an artist we spotlight who is also speaking this afternoon, on AI systems built with indigenous principles, and we're blessed to have the conclusion of the symposium by --. The day's capstone includes presentations and discussions with industry leaders, offering a more bird's-eye view of the future of AI creativity from their perspectives. Gil --, please join us, and carry the conversation beyond these walls.

Just a brief note on the lineup of speakers. From the outset we had the goal of joining in conversation technologists and creators, however people identify, both commercial and noncommercial, in an educational forum, to bring a spectrum of views and dip into many genres: art, choreography, music, and more. In case you get vertigo, this day is a mad collision of vocabularies, mindsets, and wildly different ways of thinking. No one should underestimate the labor of interdisciplinary work, which people casually toss around as something we should do, taking for granted differences in methodologies, epistemologies, and lived experiences, and that's just within academia. Bringing people with such different imperatives to the table is important for a symposium like this. Regardless of expertise and credentials, everyone is a stakeholder in artificial intelligence; it is clearly impacting billions in every aspect of our public and private lives. James and I are bringing together people with so many different mindsets and experiences. It's an experiment, I'll give you that. It's also an ethos, and for us that ethos is a call for all of us, audience and speakers, to come together with the mind and spirit of a child, with curiosity and wonder, which after all is a birthright, and to acknowledge who is not here. We're acutely aware that this symposium can only scratch the surface of the complex relationship between people and AI.

In fact, we didn't have time to include a discussion of AI and English; hopefully we can mount a symposium in the future to take up these really fraught futures of communication, because, as James Baldwin put it, language is a political instrument. We want to recognize the thinkers who have been working on this subject from the UK, the EU, and Africa, some of whom are watching the live stream there. We know you're there, and much of your work is represented here. You will see the goals on the HAI website, and I believe there is an agenda at the front desk too, because we wanted the conversation to go beyond these walls; you can find that on the site as well. And I want James to have an opportunity to speak, too.

≫JAMES LANDAY: Okay. I don't want to speak too much, so we can get going, but I really want to emphasize what's important to me. Michele especially spent a lot of time, and we spent a lot of effort, to bring together people from the arts, people from technology, and people who are hybrids in many ways. I also notice that among the folks in the audience there is an incredible number of you who are also in the field: students, really creative, full of ideas. So I really encourage you to think of this not as us just presenting to you but as a conversation, a conversation you should have during the breaks, during lunch, and during the reception at the end, and by asking questions, along with the many hundreds of folks on the live stream. We're really excited to be starting the conversation; we could have a month-long conversation on this topic, it's really deep and interesting. We're thinking of this as asking interesting questions rather than getting answers at this point.

And so I also welcome you, and without further ado --

≫MICHELE ELAM: You're going to be the moderator --

≫JAMES LANDAY: Yeah. I'm going to emcee the first half, but feel free to grab us during the breaks if you have any problems.

≫MICHELE ELAM: Oh, how do we go -- go, go, go.

≫JAMES LANDAY: We're going to start off the morning with three different panels interrogating the future of AI and the creative industries. For our first panel, I'm happy to introduce my Stanford colleague Camille Utterback, who is also, luckily for me, here in computer science. She teaches some cross-listed courses, and she's co-teaching a course right now with Michele. Camille is also an award-winning, MacArthur-winning artist, and I'm happy to say she's a friend of mine. Camille, come up and introduce the panel.

≫CAMILLE UTTERBACK: Thank you so much, Professors Elam and Landay, for organizing this, and to all of the folks working hard to make this happen. I'm going to introduce first Dr. Jennifer King, who is a fellow at the Stanford Institute for Human-Centered Artificial Intelligence. Dr. King is a recognized expert in information privacy. Sitting at the intersection of human/computer interaction, law, and the social sciences, her research examines not only privacy but also the policy of emerging technologies; more recently her research has explored data governance alternatives with the World Economic Forum and California's new privacy laws. Her past work has examined social media, mobile systems, and genetic privacy, and her work has informed policymaking, including by the Future of Privacy Forum. She's been an invited speaker at several Federal Trade Commission workshops, and her work has appeared in many outlets, including the New York Times, National Public Radio, Bloomberg, Fox, and MIT Technology Review, among others. Dr. King completed her doctorate at the UC Berkeley School of Information. Dr. King was at the Center for Internet and Society at Stanford Law School; she was a graduate student researcher at UC Berkeley and a privacy researcher with the law, technology, and public policy clinic at Berkeley Law. She has advised California policymakers on mobile privacy policies and served on an RFID advisory board, and prior to academia she worked in security. Thank you so much for joining us today, Jennifer.

Now I'm going to introduce Caroline. Caroline Sinders is an award-winning designer and the founder of a human rights research and design lab.

For the past few years she's been examining artificial intelligence, systems design, harm, and the politics of digital spaces and technology platforms. Sinders has worked with the UN, IBM Watson, and others. She's held fellowships with the Harvard Kennedy School, the Mozilla Foundation, the Yerba Buena Center for the Arts, the International Center of Photography, the Contemporary Arts Center New Orleans, and Telematic Media Arts; her work has appeared in Hyperallergic. She's been named by Forbes as an AI designer to watch, and recognized in a responsible-technology category for a toolkit she created for technologists on how to hold safe events during COVID-19, and for a product she led design on. She's provided insight, critique, and feedback for international regulatory bodies, and her artwork on disinformation has been described as work that helps us better understand how easily visual culture contributes to its credibility.

Sinders holds a master's degree from New York University's Interactive Telecommunications Program, and she's based between the UK and --. I know those are long bios, and I think it's amazing to have these lists from different parts of the world, so I just wanted to read them and give props. Take it away.

≫JENNIFER KING: Hello, everybody. Thanks to both James and Michele for the invite today. My primary research is in information privacy and data policy, and I'm also not an artist; you're going to be able to tell that from the slides I'm presenting today. But I want to set the stage for our conversation by talking about how we made ourselves readable and legible to computers, and I want to address how our personal information is impacted. First I'm going to take us back in time a little bit. Okay.

I'm not going to embarrass some of us by asking if you recognize what this is; for those of you who have absolutely no idea, go look it up. We'll take a survey. Let me start by saying that for over 30 years (these are from 1995 and 1996), growing numbers of us have been co-creating this public sphere.

We've been contributing to online discussions and sharing photos and works in online spaces, and we witnessed internet access spreading from an initial few to eventually reach around the globe. To get this access we had to make ourselves legible and readable: from our first attempts at writing HTML, to learning how to optimize social media, to participating in online communities. We made ourselves readable by machines. Then came web 1.0 and web 2.0, which began around 2006; these are Myspace pages, before Facebook. That era was about creating and sharing data, increasingly content about ourselves. Governments had been digitizing data for years, but it took the platform of the internet, with very few gatekeepers, to become the precondition for generative AI. Now let me take you on a quick side trip to one of the road bumps on the information highway.

Sharing data came into direct conflict with copyright with the advent of file sharing. It's the ease of copying that allows me to copy all of these images today and give you this presentation. Again, many of you are probably old enough to remember this era; for those of you who are not, Napster, released in 1999, was one of the first file-sharing programs. There was no online video, no YouTube; you could barely get photos onto the internet. Overnight, instead of buying a CD that cost $16 at Tower Records back in the day, you could download free digital copies of music, and later movies, that would have cost hundreds of dollars. Suddenly the internet could be used for something more than building static websites.

Most of the industry did not like this business model. In 1998 the Digital Millennium Copyright Act forced websites to take down copyrighted material, and the RIAA sued Napster out of existence and sued its own customers for copyright infringement. I don't know if you experienced this, but it was happening on college campuses at the time. And this worked; think about the implications of that for watermarking, back when we used to say information just wants to be free.

But under threat of these individual lawsuits, as well as platform compliance, file sharing moved to the fringes of the web. Napster and its many clones showed that technology had come for the industry's business model. It took several years for legitimate online marketplaces to emerge; Spotify, the service that truly emerged from Napster's wake, took a long time to appear and mature. In the meantime we have seen the recording industry's business model nearly collapse: today a small number of recording artists make a sizable living, while others who were once able to eke out an existence have seen it change dramatically in their lifetimes. I started with this story to make the point that we've seen this disruption before, when a new technology is released and it upends both our legal structures and our economic norms. What happened to the music industry is only a prelude for others: journalism, film, and, as the pictures here show, the Hollywood writers' strike.

Large corporations were able to leverage copyright law to get platforms to agree to digital distribution on their terms. The openness of the web places us at an inflection point with another technology: AI, and specifically generative AI for the purposes of today. With the addition of artificial intelligence, what lies ahead? We are going from machine readability to, I suggest, the machine exploitation of the data we produce. The public sphere is built on an infrastructure that is optimized for machines and not people, and we've laid all of this on top of our experience.

The data is collected and processed, and now we're seeing it scraped and repurposed to build generative AI. First, what constitutes a public space? We have blurred our expectations of how data is collected and reused in those public spaces. These are images from Google Street View, launched back in 2007; it was one of the first such conflicts over the physical public sphere. It was deemed legal in the U.S. but faced challenges elsewhere, with faces and license plates not blurred out of the images, which caused some people real-world harm when they found themselves somehow immortalized and captured on Google Street View. The pushback we saw came mostly from the EU, which already had a data protection framework at the time, and that's why Google began blurring out faces; you can see the algorithm blur out anything that looks like a face, including a cow and various temples and statues. Then we saw the rise of facial recognition technology, and again privacy and data protection laws addressing issues of biometrics, and pushback against the scraping of people's images to build these systems.

In March, the Italian data protection authority banned ChatGPT until the company made changes, and it raised questions about whether this system can ever be compliant, given how the company is getting its data. It is necessary to build an infrastructure that serves people first, allowing us agency over what data and content we wish to contribute, including in what I call the privately owned public digital square. We need control over how our data is used, and ways to decline participation in these products if we don't want to take part. In other nations, particularly in the EU, there's been considerable progress. Presently, generative AI is built on our data, collected without our consent and without compensation. Your contributions were added to the expansion of the web, and it gave you access to free tools, products, and services.

But there have been costs for all of us. Our data is often collected and used far beyond what we originally contributed, and that has made tech companies some of the wealthiest on the planet. We're usually told that in isolation your data has no value. In the worst cases, some of us actually experience real harm in the real world: we are harassed, and our labor is appropriated and undermined. That's especially true here in the U.S., where we have tried and failed to get a privacy law passed.

It is time to build a technical infrastructure that actually privileges humanity over machine readability, rather than humans having to accommodate the machines' needs; to demand intentionality and quality in how these technologies are built, rather than pillaging the commons and scraping everything in sight. Anyone who has seen some of the worst corners of the internet should be concerned, as I am, about exactly what data is being scraped and about general-purpose tools trained on this type of data. I'm not convinced that the toxicity will wash out of all of the contributions. Caroline, my collaborator, will talk more about that in a few minutes.

Specifically, she'll address how the quality of the training data can impact the output of these models. Finally, if we want to maintain open information exchange and connection, we need to understand where to place limits on our machine legibility, and to research the right to remain illegible on our own terms. This is the backdrop to the work Caroline and I are embarking on, and we'll talk more about it in the Q&A.

≫CAROLINE SINDERS: Hi, everyone. I'm so honored to be here.

Here we go. I'll go back. We can't conclude a tech conference without tech problems; definitely performance art. Thank you so much for having me.

I'm framing this talk around the premise that technology can facilitate art, and I want to use the example of photography. A camera is a form of technology, but so is a negative. We've been debating whether technology can also be an agent of creativity for almost two centuries, and the creation of photography was met with panic.

Photography as a medium sits in an unnecessary dichotomy: the separation of technology and art. It's unnecessary to separate the two; they in fact overlap, or are at times the same thing. Generative AI and art are not the same thing, but they're linked, metaphorically at times and in practice. Nor are images the same thing as data and datasets, though images can become AI training data and AI output.

I want to give a bit of the history of photography. I mentioned I am a photographer. That was a really different time. I remember this story from my sophomore year, and this may be folklore, because I've tried to track it down and I can't: photography wasn't considered an art but a science. The camera was simply a tool, because its pure visual realism was assigned to science alone.

So it's not just science but an art also. Now, the strongman that you see here very easily disproves the notion that the camera is only a truth-teller. It is art, although at the time it was not intended to be, and this is not really a drowned man.

You can understand where I'm coming from and how I'm thinking about generative AI as a tool: what it means to make as an artist right now, to create art itself. What tools do we use? What are the implications of those tools, much like photography's? A camera has its limits, constraints, and guardrails. What are the limits, constraints, and guardrails of generative AI?

I'm here as an artist and also as a human rights researcher and designer. I'm a postdoctoral researcher in privacy regulation at the Information Commissioner's Office. Now, I'm not here in my capacity as an ICO representative, so please do not tweet anything that I say and tag them; they would not like that. I'm here today as an artist and researcher representing my human rights lab. Jen and I are both presenters and collaborators, and we encourage you to remember that while this conversation is happening in the United States, it is a global conversation: the tools and the regulation we have here in the U.S. are not universal or global. I live in the United Kingdom, under different regulations than you do, with different tools at my disposal. I want you to keep in mind that things could change, and things can.

Now for a few ideas grounded in a bit of the history of photography. I'm not a techno-solutionist or a techno-optimist; overall I'm an optimistic pessimist. I hope for the best and plan for the worst. In human rights work we go much more in depth with threat modeling. Threat modeling allows you to map any potential harms that communities or individuals might face, potential harms that may arise downstream, and harm mitigation and harm reduction, including for the art that I make: how might I make something that could be misinterpreted? Threat modeling allows us to map out the harms completely.

Some of the harms of AI are the same as the harms of other systems: the harm of labor that is cheaply paid and abstracted away to make models, of where the data comes from and how problematic that data is; the harm of public data being taken and transformed back into something privatized and closed off. But there are also the new media harms of generative AI, given the speed and high-fidelity outputs it creates: disinformation, and AI-enabled nonconsensual imagery, often referred to as revenge porn. Centering its harms, and also its aesthetics, is incredibly important. Take disinformation: think of how TikTokers use green screens and documents as evidence while making claims, as you've seen around the Depp-Heard trial. Here's a video so you can see how people are becoming internet sleuths, using news channel logos to mimic current newscasters, all to create truthiness by playing upon aesthetics. The document had been trustworthy, and now it is also weaponized. The aesthetics of truthiness can also lie.

So I'm interested in what the aesthetics of generative AI are: the weird smoothness of DALL-E, the retouched look, the vaporwave styling. In the future, when we look back on a timeline of trends, will those be the ones that stand out from now? Images have always been data to me, as photography is such a necessary part of journalism, documentation, and archiving. But not all images should be viewed as data or a dataset. I love how weird this image is. I love it as a photograph; I don't think it would work as a painting or a 3D model.

How funny and bizarre children can be is what really works here. If we look at the contact sheets, here are the missteps, the mistakes, and the scaffolding it took to get to this one image. That is an example, from photography perhaps, of bad data. What if a model were trained on artists' contact sheets? What would it create? I don't think it would have the ability to choose; we'd probably get something in the middle.

I think about photographic data. Not all images are perfect or good, nor representative, nor should they be. Not all images are representative of reality or of spirit. Artists have a vision, and so do photographers.

What would a model trained on them be? An amplification of those harms and biases. The tool has an identity and intentionality, something that I cannot change: a powerful, powerful instrument, whereas a camera is a scalpel. No one is allowed to see my contact sheets.

They are private. This project is called --. I created 4,000 images of cypress trees, and some were too dark.

The dataset we created has both quantity and quality: quantity for training, and quality because we are printing and showing the dataset; it's also for the aesthetic. It's a different kind of mindset than if I were creating, in heavy quotes, a "traditional" photo project. For that I would probably take 100-200 images, and if I were lucky I would find five images and eventually select one from that day. But this is the difference: it's a different intentionality. The way I photograph a series of portraits is different from how I photographed for this project, a machine learning dataset; that was an entirely different process for me as an artist. The intentionality and design of the project were inherently different. Perhaps Midjourney can speed up the process.

Hurricane Ida went through these swamps, and we accidentally mapped part of Ida's course. Some of these trees are now gone; they are dead. In these projects, these images are an archive of what we have of them.

It's a project about climate change and how cypress trees are disappearing: these particular trees, in a very particular location, that represent something particular to the people in this area. We cannot control who is taking images of our images, how the press uses our images, or how much the likeness of this project exists on the web without our knowledge and without our consent. This is something that I, as a UK-based artist living under a different kind of regulation than in the U.S., and thinking of the EU, consider as well.

I think it's also important to go back for a second and think about what it means to be a video game designer or a filmmaker in this age of AI, which I think is an extremely interesting question. Part of what makes an artist an artist, if you think about the contact sheet slide, is agency. Not all of the artist's images were good, and not all that I took were good either. Consider the decisive moment: the decisive moment is about the composition of an image, but I've always read it as intentionality. "If a photograph is to communicate its subject in all its intensity... photography implies the recognition of a rhythm in the world of real things."

"What the eye does is to find and focus on the particular subject within the mass of reality. In a photograph, composition is the result of a simultaneous coalition, the organic coordination of elements seen by the eye. One does not add composition as though it were an afterthought superimposed on the basic subject material... Composition must have its own inevitability about it. There is one moment at which the elements in motion are in balance. Photography must seize upon this moment and hold immobile the equilibrium of it." That is what he said, and that is what a creator does; this is what an artist does. So much of art is seeing and waiting, and having the ability to see what others cannot. I don't think generative AI can replace that, though it may try to recreate it.

So generative AI is kind of meeting us in the middle, in a way. That doesn't diminish the harms of AI, but it won't totally replace art. I do want to echo some of Jen's thoughts here, and I figured we would use almost the same slide: we need permission, credit, and compensation, and also a system that allows for refusal. When we start redefining what "public" and "open" mean, we have to allow for something that recenters consent, and this must include refusal, because I consider refusal a form of consent. A real commitment to how a community's work will be used is a form of permission.

I am thinking about the Māori. This group will refuse you, and you have to show evidence of how your use benefits their community. They have their own transparency protocols, setting limits around the kinds of communities and companies that can use your dataset. It's not just about which companies are using your work.

We need historical evidence of commitment: a commitment from requesters that they're centering intersectionality in their work, that they want it, and that they will honor our transparency protocols. Our dataset has an expiration date; you can only use it for 10 years. Will you really honor that?

If not, then our dataset cannot be used by you, and that's okay. I'm suggesting that we shift and redefine what open standards really mean for AI and generative content. Thank you. [Applause]

≫: You both pulled from a slightly deeper history than we often get in our conversations around Silicon Valley, and I appreciate the internet history and the deeper photographic history. You want to go first? Shall I go first?

≫: A lot of our work focuses on deceptive and manipulative design and what that is doing in relation to datasets and AI. We're investigating two particular approaches. One is the question of whether we can use copyright, for example, to investigate how data is making it into the datasets that are being scraped. The other is less about copyright and more about personal data: whether we can use the regulatory tools developed in the EU and the UK (and, unfortunately, only here in California in the U.S.) to investigate how this data is making it into these datasets. Right now there isn't any transparency about what is being scraped, and there are already issues with copyrighted material in these models in particular.

≫: I was also really interested in a quote from Caroline about a dataset full of mistakes, and in this idea of refusal: refusing to be part of certain datasets. Is there a way of marking that? You've talked about feminist datasets and all of these different conditions. Are you hoping for legislation, or is there a way that we don't completely refuse but create an in-between scenario where you can specify use? I know a lot of artists want to be part of these systems if they can control it or get credit in some way, and I think a lot of the conversation tends to be all or nothing. It seems like we need a compromise, but what are the infrastructure, rules, and technologies around those compromises? Do you know more specifics about that, or have ideas?

≫: I definitely have thoughts. The next panel is focused more on copyright and regulation, from my understanding, so hopefully this spurs that discussion as well. I'm a really big fan of small and medium-sized datasets, and part of that is putting in the labor and understanding the time and impact of creating a dataset. I think Jen's talk was very eloquent on this: we have given up our rights to data for speed and ease, so how do you fold friction back into this?

Or rather, how do you create an elongated system that allows more equity? A lot of these AI tools are the fast fashion of AI. There are a lot of environmental issues with fast fashion, even though it's cheap and accessible, right?

And I think we're dealing with the tension around that, which is important to highlight. Even the affordability or the quickness of some of these tools comes, one can argue, at a deep human rights cost and environmental cost. Those working on these systems are underpaid, and there is the further issue of what actual copyrighted images are in these large systems, which have become a part of their scaffolding, in a way, and how much is taken without the artists' consent. You can't necessarily separate all of that.

One of the things that I enjoyed was collaborating with Anna Ridler. She makes her own datasets, and it took months to create our project. Many hours, many visits to different swamps, and boat rides. It was not a fast process, and one can argue it yielded a relatively small number of images for a model -- we have 3,000 to 4,000 images, and that's quite small. It's important to stretch out that time. Again, if we use the fast fashion metaphor: how long should it take to make something that we can argue is equitable, or perhaps higher quality? That does take time. That takes investment, and that is where I think art is an incredibly important research method and tool for inquiry.

In the UK, for example, as a researcher, your practice -- any creative practice -- is a legitimate research method and area of inquiry, and I feel like that's less true in North America. This is where we can look at an artistic practice, like the example of making our own dataset, and think of it in terms of time and cost: what does it look like with that kind of investment, and how does that go up against larger things like capitalism? I'm thinking about this from an artistic standpoint. I know we're at time, actually. Yeah.

≫: Yeah. We probably can't take any questions -- questions -- I'm sorry. Good. ≫: There are a couple of questions online, but let's start with a question in the room, and then we'll finish with the online ones. I see the hand right there.

≫: Mic? ≫: I think the mic is coming. ≫: I'm a Stanford alum, and my first question is to Caroline about the artistic expression you mentioned, particularly the contact sheets and things of that nature. How do you feel about -- and, again, the tool itself has, let's say, its journey, and out of hundreds of mistakes the artist comes up with one image as an expression, that photograph -- that's the first question. And the second question is: I've used these tools myself to come up with very specific imagery -- for capitalism, I'm fundraising -- within slide shows, keynotes, that type of thing. How do you feel about that? ≫: Thank you so much for your questions. Going back to the first one: one of the reasons I did not use my own contact sheets is that doing so would allow those contact sheets to enter a public conversation through a series of exhibitions, and I would not want that for the work that I do. And it's important to see how difficult it is to get to

a decisive moment, to an iconic image, to an image of intentionality. When I'm creating a particular image, I may go through rolls and rolls of film -- I still shoot medium format film, which is slowly disappearing -- and there may not be one contact sheet. I will hold on to a contact sheet as part of an archive, to know it's still there and that it was a long, arduous process to get what you wanted. And to get to your second question: I think it's fine to use these tools at times along the journey to create small graphics. I have a hard time using them in my own practice, because what I want is a scalpel, in a way, to fine-tune things, and I've been interested in Adobe's recent announcement, for example. But I'm sort of really torn, and I'm torn in a variety of ways.

I think that you should have the ability to use a tool like DALL-E, but it shouldn't come at the cost of violating another artist's consent, and I would rather take a less accurate and lower-fidelity tool built in a different way. I know that sounds like a magic wand or a wish, but this is where I think about different regulations and guardrails. I live under GDPR in the United Kingdom -- how do you extend that space to something like generative AI? Can you? In some conversations I have with creatives, they seem very aware that an open source license may not work with machine learning, and this is why we have to think about what "public" and "open" mean in machine learning. Taking it a step further, if you consider yourself an open source artist -- a lot of my work is open sourced, and I have decided some of it shouldn't be, which doesn't mean it's not public. If it ends up within one of these datasets, I don't think I can call it an "open source project" anymore. And that's okay. I can create a definition with my collaborators and define how I see it as a public project, and it's important to fold that back in: our criteria and intentions for how we want that project used. And I think there should be more

accessible features for people who are not graphic designers to make graphics for their presentations, and I think that's an extremely interesting tool. Many more of these tools could have guardrails built into them: maybe climate footprint limits, maybe you're only able to generate a certain number of images per month or per week. Maybe there are strict limits in terms of what you can generate, with certain words blocked or

removed. The output could be lower-fidelity files, and people are looking at watermarks embedded into the images. I have the copyright embedded into the metadata of my actual images -- I was lucky that in 2010 a professor sat us down and gave us a pipeline for how to do that as a photographer. Those are helpful tools. I call this a six-inch zombie door: a zombie will trip over it.

It will not stop the zombie, but it will slow it down. And I can show I registered that image. A lot of these guardrails I'm suggesting are not solutions, but they are necessary frictions we need to start thinking about. Some of those things can be built into these systems, and some are potential regulatory suggestions. That's how I think about it -- I don't see this as a binary yes or no. ≫: And you kind of answered one of the questions online, which is from Joseph Yao: do you have suggestions for how to protect online privacy? And just to wrap up -- Jen, did you have

anything else you wanted to add to that? ≫JENNIFER KING: Very briefly: specifically in the U.S., we had a privacy law proposed, and it's coming back in this Congressional session, setting aside some of the conflicts we're having with the State of California over a potential law. And I also

would state very briefly that that's one step; we need to be arguing about how we deal with personal data online. There is a lot of interesting work out there on concepts like data trusts and data collaboratives, and I will leave it to the audience to explore. There is a whole different infrastructure possible, rather than the status quo we've been dealing with. Obviously, this all or

nothing situation, where everything is either public or not public, I think is untenable. ≫: Thank you so much for setting us up.
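The metadata pipeline Caroline mentions -- embedding a copyright notice into the image file itself -- can be sketched in code. This is a hedged, minimal illustration, not her actual workflow (which isn't specified here): it inserts a standard `tEXt` chunk carrying the PNG spec's predefined `Copyright` keyword, using only the Python standard library. Photographers more commonly set EXIF/IPTC fields with a tool such as exiftool; the principle is the same.

```python
import struct
import zlib

def add_copyright_chunk(png: bytes, notice: str) -> bytes:
    """Insert a PNG tEXt chunk carrying a Copyright notice after IHDR."""
    signature = b"\x89PNG\r\n\x1a\n"
    if not png.startswith(signature):
        raise ValueError("not a PNG file")
    # Each PNG chunk: 4-byte big-endian length, 4-byte type, data, 4-byte CRC.
    ihdr_len = struct.unpack(">I", png[8:12])[0]
    ihdr_end = 8 + 8 + ihdr_len + 4  # signature + length/type + data + CRC
    # tEXt payload: keyword, NUL separator, Latin-1 text (per the PNG spec).
    payload = b"Copyright\x00" + notice.encode("latin-1")
    crc = zlib.crc32(b"tEXt" + payload)
    chunk = struct.pack(">I", len(payload)) + b"tEXt" + payload + struct.pack(">I", crc)
    return png[:ihdr_end] + chunk + png[ihdr_end:]
```

Like the "zombie door" she describes, this metadata is trivially strippable, but it travels with the file by default and documents authorship; viewers ignore unknown text chunks, so the image renders unchanged.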

≫JAMES LANDAY: That was great, and we're going to continue on, talking about open source and copyright. I want to first introduce our panel moderator, Professor Russ Altman. He's a professor of bioengineering, of genetics, of medicine, and, by courtesy, of computer science. He's also one of the associate directors of Stanford HAI, and I felt I had met my long-lost brother. ≫RUSS ALTMAN: This panel is on generative AI, and what's more natural than having a bioengineering faculty member moderate it? I want to introduce Scott Draves, an AI artist and engineer, and Pam Samuelson, a codirector of the Berkeley Center for Law and Technology, and with those brief introductions I will just throw it to Scott. ≫SCOTT DRAVES: Thank you, and thanks for all your attention this morning. It's an honor to be here. Can we switch to the slides, please? Thanks. A lot of people have a take on generative AI, and before I give you yet another one: where does mine come from? Where do I come from? I was trying to think of what I can best contribute here today, and I thought one thing is just -- where does generative AI come from? What's the history here? So this image -- here's an example. You know, I've been

doing open source and generative art since the '80s and '90s. This image, #149, is from 1994, when it won an award that made me realize I was an artist. It was perhaps the first open source artwork. It wasn't just an image; it was code that allowed other artists to create their own images, and this artwork was very much about enabling other artists -- the meta-activity of sharing the programs, exchanging code over the internet, and building things together. And this Flame algorithm, as it became known, created a whole genre and visual style. Eventually I couldn't even go into a bookstore or look at a magazine rack without seeing some version of it. And of course -- oops.

Here we go. So here's an album cover that used it. And of course these were all images. I didn't get any sort of credit for this.

My name isn't here. It was made by another artist just using the algorithm. What I really learned from this -- I put this code out there with a plan, but what I learned was that in giving up control, the real power isn't getting what you want, getting other artists to use it, or getting the code back, but getting things that I didn't know I wanted, or that I didn't want, and enabling other people to do things that were really unexpected. Then in 1999 I created the Electric Sheep, an evolving AI screen saver that used feedback from its audience, so it learned from everybody who was watching it and tried to satisfy that human desire. And it generated

animations, and this was a collective intelligence created 24 years ago that is actually still running. This image -- sorry, here we go -- this image is actually a still frame from a Super Bowl commercial that IBM and H&R Block ran. It definitely went places that I didn't expect. But for most of my career, I was a regular tech guy. I've got a Ph.D. in computer science, with James.

And I worked on regular technology unrelated to this, but I had an interlude in my career: for five years I worked full-time as an artist, just doing open source stuff. I'll show one more, much lesser-known example here, speaking of history. This is a super early

attempt at something that is now like Stable Diffusion. This is a generative algorithm that takes an image as input and generates output whose look is modeled after the input. As you can see, here's an attempt at doing hands. It still has the same problem after all these years.

Sorry -- this was 1993, though. All of this predates the adoption of the neural networks that we have today. So that's where I'm coming from: somebody who's been doing the open source stuff and the generative AI stuff for a long time, and it's all prelude to what's happening these days. It's child's play compared to what the algorithms today are doing. Over the last three years we've seen an incredible leap in the computer's ability to understand human language and generate images. What are the implications of this?

I would say we could have a private tutor for every child. A personal assistant in everyone's pocket. New forms of personal expression. Photoshop that's easy to use -- it just does what you tell it. Interactive fiction. Who knows. These are the obvious ideas; some of them come from science fiction, say from George ... we're just pointing a camera at a stage

and we don't know where this is going to take off, where this technology is going to take us, or what its unintended consequences are. These models -- these language models and image models -- contain notions of truth and beauty. Who defines truth, and who defines beauty? And how can you trust that the model you're using is serving your interests, versus, say, the company that created it? The models that have been making headlines are proprietary products from secretive organizations, and there are issues with trust and accountability. The good news is there's an alternative, and that's open source. There are four parts to that. We need the code for training the models to be open.

We need the code that runs the models -- the inference engine -- to be open. We need the training data to be open, and the model weights to be open. If we have those four things, then what are the implications? Knowing the data that goes into the models allows independent research and third-party audits, and it can really help with trust and alignment, because you can really see how the model works. And liberal licensing matters, because open source makes stuff free to use, and that's going to really help spread the benefits of AI to everyone. If the licensing is liberal, which means allowing commercial use, then that can motivate investment, and you can get a

virtuous cycle of improvement. It's really important that one of the things open source enables is forking and fine-tuning of the models. You can take a foundation model, add a little bit more data, and change its character -- change its notion of truth. That allows each person, every country, every culture, every identity to create their own models. So instead of having just one model from one company, you can have each person, each organization, each group of people creating their own model. And we're really lucky: the beauty of these pre-trained models, the foundation models, is that making them is extremely expensive, but the fine-tuning is actually really easy and cheap.

You can just do that in a few days. So instead of having one notion of truth, we can have freedom of choice. Bias, I think, is really inherent in these models, so we need to be able to choose the one we want. Ultimately, every person can have their own model.
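Scott's point that fine-tuning is cheap relative to pretraining can be illustrated with a deliberately tiny, hypothetical sketch -- a one-parameter linear model standing in for a foundation model. This is nothing like how real systems are trained, but it shows the economics he describes: many updates to "pretrain", then a handful of cheap updates on new data to shift the model's character.

```python
def train(w, data, lr, epochs):
    """Plain SGD on squared error for the toy model y = w * x."""
    for _ in range(epochs):
        for x, y in data:
            grad = 2.0 * (w * x - y) * x
            w -= lr * grad
    return w

# "Pretraining": lots of general data following y = 2x (2,500 updates).
general_data = [(x, 2.0 * x) for x in range(1, 51)]
w = train(0.0, general_data, lr=1e-4, epochs=50)

# "Fine-tuning": five examples with a different character, y = 3x.
# Far fewer updates (100) are enough to change the model's behavior.
domain_data = [(x, 3.0 * x) for x in range(1, 6)]
w_ft = train(w, domain_data, lr=1e-2, epochs=20)

print(round(w, 2), round(w_ft, 2))  # 2.0 3.0
```

The comparison is proportional, not literal: foundation-model pretraining costs enormous amounts of compute, while fine-tuning (e.g. LoRA-style methods) can run on a single machine, which is what makes the per-person and per-community models Scott describes plausible.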

I think open source will help a lot with a number of these hard problems. But there's really one that remains, which is: what happens if these powerful tools are used by people for, say, nefarious purposes? Harm can result. And hopefully -- I'm optimistic about this.

If you look at the history of technology -- ultimately I believe in human nature, and that there are more good people than bad people. The benefits outweigh the problems, and we will identify the problems and address them the best we can as the cookie crumbles. My experience is that open source is part of the solution to making this technology work best for everyone. And so that's the end of my introduction. [Applause] ≫RUSS ALTMAN: A reminder to send your questions to all of the places that we told you to send them to. ≫PAMELA SAMUELSON: Good morning. I'm very happy to be

here. Thank you for the invitation. Copyright and generative AI is the focus of my talk today. We could have a really long conversation about the governance challenges that generative AI is posing generally; there's a global conversation about this particular topic. Obviously the general public has embraced ChatGPT, and policy makers are worried about defamation and disinformation, just to name a few issues, but we're just going to talk about copyright today. The thing to know as we begin is that some authors and artists and programmers are very enthusiastic about this development, and about copyright not being an obstacle, and others are negative. There are lawsuits challenging

generative AI on copyright and related grounds. The United States Copyright Office is holding some listening sessions with stakeholders about how the office should be thinking about the main issues it is addressing, and it will publish a report, probably sometime by the end of the year. The Copyright Office actually has a website that's all about generative AI, so there are lots and lots of materials there if you're interested. I only have time to talk about three questions, and I'm going to have to do it really fast. First: is ingesting copies of copyrighted works as training data an infringement of copyright? Pretty much everything that's out there on the open web is actually protected by copyright law, so that means there's a lot of

copyrighted stuff out there, and even if people aren't commercially exploiting it, the works are still protected. The second question is whether AI-generated outputs infringe; two of the lawsuits against Stability raise this issue, as well as the first one. And the third question has to do with removal or alteration of what copyright law calls copyright management information: what's the name of the work, who's the author, and on what terms is it available? The Stability cases raise this issue, as does a class action lawsuit against GitHub over the Codex language model and the Copilot service. And the author is the one who gets those rights.

These are the five major exclusive rights that copyright grants them. The rights last practically forever. And copyright only protects the original expression of an author -- not the ideas, not the facts, not the methods. There

are lots and lots of unprotected elements in copyrighted works. Copyright's exclusive rights are limited by fair use and other doctrines, and there's a copyright-adjacent law that makes intentional removal or alteration of copyright management information illegal if you know that it will facilitate copyright infringement. Fair uses are non-infringing: fair use is a defense to a charge of infringement, and it is usually the issue raised when we're talking about the ingestion of copyrighted works as training data. There are various factors; I'm not going to be able to go into them right now, but I'm certainly happy to talk about these things at the break. So, is ingestion copyright infringement or not? There are several cases -- I'm listing two of them here -- where the courts basically threw out copyright claims on fair use grounds. Field put a bunch of his work up on the internet; Google crawls the web and copied it to index the contents and let people find Field's work, and the court said no, it's fair use, because Google is using the works not to exploit the content but just to let people know that the works exist out there. And the appellate court ruled that Google's digitization of tens of millions of copyrighted books from research library collections was fair use when it was done for the purpose of indexing the contents and serving up snippets in response to search queries.

Stability is going to be relying on these and similar cases to support its fair use defenses. But there are a lot of people out there who really don't like the ingestion without permission: they can't really easily opt out, and they weren't paid for the value of their contributions. And part of the concern is that the

images that are being generated through Stable Diffusion and other image generators will compete in the market with images that artists actually make. In some sense you're competing against yourself -- at least, that's the way some of the artists think about it. And this cartoon is how some people think about generative AI. Now there are countervailing considerations. The people who develop these large language models are not really interested in the expression in the works.

They're really interested in essentially understanding the facts, and they think of the documents as raw material for computational uses that are not actually exploiting the expression. So that's one way of looking at it. Also, generative AI enables a lot of creative reuses of things, and what copyright law is supposed to be doing is promoting the progress of science -- of knowledge and culture -- so the people who favor generative AI think this is actually a good thing: generative AI advances the purposes of copyright. Is the output a derivative work? Authors have the exclusive

right to prepare derivative works, but the courts have basically said something infringes the derivative work right only if you have actually extracted some expression from the original work and put it into the second work. It's not enough just to be based upon the original; you actually have to have taken expression and reused it. The class action lawsuit against Stability claims that every output of Stable Diffusion is an infringing derivative work, and I think it's actually hard to say under existing precedent whether that's a sound result. In general, the text and images that are generated in response to user prompts are not going to be substantially similar in their expression to the works that were in the training data, and if that's true, then the outputs are unlikely to infringe the derivative work right. But there are examples people have shown where -- like Mickey Mouse and Superman -- if you do a prompt with them, you're going to get an image that looks like Mickey Mouse or Superman.

Of the cases against Stability, Getty says: hey, Stability, you ingested 12 million photographs and captions from our online database, and that's copyright infringement. There are also claims that the outputs infringe the derivative work right, and there's a copyright management information claim: in particular, Getty's images come out with mangled watermarks, and that is a basis for claiming that Stable Diffusion is at least violating the copyright management information law. These lawsuits are in very early stages; it's going to be many years, probably, until we know the results from the courts. But it's good to remember that this isn't the first disruptive technology that's ever been out there. I like to remember the piano roll: the composer said, that's my music, but it didn't look like a copy to the courts -- it's not a copy, it's part of a machine. Copyright law has to recognize that technology has to be responded to; sometimes you amend the law in order to protect copyright owners, and sometimes you just let the technology happen. The recording industry hated MP3 players, and guess what, they are legal. I'll leave that for discussion -- thanks very much for your time.

≫: I think we have microphones for questions, so if you want to, raise your hand, and I'll start things off. Thank you both for those really stimulating comments. You referred to a bunch of lawsuits that are ongoing -- is that how this plays out? Is it the courts that are going to figure this out, versus a kind of prospective regulatory approach, or self-governance by the companies involved? I'm interested in your thoughts on the alternatives to having the courts make the decisions. ≫: The Copyright Office, as I said, has a clear intent to put markers down on the main questions that I was just raising, so they'll have a say. ≫: Do they have a good history of moving? ≫: No, but they have motivation. They can't do anything by themselves, right? They can make recommendations to Congress to pass legislation, and we all know how functional Congress is right now. Part of what happens when you have a dysfunctional Congress is that the Biden administration, whatever they think about this, can't do anything,

either. A lot of things end up in court; it's a big dispute, and there is no other entity that can really deal with it. One of the things that is important: fair use is a limitation on copyright in the United States, and a lot of countries have fair use defenses in their copyright law and other fair use

2023-06-01
