AI & Art Connections Between Creativity and Technology

Show video

ACMI acknowledges the traditional owners, the Wurundjeri and the Bunurong people of the Kulin nation on whose land we meet tonight. We pay our respects to elders past and present, and extend our respect to Aboriginal and Torres Strait Islander people from all nations of this land. So welcome to AI & Art, exploring the connections between creativity and technology. I’m Elizabeth Flux, the arts editor of The Age.

I'll be chairing the discussion today. It feels like AI is something we're hearing about more and more, exponentially more. There's not going to be a lot of audience participation, only two moments. This is one of them, and at the end there's the opportunity to ask questions; that's the other one. But who here has heard about AI in the last week or two, whether in the news or in conversation? Just a show of hands.

All right. So you're in the right place. That's good. So of those people, who's heard about it in a positive light? Who's heard of it in a negative light? And who's heard of it in a neutral light? All right. So that's a cross-section. Interesting. So tonight we're going to get stuck into the truth of things, hopefully clear up some misconceptions, and paint a clearer picture of what AI means to the art world, not just now but going forward.

So to do that, let me introduce our panel. Memo Akten is an artist creating speculative simulations and dramatisations exploring the intricacies of human-machine entanglements, perception and states of consciousness, and the tensions between ecology, technology, science and spirituality, using AI to reflect on the human condition. Rebecca Giblin is co-author of Chokepoint Capitalism, director of the Intellectual Property Research Institute of Australia, and a professor at Melbourne Law School, where she works on questions at the intersection of law and culture, particularly creators' rights and access to knowledge and culture. Rita Arrigo is a renowned digital strategist with a reputation for leading digital transformation projects in the public and private sectors and a passion for AI and emerging technology. So, if you haven't seen it yet, Memo Akten's Distributed Consciousness is located in Gallery One.

I'm pointing that way because that's the door; I don't know the actual direction. It's inside The Story of the Moving Image exhibition and generously supported by Naomi Milgrom AC and the Naomi Milgrom Foundation.

So to kick off our discussion, Memo is going to talk us through this work. So over to you. Thank you very much.

And thank you all for coming. I'd like to, uh, try to quickly whiz through some of the themes behind the work. There are many layers to the work, and so I'll just summarize them briefly, and we may or may not come back to them in more detail. Um, so on one hand, I made the work in 2021.

That's when it really started, and it was a response to the explosion of NFTs in 2021, if you remember, and the so-called Web3 movement centred around decentralisation and distributed computation, and all the ideologies that came along with Web3 and blockchains. But it's also a response, perhaps more so, to the climate crisis, the general ecological devastation, the mass extinctions that our civilisation and our way of life are responsible for, and our inability to mobilise and take action. The work employs cephalopods, their cognition, their distributed nervous systems and their distributed intelligence, as a means of reflecting on the increasingly pervasive synthetic alien intelligences we're building that we call AI, especially those built by scraping the Internet and the collective consciousness that exists on the Internet. But it also tries to draw attention to the distributed nature of intelligence and knowledge in general: all intelligence and knowledge is distributed, collaborative and collective, and the boundaries between individuals, species, and living and non-living systems are more permeable and dynamic than we might think. So the work draws parallels between the distributed cognition of cephalopods and the distributed computation performed by smart-contract-based blockchains. And ultimately, as we face the challenges of the climate crisis and general ecological devastation, the work invites us to meditate on our relationship with all the living and non-living beings we share our planet with.

And we are invited towards the de-centering of human exceptionalism, and invited to let go of the dangerous dichotomy of man versus nature (I use that gendered language deliberately; it has been embedded in our culture for so many years), and instead to embrace the interconnectedness of all the living, non-living, human and non-human beings we share the planet with. So those are some of the themes, and more, that I'm sure will come up in our chat now. Thank you. Wonderful. Thank you.

So the work as it exists now couldn't exist without AI. So I guess I want to ask about AI. Technology has always shifted art.

So is AI any different from the way that, say, photography changed things, from the way anything has changed things in the past? I don't know, whoever would like to go. You can all go if you like. I'll have a quick go.

I do think that technology dramatically shifts art, and we can see that through the kinds of things that happened with the Internet. I think that really changed the way we could find out about art, the way we could experience it, and people started using it in very different ways. But I think AI has a really significant difference to that. We're seeing a lot of artists diving into the capabilities of AI, particularly generative AI.

You see that in your work, the significant use of generative AI in the imagery as well as in the language that's being used. I would say that we've always made art with the tools of the time, and we see it really obviously with the advent of digital: so much more sampling and mashing up and so on. But one really interesting difference with AI, and particularly generative AI, compared to earlier technologies, is this: these technologies are not going to become sentient, right? And they're not going to take half of all jobs. And so the hype men that are out there (again, I use the gendered language advisedly) who are telling us to worry about those threats,

they're doing so for a reason. They want us to look over there. But I think there are things we should actually be thinking about, which are over here: AI is going to drastically change the conditions of many kinds of labour markets, including creative labour markets. And one of the things we really need to be focusing on, as Molly Crabapple has pointed out, is that if technologies like image generators seriously disrupt the ability of creative workers like illustrators to work and make a living, but those technologies are only possible because of training data taken from the work of pre-existing human creators, then that is a difference. And somebody said to me recently, and I found it so incredibly striking: "Oh, but you don't understand. Artists are just like the buggy-whip manufacturers; we don't need them anymore."

And the fact that somebody said that out loud just stopped me dead in my tracks. I realised I'm in such a bubble that that view couldn't have occurred to me. And one of the differences is this risk: if there's a widespread perception that the kinds of outputs these systems can generate are sufficient, we devalue artists even more than we already have. And I think that's something we really need to be thinking carefully about as well. Do we want to live in a world that doesn't have artists and doesn't have fresh art? So I'm just wondering what sort of world that person is imagining.

Like, what sort of art do they want to see out there? Or is it a world without art at all? I think it's a world that's very tightly controlled, because if the machine is generating the thing rather than a human, that's something they can understand and reduce to a binary of zeros and ones. But as soon as you add humans into the equation, it escapes that kind of control. Perhaps you might even say that's a very uneducated opinion. Potentially. But it depends on what kind of education you're talking about. This person was very highly educated in tech, for example, but you might say not as inculcated in the art world. They're not a stupid person, perhaps just narrowly focused. Hmm.

I also want to add: the word AI is very complicated and amorphous, but even more amorphous than AI, perhaps, is the word art. One of the really fascinating things about this discussion is that people use the word art with very different meanings. And I'm not making any value judgments here, but for example, someone who models a tree for a video game, their title is artist; they're a 3D artist. Someone who does, I don't know, a backdrop for a film, a set,

they're called an artist. Someone who makes an illustration for a book is an artist. A Subway worker is a "sandwich artist". Okay, that I was not aware of. But, you know, musicians are called artists.

Actors are artists, and okay, we're all artists, but the impact is definitely going to be different. So I generally don't say computers are going to make artists obsolete, because that's just too broad. But creative labourers, people who put labour into what we call the creative sectors, which is also a problematic term, I think, because doctors can be creative, lawyers can be creative, unfortunately or fortunately.

So creativity has nothing to do with art, really; you can be an artist without being creative, and you can be very successful. Just apply the formula that works. But we call them creative sectors. So for people who work in those fields, yes, there is going to be a lot of automation, where maybe one person using software will be able to produce more output than ten people could before. Going back to your original question, one thing that's so fascinating, because there's a lot of anti-AI-art hate out there on the Internet, is how much it mirrors the Luddite movement of the early 1800s, after the Jacquard loom was invented. What the loom did was allow unskilled labourers to replace the skilled artisans and do their work more efficiently, and the introduction of this technology did not benefit the artisans. It benefited the factory owners, who effectively exploited the labour of the artisans.

And then they managed to just dump them and switch to unskilled labourers. And now, with generative AI, you can produce images without needing the skill of drawing or painting. On one hand, this is democratisation, which we can chat about separately, and I do think there's a lot of value in it; it will allow many people to tell stories they would not otherwise have been able to tell. But, for example, Stability, the company behind Stable Diffusion, built off the back of this huge pool of skilled artisans, is now worth, what were they worth, 10 billion or something?

What are those people worth? Nothing; they're potentially losing their jobs. So this is really the heart of the problem: where do the benefits and the value end up? Who does it benefit? And I think it's really important to make those connections and to notice that the people who are funding these technologies and trying to shape them are not people setting out to make the world better, right? They are venture capitalists setting out to make ever more money for themselves and their investors. And that's exactly what they're going to be doing. They're not going to strip away half of all jobs, but they are going to make work worse. There's more surveillance, there's a reduction in the creative control and discretion of humans, and there's deskilling, because if you start with an artificially generated image, you don't actually have to hire a skilled illustrator to make it; you just have to hire somebody with more basic skills to come in and do the edits.

Then they're not able to ask for as good pay or as good conditions. But there are definitely organisations, corporations and businesses looking at responsible AI. A good example might be someone like Getty Images, who's about to launch a generative AI service trained on responsibly licensed, copyright-cleared images that have been paid for, so that you can then pay for these AI-generated images. But I would then say: look downstream at the conditions in which the photographers licensing images to Getty Images work, and at the chokepoints these big corporations create. Even if they do license the copyright, and we saw this with Adobe's new generative AI offering, Firefly, they said, "We've licensed all of the training data," and technically they did. But that was because they got all of the contributors to their stock images to sign over their rights in the small print.

The contributors didn't even realise what they were doing, and then all of it was used. And when those photographers and illustrators complained about it, the response was: "Oh, we're going to maybe try and find a way that you get paid for it later. But you did technically agree to this."

So there's "ethically sourced" and there's ethically sourced, I suppose, would be my response to that. So how do we protect creators? Because the problem often seems to be people behind the scenes making things worse. So how do we protect people downstream? Well, there is some work going on around this. There's the Coalition for Content Provenance and Authenticity, C2PA, which began in 2021 to develop an open standard for indicating the origin of digital images and whether they were authentic or AI-generated. It all really took off when we all saw the Pope in the puffer jacket.

So that kind of started a lot of this discussion. And, you know, it was Midjourney that did that. So there are a lot of companies now saying, okay, we've agreed to sign all AI-generated imagery with a cryptographic watermark. Microsoft is one of them, along with a range of other companies. So there is work ahead around trying to find ways to ensure there's an indication of the AI involvement in what you're producing.

So, yeah. But I know that from an Australian perspective, we still have very different copyright law from the rest of the world, so you can probably attest to that as well. It's much tighter, from what I understand.
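The signing idea the panel describes can be sketched in miniature. This is a toy illustration only: the real C2PA standard embeds certificate-based (X.509) signatures in a structured manifest inside the media file, whereas this sketch signs a content hash with an HMAC and a shared secret, and the function names and the `"ai-image-model-v1"` label are invented for the example.

```python
import hashlib
import hmac
import json

SECRET = b"demo-signing-key"  # stand-in for a signer's private key

def make_manifest(image_bytes: bytes, generator: str) -> dict:
    """Attach a claim about the image's origin, signed over its content hash."""
    claim = {
        "content_sha256": hashlib.sha256(image_bytes).hexdigest(),
        "generator": generator,
        "ai_generated": True,
    }
    payload = json.dumps(claim, sort_keys=True).encode()
    signature = hmac.new(SECRET, payload, hashlib.sha256).hexdigest()
    return dict(claim, signature=signature)

def verify_manifest(image_bytes: bytes, manifest: dict) -> bool:
    """Check both the signature and that the image hasn't been swapped out."""
    claim = {k: v for k, v in manifest.items() if k != "signature"}
    payload = json.dumps(claim, sort_keys=True).encode()
    expected = hmac.new(SECRET, payload, hashlib.sha256).hexdigest()
    return (hmac.compare_digest(expected, manifest["signature"])
            and claim["content_sha256"] == hashlib.sha256(image_bytes).hexdigest())

image = b"\x89PNG...fake image bytes..."
manifest = make_manifest(image, "ai-image-model-v1")
assert verify_manifest(image, manifest)             # intact image checks out
assert not verify_manifest(image + b"!", manifest)  # tampered image fails
```

The point the sketch makes is the one raised on stage: a signed manifest travelling with the image lets a viewer check both who claims to have made it and whether the pixels have been altered since.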

It varies in some ways, but I think there are things outside of copyright; labeling, I think, is a really interesting one. And I think we're going to see the importance of this in the context of music in particular. Music has been a bit behind the text generators and the image generators, partly because there's a real uncanny-valley issue: people can really tell when something is just a tiny bit off in music.

Very often the synthetic music is just a little bit too strange, but it is coming along now in leaps and bounds, and certain genres lend themselves better to synthetic music than others, particularly the ambient space. So think about what the downstream implications are going to be. We can see how Spotify is really trying to reduce its costs; it spends almost 70% of its revenue on licensing fees at the moment. It's already been caught out prioritising "fake artists" it enters into special deals with, who provide ambient music; it prioritises that music in its playlists and algorithmically delivers it into audience ears instead of music by people like Brian Eno, who then get shifted off those playlists. So we can very easily see the potential for them to create their own AI division, create their own AI-generated music, and put that into listeners' ears, now that we've outsourced to Spotify, in so many cases, the power to decide what we listen to. And I think here we need solutions like labeling.

There's nothing in copyright that's going to stop them from doing that. There's nothing in contract law that's going to stop them. But it might be that consumers want to know that what they're listening to is generated by machines, and maybe that will change their minds about whether that's actually what they want to put into their ears. Or maybe they'll go, oh well, it's $2 a month less, that'll be fine. I mean, at a concert Beck did a little while ago, he played an AI-generated Beck song, and apparently it was terrible.

So we've hopefully got a little bit of time for that. But you talked a bit about democratisation before, so I want to get into that, because it does sound like doors are closing in some ways, but are other doors opening? Yeah, that's actually a really good follow-up from Spotify, or music rather, because today, and always, you know, musicians are struggling: a handful make it and the vast majority don't. So this issue already exists in music.

But when we talk about what's happening with current AI tools and generative AI tools, I like the analogy of the drum machine. Because when the drum machine, and electronic synthesizers in general, were introduced, many didn't even consider them valid musical instruments. You know, it's not a violin, it's not a piano, etc. And the music people made with these electronic devices wasn't even considered decent, proper music.

But what it did is allow a whole new generation of people who didn't have access to an orchestra to make music. I mean, hip hop exists because of turntables, because of the drum machine. And today, what we call laptop musicians exist because of this new technology.

Now, drummers didn't go obsolete; drumming still exists. But it is true, and I know this from a family friend, that a lot of drummers did lose their jobs, because all of a sudden, at weddings or at various venues, instead of hiring a three-piece, four-piece or five-piece band, you could hire a smaller, cheaper band with a drum machine. So there is always this shift. But when I talk about democratisation, I'm referring to the fact that people who don't have access to an orchestra can make music. And from a very personal point of view, I can say that as a kid I wanted to be a filmmaker. And, you know, growing up in Turkey, I didn't have access to anything that could help me become a filmmaker.

I didn't have access to people who had access to equipment, etc. Luckily I had access to a computer, so I learned how to program, and that was my entry into making moving images that somehow tell the stories I want to tell. So I'm very, very excited about the potential for kids in various parts of the world who don't have access to a lot of equipment: if they have a computer with an Internet connection, they might be able to tell stories they would not otherwise be able to tell. And that's one of the reasons I'm quite excited about the potential of these technologies as well.

Not only because, at that entry level, we're going to see more people able to make stuff, but because we're going to see professional creators take it much further and in different directions. And this is one of the reasons I think it's a much more nuanced story about what the impacts are going to be for creative workers, because it might make certain kinds of work more accessible and then result in increased commissions for the kinds of people who can make it. And so I think there's huge potential, particularly for more personalised things and more local stories to be told. Making a film based on a local community is still really expensive, even in 2023. But in 2025, is that going to be a lot easier? Potentially, for sure. And so, as well as the darker side and the dangers we need to be watching out for,

getting a little bit excited about the potential too, and thinking about the ways we can help provide the conditions for that to flourish, is something I'm excited about. Yeah, a lot of creatives are definitely seeing it as a creative renaissance in many ways, because it means a lot of the things they didn't have in the past are now available to so many more people, and that visual language can be much more easily interpreted by many more people, rather than being something that, you know, you need five academic degrees to understand. So it's that democratising of a visual language that's seen as very positive, and I think we underestimate the potential of it. I just wanted to add one thing about what you said at the very end there.

I'm also an educator; I'm a professor at UCSD in the Visual Arts Department, and I've been teaching an AI class to art students, undergrad and graduate, for a few years.

And two or three years ago, a prerequisite for my class was that you had to know how to program. I could only run my class with students who already knew Python and knew a few technical things, because even two years ago you had to program to be able to do anything visual with AI. And then we got DALL·E and Midjourney, and something called Google Colab notebooks became very popular and added UIs, etc.

Anyway, things changed in the last year, so I was able to give a new class to art students and music students with no technical prerequisites at all, because we now have the tools for it. And as a result, the projects they're doing are so much more diverse, because the people who are able to come into my classroom and work with AI come from completely different backgrounds compared to the much more limited set I had before, when I said, okay, you have to know how to program to use these tools. I think that's very exciting. Yeah, the user interfaces for these generative tools have definitely opened things up to so many more people, the non-coders, of which there are so many of us out there.

So yeah, it's definitely a really exciting renaissance in that way. What are some of the ways you've seen AI used creatively in art? I mean, we talked about the terms, but what are some of the ways you've seen it used? You two can go ahead first. Well, I went to New York earlier in the year for the first time, and I saw one of the works by Refik Anadol, where he uses different ways to generate moving images: he pulls them from databases, sometimes he uses EEGs, and it's just amazing. Huge. If you've had the chance to see them, they're amazing: huge, large-scale, mesmerising works that are constantly evolving.

And he's been working in AI for quite a long time. So that's one way: he's been able to tie in something really human, like the way your brain works, your EEGs, and make it visual in a way people would never otherwise be able to see. I've also done some work with the Science Gallery, so I've seen things like the ability to read your emotion and have it reflected in an art piece, so it can identify whether you're happy or sad or disgusted, these kinds of elements, which I saw being done with the smell of blood.

And it was just amazing to be able to bring that interaction in by understanding human emotion. I've also seen lots of robotics being used in art, which is really popular, though sometimes it's hard to say whether that stuff is interesting or not. But there's a famous artist who uses Spot, the Boston Dynamics robot, to actually paint and generate art as well.

So it's kind of wild. People have some really interesting reactions to those robot dogs. Some people really cannot handle them, and some people love them because they like dogs. I mean, I know I said only two moments of audience participation, but: a show of hands, who's seen the robot dogs in action, and who likes them?

Okay. I thought they were going to eat your face; I was just so creeped out by them. One project I just remembered, and I don't know, it's not really an artwork, is by Mario Klingemann, who's been working with AI for quite some time as well. It's something I think he did in 2016.

It's not even a recent piece, and he did it when he was an artist in residence at Google Arts and Culture, which actually has a lot of problematic aspects, but let's bypass that for now. What he did was build this tool, and I think it still exists online, where you select two artworks from the Google Arts and Culture database; they've been archiving artworks of all kinds. The project is called X Degrees of Separation. You pick two artworks, any artworks you like, one could be a sculpture, a piece of pottery, a painting, and it plots a path of visual similarity from one to the other.

What that means is that it looks like a morph. Let's say you pick a Roman marble statue and you also pick a pot: it does a kind of morph from one to the other, but it's not an actual morph. It just picks other artworks from its database that would be the steps in that morph.
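In spirit, that path-plotting can be sketched as a shortest path through a nearest-neighbour graph of artwork embeddings. This is a hypothetical reconstruction, not Klingemann's actual implementation: a real system would use feature vectors from a vision model, while the 2D coordinates and artwork names below are made up for illustration.

```python
import heapq
import math

# Made-up "embeddings": each artwork is a point in a similarity space.
artworks = {
    "roman_statue":   (0.0, 0.0),
    "greek_torso":    (0.5, 0.4),
    "terracotta_pot": (1.2, 0.9),
    "ming_vase":      (2.0, 1.1),
    "clay_amphora":   (2.6, 1.8),
}

def dist(a, b):
    return math.dist(artworks[a], artworks[b])

def neighbours(name, k=2):
    """The k most visually similar artworks, as (distance, name) pairs."""
    return sorted((dist(name, o), o) for o in artworks if o != name)[:k]

def similarity_path(start, goal):
    """Dijkstra over the k-nearest-neighbour graph of artworks."""
    frontier = [(0.0, start, [start])]
    seen = set()
    while frontier:
        cost, node, path = heapq.heappop(frontier)
        if node == goal:
            return path
        if node in seen:
            continue
        seen.add(node)
        for step_cost, nxt in neighbours(node):
            if nxt not in seen:
                heapq.heappush(frontier, (cost + step_cost, nxt, path + [nxt]))
    return None

print(similarity_path("roman_statue", "clay_amphora"))
```

Each step in the returned path is an existing artwork that is visually close to its neighbours, which is what makes the sequence read like a morph even though nothing is synthesised.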

And it's a really mind-blowing way of exploring this vast database of human creativity across, you know, millennia, and seeing the similarities across continents. Yeah, it's really, really fascinating. Amazing. Well, we talked a little earlier about the ethics of where AI sources its images from, so I'd like to get a bit deeper into that, because there was that controversy earlier in the year when the app Lensa was very popular and a lot of artists started to notice their styles were being used to generate pictures. So, again, I'm coming back to my question about how we protect creators, but also: how can artists be part of ethical AI usage in future? Yeah. When it comes to copyright, first of all, copyright only protects expression; it doesn't protect ideas. And we've generally accepted over the years that an artist's style is on the idea side of that spectrum. But that doesn't mean nothing can be done. We were playing around with this this afternoon in my Masters copyright class.

One of the prompts we used was: first, create an image of a law school; then create an image of a law school in the style of Wes Anderson; then create an image of a law school in the style of Cathy Wilcox. Right? And I was really curious to see, when somebody suggested that, how well an Australian cartoonist would be represented in the training database, whether we would get anything at all recognisable. This was on Midjourney, and what we got was so distinct. If I'd seen any of those images, I would have said: that's a Cathy Wilcox.

So there are questions about whether we should find a way of protecting style. I think the answer is possibly, but probably copyright is not the right way, because copyright rights are, you know, usually fully alienable, and they get extracted very easily via contracts. Merely having the copyright is not very useful if you don't also have the power to hold onto that right. And we're seeing this play out in real time at the moment with voice actors working on computer games. They walk into the studio, they sit down, they pick up the microphone, and they have to say: "I'm Rebecca Giblin."

"I hereby assign all of my rights over this recording for you to use my voice, including in training a voice model." And then, because they didn't have the power to resist agreeing to that transfer, because they want to get this work over the 5,000 other people in line to be a voice actor on a video game, they're not able to hold onto it. And so then they have to compete with a synthetic version of their own voice, right? They're undercutting themselves: next time they come in, it's "well, why should we pay you that much money when we can already get a pretty good simulacrum of your voice?" And so we want to maybe be thinking about these as personal rights, and voice models as well.

The ability now, with just a very small amount of training data, for people to say things in your voice, to put words in your mouth, is so intensely personal. And I think that focuses attention on, invites us to think about, what is due to us as humans, right? Does the personal nature of what's going on invite us to think about what's different between humans and machines, and indeed, to take it one tiny step further, what's different between humans and corporations? Do we need them to be treated differently? So I'd say we need to think really carefully about these questions. But I would also avoid rushing into any kind of conclusion that copyright is the thing that will fix it. And I've heard some reasoning recently that I think is quite dangerous.

And again, this was a conversation between Paris Marx and Molly Crabapple, where by the end of the discussion about generative AI they were basically saying: okay, well, copyright. We've been pretty sceptical about it, and it's really not great in a lot of ways, but maybe it's the best thing we've got. And I think it's dangerous to just settle for this thing that has done a really poor job of getting artists paid, and that has also caused a lot of collateral damage in terms of the loss of culture, through rights that are often overbroad and extend beyond anybody's interest in them. We should be thinking much more directly about what we want to achieve and how to go about achieving it, and finding ways to actually do that, rather than reaching for frameworks we invented hundreds of years ago, when the printing press was new.

Australia actually has AI ethics principles; we were one of the first countries in the world to have them. They were put together in 2019, and they cover transparency, fairness, accountability, a range of things. And there's a lot of work being done, and I personally work at the National AI Centre, helping people understand how to translate those principles into practice in the way you deploy AI in your organisation.

And Lensa is actually a great example of one that didn't do anything for inclusion, because I heard from a lot of people who were using its avatars, and the men would get these amazing pictures of astronauts and scientists, and the women would get naked fairies or, you know, other examples like that. So it's really interesting how many AI products out there at the moment are not in line with these ethics principles. But I think there will be this drive around responsible AI, where people see it not just as something they have to do, but as something their brand and their values really open up to, because that way you can actually have an AI industry that people want to use.

But we're also going to see the guardrails coming off. A lot of the big commercial products out there at the moment have been really careful, because we already know that if you provide something without guardrails, humans will do terrible, terrible things. For example, Microsoft's chatbot a couple of years ago was turned into a Nazi in five minutes. And on Midjourney there are a lot of prompts where I was like, oh, I think you might be violating the community guidelines.

And it was like, I'm pretty sure I'm not. But they're being really careful. Still, there are some products coming out, like one I was playing around with, that will undress any woman. You take a clothed picture of a woman and, using a neural net, it will take her clothes off and show you what she apparently looks like underneath.

So again, these are really personal things that we haven't thought much about, beyond the misnomer of "revenge porn", in terms of what to do with pictures of us. But the idea that fully clothed pictures of us could be put to this kind of use is something we're going to be confronting more and more. This is the drive to the less ethical, as people figure out how to take the guardrails off and these technologies escape into the wild. That's probably more of the deepfake kind of scenario. We do have a lot of challenges around deepfakes, with voices and faces, but people are already working on deepfake detectors, and there's a high barrier to creating really good deepfakes. So it's not something where you can just download a free product and do it.

So there is a barrier to a lot of that kind of thing. There is now, but I'm pretty sure that, while right now you need a lot of skills to create a very believable deepfake, within a few years it will be off the shelf. And in response to what you were saying: Elon Musk famously found ChatGPT to be too woke, so he wants to build his own version that's not woke.

But going back to your question of how we protect creators: I don't have an answer, but I would like to make the question more difficult by adding some complications, since you've already discussed it. One of the obvious potential solutions that's been proposed is consent for training data, because right now the models are being trained on stuff scraped from the internet, without the original artists' consent. Famously, one of the most recent datasets is called LAION; it's what Stable Diffusion is trained on, for example. Initially it was five billion images scraped from the internet, and there was a huge outcry because, for example, it might contain lots of artwork by a famous artist.

The canonical example is Greg Rutkowski, a fantasy painter. You can say, okay, give me a dragon doing this, that and the other in the style of Greg Rutkowski, and it gives you something that looks, at least to an untrained eye, very much like a Greg Rutkowski. So the first step was: you have the option, as an artist, to opt out of the datasets.

And this was obviously not enough. I actually have hundreds of images in that dataset; personally it doesn't threaten me, so I'm not bothered by it. But the argument was that it shouldn't be opt-out, it should be opt-in.

So only artists who agree to be in the dataset should be in it. This might seem like a good idea, but it really isn't, for two reasons. You don't need to be in the dataset to be replicated. Say we've got Greg Rutkowski removed from the dataset: I, as an individual, could still take that model, give it a single image by Greg Rutkowski and say, create an image in this style.

So now the organisation that made the model, Stability for example, is theoretically innocent, because they didn't train on Greg's work. But I, as a random anonymous internet person, can still use that model to mimic Greg's work. So for me, the problem isn't what's going into the model, it's what's coming out. The other issue around all of this is... oh, I've forgotten where I was going with that. Anyway, this is one of the big issues. I had another point, but I can't remember it now.

But I'll leave it there. I just wanted to say that controlling what goes in isn't the problem; it's what comes out.

And what comes out is not just a problem of AI. It also happens without AI: I could imitate someone else's work, and there's not necessarily protection against that, because copyright only protects expression. If I copy the work exactly, then okay, that's a case for copyright. But if I imitate a style or an idea, that's not protected. Should it be? On one hand, knowledge progresses when we share all of this, so I like the idea of being able to train AI.

Oh, that was my second point. If we start prohibiting and enforcing opt-in consent, this might have a very bad consequence, in that right now there are a lot of open-source movements building AI. If we say we need opt-in consent and lots of artists opt out, this will allow big companies to hire a handful of artists and just mass-produce training data. Then it will be even more concentrated in the hands of these big companies, who can afford to generate data to train on, and all the open-source and smaller initiatives won't be able to compete.

Whereas right now the open-source alternatives are, surprisingly, or I should say inspiringly, at similar levels of quality. So I just want to add those complications in the... I'm just giving everyone a heads-up that we're going to open up to questions in about five minutes. With questions, please keep it to one line ending with a question mark, and they'll come to you with a microphone. But that will be in about five minutes.

So keep that in mind if you've got anything; we should have time for a couple more, I guess, before we throw over to questions. These are big conversations that need to be had. Who should be in these conversations, to continue developing the ethics, to keep putting frameworks in place, to decide whether things should be opt-in or opt-out? What people need to be involved in those discussions? I think it's definitely a multidisciplinary sport. I have a lot of focus on the business side of things, and they're already talking about having a responsible AI champion as part of an organisation, someone who ensures that what you're doing is actually aligned with our ethics principles. But I think artists, curators, gallerists, the creative sector, probably need to be involved in some of these ethical discussions. A few little stories.

So I've also been involved academically in AI; I have a PhD in the topic, and I used to go to AI conferences. In 2016 I remember going to what felt like the biggest academic AI conference, called NeurIPS, Neural Information Processing Systems. It's a technical conference and it's huge, over 10,000 academics go. There was one tiny little workshop, in a room about this big, around ethics, and hardly anyone went to it. The following year, Kate Crawford, the founder of AI Now, one of the world's leading socially responsible AI organisations, gave the keynote... - And an Australian. - Yes, yes.

Thank you. She's Australian, and really a wonderful person. She gave the opening keynote at that same conference, and a few years later NeurIPS introduced a rule that any paper accepted to NeurIPS, and it's the most prestigious conference, needs to have an ethical considerations statement as part of the paper. There was a huge backlash against this from lots of AI researchers saying, you know, I'm a scientist, I don't think about the ethical considerations of what I do; why should I be thinking about how face detection might be misused?

But there was also a huge welcoming. I remember chatting with a psychologist friend of mine about this, and she was shocked that people don't think about the ethical considerations of the work they do, because obviously in psychology in the sixties people didn't either, but it's so ingrained now in psychology education that you have to think about it. So what I'm trying to say is that things are changing quite quickly, but I think it will take a generation for it to be fully embodied. I was once at a private dinner with a senior Facebook AI researcher, and I said: any team that deploys a product that is going to be used by the masses needs to have, as part of the core team, an ethicist, a sociologist, an anthropologist, in the same way you would build a team saying, okay, we need two UX designers, we need a network engineer, we need a UI designer.

You should put on that list, as integral: we need an anthropologist who will study the consequences of this, that and the other. And this engineer got really angry and offended and said, what makes an ethicist have better ethics than me? That kind of shocked me and made me realise what a bubble I was living in. I also want to add that Google famously said, we don't need regulation.

Regulation can't take care of this; we need to self-regulate, so we will have an AI ethics board internally. And when one of the people running it, Timnit Gebru, did her job in highlighting the ethical dangers of the work Google was doing, they fired her. So self-regulation also isn't necessarily an option. Government regulation?

I can't even see how that could work, so I don't know what the answer is. I'm just adding complications. - But I think it is definitely going to go the same way as our ESG goals.

We have sustainability goals, we have diversity and inclusion goals, all these kinds of things. It will become part of that, because algorithms and algorithmic decisions are going to be part of business, so they have to be treated like business. - I agree with you. And I find that really bleak when I look at how poorly all of those things are performing at the moment, and how quickly we are running out of time to fix this. I think that's something we really need to be conscious of now, because these technologies are going to become endemic.

The legal consequences of this are going to take probably well over a decade for us to even start to get our heads around, including whether it should have been allowed in the first place. But by then the horse is out of the barn. And so I think we need to be...

- There's been a lot of work... - No, no, I do understand that. But I'm also looking at this: of course, there's a wide spectrum of actors in the field, with different motivations, different business models, different funding and so on. But what I am seeing is that there's so much capital being put into chokepointing these markets as well, to get these technologies into the hands of a small number of powerful corporations and to extract ever more value for a small number of people, while ignoring the enormous environmental consequences of these technologies. And we're actually running out of compute power for some uses at the moment, because so much of it is going to these models. As we were saying, we don't even have a large language model in Australia. We don't have one yet.

I think CSIRO is trying to buy one, and everyone's trying to figure out how to get one, but we don't actually have the hardware for one here. And we also don't have the legal framework that would permit us to create one here; legally it would be way too risky. - And I think that's what you were alluding to when you talked about Australia's lack of search engines. - Exactly. So you should all know that we don't have any search engines running out of Australia because of the legal issues.

Making a search engine means copying everything on the internet, and copying everything on the internet is a copyright infringement unless an exception applies, which we don't have. - So yeah, but I think people are trying to get ahead of that, because we've had so many challenges around the use of AI, particularly with things like the robodebt issue that we had. I really see businesses wanting to get ahead of that and have a responsible AI strategy around what they're doing. And I know it's hard to believe, but there is a lot of work happening in that space. - I do know they're trying, but I'm also acknowledging those broader considerations: the fact that we lack the hardware and we lack the legal framework to even create the models here. So by the time we get there, the large models entrenched in other jurisdictions with less consideration of all this are going to be what's on offer.

So I guess what I'm saying is that I'm really urging us to pay close attention to this now. Ten years after smartphones came out, we looked back and thought, well, maybe we're not really that delighted with the consequences of everything we got from that. And there were a bunch of really obvious things we might have done differently if we'd been thinking about it then the way we are now. We're in that moment now. We're making those mistakes right now with generative AI, and in ten years we'll look back and think, oh, I wish we'd done this, this and this. But we don't know what we don't know yet. I just know that we are making the mistakes right now.

That's the happy period we're in. - That's an interesting point; let's see if anyone else wants to enter the conversation. We probably have time for one, maybe two questions. Is there...? All right, this one down here,

for the microphone. Thank you. - Thank you very much, everyone. Memo, I'm really interested in how your art explores the connections with technology. Can you talk a little bit about that? - Sure, yeah.

Thank you for the question. I started working with software in particular, and writing code, really, as I mentioned, as the only means of making that was available to me. And initially it started out as a tool.

So the computer's a tool, and I'm using it as a tool, and it's a medium, let's say, and a medium that I really enjoy. It's a medium that is dynamic, that can be responsive and interactive, that can scale to be very large and immersive, and that can be very small and intimate. I've made apps as artworks, when the iPhone came out, for example. But I've increasingly become interested in this medium not just as a medium, but also as a subject matter.

So here, for example, I'm not just talking about AI as a tool, but about the implications, the social, cultural, ethical, legal implications, of these technologies. For me it's always very research-driven. I love doing research, and I do research both into how to use these technologies as a medium and into their broader implications, legal, ethical and so on. And I started using AI... well, it's a very broad term; arguably I've been using AI since the beginning. But I got into machine learning, let's say, probably about 14 or 15 years ago, as a way to build systems that could understand what was happening in the world around them.

So I wanted to build responsive environments, interactive systems that could sense people, that could somehow try to understand what they were doing, where they were going, what they were saying. And this is the job of AI. I gradually started doing more and more of that, and in 2014 I realised: this is getting big, this is going to be big.

I really wanted to know this really well, so I started a PhD in AI. And little did I know, it ended up being bigger than I thought, sooner than I thought. I should also add, as a side note, that people always used to say the people who should worry about AI taking jobs are the labourers, the truck drivers, and the artists are safe. And I would always think, no, no, artists, maybe not conceptual artists, but let's say creative labour workers, are going to be the first to be replaced. Although "replaced" is the wrong word.

Sorry, the first for automation to come into it, because there is no absolute truth. You can get something wrong and be okay. It's not like a medical diagnosis, where you say, oh, this person doesn't have cancer, and it turns out they do; that's a mistake you can't afford to make. But with what we call art, creative labour, there's no absolute truth. And you're not interacting with the physical world.

Robotics is complicated, but the purely virtual is quite easy. So ten years ago I was expecting what's happening now to happen. I just wasn't expecting it to happen as soon as it has.

I wasn't expecting it to happen in 2023. That was a bit of a digression, but does that answer your question? Or I could go into more detail about the work. - Yeah. - So the work that I have here combines the two hyped technologies of the time: AI, but also blockchain and distributed computation.

The idea of a blockchain is that instead of the way Amazon works, where there's the Amazon server, people connect to that server, Amazon owns the server and Amazon owns the data, the utopian vision of a smart-contract-based blockchain was that we distribute all this and everybody owns it. It's actually a nice ideal.

It doesn't necessarily play out that way, but that all exploded in 2020, with NFTs and so on, and the work was a response to that. I was using cephalopods, which have a distributed nervous system; their central brain is actually tiny, and they've distributed that computation across their body. So I'm using cephalopods as a way of reflecting on that.

So again, the technology is the subject matter here, but it's also the medium, because I wrote custom software using AI. This was 2021, so before Midjourney and all that. The images are generated with AI, generative AI, the text is generated with AI, and the text is encoded in the image as an invisible watermark. I released the images as NFTs initially, and then a month later I announced that everybody who bought an image had actually bought a verse from a manifesto written with AI. So it's actually a book distributed on the blockchain and... - Did a lot of people buy it? - Yeah. There were 256 images and it sold out instantly.
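[Editor's note: the talk doesn't describe how the text was hidden in the images. A common family of techniques for invisible watermarks is least-significant-bit (LSB) steganography; the sketch below is purely illustrative and assumed, not the artwork's actual implementation.]

```python
# Minimal least-significant-bit (LSB) steganography sketch.
# Hides a length-prefixed UTF-8 string in the lowest bit of each byte
# of a flat image buffer, changing each pixel value by at most 1.

def embed_text(pixels: bytearray, text: str) -> bytearray:
    """Return a copy of `pixels` with `text` written into the low bits."""
    payload = text.encode("utf-8")
    data = len(payload).to_bytes(2, "big") + payload  # 2-byte length prefix
    bits = [(byte >> i) & 1 for byte in data for i in range(7, -1, -1)]
    if len(bits) > len(pixels):
        raise ValueError("image too small for message")
    out = bytearray(pixels)
    for i, bit in enumerate(bits):
        out[i] = (out[i] & 0xFE) | bit  # overwrite the least significant bit
    return out

def extract_text(pixels: bytearray) -> str:
    """Read the length prefix, then decode that many bytes from the low bits."""
    def read_bytes(start: int, count: int) -> bytes:
        vals = []
        for b in range(count):
            v = 0
            for i in range(8):  # rebuild each byte MSB-first from 8 pixels
                v = (v << 1) | (pixels[start + b * 8 + i] & 1)
            vals.append(v)
        return bytes(vals)
    length = int.from_bytes(read_bytes(0, 2), "big")
    return read_bytes(16, length).decode("utf-8")
```

Flipping only the lowest bit shifts each colour value by at most one step out of 256, which is invisible to the eye; more robust watermarking schemes add redundancy so the message can survive compression, which plain LSB does not.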

It's also worth saying it was really fascinating. I released eight a day on Twitter. I announced it, and it was all pre-scripted, so every day eight critters, as I call them, were spawned, set into the world. And I didn't do anything to create a community.

Usually in the NFT, Web3 world, it's all about community, Discord. I didn't do any of that, but a community emerged anyway. It was all auctions.

People were watching and live-narrating the auctions on Twitter, like, oh my god, so-and-so just bid 957, oh no, and so on. And I was just watching this in absolute fascination: it was the pandemic, people really wanted community, and they were forming a community around this project, the NFT version of it. - So, because we were talking earlier, I actually asked you: are you pro-NFT or anti-NFT? And you kind of went both.

Did you want to explain that further? - Yeah, I think it's possible to be both on many topics. With NFTs in particular, I was perhaps quite infamously against the blockchain Ethereum for its ecological footprint in using an algorithm called proof of work, which I won't go into now. So I was very anti that side of NFTs, and I did this on another blockchain called Tezos, which is thousands of times more environmentally friendly because of the algorithm it uses, and Ethereum has switched to this now as well. But I was on the fence, or wasn't even on the fence.

I enjoy being able to dig into both extreme ends of the discourse. On one hand, NFTs come from a very anarcho-capitalist worldview, almost genocidal in style: if you read some of the early manifestos from the nineties by some of the people behind these technologies, there's a genocidal level of, yeah, anyone who can't keep up with the technology deserves to die. And then they list the kinds of minorities they think shouldn't be around. So it's that level in the early histories of these technologies, and arguably you can see remnants of it in the space.

But on the other hand, there's a very utopian vision of decentralisation, of equity, the complete opposite. And I wanted to explore, and put myself in there, to see what I would see. - I'm so sorry, and it's a very exciting bit of the conversation, but we're out of time, so we're going to have to stop.

I'm so sorry, but thank you all for coming. Thank you. Thank you to Memo, to Rebecca and to Rita.

And thank you all for coming as well. And I'm sorry to cut it off at one of the most interesting bits, but we're actually already running over time, so. And thank you, Elizabeth.

2024-01-29
