Textual Technologies


Good afternoon, everyone. A very warm welcome today, and we certainly need a warm one, to the National Library of Australia and to this special event. I'm Alison Dellit, the Assistant Director General for Collaboration at the library. Thank you for

attending this event, whether you are doing so in person or joining us digitally from around the country. I'd like to start by acknowledging Australia's First Nations peoples, the first Australians, as the traditional owners and custodians of this land. I pay my respects to their Elders past and present and, through them, to all Australian Aboriginal and Torres Strait Islander peoples listening today. Today I am speaking on unceded Ngunnawal and Ngambri lands.

This afternoon's programme of events, To Be Continued: Lost Literature and Textual Technologies, is co-hosted by Engaged ANU and the National Library. It's part of

Uncharted Territory, a new ACT Arts and Innovation Festival celebrating creativity, experimentation, and groundbreaking ideas. I head up the branch at the library that looks after Trove. Trove is more than just somewhere we reproduce the experience of a library online. It's one of the biggest collections of cultural data available on any single country in the world. Every day we work with a massive collection of digital texts, and we're always looking at new ways of interacting with the data we hold, and at the opportunities and risks that come with it. Today, we launch ANU's new podcast, To Be Continued: Uncovering Lost Literary Fiction, from bushrangers and bushfires to Australian ghost stories, tales of modernity, and children's fiction in Australian newspapers digitised through Trove.

We will hear from the people behind the podcast and listen to live readings from their literary discoveries. But before that, we have a very special event. Professor Katherine Bode and some special guests are presenting a panel exploring the textual technologies without which To Be Continued would never have been possible, and what might be done to better understand, challenge, and redesign them in an age of artificial intelligence. Libraries, especially national libraries, are curators and stewards of culture, formative in what counts as knowledge of our time and our place in the world. So it's very fitting that we are hosting a panel of researchers who all work on textual technologies, from varied disciplinary perspectives, periods, and places, to explore the question of what reading and writing are becoming and what that might mean about who we will be. Please join me in welcoming Katherine and the panel.

Thanks very much, Alison. I'd also like to acknowledge that we're meeting on the traditional

lands of the Ngunnawal and the Ngambri peoples and  pay my respects to their Elders past and present.   It's a privilege to be here at this  wonderful place for writing and reading   to consider the history of and continuing  developments in textual technologies.   That phrasing is a bit unusual, textual  technologies, but writing and reading have   always been technological, as perhaps  librarians know better than anyone. 

Because language is so central to what it means to be human, changes in these technologies often generate controversies. Socrates, the ancient Greek philosopher, dismissed writing in terms suggestive of some of the ways ChatGPT is being discussed in critical commentary about so-called artificial intelligence applications. He argued that writing supplies not truth but only the semblance of truth, and that those who write have the show of wisdom without the reality. When word processors were invented, commentators worried that these would lead to colourless, overwritten, and extravagantly qualified prose. Some authors concealed their use of these machines by having their manuscripts redone by a typewriter, at a time when that word signified a profession as well as a thing. The same was soon true of "word processor".

Today, we're having another debate about changes in textual technologies, focused on so-called "generative AI", which is an industry term, hence the quotation marks. These applications use machine learning technologies and large amounts of scraped internet data to generate statistical models that predict sequences of words and pixels in order to simulate the human cultural practices of reading, writing, and image making. This debate is in fact the background for three of our panellists being here with us in Canberra as visitors from South Africa and the United States of America for a planning meeting to prepare for a global humanities institute on generative... whoops, that's the

industry term; no, Design Justice AI. That's going to be held in South Africa at the University of Pretoria in mid-2024. That two-week institute will bring together diverse researchers and community members to explore the perils and possibilities of AI, as we say in our grant proposal.

The possibilities will be front and centre in the next session, which is to launch a new podcast made with fiction discovered in the National Library of Australia's Trove collection of historical Australian newspapers, because that project was only possible with textual technologies similar to those we find in new AI systems, such as machine learning and the digitisation of writing and reading. This session is here to balance that next one, to acknowledge that these

textual technologies also come with perils. With the use of large language models like ChatGPT, which is the most prominent one, the one we hear the most about, these perils are extensive. They include discriminatory results arising from the algorithmic preconditions and training data used by these systems. There are now countless studies showing that they're likely to predict

that doctors will be men and that Muslims will be terrorists and so on, as you'd expect.

There are also environmental harms. A recent study reported that ChatGPT required 700,000 litres of fresh water in its training and that it drinks, as this study put it, half a litre of water for every 25 queries. There's also unconsented data surveillance and harvesting. Last but not least is the harm done to the unknown number of people, often from countries with low wages by global standards, doing the human reading and writing, often of violent, degrading, and traumatising images and texts, necessary to pretend that these systems are either intelligent or artificial.

So while we recognise these perils, and some

on the panel will talk more about them than others, I'm looking probably at you, Lauren, we're also going to think about how we can learn from the past, from other places, and from other disciplinary backgrounds and research questions that are not often considered when we talk about these textual technologies, and about how they might be done better. I'll do all the introductions upfront. Then we'll engage in an experiment in academic speed dating, where I'll ruthlessly compel these international experts in textual technologies to condense their great knowledge into bursts of five-minute brilliance. Also, if you hold your questions till the end, we'll take them all together.

First up will be Kate Mitchell,

Professor of Literature and Director of the Research School of Humanities & the Arts at the Australian National University. Next will be Dr. Nomadloski Bokaba, Lecturer in African Languages and Literature at the University of Pretoria in South Africa. After that, we have Dr. Abiodun Modupe, Lecturer in Data Science and member of the Data Science for Social Impact Group at the University of Pretoria. Then Lauren Goodlad, Distinguished Professor of English and Comparative Literature at Rutgers University and Chair of the Critical AI Initiative. Finally, Associate Professor Geoff Henchcliffe, Researcher in Digital Art and Design and Associate Dean of Education in the College of Arts and Social Sciences at the ANU. Please join me in welcoming our panellists.

Thank you, Kath. I'm going to begin by taking us backward in time to ancient Greece,

probably Athens, although on a cold Canberra winter's day, maybe we want somewhere beachy; I'll leave that up to you. It's a warm, idyllic day, with the cerulean sky stretching above us and the faint scent of the sea all about us, as if the gods themselves had blessed this momentous occasion with their favour. In the middle of the bustling, vibrant marketplace, an aged storyteller addresses the gathered crowd. Maybe he has a long, white, flowing beard cascading down his chest and deep-set eyes that twinkle with the wisdom of centuries, holding the promise of captivating tales. His voice, rich and resonant, carries across the throng, capturing every ear within its melodious grasp.

As his words dance upon the air,

a vivid tapestry unfurls before the mind's eye. An epic battle springs to life: Zeus, the mighty ruler of Olympus, his voice thundering like a crack of lightning; Ares, the god of war, charging forward with a ferocious battle cry; Athena, the goddess of wisdom, brandishing her shimmering spear with grace and precision, her eyes ablaze with strategic brilliance. The gods clash with titanic force, their powers colliding in a cataclysmic crescendo. Flames roar, thunder shakes the very Earth, while celestial beings weave through the tempest, their actions dictating the fate of mortals and immortals alike. Now, as the old man's narrative reaches its zenith, a collective gasp sweeps through the crowd who, hearts racing, united in a shared moment of wonder and awe, are transported to a world where gods walked among mortals.

The ancient Greeks had a word for this kind of evocative wordsmithing that my imaginary aged philosopher has engaged in here, and that word was ekphrasis. Now, ekphrasis means "to speak out

in full." It was a tool for the art of rhetoric, designed to bring something absent vividly before the mind's eye of the listener and, in doing so, produce an affective, emotional response in its audience. Its purpose, in short, was to use words to paint a picture: to position the listener as witness to absent events, in this case a mythological battle, and to participate, through the speaker's powerful rhetoric, in events that occurred elsewhere and long ago. The picture exists in the mind's eye. It's an

effect of both the speaker's words and the listener's imagination.

Now, over the centuries, the concept of ekphrasis, in both theory and practice, both narrows in its definition and becomes more expansive in its use to think through various questions relating to word and image. Ekphrasis has transformed with various shifts in the technologies of both word and image in many ways, but I'm going to identify three today. First, as the written word became more readily available with the invention and widespread take-up of the printing press, ekphrasis shifted from being a rhetorical strategy, something associated with the spoken word, to one associated with the written word. As it did this, especially after the Renaissance, it became more and

more associated with depiction in literature, with literary and poetic forms.

The second is that as transportation technologies and shifting trade practices made travel more prevalent, ekphrasis took on a new life in travellers bringing back stories of the artworks they encountered as they travelled through Europe and Asia. So ekphrasis became associated not with creating any form of visual image in its listener, or increasingly its reader, but with referring specifically to an artwork that exists in the real world. Ekphrasis becomes the description of a work of visual art in verbal language.

Why think about this in relation to our contemporary moment? Through the centuries, ekphrasis has been associated with thinking through various aesthetic and sociopolitical questions. But by our moment in the 21st century, its definition has expanded because of the expansive idea of what constitutes art with the introduction of digital media: digital artworks that might include interaction between the artist and the person engaging with the media, whether we think of that person as a reader, an end user, or a viewer.

I'm just going to throw out a few key ideas. We've arrived at a moment where artwork is readily available to us. We could all pick up our phones right now and

find any image I mentioned. So in the late 20th century, it was predicted that ekphrasis would cease to be important, cease to be needed: we can all see an artwork anytime we want to.

But what we've found in the 21st century is that ekphrasis has actually increased, even as the ready proliferation of artworks and art images has increased and even as our relationship to art has shifted. Indeed, we've arrived at a

moment in which, if the desired artwork does not yet exist, we can prompt AI to create it for us using words. Words make the image, and the image represents the words. It's a kind of reverse ekphrasis. So to pick up some of the issues that Kath mentioned in her introduction, the way that ekphrasis has been conceptualised over many centuries enables us to think through issues like: what is the affective power of word and image? What is its power for the person who creates it and over the person who uses that image? The automated labelling of images, as Kath suggested, has highlighted for us the problem with centuries of practices of representation that produce biases in the way we think about images.

I'm going to close with one example that you might be familiar with, which is the painting Girl with a Pearl Earring, which became the book and then the film of the same name. What that ekphrasis does as an extended piece of

ekphrasis is adapt an artwork into a novel, which is then adapted into a film. The ekphrastic treatment highlights for us what happens at each stage of that transformation from artwork into written form and back into a visual form. The novel, if you know the book, imagines the girl with the pearl earring as the servant of the artist, compelled to pose for this image. It takes

that striking look she gives back at us, the viewer, and imagines that this is the way this female servant asserted some sense of agency when the image was produced.

One of the ways I want to suggest we might use ekphrasis as we move forward in this age of generative AI, with AI being used to create words and words being used to create images, is to think about that intermedial adaptation from word to image to computer code. We have a new instance of adaptation, a new adaptational, intermedial moment, of which we're not quite in control, perhaps. Certainly if you're a user like me, I don't really know what's happening in that algorithmic moment behind my prompt.

But being able to think about and separate out those different layers of how ekphrasis is working at each of those moments is, I think, one of the ways we can unpick some of those inherent power imbalances and the new questions about what aesthetics means, what it means to create art, and what it means to create literature in this new age. Thank you.

Good afternoon, ladies and gentlemen. I come from South Africa, a country where textual technology is not an easy thing, because we have a lot of languages which are official as given by the Constitution. A case study from South Africa on languages will indicate that the Constitution, which is one of the main documents in our government in South Africa, declares 11 official languages, and recently, they've added the 12th language, which is South African Sign Language. The languages that are regarded as official languages in South Africa are, first, isiZulu, which is my language, then isiXhosa, isiNdebele,

siSwati. That group of four we call the Nguni group, because, if you listened, you'll have heard me saying isiZulu, isiNdebele, siSwati, isiXhosa. Those are the isi-formative languages. If you know one of them, obviously you'll understand the rest. Then the next group is the Sotho group, under which we find Sesotho, Sepedi, and Setswana. Those are non-isi languages. The other criterion is that the Nguni languages we write conjunctively, and the Sotho languages we write disjunctively. In the conjunctive languages,

for "I am working", I'll have [foreign language 00:21:19], which is one word. For the disjunctive languages, that is the Sotho group, I will have [foreign language 00:21:28]. So I separate the words that I'm writing.

Already, I've counted seven. Then we have two indigenous African languages

that are standalone, which are Xitsonga and Tshivenda, in addition to those languages. Why are they standalone? Because when we check their linguistic formation, there's no relation to the isi group or the Sotho group. Hence, we say they are standalone. Then on top of that, we add English and Afrikaans. Therefore, if we have to do textual technology, the Constitution requires that we treat all these languages equally and with equity, those two words. There's no discrimination against any other language. Any person who

is a South African has rights: if I want to study in isiZulu, the Constitution gives me the right to study in isiZulu.

Then the other thing is that, to try to cater for equity, equality, and non-discrimination across these languages, our government created a full department which contains a language practitioner for each language. You'll find that you've got nine language practitioners who are translators.

Then you'll have another nine language practitioners who are terminographers. Then you'll have nine practitioners who work in human language technology, which is of course the part of the department that deals with textual technologies. That department is called the Department of Sports, Arts, Culture, and Recreation; the languages fall under culture, hence they are in the department. In the department, we have a section that deals with the language planning and policy of government. Meaning,

for example, like I said, South African Sign Language has been added as the 12th language. When we needed a law, which the president must sign, declaring that sign language is now the 12th language, each language practitioner translated that document from English and Afrikaans into all these indigenous languages for the president to sign, so that if I come and say, "I want to understand what sign language is all about in isiZulu," I'll be able to get that document. Therefore, in South Africa, textual technologies will be greatly assisted by AI, but there are a lot of complications, and every person wants his or her language to be at the top of the ladder. Hopefully, we'll be able to do it right as South Africans. Thank you.

Good afternoon, everyone. I'm also from the University of Pretoria. If I have to talk about textual technology, I think I'll start by saying we all understand the world of technology, or where we think the world of technology is heading. Many of us think about our computers, our smartphones, and other such things. But one of the things we don't always have at the back of our minds is that it starts with, for example, your alphabet, the books that you read, the newspaper, the television that you watch to listen to the news, and also printing. Each of these examples is basically a technology, if you understand it from that [inaudible 00:26:10]. They are basically technological innovations that have

dramatically changed how we understand certain things, how we perceive them, and how we are able to reason with them.

Now that we're talking about textual technology, these are examples of how we communicate, and of how we think along with the communication we direct at other people. So for us, the question in our research lab is: how do we now use all these language models collectively to find semantic representations of all the languages that she has mentioned? Because we have a society that is very diverse in how we understand, how we communicate, and how we reason together. Because I'm an African, and as an African you are talking about almost 250 languages. I, for example, originate from Nigeria and settled in South Africa. In Nigeria alone, even the government says

we have three languages, which are Hausa, Yoruba, and Igbo. In Yoruba alone, I can count no fewer than about 150 languages.

So the question we should ask ourselves concerns this textual technology and what we know from the way we were brought up to read. I have a three-year-old boy. If his mother wants him to go to sleep, she needs to prepare him as early as possible, around 7:00 p.m., and she will read with him, you understand what I'm saying, for him to be able to sleep. So the question is, if I want my child to understand my language, which is necessary for him to be able to speak my language, because if he doesn't, then my culture has already been diminished, how will I be able to get a

book out there and to show that is in my language  in order for me to read to him to understand?  So if you look at it in the whole heuristic world,  this language model that we have today, there are   a lot of bias inside of them. So the question  we are trying to look at our research lab,   how do we now develop...? I mean use this  technology test to be able to develop and   share most of our texts, our contemporary  text or textbooks or novel as well as   so that we can able to understand the writing?  Because I can write this way, but how do you   conceive that particular writing? How do I be  able to make that writer more explainable to you   that whatever bias that you are concerned, I'm  concerned, it's already been put in even while   I'm getting this particular data? So that's  just the most interesting things to us in our   research lab. Thank you so much for listening.  I think that's just what I wanted to share. I want to   thank you all so much, and thanks especially to  Kath Bode for inviting me to take part in this   panel to talk about Critical AI literacy, a topic  of increasing salience to all of us, I think.   Critical AI at Rutgers, which I chair, is among  other things the home base for Critical AI,   the name of a new interdisciplinary  journal published by Duke University Press.  

You've heard earlier about the Design Justice Network's impact on us. Their principles and their focus on community-centeredness have been a major influence on Critical AI from the start. They are the inspiration for the Design Justice AI Global Humanities Institute that will be hosted at the University of Pretoria next summer.

But when it comes to AI in particular, building the foundations for a design justice approach will depend on the spread of critical AI research and literacies. What do I mean by that? Well, as we know, educators had barely recovered from the pandemic, and from reintroducing dizzied students to in-person learning, when ChatGPT came onto the scene in November '22. We're still in the midst of a well-funded hype cycle over this new technology, with uncertain benefits and many known costs.

It should be obvious that there's no need to rush into teaching wholly new and untested methods. Here, I'm specifically not thinking about teaching data science or computer science, although increasingly we learn that many if not most computer science professors do not want their students using chatbots in introductory courses, for obvious reasons: they want their students to learn the fundamentals before they farm out their thinking to a digital tool.

The same is true of writing. It seems like it should be obvious that there's plenty of time, humans have been writing for about 6,000 years, for us not to rush into changing the way we teach the subject simply because a few powerful men who know absolutely nothing about education are now in an arms race with each other to monopolise the world's data resources. To my mind, worries about students cheating are way overblown. There are all sorts of ways

to engage students in strong writing. We need to be able to catch our breath, talk to each other, and talk to our students. That's part of what I mean by critical AI literacies. In the fullness of time, we and our students may find lots of good reasons for using some chatbots in particular ways, for performing the tasks at which they excel. And they do excel at many tasks. But let's be clear: college writing intended to teach research, communicative articulacy, and critical thinking is not one of those tasks.

As many know, generative AI is a marketing term deliberately chosen to stand in for massive statistical models, trained on unconsented data, which generate human-like text through what Emily Bender, Timnit Gebru, and colleagues have called stochastic parroting, but which I prefer to call probabilistic mimicry. I hope to make clear what I mean by that. Despite misleading use

of anthropomorphising terms such as neural network, this is not how your brain works. So when you hear about AI, you should not be thinking about this, much though I love this scene from Blade Runner and I bet you do, too, or even this, but rather of a vastly scaled-up and multi-dimensional version of this. Please keep that image of a bell curve in mind.

You may have already heard a lot about the harms of AI, including from Kath. The data

these models train on over-represents the discourse of white North American English speakers, often men of particular classes, while under-representing everyone else, including the roughly 30% of people on the planet who have never been on the internet. Because some of the best texts on the internet are behind paywalls, it also over-represents scrapable social media sites such as Reddit. Large language models are, therefore, larded with toxicity and conspiracy theories. They don't just reproduce human biases; they amplify the biases of particular demographics.

Because they're completely opaque, we don't know what's in them and can only guess at the harmful or malicious purposes to which they're already being put. You may not know this, but social media sites, even though we don't know a huge amount about them, are required to report malicious uses, and this has not yet happened for these very proprietary systems.

The preponderance of unvetted garbage-in means that, in ways investigative reporters are just beginning to uncover, sustaining the illusion of artificial intelligence requires armies of low-paid human workers: millions of people, with the potential to become billions, according to a recent Google paper. These are not great jobs. In fact, there's recent reporting that a lot of the people doing these jobs are farming them out to ChatGPT. I'll say more about that in a moment. The most traumatising labour, as Kath mentioned, is outsourced to workers in the Global South. In addition, chatbots are environmentally irresponsible in every way, and they subject us to nonstop surveillance by plutocrats who have been lusting after the educational domain for decades.

As a result, AI exacerbates and threatens to entrench political and economic inequality not seen since the last Gilded Age.

In the meantime, the pressure to teach our students tasks like prompt engineering, because this could be the job of the future, rests on an unexamined fantasy, if you've heard that one. Why? Because tech companies will gradually incorporate the most useful prompts into their models, just as for decades they've helped themselves to our personal data. Meanwhile, proprietary AI systems have scraped centuries of print culture in the public domain, a historic legacy that was digitised in the effort to create an accessible commons, and put it behind paywalls to sell to people like our students, while subjecting them to endless surveillance marketed as effortless writing.

This is from the Substack of a business professor at the University of Pennsylvania whom lots of people read: "Setting time on fire and the temptation of the button". The button he's referring to is one that Google is planning to put into Google Docs. It will say "Help me write", and when you press it, it will start to shoot out a bunch of paragraphs. Here Mollick is saying, "We used to consider writing an indication of time and effort

spent on a task. That isn't true anymore." But I think that this is going too far. It isn't writing if it doesn't involve time and effort, because then who's going to read it? It's probabilistic mimicry of writing, supplemented by an exploited underclass of human labour, under resource-intensive conditions that benefit a tiny elite at the expense of everybody else. If you don't believe me about the limitations of this so-called writing, look what happens when you train language models on text generated by other language models.

The models collapse. See how the bell curve gets even narrower? Because that's how they work: they're predicting the most plausible answer. If you continue to go for the most plausible answer, you lose the tails at the edges of the bell curve, and then it stops working altogether.

Think about that, and about how that mimicry normativises thought to the point of imminent model collapse, the next time someone suggests that maybe students, your children,

or grandchildren should learn writing by improving the first draft that comes from a chatbot or by brainstorming with it. Now, you might readily see that the outputs that come from that draft are impoverished, because you've been cultivating the necessary skills all your life. But most students aren't ready to criticise bad writing, any more than they could criticise bad driving that seems to them like good driving. They still need to learn how to drive. I know I did in college and even in grad school, and most of them do, too.

So I'm suggesting to you today that there are a lot of things we should be thinking about and educating ourselves about, and that there's no hurry whatsoever for us to adopt chatbots.

I have ideas for how we can help students learn about chatbots and how we can teach ourselves about them, but that will have to wait for question and answer, or another time. Thank you.

Hello, folks. I'm going to offer a few thoughts as a designer of textual technologies. I'll start by reiterating an idea we've heard quite a bit already, which is that the design of a text is a socio-technical process: a practice in which practice informs technology and technology informs practice.

So with the design of text, we see the practices and forms evolving in alignment with their technology. In the design of textual works, volumes, we can plot it from the Gutenberg press through to industrialised forms of printing, then, with the advent of computers, digitisation, and more recently the web and screen-based media. So we see this very neat and tidy technical evolution that I'm putting to you. Through all that, we see design practice and the forms we produce keeping in step. And there's a dialogue there. One isn't necessarily always in control of the other. The technologies evolve, we co-opt them, and we shape them through our practices and the forms we produce.

So with the advent of AI and machine learning, I could posit that it'll just follow suit, that we'll just adopt this new technology. Sure, it will disrupt our design industry, it'll disrupt our practices, but over time, we'll co-opt that technology, and we will shape it through those practices. We'll work out what it can do, and we'll adapt to it and adapt it. Now, that's a very tidy and probably naïve and very optimistic view of these technologies. I don't want to suggest that anything here is inevitable, that AI is inevitable, or that our happy relationship with it is inevitable. As we've heard, I think there's probably more concern that it'll be anything but.

In terms of dangers, I think design is particularly vulnerable because it's highly programmatic. If we look at how we work with text and how we design text, we've spent a hundred years working out robust, systematic rules for how we present text so that it's accessible and legible, and that also means it's very easy to encode in a machine. So, yes, design is incredibly vulnerable, but there may be opportunities, too. So I'm going to just think naïvely and positively for a second. I think the one that's probably most significant for designers of text is thinking beyond the types of text to the qualities of text. So as designers of text, we create templates for how we present text, and it's based on what type of text it is. This is a heading, this is a subheading, this is body text, a caption, and we make rules and use those classifications in how we present it.

Through things like AI, we'll have access to all sorts of other information about a text. What's the tone of the text? We've got a heading. But is it angry? Is it positive? Is it negative?

How is it coloured? And what can we do with that information as designers? That's quite a challenge. So we go from working just with the types of text to all sorts of other very subtle nuances of that text.

A challenge, but also an opportunity. Another opportunity comes with access to all that content, that generative content. Now, this is an interesting one, because it means as a designer I can creatively explore ideas for how I want to present information, but I have no reliance on an actual human author. I can ask the machine to generate text for me. It's like high-fidelity Lorem ipsum. We use placeholder text now; I can have pretty high-fidelity placeholder text. I can just use the machine to generate the content and the media for my designs.

What that does is it potentially inverts a very well-established power structure. As designers,

we work in service to a text. Our role is to represent that text, to do it benevolently, to make it accessible. What I'm suggesting here is that it inverts. I'm going to ask the machine to generate the text from my design. I don't like that text. Give me another one that's more suited to my design. Give me media that's more suited to my design. So it's a fairly radical inversion.

I should apologise to all the authors and say, "Welcome to our world." Even before AI, many processes of design had been ingested by the computer in ways that challenge the role of the human. So it's a relationship we're already pretty familiar with.

What I'm proposing here is not a radical proposition. It's something that has

played out over the 20th century, and that is that kind of relationship between designers and their technology evolving in tandem, shaping each other. So it's a pretty optimistic view. More pessimistically, I don't think the barriers will be technological. I think a lot of the barriers to that kind of co-option will be cultural. It's whether we will have space as designers and artists to provide that disruption. I think one of the biggest impediments is the audience: we ourselves are quite intolerant of the kind of difficulty, the critical and creative turn, that artists and designers will need to show us. In closing, I guess what I'm hoping is that, in seeing our culture endlessly imitated and regurgitated by machines like ChatGPT, we will be motivated to come up with new forms of creative and cultural resistance, effectively new waves of punk.

Hello. Wow. That was quite a wild ride, from the marketplaces of ancient Greece all the way to Silicon Valley, with a stop in South Africa and the African continent and its linguistic diversity, and then ending with punk. I was not expecting that one. My brilliant experts have allowed us to have some time for questions. So if there's anyone in the audience that wants to ask a question, let me invite you to ask it now. The lights come up. You can all be seen.

Anyone? Okay, I will take... Oh, there we've got one from Paul Eggert up the back there.

Thanks. Thanks very much.

Oh, there's a microphone.

Oh, right. Thank you very much to all the speakers. It's been a stimulating afternoon. This is a question that perhaps could go to the last two speakers. Of an AI-generated text, we can ask the question: what does it mean? That is to say, we can put it to the text. We can think about what that text might mean. But I guess we can't ask the question,

what did somebody mean by it? I guess this raises  the question of communication, what the writer   may have been intending to communicate. This  is something that's been pretty important to   us in the history of our culture. Have you  any reflections on its fate in the future? There's a microphone beside you. Can you hear me all right? Yeah, that's exactly  right. The paper that I refer to as stochastic  

parrots makes that very point: that language models can't possibly express any form of intentionality in their writing, and that the meaning that is implicit in the writing is there because the language embeds it and the person who reads it can interpret it. So something that used to involve a specific writer's... For me, the word intentionality is always a little bit difficult, because I'm of the school that says writers don't always even really know what their intention is. But in any case, a person communicated something and did so with some degree of intention, and another person read it, heard it, tried to make sense of it. Now you have something in between that is a statistical model. That's what Bender and colleagues call stochastic parroting, and I use the term probabilistic mimicry, because stochastic is an unfamiliar word and probabilistic is one you can all understand. Parrots are actually very intelligent animals that are smarter than

chatbots, and therefore, mimicry is a much  better word to use. But you're completely right. Hello? Hello? Can I respond? I wouldn't disagree  with anything that you just said there. I guess   the most amazing thing that I'm reflecting  on is just how good they are because they are   just this brute force mimicry. So I think it's  more revelatory about what we think we're doing,   this amazing act of writing, and it's like,  aren't the machines just revealing just how   formulaic it is? Because they're able to do a  very, very good impersonation. I don't know. 

I think that we should take this moment to also reflect on our own practices and say, maybe we're not being as original as we really thought we were. How much of what we do is formulaic, is just based on cultural convention that we repeat and repeat and repeat again? In design, we constantly search out those formulas, and often we celebrate them. We actually point to them and encode them and share them. I think there's all sorts of that stuff obviously happening in all sorts of forms of culture, to a degree that we are just not aware of. The machines are showing it to us and saying, "It's not that original." There you go. I'm just going to put that one out for you.

I'm waiting for other questions, but I'll just say, Geoff, you mentioned that these machines are very good, and you're surprised by how good they are. We've been doing some experiments during our meeting, asking some of these generative AI models to do things in the English language and then to do them in non-English languages, and discovering just how bad and how flawed they can be. So I guess what I would ask the South African panellists on our panel is: how do you feel listening to someone say, "Well, these are really great technologies"? Do you feel encouraged that maybe you will also be able to join in with them, or is there a feeling of discouragement?

I am half and half, to tell the truth, because even before the modern tools that we have now, the languages that we have in South Africa, especially African languages, hadn't developed that fast, even though we had translation tools of our own, developed in South Africa, for making sure that the languages we have move faster, like English and Afrikaans, which have moved faster than all the African languages. But the other half is that I feel, because of the [inaudible 00:57:13], the way the world is moving faster, the new tools that we have now will maybe push these languages a little faster because of what is [inaudible 00:57:25] now.

For me, I think the very first thing that we need to do is first and foremost just pause for a while and try to do an assessment of the particular language. For example, if I look at it from my own language, if I say a wife, which is [foreign language 00:57:52], because in my language there's all of these... I mean, from a literature perspective also, taking morphology and all those kinds of structure that you need, then I put it into Google Translate and the translation is giving me something else. It's giving me [foreign language 00:58:14], I mean my chest, which in my language, if you look at that, putting it in my language, it's the same word, those three words. But when I put it into Google Translate, it's actually giving me wife. Do you understand what I'm saying? So if you look at which context, is it really giving me that translation?

So the question now is that we just need to... For me, we are of the opinion that, you know what, let us pause. Let us take all these language models and pull them through, because we're sure of the data that we are getting, that this data is actually coming from the people that speak the language, the people that write that particular text in their language. Can we now push it through all these language models and see what the output is in that language? That way, we can understand the insightfulness of that particular model, or those representations, before it goes into production. I think, for me, that's the most important thing that we need to do, and we are trying to do it in our research lab.

Any questions? Can I have a...? Oh, we have one up there from... I sound like I've seeded the audience with people I know, but I just happen to know the people asking the questions. One from Baden Pailthorpe up the back.

Hello. Thank you. I guess this is a question for everyone, in response to a couple of things that the panel has mentioned: this idea of pausing, or that there's no rush to use chatbots. I guess just as I was listening to you speaking, I got an email, as I'm sure other people did, from the ARC saying there's a new policy that's just launched, because some assessors were using ChatGPT to write grant assessments just recently for the Discovery Projects round that's getting assessed at the moment. Also, the New South Wales Education Department is just talking about overturning the ban on ChatGPT for next year. I guess the question is: the horse has kind of bolted to some extent, so what are some strategies, perhaps, to pause, as you suggest?

I've been thinking about the same thing, Baden, listening to others.

I wonder whether... I mean, the horse has bolted, as we know. If you're in universities, you know it from our students, perhaps in our own practices sometimes. I think the thing for me that's quite interesting to think about is understanding the nature of what AI can do and what it can't at present, what these chatbots can do, thinking through the difference. I think one of the ways to look at it is what it's

showing us. Geoff said that it's excellent, and it is excellent at some writing, really. It's less excellent, I think, at the kind of higher-order writing, synthesising things, analysing things. It's that idea of parroting back, isn't it, or ventriloquizing what it's kind of trawled through and found on the internet. But it's not a perfect mimicry, or it identifies that gap, perhaps, in the mimicry.

One of the interesting things when I was just trying to play around with ChatGPT: I asked it to find incidents of ekphrasis, so verbal descriptions of paintings, in a novel by A.S. Byatt called Possession, which itself actually ventriloquizes Victorian literature. The author, Byatt, writes these short stories and poems in the style of Victorian poems and so on, and it's embedded in it. It was sort of identifying

passages of ekphrasis. I was like, "Oh, that's great." This is a novel I've written on quite a lot. I got to a couple and I thought, "I don't remember that in the book." Then it took another... not long, because I know it well, to think, "Yeah, that's not in the book." There were tells. You could tell that it wasn't actually Byatt if

you're familiar enough with Byatt's language.

I think the thing that's interesting to me to think about is, because we probably aren't going to get a pause, how do we... maybe Lauren, you have thoughts on this in terms of educating in Critical AI: understanding what it is that the chatbot can do and what it is we're actually reading, which isn't a kind of straightforward English or other language. It's language that's kind of filtered through a statistical probability and kind of transposed back into language for us. I think there are ways to think more carefully about what that sort of journey through numbers, words into numbers and back into words or images, might mean or do or not do. Thank you.

What Kate just referred to is a problem that goes by another... The industry,

the field of AI research loves anthropomorphizing terms. So it's called hallucination when a language model just makes stuff up. It does that because it has a very good sense of language patterns. So if I ask it to generate a bio for me, I'm likely to get something that gets about half of the information right, and half of it will be completely made up: awards that I never won, books I didn't write but somebody else did. But it will look

exactly like an academic bio, because what it has completely grasped is the pattern of an academic bio. And it can only give you something that is plausible. It has no sense of true or not true, which is why so much human reinforcement is necessary. Never forget that: that army of human reinforcers that is necessary to give it even the level of accuracy that it has right now.

As far as teaching with it, there's that fluency you can get on something like, "Why is it important to have free speech in a democracy?" In two seconds you'll get about four short paragraphs that will say this and that with no attribution whatsoever. The word plagiarism is perhaps misapplied, but suffice to say that it comes from somewhere, it's been synthesised, and it's unattributed. So if we were to teach our students, and ourselves, that it was fine to do that, we would in essence be saying it doesn't matter who said what, and it doesn't matter if it's right or wrong. I think really nobody wants to do any of that.

That higher-order level of thinking that  Kate referred to, it cannot substantiate   things well with evidence. Basically,  at its best, on its very, very best day,   it is giving you something like what is on  Wikipedia about a given topic. It also can   write very funny poems on improbable topics.  That's another fun use. But as far as a source   of information, it can give you basically on a  good day, something like the Wikipedia entry,   at which point you might say, "Why not just  look on Wikipedia?" which will give you   accurate footnotes and has been crowdsourced by  thousands and thousands of people who have shared   their knowledge, and you'll use less energy  and water, which is what I tell my students. 

I have zero problems so far with my students. I tell them I completely trust them, that they're not going to use chatbots in the writing of their papers. I know, I tell them, that you have used this. You might even, for other classes, be asked to use it if it's, say, a data science class where there is a technical purpose behind using it. I explain to them that they're going to be doing some good work with the bots so that they can say to, say, a potential employer, "I have used ChatGPT."

What we do is probing experiments: we probe the models to show the biases and the inaccuracies that are in them, which is a much better usage. It makes the student into a researcher rather than a consumer.

I also tell them that we'll be doing search  experiments. So I have my students compare   what they get from three different search engines  and what they get from a database to what they   get from, say, ChatGPT or Bing Chat. So in this  way I kind of feel like we have the best of both   worlds because the students are learning about  the technology, they're seeing for themselves   what its limitations are, but they can still go on  a job interview and say, "Yes, my professor taught   me all about that and made me a researcher."  I think that's what we all should be doing. Well, I was going to cut you off  Geoff, but go on, have the last word. Similarly in the educational space, I  think we're talking with great caution,   and I think people are observing  those boundaries. But I'm struck by,  

I think there's almost zero chance we'll slow it down, but not because I'm advocating it. Again, please don't get the wrong impression that I love this stuff and I'm all in. I'm not at all. I'm just struck by people in the real world who are just using this stuff every day. They are also aware that it's not perfect, but it can do some things really blooming well, and they're into it. They don't just ask for the whole thing in one go. They know how to coach it to the solution. So I'm just struck by how many people I encounter in day-to-day life who are

just using it routinely now to help them with these tasks, to generate the text, and it just accelerates what they need to do. So I think it'll be the weight of that kind of use, sadly, that will make it much harder to slow it down, because it won't be academics saying whether we should or shouldn't. It'll be the general public saying, "We love this stuff."

It's built into so much... We're really aware of it now. One last thing, though: note the image stuff. The image stuff has been deployed for ages. It's in Photoshop, but we seem to care less about that. The text really makes it plain to us what is going on here.

But it's the same thing. It's just like, "Oh, look, the bottom half of this image is missing, can you just fill it in?" It's like, "Yep, there you go. There's your image back." We're good with that. No one's talking about turning that off. Let's all go. It's interesting.

I wanted to say: lucky for us, because our languages are still being grown, still being developed, so the data that is supposed to be in ChatGPT is not there. Therefore, we are still safe.

Well, on that paradoxical note, we could talk  for a very long time about these new textual   technologies. Thank you for joining us for  this. There's lots we don't know about them.   One thing I do know about people is that we  shouldn't keep them from their food and drink   for too long. So please join us in the foyer  after this for some refreshments, and please   come back for our launch of our podcast after  this. Please join with me in thanking our panel.

2023-08-10 03:19
