Responsible AI

VERITY FIRTH: Hello, everyone. Thank you for joining us. I'll just wait while people enter into the virtual room. I'll give it about 30 seconds and then we'll kick off today's event.

I'll also just wait for it to tick over into midday. We've got 100 people in the room already, so thank you for joining us. And we'll begin now. Thank you, everybody, for joining us for today's event.

I know there'll probably be more people entering the virtual room as I speak, but I will kick off our event so that we start on time. Firstly, I'd like to acknowledge that wherever we are in Australia, wherever you're joining us from, we are all on the traditional lands of First Nations people and these were lands that were never ceded and I want to pay particular respect to Elders past and present of the Gadigal people of the Eora Nation, where I currently am in my home in Glebe, but it's also the land on which University of Technology Sydney is built, so a special respect to the traditional owners and custodians of knowledge for the land on which our university is built. I also want to pay respect to the land all of you are sitting on at the moment and hope you too are paying that respect.

My name is Verity Firth. I'm the Executive Director of Social Justice here at UTS and I also head up our Centre for Social Justice and Inclusion. It's my great pleasure today to be joined by some of UTS's brightest minds: our newly appointed Industry Professor, Ed Santow, our Distinguished Professor Fang Chen, and award-winning alumna Mikaela Jade.

I'll have the chance to introduce each of them properly shortly. But a couple of pieces of housekeeping first. Firstly, we do live caption our events and if you want to view the captions, you can click on the "CC" button at the bottom of your screen in the Zoom control panel. We're also going to post a link in the chat which will open the captions in a separate window if you prefer to view them that way. There will be an opportunity to ask questions, so if you do have any questions, please type them into the Q&A box, which you can also find in the Zoom control panel.

This also gives you an opportunity to upvote other people's questions. I then moderate the questions, and I do tend to ask the questions that have the most interest; they also tend to be the questions that are most relevant to the topic, so please do keep your questions short and relevant to what we're talking about today. As I mentioned a moment ago, UTS is delighted to be joined by Edward Santow. He is a former, indeed just recently former, Australian Human Rights Commissioner and he's joining UTS in the role of Industry Professor, Responsible Technology. We have previously partnered with the Commission on a three-year project around human rights and technology, with recommendations from academics and practitioners across the whole spectrum of UTS's disciplinary fields feeding into the final report released in May this year.

Emerging technologies, including AI, are already a firm fixture in our lives. They're quietly reshaping our world. These advancements hold immense potential to improve lives and connect people, but are equally fraught with risk.

We recognise that technology development and deployment doesn't happen within a social vacuum. In fact, social, political and economic inequality can be reproduced and indeed amplified in tech innovation if we're not careful. Human rights must therefore be the bedrock for anyone involved in the development and deployment of these technologies. At UTS we are striving to equip society with the literacy required to thrive alongside AI, as its partners, to demystify the technology, ensure that nothing is taken for granted, and that there is transparency and accountability built into the fabric of the tech. We're at a critical point as use of AI grows exponentially in the government, private sector, global security and in our schooling system. Are we in Australia equipped to embrace the opportunities of technological innovations in a way that keeps humans and their rights and dignities at the core? So it's my great pleasure to welcome Ed Santow to offer some brief opening remarks to start us off and then we're going to move to a panel discussion with Fang Chen and Mikaela Jade.

Edward Santow commenced last week as UTS's Industry Professor. He is leading a major UTS initiative to build Australia's strategic capability in artificial intelligence and new technologies. This initiative will support Australian business and government to be leaders in responsible innovation by developing and using AI that is powerful, effective and fair. Ed was Australia's Human Rights Commissioner from 2016 until July this year and led the most influential project worldwide on the human rights and social implications of AI. Before that, he was Chief Executive of the Public Interest Advocacy Centre, a leading non-profit organisation that promotes human rights through strategic litigation, policy development and education. He was also previously a Senior Lecturer at UNSW Law School, a research director at the Gilbert + Tobin Centre of Public Law and a solicitor in private practice.

Welcome, Ed. PROF. ED SANTOW: Thank you so much, Verity, for that warm introduction. It's a great pleasure to be here.

I too am beaming into your lounge rooms from Gadigal land and I pay tribute to their Elders past, present and emerging. It's also a real honour to share this virtual podium with two other people whom I really admire, Professor Fang Chen and Mikaela Jade, and I'm sure we'll hear more from them as the event goes on. I have a relatively limited role here at the start to set the scene.

In a moment I'm going to share with you some slides. They'll mostly be pictures, and for anyone who has a vision impairment, I will describe what's on the screen. So I am genuinely excited about the rise of new technologies, including artificial intelligence, and it's certainly true that they are, as Verity said, reshaping our lives and our world. We can measure this in lots of different ways. We certainly know that AI, or artificial intelligence, is growing exponentially. We can see the global market for machine learning, which is one of the kind of key technologies that underpin AI, is growing very rapidly, as you can see on your screens now, and we're seeing particularly in the private sector that business is really starting to grasp how AI can be used in their operations.

Half of all businesses are now using AI in at least one function of what they do. But there are some risks, and we really want to focus in on what some of those risks are in order to make sure that AI gives us the future that we want and need and not one that we fear. So in the work that Verity referred to a moment ago, the partnership between the Human Rights Commission, UTS and a couple of other core partners, we really went deep on how AI can not only be a force for good, but can also cause harm. We've seen, for example, with the rise of the problem of algorithmic bias, that you can have a machine learning system that spits out decisions that are at least sometimes unfair.

A machine learning system, by its very nature, learns from previous decisions that it's been trained on, and if there are problems with those previous decisions, then they will be baked into the new system. As I say, that can produce unfairness; in more extreme situations it can actually be unlawful, it can result in unlawful discrimination. We've seen, especially overseas, how banks and others have used machine learning systems to make decisions that unfairly and unlawfully disadvantage people of colour, women, people with disability and other groups, and that obviously presents a regulatory risk for companies and government agencies who use AI or machine learning the wrong way. Then finally, there's just the risk of making the wrong decision.

If your AI system spits out a decision that happens to be unlawful and unfair, it's probably also wrong. So to give a practical example, if a bank uses a machine learning system that thinks that women will be bad at paying off home loans and makes decisions accordingly, then it's going to lose a lot of really good customers who happen to be women, and it will probably lend more than it should to customers like me: white, middle-aged men who tend to be privileged in these sorts of systems. So those are the sorts of risks that we are seeing with first-generation AI and that's something that we think has not been properly taken into account in the rise of AI.
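To make that mechanism concrete, here is a minimal, purely illustrative sketch of how bias in historical decisions gets baked into a model trained on them. The data is synthetic and the feature names and numbers are assumptions for illustration only, not details of the bank example Ed gives.

```python
# Toy illustration: a model trained on historically biased loan decisions
# reproduces that bias on new applicants. All data here is synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000

# Applicants: income (a genuinely relevant feature) and gender (0 = man, 1 = woman).
income = rng.normal(60, 15, n)
gender = rng.integers(0, 2, n)

# Historical approvals: past decision-makers penalised women regardless of income,
# so the training labels themselves encode the unfairness.
historical_approval = (income + rng.normal(0, 5, n) - 20 * gender) > 55

X = np.column_stack([income, gender])
model = LogisticRegression(max_iter=1000).fit(X, historical_approval)

# Two new applicants with identical incomes, differing only in gender.
applicants = np.array([[60.0, 0], [60.0, 1]])
print(model.predict_proba(applicants)[:, 1])
# The woman gets a markedly lower approval probability: the bias in the
# historical decisions has been baked into the new system.
```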

But there's another problem as well. This is my second asterisk. There's a skills shortage here in Australia and in lots of other countries when it comes to artificial intelligence. In fact, there are two skills shortages. There's one that we already know about and talk about all the time: a technical skills shortage, so people who graduate with STEM backgrounds, particularly data science. You can see on the grey line there that we are increasing the number of people who have that technical AI expertise in their back pocket when they come out of university or technical institutions, and that is increasing, but it's not increasing quickly enough.

The dotted red line shows that there is a gap of over 70,000 graduates with those technical skills that we're just not going to have by the end of this decade. But anyway, we already know about that technical skills drought. That's well understood and there's a whole bunch of strategies in place that are designed to address that problem.

There's another skills drought as well that is much less talked about, which we consider to be incredibly dangerous and in need of being addressed, and that other skills drought is in strategic expertise. So you'll see on the right of your screen that in a recent piece of research the vast majority of executives in companies who were surveyed said that they know their company needs artificial intelligence, 84% said that, but about three quarters of those people basically said, "We've got no idea how to use AI." So that is a really dangerous combination of statistics because what it shows is that companies feel this enormous pressure, and government agencies do as well, to invest in AI and use AI, but they don't feel that they have the skills that they need in order to do that well.

So what happens when those two things collide? You have problems which in Australia are perhaps best exemplified by the Robodebt disaster, and we could have a pretty avid debate about whether Robodebt truly involved artificial intelligence. It was an algorithm with technologies associated with AI, but let's put that debate to one side. The crucial issue is this. You have a government agency that was trying to recover debts it considered were owed by people who received welfare in Australia. It wanted to use new technology to do that efficiently and quickly, but it wasn't able to work with the private sector effectively to develop a system that was fair, accurate and accountable, and those three principles, fairness, accuracy and accountability, are absolutely crucial whenever a government agency or a company is making a significant decision that can really affect people. So that's the big problem, right, and that's where this new initiative that I'll be leading at UTS comes in.

We really want to be at the forefront of building Australia's AI capability so that companies and government agencies can use artificial intelligence smartly, responsibly and in accordance with our liberal democratic values, and that means respecting people's basic human rights. So from the outset, we're going to be working with companies at the C-suite and board of directors level, and equivalent senior leaders in government, to help them better understand some of the risks and opportunities that AI brings and to help them set good strategy for their organisation, so that they can say, "Look, this is a really smart area where we can perhaps invest in AI, and if we put in place appropriate governance and other mechanisms, we can do so safely and respecting our citizens' or our customers' basic rights." Then if you go one layer down, we've seen time and again that you have some senior person in the organisation make the decree, almost like a monarch, "We are going to use AI in this particular area", but that then becomes the task or the job of a bunch of people who are generally in middle management type positions.

Now, if they also lack the basic understanding of how to do that safely and effectively, then it's highly likely that the project will go off the rails. There's a really interesting piece of research that I came across recently where a large number of companies were surveyed about how they've used AI, and fully a quarter of those companies said that their AI projects were a failure. You think about the psychology behind that, right? Most people in companies don't like to admit that, but a quarter of them made that acknowledgment, which shows that even from the company perspective, if you get these things wrong, if you get either the strategy or the implementation and operationalisation of AI wrong, then you're going to have very serious problems: not only will you not be able to do the right thing by your organisation, you're also much more likely to cause harm. So I'm going to leave you just with my contact details. As Verity said, this is literally my first week as Industry Professor, Responsible Technology at UTS. I'm very excited to be in this role and I know that there are a lot of people in this session who may well be interested in reaching out and being in contact with me, so you've got my contact details on the screen.

For anyone who can't read it, it's edward.santow@uts.edu.au. With that, I'll stop sharing my screen and hand back to Verity. VERITY FIRTH: Thank you, Ed.

Thank you for that brilliant introduction to today's discussion. I'll now introduce our panellists for this next part of our session. Firstly, it's my honour to introduce Distinguished Professor Fang Chen and Mikaela Jade to the discussion. Distinguished Professor Fang Chen is a prominent leader in AI and data science with an international reputation and industry recognition. She has created and deployed many innovative solutions using AI and data science to transform industries worldwide.

She won the "Oscar Prize" in Australian science in 2018, Australian Museum Eureka Prize and is also a 2021 winner of Women in AI Australia and New Zealand Award. She's been appointed to the inaugural New South Wales Government AI Advisory Committee and serves as the Co Chair of The National Transport Data Community of Practice at ITS Australia. Professor Chen has more than 300 publications and 30 patents in 8 countries and has delivered many public speeches, including TEDx. At UTS she's the Executive Director Data Science. Welcome, Fang. FANG CHEN: Thank you so much, Verity.

VERITY FIRTH: Mikaela Jade is the founder and CEO of Indigital, Australia's first Indigenous edu-tech company. She has a background in environmental biology from UTS and a Master of Applied Cybernetics from ANU, as well as having spent most of her career as a national parks ranger, which is fantastic, I want to talk to you about that later. Mikaela's company, Indigital, provides digital skills training platforms and programs that specialise in fourth industrial revolution technologies, including artificial intelligence, machine learning, the Internet of Things, and augmented and mixed realities.

Indigital's programs are designed through an Indigenous cultural lens using cutting edge digital technologies to translate cultural knowledge within Indigenous communities, showcase their cultural heritage in compelling ways, and sustainably create jobs from the digital economy. Welcome, Mikaela. MIKAELA JADE: Thank you, Verity. Thanks for having me.

VERITY FIRTH: Now, I'm going to come first to you, Mikaela, because really I think it's a great opportunity to tell people about Indigital. It's quite an incredible company. It's doing really interesting work in demystifying and offering access to cutting edge technology and skills to the very young and also, of course, to the very remote. So how important is it that we develop these abilities in our society across all age groups and generations? MIKAELA JADE: It's super important, Verity.

I'd just like to say I'm coming to you from Ngunnawal country today. I pay my respects to Elders past and present who walk with us on this journey. I'm privileged to work with the Ngunnawal community here and my own community, Dharug. It's super important because, at its foundation, the suite of technologies known as artificial intelligence is really about economic, political, cultural and historical power, and as a First Nations woman I'm not particularly satisfied with the system we have right now, or keen to see it scale as it is, without the input of First Peoples in the design of artificial intelligence systems and in fact the systems they live within, because we have to remember that AI is a logic in a system that's much broader than just computers. It's about communities and people and the planet and all the things that we're surrounded with as humans, so not having our voices in the design of these systems is inherently dangerous, because we're leaning into existing economic, political, cultural and historical power structures that don't always benefit First Peoples and don't always benefit young people, or even people in rural and remote communities. So being able to be involved, understanding what these technologies are and having a degree of literacy around technologies like AI, is integral to us being able to design our future, because these systems will underpin a lot of our life. When I'm thinking about justice and education systems in particular, I don't know how our people are going to fare if we're not involved in the design, because only we can speak from lived experience of our communities. I think it's also important for entrepreneurship and opportunities for wealth creation, and even for caring for country; AI will extend to managing conservation and large parts of our estate too.

So, yeah, I think it's important. VERITY FIRTH: Yes, that sums it up pretty well, Mikaela. I like your line about economic, political and cultural power vested in AI and making sure there's equal access to that power.

To what extent (and this is actually a question for all the panellists, but I might start with you, Fang) is AI already being used in Australia, and are the public equipped to comprehend its use? You know, how do we fare against the rest of the world? FANG CHEN: This comes to my favourite subject. I've been working in AI and data science for more than 20 years, and if we look at Australia we see a lot of applications already being used or in place. I'll just give some examples. The general public may not be aware, but it has already happened.

For example, just take the projects our UTS team has done: predicting water quality so that we can better manage it and understand chemicals in the water or soil; predicting leaks and breaks in water pipes to reduce service interruptions, et cetera. From the most recent data we collected, the work the UTS team did has, since December 2019, saved more than 5,000 megalitres of water. That's thousands of Olympic swimming pools' worth. Not to mention things like the Harbour Bridge and how you monitor the safety or integrity of the structure, how to predict food growth, how to manage traffic, how to understand air traffic controllers' workload; the list can go on and on. However, in the end, these success stories probably haven't been publicised well enough to help the general public understand what's involved and how the technology has been used in those stories. And this is an overall issue worldwide, not only an Australian issue. I think we've heard a lot of stories from the States and from Europe about misuse, or about concerns around data, bias, et cetera, that haven't been properly addressed.

On the other side, in my opinion, we haven't heard enough stories about the good uses of it. If we hear about the good and the bad, and then come up with an approach for how to achieve the good ones and how to avoid the bad ones, that would be really ideal. VERITY FIRTH: To you, Ed. So Fang has outlined that there are all these applications of technology and AI used across our systems, in predicting leaks and traffic monitoring and all this sort of stuff. How aware are the public that this is even happening in the first place, and does it matter? PROF. ED SANTOW: The short answer is often we're not very aware at all, and I think it does matter.

So just take something that many of us take for granted, like using an AI-powered maps application on our smartphones, like Google Maps or Apple Maps, or whatever. What that does is provide us with usually a much better, more efficient way of getting from A to B, and I can say this as someone who's quite dyslexic. I was one of those people who would be turning a paper map upside down, and still, on my journey from A to B, I would go to C, D and E and often never get to B. So there's real benefit from that. But it does change our brains, and I'm not overstating this.

It literally changes our brain. So when we're relying on one of those maps applications where we basically follow a little blue dot to get from A to B, we engage our active brains at least 30% less than if we were self-navigating, even using a conventional map, and what that means is that if you take the application away and then ask us tomorrow to try to make our journey from A to B just from memory, we're 30% less likely to be able to do it. So it is having a really fundamental effect on the way in which we live our lives and the way in which our brains work. Now, if we don't know that, then we are not well placed to take advantage of it and, as I said at the outset with that example, there are real advantages, particularly for constantly lost people like me, in being able to use AI; but nor are we able to take the protective action that we need in order to guard against the risks.

VERITY FIRTH: Mikaela, the work that you're doing presumably part of what you're doing in schools is equipping the next generation to properly comprehend the use of AI and have that capacity to properly engage with the technology. Is that correct? MIKAELA JADE: Yes, and we do that for a couple of reasons and one of the reasons I know is close to Ed's heart in particular is we consider AI as an extractive industry and part of that is modern slavery and the people that are most at risk of modern slavery practices in every sector are our First Nations peoples. So we're already seeing AI based companies approaching First Nations communities and wanting to upskill us in AI with a view to us participating in what can be classified as modern slavery practices. So being aware of the kind of labour and skills markets around AI is really important for young people in schools to understand so they can understand what they're participating in and also being able to see the future as well. There's some really exciting career opportunities for young people in Australia, particularly around the space sector, and being able to understand what those opportunities are from a technical perspective, not just going to young people and saying, "Hey, do you want to have a career in space, it's awesome", and kids start thinking about being an astronaut, when there's myriad other jobs they could find fulfilling and satisfying, from working on country in rural and remote Australia.

So really helping them understand what those opportunities are and then also the opportunity to create their own businesses from country, like I've been able to do so, yeah. And what the future looks like. Like if you haven't seen mixed reality, you don't really know what it is, so you can't start thinking about a future that you might have in that if that's what really floats your boat and you certainly wouldn't be able to determine the pathway to get there. That's what we help students understand when we're working with them. VERITY FIRTH: Yes. That makes total sense.

So Fang, the need for ethical frameworks around AI is pretty well recognised. In practice, are these ethical principles translating into ethical actions? Is it actually working, and can more be done to make it easier for people? FANG CHEN: That's a great question. I think you already alluded to the fact that the framework is well recognised.

We've also done a research analysis of hundreds of different documents coming from government standards, policies, guidelines and academic papers. I think universally the consensus is there in terms of the top ethical principles: transparency, accountability, fairness, those are the top ones. However, the implementation framework, or practice, is still in its infancy. It's definitely an area where work needs to be done: how to take principles into practice, how to clearly define processes and even tools to help people clarify whether or not they have followed those principles, to give them a pathway to do the assessment, to say whether they have followed them or haven't and which areas they need to improve, so that they can do AI development, procurement or use responsibly. This is not a magic wand, you know, or rocket science anymore.

I think, as I said, if you pick areas like fairness and transparency, which Ed mentioned, there are quite a few detailed assessment measures that have been researched and published. I think it's about how to take those into active debate, not only debate on the principles, which I think we have done well, but debate on how to implement them and what steps are necessary to take things forward. VERITY FIRTH: Yes, and that leads quite nicely to my next question to Ed, which was going to be: okay, so we've got the principles, and Fang is talking about the challenge of implementation.

You've called for Australia's Government and its institutions to be model users of AI, almost to model what best practice looks like, so what does that look like? PROF. ED SANTOW: A couple of things in particular. The first is acknowledging that this is often pretty experimental technology. Historically, not just Australia but a lot of countries have a very bad record of beta testing new technology on literally the most vulnerable citizens in our community and, frankly, that's what seems to have happened with Robodebt. You know, it's hard to identify a group of people in our community who are more vulnerable than people who have at some point in the last five to seven years received a welfare payment. So to try this new technology on that group and not put in place adequate safeguards, that's not what we should be doing.

We should learn the lesson from Robodebt and one of the key lessons is trial it in a safe way and make sure that when you go live, you do so in respect of people who are able to protect themselves reasonably well. The second thing comes back to something I said right at the outset, that those three principles of fairness, accuracy and accountability are critically important. What we saw with Robodebt was that there was a very high error rate, so people were being sent debt notices, being told they owed the government money, many people who didn't actually owe the government a cent. So it's crucially important whenever the government is making a decision or a company that they do so accurately and that they not have a high error rate.

When we talk about accountability, what we mean by that is making sure that if there is an error, if there's a problem with the decision-making process, people are able to get redress simply, that you don't have to pay a high-priced lawyer to untie the Gordian knot you find yourself in, but rather there is a simple process for getting the problem fixed. And fairness is an overarching principle that's really important. That means that, for example, sometimes when you're claiming money back from someone that you may have overpaid $100 five, six, seven years ago, maybe it's not really fair to claim that money back, so you need to take an overarching look at the system that you're creating and make sure that it really works fairly for people. VERITY FIRTH: Robodebt is a particularly bad example. My next question was going to be around how easy it is for people to interrogate the process of AI, or the decisions made by AI. I mean, how easy is it? Not very easy, from what you're saying, but are there other, better examples of where it is easier to interrogate those decisions?

PROF. ED SANTOW: Yeah, so for momentous decisions in people's lives there is usually a requirement that the decision maker give you some reasons, and we found this at the Human Rights Commission in the way we asked questions of the community, through polling but also through other consultation processes. People said, we know we're not always going to get decisions we like, but we want to make sure at the very least that we weren't treated unfairly, that we weren't the victim of discrimination, and when you are given reasons for a decision, that's when you're able to determine whether the decision is one you may just have to suck it up and accept, or one you may want to challenge because it wasn't the right one. So if you can't get those reasons, that's a crucial problem. Now, it's a design question.

It may often be easier to design AI-powered systems that don't provide reasons, but there's no technical reason why that has to be the case. I mean, frankly, for a human it's easier just to give your decision. I find this as a parent not infrequently. If one of my kids is saying, "I'd like a third ice cream today", it's easier for me to say "no" than to say, "No, and here are the reasons why."

But those reasons are very important, particularly if we're talking about, and I don't want to be grim about it, much more momentous decisions, like welfare decisions, bank loan decisions, those sorts of things that really affect people. Then those reasons are crucially important, and decision-making systems need to be designed to accommodate that requirement because, without wanting to go into too much detail, it's not only practical and useful, it's a principle on which our legal system and our entire liberal democracy depends, and that is the rule of law: when someone makes a decision, they can be held to account for whether they've followed the law in making that decision. So an opaque decision, a black box decision using AI, just doesn't cut the mustard. VERITY FIRTH: Fang, what do you think about how normal people interrogate the process of AI, or decisions made by AI? How difficult or hard is that? FANG CHEN: Yeah, I won't say it's easy. However, there may be a way to open it up a bit.

So I like Ed's example about children, or a child. AI is like a child, you know. We teach them, or teach the AI system, to do something. We influence them, we give them some principles, and then how we design the system follows. Many, many years ago, when my daughter was only about 12 months old, she knew where to sit, she knew what's called a table, what's called a chair.

Even nowadays it's not easy for an AI system to know all the different chairs and different tables, what sort of surface you can put stuff on, what surface you can sit on. Humans are far more advanced than current AI systems. Having said that, I'm just saying that we have to design and set the boundaries, and let the AI system perform with the learning examples we give it, so that the system can keep learning and keep improving. And why are we doing that? Well, we can set certain expectations, and expectations mean that the system is not going to be 100% correct, because it's a probability-based system. However, there are crucial failures you don't want to see, so we need to identify what the crucial failures are that you don't want to see and try to avoid those ones.

On the other side is the question of what is a bearable risk; basically, you keep the balance. The last example I want to give is about continuous learning and improvement. I'll use the leak example again. When we first started to do the predictions, there was a lot of complexity in the data. For example, when you dig out a pipe, the record says 1900 but maybe it's from 1800; the data may not be accurate; it says maybe it's a cast iron pipe; all those things happen. Then over time we compared our prediction results against the actual failures that happened.

We put in more sensors to get more data in, and over time, I'm very proud to say, we can now predict the location of a failure to within 200 metres. Over time we also set the boundaries of how to safely measure in testing, and how to build rapport with the people using it: what sort of task you should trust AI with, what sort of thing you know it can predict really well, what sort of thing it can't predict well, and then you put a risk measurement or mitigation framework around those. So if there's high uncertainty in a prediction, you know it's a high risk you need to pay attention to. If it's low risk and you've achieved 99% correct all the time through your validation, you can probably lower your guard a little bit, in a safer way.
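What Fang describes, treating the model's output as a probability rather than a verdict, identifying the failures you can't tolerate, and tiering your response by how uncertain the prediction is, can be sketched very simply. The following is a minimal illustration only; the thresholds, field names and triage actions are assumptions, not details of the UTS water-pipe work.

```python
# A minimal sketch of confidence-tiered risk handling for a probabilistic model.
from dataclasses import dataclass

@dataclass
class PipePrediction:
    pipe_id: str
    failure_probability: float   # model's estimated chance of failure
    uncertainty: float           # e.g. width of a confidence interval

def triage(pred: PipePrediction) -> str:
    """Decide how much human attention a prediction warrants."""
    if pred.uncertainty > 0.3:
        # High uncertainty: the model may be outside the data it learned from,
        # so a person should review before any action is taken.
        return "refer to engineer for manual inspection"
    if pred.failure_probability > 0.8:
        return "schedule proactive repair"
    if pred.failure_probability < 0.05:
        # Low risk and well validated: safe to lower our guard a little.
        return "routine monitoring only"
    return "add extra sensors / collect more data"

print(triage(PipePrediction("P-1900-cast-iron", 0.85, 0.10)))
print(triage(PipePrediction("P-unknown-age", 0.40, 0.45)))
```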

VERITY FIRTH: Yes, I like that idea: as the AI gets better and more sophisticated, we also need to get better and more sophisticated at knowing the questions to ask and how to manage it. I think that's put really well. Mikaela, in the work you do, how do you foster young people's sense of agency when teaching them skills in AI? MIKAELA JADE: Yes, we set them challenges that they do themselves through Minecraft Education, which is one of the channels we use, but we also work with Elders and knowledge holders in communities to make sure that agency includes law. So Ed pointed to whitefella law in this country and the way that it underpins AI design, or should. There's also another law in this country, which is blackfella law, which is equally important, and we're really trying to instil in all kids, when we're working with them, that there are two law systems in this country and they're both important. I think the other thing about agency and teaching kids is that whole-systems approach I was talking about before: not just concentrating on the technology or, you know, the fun that we're having with the technology, but looking at the entire system.

And something that I'd really like to add to what Fang and Ed have said about the things that underpin AI: one of them has to be sustainability, because it takes an enormous amount of natural resources and an enormous amount of energy even to use AI, through the devices, through mineral resources, through manual labour. These are all things that need to be incorporated into our understanding of what agency is, because they impact marginalised communities unfairly and unequally. So we think about the whole system when we're talking to kids about agency, because I think there's a tendency for young people to have an individualised, I guess, expression of who they are in the world, and culturally we're collective and we have collective decision making, so we try to work with kids to help them understand agency not just from a technology perspective, but agency as a human that operates in a community, that lives in an environment that needs to sustain them as well.

VERITY FIRTH: Actually, your answer around sustainability has reminded me of a question we were sent before the panel started by an audience member called Jay Adams, and I'm going to put it to the panel now, because I think we might move to some audience questions and then come back to some wrap-up questions. Jay Adams asks: "Considering the climate crisis is the most pressing issue facing our continued survival, is there any way basic, already available AI can help us?" So who wants to have a first go at that one? What's out there now that can actually help us? Mikaela, do you have ideas? MIKAELA JADE: Pick the biologist. VERITY FIRTH: You worked for national parks. You must know. MIKAELA JADE: There are a lot of technologies that help with caring for country initiatives.

You've probably seen two really outstanding applications of AI. One was monitoring weeds and predicting the spread of weeds in Kakadu National Park, and the other was monitoring the impacts of feral animal control programs on turtle populations along beaches in Cape York. These are two programs that drew on AI, and particularly machine learning, to help rangers, including Indigenous protected area rangers, really understand whether the works programs they were running in the national park were actually leading to the outcome that they wanted, which was better opportunities for natural regrowth and better opportunities for turtles to actually make it to the sea. So there are some really great programs through NAILSMA being looked at across Northern Australia incorporating AI, and what I like about the approach that NAILSMA and partners are taking is that it incorporates Indigenous knowledge systems into the design of the AI technologies they're using. So I think there are some really great examples of that happening. VERITY FIRTH: Ed, do you have some ideas?

PROF. ED SANTOW: I think that's a great example. There's a particularly good one that got a bit of prominence just after the horrible bushfires that we experienced on the eastern seaboard of Australia just over 18 months ago. There are only a finite number of national park rangers, as Mikaela knows only too well, and so the challenge was identifying particular micro-climates and small bodies of land, which may actually be quite big, where there is a combination of a lot of fire damage and large numbers of animals who may be very vulnerable but could basically be saved. What was done was that they used satellite imagery and, through a machine learning system, taught the computer to recognise some of those tracts of land that I've just described, and using that satellite imagery and that machine learning system they were able to identify, much, much faster, land that could then be flagged for human intervention and regeneration, and that literally could never have been done in the past. So it took a piece of technology that we've had for several decades, which is satellite imagery, brought to bear this sort of machine learning technique, and it had a really significant, positive effect.
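As a rough illustration of the pattern Ed describes, labelling a sample of satellite tiles, training a classifier, then sweeping a much larger area to flag candidates for human follow-up, here is a small sketch. The features, labels and numbers are entirely synthetic stand-ins, not anything from the real bushfire project.

```python
# Toy sketch: train a classifier on a small hand-labelled set of satellite tiles,
# then flag a much larger sweep of tiles for ranger follow-up.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(1)

# Pretend each tile is summarised by a few band statistics (e.g. mean red,
# near-infrared, a burn index); these feature choices are assumptions.
def fake_tile_features(n):
    return rng.normal(size=(n, 3))

# A small hand-labelled training set: 1 = fire-damaged habitat worth a visit.
X_train = fake_tile_features(200)
y_train = (X_train[:, 2] > 0.5).astype(int)   # stand-in labelling rule

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)

# Score a much larger sweep of tiles and flag candidates for human intervention.
X_all = fake_tile_features(100_000)
flagged = np.flatnonzero(clf.predict(X_all) == 1)
print(f"{len(flagged)} of {len(X_all)} tiles flagged for human follow-up")
```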

VERITY FIRTH: Fang, do you have some good examples of climate change AI? FANG CHEN: Yes. As Mikaela and Ed mentioned, there's work around understanding the climate impact on different species, plants, farms, even ocean currents, and around net zero emissions. There are lots of AI applications in that, and also in how to use electric vehicles, how to manage energy in micro-grids, how to streamline waste management; many, many things related to it. VERITY FIRTH: That's great. That's good news. That's one piece of good news. So now to Jessica's question.

Her question is right up the top, so I'm going to ask hers. She's got a couple of questions. The first is: in Australia, which are the industries in which current uses of AI are creating the greatest human rights risks? And then, as part of the industry engagement work to be undertaken, is there an intention of creating some sort of AI human rights assessment tool or process? So Ed, that sounds like one that would be good for you.

PROF. ED SANTOW: Yes, sure. For the first part of the question, you need to consider where AI is being used most, and the two top industries are mining, which is a more traditional kind of extractive industry, the sort Mikaela alluded to before, and financial services, insurance and superannuation. So start with that and then consider where people are most likely to be most affected by the kinds of decisions that are made in those areas. In mining, AI is particularly being used to automate the way in which mining takes place, which certainly does affect people.

It reduces the total number of people who are employed in those industries, but I'm going to put that to one side. Financial services, insurance, superannuation, that's really significant because the decisions that are made there, about home loans, who gets insurance, those sorts of things, are fundamentally important for our basic human rights, and so it really is important that we shine a light on how those sorts of industries are using AI and, for that matter, that we support them to do so in a way that is more likely to uphold human rights. The second part of the question was: are there tools and processes being developed to help them assess the human rights impact of their operations?

The short answer is yes. At the Human Rights Commission in our final report on human rights and technology we did just that. So if you're interested in reading more about that, go to tech.humanrights.gov.au. There are also wonderful organisations here in Australia and overseas that have really pioneered that sort of work.

So Professor Kate Crawford, who's an Australian based in New York at the AI Now Institute; they have been really at the forefront of that, and companies should be encouraged to use those tools because it helps them to make better decisions, smarter decisions, more accurate decisions, but it also reduces their legal risk. So there are lots of good reasons to use that sort of tool. VERITY FIRTH: Fang and Mikaela, do you have anything to add on that question? FANG CHEN: A big one: health. VERITY FIRTH: Health, yes. FANG CHEN: I think I don't need to elaborate about health. VERITY FIRTH: And Mikaela? MIKAELA JADE: Yes, I was just going to agree around the mining thing, but also the AI sector itself.

So there's a vast amount of human labour required to prepare data sets or train AI on data sets. There's a lot of data handling. There are very repetitive, boring, manual tasks associated with that, and what we're hearing from communities around the world, not just in Australia, is that First Nations communities that have low employment rates are being targeted to do this kind of work, and, you know, it's just a pattern that continues to reveal itself depending on what phase of evolution we're at. Like, our people were there when logs needed to be cut down, and our people were there when holes needed to be dug in the ground, and our people are now being asked to do boring, manual, low-paid tasks, so I think the industry needs to look at itself in terms of risk to people's lifestyles and communities.

So I think, yeah, probably... VERITY FIRTH: Sharing the fruits rather than just, you know... yes, exactly right. MIKAELA JADE: Oh, yeah, and mapping too. I was going to say geospatial mapping too, because there are a lot of Indigenous ranger groups around Australia out there collecting vast amounts of data on country, which is being used to feed and train AI systems, and, you know, they don't often see the benefits of that work either. VERITY FIRTH: Mmm, mmm.

So that leads perfectly to Simon's question, from Simon Knight: "In parallel to an AI skills shortage, does the panel think there is a shortage of ethicists?" What do you think about that, Mikaela, given what you've just said? MIKAELA JADE: Yes, and whose ethics are we applying to this as well? Because depending on your cultural background and your community status, you may have a different interpretation of the ethics that are being applied. I think that's a key question to ask. VERITY FIRTH: Yes. Fang, do you think there's a shortage of ethicists? FANG CHEN: I agree with Mikaela. I guess it comes down to definition.

Regardless of which category you label people with, I think being aware of the ethics requirements is probably very, very important. I mean, in all the skills training now, ethics training, sorry, AI ethics training, is one essential element.

If we can train people up in various different categories, then we will have a better understanding in the community overall and do better with AI. So I think that will resolve the problem; it doesn't matter which labels we put on people. VERITY FIRTH: Yes, yes.

Ethicists, Ed? PROF. ED SANTOW: There are a lot of (inaudible) ethicists. I brush my teeth twice a day. That doesn't make me a dentist.

We all make ethical choices every day. That doesn't make us ethicists. I think there are some really big ethical questions raised by AI, but they're often not helped by really superficial kinds of ethical responses, like "do no harm". Of course we should aim to do no harm, but for an engineer working on an AI problem, simply being told "do no harm" doesn't really help them very much. So really they need to be walked through how to make good ethical choices, but before they even do that, they've got to comply with the law, and sometimes, if we frame everything as an ethical question, we lose sight of the fact that you cannot discriminate against someone on the basis of their race, age or gender or other protected attributes, and that's something that is crucially important and sometimes gets lost in the debate about AI ethics. VERITY FIRTH: All right, so I'm now going to round back to the final question for each of you.

I might start with you first, Mikaela. So the work you've been doing is pretty groundbreaking. This is the second time you've been on a panel and I'm just blown away by what you're doing and you also presented at the UN's forum on Indigenous issues and I thought it would be interesting to get your global perspective given your experience at the UN. MIKAELA JADE: Yeah, definitely.

Thanks, Verity. Yeah, I spent four years funding myself to go to the Permanent Forum on Indigenous Issues and I was there because I realised as a developer that there was a huge gap in protections in the Declaration on the Rights of Indigenous Peoples because it was adopted before we had ubiquity of technology. It doesn't specifically reference digital anything. The situation of going to the UN Permanent Forum to talk about things like artificial intelligence and ethics, it's really difficult because people around the world are there because their states are murdering people and, you know, there's really intense issues that go on there.

So to walk in and say, "Oh, there's also this AI ethics issue", is a really difficult task, because a lot of people are dealing with really pertinent other stuff that needs to be dealt with as well. But what came out of that work collectively: firstly, I got to meet with the 2,500 or so people who come each year from around the world from First Peoples' communities, and realising that everyone's facing exactly the same challenges is valuable; and also meeting up with people from different parts of the world who want to progress something along the lines of a playbook that sits beside the Declaration on the Rights of Indigenous Peoples, to help First Peoples interpret what it means through an AI lens, through a spatial web lens, and through the multiple lenses of new technologies that haven't really been considered in that work. So yes. VERITY FIRTH: Fantastic.

Fang, my last question to you: on the whole, do you think Australians have a positive or a negative perception of AI, and does that affect the possible take-up of AI solutions to some of our nation's issues? FANG CHEN: I would answer your second part first. Absolutely, whether the public view is positive or negative affects the uptake. On the view itself, I would say it's divided. I don't have overall stats in my hand, but I think it's divided into people who see the benefit and people who fear the potential risks.

I don't think there is anything wrong with that; it's a natural step. That's why we need more education. That's why we need the skills training; we need to open up the box so people can have the skills to understand, and then make the judgment call about which way they lean. VERITY FIRTH: And that's a perfect lead-in to my last question to you, Ed, and the last question for the panel, which is exactly what Fang just said: there's a divided view, there are competing dystopian and utopian visions for how AI affects our basic human rights. So what is the role of universities like UTS? How can we support the development and use of AI that gives us what we want and need, and not what we fear? PROF. ED SANTOW: Yeah, I mean, I think it's creating forums and training and degree programs that bring together the sorts of expertise we see here.

So Fang is literally one of the world's leaders, not just in the technical expertise associated with AI, but it starts there. So having that expertise on tap is hugely, hugely valuable. Someone like Mikaela can come in and help us understand what Indigenous people here in Australia can genuinely offer to our understanding of how to protect the dignity of people amid revolutionary technological change, and we know, for example, that traditional western notions of privacy aren't particularly effective when it comes to a data-driven technology like AI. Someone like you, Verity, of course, who's experienced in both academia and government and everything in between, can also provide that really practical perspective. So doing that through public forums and through training is, I think, crucially important in understanding how we can get the benefits of good technical expertise and make sure that we put it in an environment that is going to have appropriate protections as well. VERITY FIRTH: Well, that's perfect, everyone.

We've ended spot on 1 o'clock, beautifully timed. Thank you to the panellists for being such good panellists. Thank you for giving up your time today. I found that a really interesting and fruitful discussion. Welcome to UTS, Ed.

We're very lucky to have you. Thank you to all of the audience who joined us today. We look forward to seeing you at the next webinar.

This has all been recorded and we will share the link with you, so feel free to share it far and wide through your networks. Thanks very much. Bye.
