What are your hopes and fears for Tech in 2021?
So, 2020 was obviously an enormously challenging year. This year is set to be full of change and, hopefully, as we emerge from the pandemic, we will find ourselves in a much better position than we were in last year. But we will continue to grapple with some of the most exciting, but also challenging, emerging technologies.
Today we have got Dr Carissa Véliz, one of the most valued voices in technology right now, an associate professor of philosophy at Hertford College, Oxford, and part of the Institute for Ethics in AI at Oxford University as well. Carissa is also the author of the bestseller "Privacy Is Power", a call to action for all of us. I think, Jonathan, it is fair to say a highlight for us from last year was coming across Carissa's really important, empowering and pioneering work on digital ethics. We found that so urgent and important that we funded this independent research in part. And Carissa, you have said before that it is urgent to improve current practices so that we can avoid the tech-related debacles that we had last year, and that is a message that resonates so profoundly as we look to the year ahead.
Jonathan is Co-Head of Clifford Chance's tech group. He advises global organisations on a range of legal and commercial issues relating to everything that we will be talking about today, particularly around artificial intelligence and data. So, Jonathan and Carissa, thank you so much for joining us. I will kick off with a question around your hopes and your fears about data, AI and digital regulation in 2021. Maybe, Carissa first and then Jonathan, you could sum up in 30 seconds or so what your concerns are.
Thank you, Herbert. So I think my greatest hope for 2021 is that people will become more aware of privacy issues, that we will see the United States start to think about a federal privacy law, and that we will see much more diplomacy between the United States, Europe and the UK, joining forces and taking privacy seriously. That is the best-case scenario.
One of my concerns is that that might not happen and that we might further liberalise data, in particular in the UK. My concern is that the UK might want to become something like a data haven, in which data can just be freely used, even in very questionable ways, and I just hope that does not happen. I think I fully agree with Carissa.
I think from my side, my hope for this year is much more transparency in terms of how data is used. One of the really positive things about 2020 was that, whether it was the A Level algorithm or other stories that broke into the headlines, the public became much more aware of how their data is being used. So I hope that, both from a business perspective and from a public consciousness perspective, people start to realise how their data is being used and are told much more about it. I think my fear for this year is really on the cyber security side. I think the pandemic has left many companies weakened.
It has also left many countries weakened as well, and my concern is that dark forces in this space, whether state actors or criminals of all forms, may take advantage of this. I think there is a high chance that we are going to see some really serious incidents this year, which are not just going to be shocking to read about but might actually affect us personally: loss of our data, loss of money, loss of information which is really personal to us. We have not yet seen many cyber-attacks that have affected us on a personal level, and I think that could really change this year. I wanted to pick up on one point, Carissa, that you just made, about your fear around the UK, particularly now that the UK has separated from the EU.
Do you think we will continue to see a meeting of minds on data and AI, particularly around regulation? Or do you think we are going to continue to see the internet splintering? I feel that we might continue to see the internet splintering. At the moment it is not looking great. I do not see the UK being interested enough in taking privacy seriously.
I mean in making sure that its strategy is aligned with Europe and with the United States, but let us hope that can change. I also completely agree with Jonathan that this year we will see more cyber-attacks, either new ones or ones that happened in the past and will only now be discovered.
As has happened recently, more and more people will realise that we will be personally affected by these kinds of attacks. I think the point about the UK's position is really interesting, because I have been so heartened in the past three to four years to see the UK place itself really at the heart of debates on AI ethics and privacy. I think all of the work on online harms is extremely promising, but if the UK wants to take an exceptional approach and possibly loosen regulation in certain areas, that narrative of leading the world on ethics does not sit well with that exceptionalism.
So it feels at the moment like there are very different schools of thought, and there has to be a meeting of minds on this. I just hope that we are able to protect the rights of individuals and also keep ahead of the curve in areas like data ethics and algorithmic transparency, where, as a country, we have been doing extraordinary, world-leading work.
Great, and I think one of the things that is interesting, Jonathan, is that you picked up on online harms, and for me it is the fact that everyone is on the internet so much more now. We depend on it more than ever, and so there are these incoming rules in the UK, in Europe and around the world about online harms, harmful content and information, and all kinds of governance arrangements around that.
Carissa, do you think the law is enough for these kinds of harms, or do you think that more needs to happen in 2021? Definitely not enough, and I think that is evident to anyone who has read the news in the past year. It is just scandal after scandal after scandal, and more and more people getting harmed in one way or another. One of the things that I argue for in my book is that we should end the data economy, because as long as we buy and sell personal data, we are really incentivising pretty bad behaviour.
That behaviour includes collecting much more data than is needed and selling it to the highest bidder, who often is not the institution or person with people's best interests in mind. I suppose this goes to the fact that we are starting to become much more sophisticated in the way we think about these things. When we capture a pool of data, obviously it has value. But I do think companies are starting to wake up to the idea that certain groups and individuals, vulnerable groups, may be more adversely affected than others.
So I very much welcome the online harms proposals, because they acknowledge that young people and vulnerable people, when they are working and socialising on the internet, can be exposed to real risk, and that the businesses providing those services have a responsibility to protect them. Look, for example, at the Age Appropriate Design Code, which is a big piece of new legislation in the UK: it is effectively saying to companies that you have to handle the data of children much more carefully.
You have to design your services to be bespoke to those audiences if they are focused specifically on children, and that is a good thing, because it is starting to say that we are not all the same when we use the internet. We do not all have the same awareness; some people are more vulnerable than others. It is right that companies have to start to think in that way and to look at their customer base not as one amorphous group but as having certain needs and requirements. In terms of those needs and requirements, Jonathan:
I think one of the things that was clear from last year is that trust and transparency come together; they go hand in hand. What I am interested to hear from both of you is: this year, what do companies and organisations need to do to earn and reinforce that trust, and how can they be more transparent, particularly around their use of data? Maybe I will give that one to Carissa first. So, the relationship between trust and transparency is interesting, because many times they do not go hand in hand.
When companies are transparent, it can emerge that they have horrible data practices. I think that is what would happen today if companies were completely transparent about what they are doing with data and where the data ends up: we would be so horrified that it would in fact undermine trust. So, the first thing to do is come up with much better data practices and really take seriously at least two points. One is that, as Jonathan mentioned, there are people who are more at risk when they are exposed in different ways.
The other is that companies are creating their own risk by collecting more data than they need and by not having good cyber security; it is just a matter of time before there is a data breach or a lawsuit. The same goes for society. We are creating our own risk by having a data system that is, in a way, so reckless: we have more data than ever before, and we are actually not very good at keeping it safe.
I suppose it comes down to: what is trust? Trust is based, I think, on real knowledge, deep knowledge. I have always said, when we are speaking to any business, that if you are telling your customers what you are doing with their information and that feels right, that builds a relationship of trust. If telling them would seem uncomfortable,
that is probably quite a good gauge that you should not be doing it. So, trust is based on knowledge and information. I really hope that in the next year we start to see much better ways of building that trust. I do not think that we are yet at a good enough stage in the evolution of, for example, privacy notices, practices and policies.
As an individual, you just flick through them; they are so dense, so heavy. I am really interested in seeing new privacy technologies which really bring to life what is happening with information. To an extent, with the Age Appropriate Design Code,
you have got the idea of an audience of children, and you have to engage with them in a different way. Thinking creatively about how you can build that trust and provide information matters, and you should not be scared about providing that information, because the more information the better: if your customers trust you, they will keep on using you. I do think that in the next year we may see large groups of people move away from platforms where they do not feel respected. So, investing in those privacy technologies, from my perspective, is going to be a really important priority. Great, and on that transparency, I am going to pivot our discussion towards Artificial Intelligence specifically.
So, in the last year we have had a lot of multinational companies and a lot of international bodies putting forward guidance around how AI can be made more transparent and explainable, and we have also seen lots of what we know as AI ethics principles. What I want to know, Carissa, is your view for 2021 on what organisations should actually be doing to implement these principles and bring them to life, and Jonathan, I am also interested in your view from having worked with organisations on this for eight months. So, one element is presenting information in a way that does not overwhelm people, because something we also have to keep in mind is that people are busy and do not have the time to read too much information that might be very technical and very hard to understand.
So, in essence, good ethics is about coming up with products that are in people's best interests: products that customers would still choose if they knew everything there was to know about them. That is, not products people choose only because they do not know enough, or do not have better alternatives, or do not have enough money, or whatnot. You want to come up with a product that really looks after people and that people would choose if they knew everything there was to know about it. Yes, I could not agree more with Carissa on that.
AI and algorithms, however broadly you define them, are all around us. Adoption is growing exponentially now, but the technology has actually been around since the 70s and 80s, and I think the way we need to look at this is really as product safety. Are these hugely complex technologies, which are very hard to understand, being designed with safety in mind, with a human in mind? Far too often at the moment these products are brought in and the right questions are not asked, and if you wanted to change them, you could not, because you would have to reverse-engineer them and they would not work. It is the cliché, is it not: black box technologies that people do not understand. They do not understand how they work; they do not even know where they are. That has got to change. So I do think that this will be the year when technology audit becomes really quite interesting as an area, with companies asking some really quite simple questions about the artificial intelligence they are using and buying: where is it within our business, can we explain simply how it works, can we switch it off if it starts to go wrong? These questions are almost taken from other industries, like the airlines or the motor manufacturers, who have built in safety for many decades, and that is how I think we need to look at artificial intelligence development.
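To make those audit questions concrete, here is a minimal sketch of what an AI inventory record might look like. The field names and risk tiers are illustrative assumptions for a simple in-house schema, not any regulator's taxonomy.

```python
# A minimal sketch of an AI inventory record for a technology audit.
# The fields and risk tiers are illustrative assumptions, not any
# regulator's schema.
from dataclasses import dataclass

@dataclass
class AISystemRecord:
    name: str                # what the system is called internally
    business_function: str   # where it sits within the business
    plain_explanation: str   # a simple account of how it works ("" if none)
    has_kill_switch: bool    # can we switch it off if it starts to go wrong?
    risk_tier: str           # e.g. "minimal", "limited", "high" (hypothetical tiers)

def audit_flags(record: AISystemRecord) -> list[str]:
    """Return the simple audit questions this system currently fails."""
    flags = []
    if not record.plain_explanation:
        flags.append("cannot explain simply how it works")
    if not record.has_kill_switch:
        flags.append("cannot switch it off if it starts to go wrong")
    if record.risk_tier == "high":
        flags.append("high risk: needs a pre-deployment safety review")
    return flags
```

Even a record this simple answers Jonathan's three questions (where is it, can we explain it, can we switch it off) for every system in the business.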
That is an interesting perspective. You focused there on AI audit, so in terms of the AI life cycle we are looking at the end: the product has been created, and we are trying to work out whether it is working the way it should and whether we can explain it to people, with proper checking processes. Jonathan, what do you think organisations should be doing at the outset to make sure that artificial intelligence is properly calibrated? I think this ties also to where the law is going: what do you think the expectation will be? Well, on the law: in 2021 we are going to see bold new laws in Europe which, from my perspective, are going to be as ground-breaking, if not more ground-breaking, than the GDPR, in the area of artificial intelligence, and which are going to start to risk-rate AI in Europe. Now, we do not know precisely the parameters of that, but I think it is really common sense: if you are bringing a piece of AI into your business which is going to have a serious impact on a customer, which may be using their really sensitive information, which may deny them a product or may have a serious impact on their life, then obviously that is high risk, and you need to make sure that before you start using it, before deployment, you are asking some very careful questions about the safety of that product.
So I think in this area people often get lost in the deep complexity of the technology, but actually, if you step back from that and take a relatively simple risk framework and deploy it, it will really help, and I think we will start to see the invisibility around this type of technology removed from businesses. When you are talking to companies who have used this technology for many years, there is often a concern as to even how they are using it or where it is, which is why mapping it is so important. So it is very simple steps to start off with, I think, that will bring huge progress. Carissa, do you agree with that? Where do you think organisations should be focusing their attention in terms of implementation of AI? I definitely agree with that. I think there are two things that are very important. One is that we should not treat the general population as guinea pigs.
We used to do that in medicine, and we do not do it any more, for very good reasons. Now, if we want to test a drug or a vaccine, we ask for volunteers, we compensate them, we inform them and so on, and we should do the same for algorithms, especially algorithms that can change people's lives in very dramatic ways; they should not go out into the world without having been properly tested. So, I would like to see more testing and, in particular, randomised controlled trials for algorithms, at least for certain kinds of algorithms. So that is one thing.
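As a rough illustration of what a randomised controlled trial for an algorithm could look like, here is a minimal sketch: volunteers are randomly split between the existing process (the control arm) and the new algorithm (the treatment arm), and outcomes are compared. The decision functions and outcome measure are hypothetical placeholders.

```python
# A minimal sketch of a randomised controlled trial for an algorithm.
# Volunteers are randomly assigned to the existing process (control) or
# the new algorithm (treatment); outcomes are then compared. The decision
# functions and the outcome measure are hypothetical placeholders.
import random
import statistics

def run_trial(volunteers, existing_process, new_algorithm, measure_outcome):
    volunteers = list(volunteers)
    random.shuffle(volunteers)  # random assignment is the key step
    half = len(volunteers) // 2
    control, treatment = volunteers[:half], volunteers[half:]

    control_outcomes = [measure_outcome(v, existing_process(v)) for v in control]
    treatment_outcomes = [measure_outcome(v, new_algorithm(v)) for v in treatment]

    # Compare average outcomes; a real trial would also test statistical
    # significance and look at effects on vulnerable subgroups.
    return statistics.mean(control_outcomes), statistics.mean(treatment_outcomes)
```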
The second thing that I think is very important when it comes to AI, and tech in general, is to create a chain of responsibility, so that from the outset everybody knows what their job is and who to go to if something goes wrong. Then, when something does go wrong, it is not the case that everybody says, "well, it was not me" and we end up blaming the AI. Everybody has a job, everybody should know what their job is and what their responsibility is, and what to do when things go wrong.
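One minimal way to record such a chain of responsibility, sketched here with hypothetical stages and roles, is an explicit ownership map for each stage of an AI system's life cycle:

```python
# A minimal sketch of a chain-of-responsibility manifest for an AI system,
# so that when something goes wrong nobody can say "well, it was not me".
# The stages, roles and escalation paths are hypothetical placeholders.
RESPONSIBILITY_CHAIN = {
    "data_sourcing":     {"owner": "Head of Data",     "escalate_to": "Chief Data Officer"},
    "model_design":      {"owner": "Lead ML Engineer", "escalate_to": "Head of Engineering"},
    "deployment":        {"owner": "Product Owner",    "escalate_to": "Chief Technology Officer"},
    "incident_response": {"owner": "On-call Engineer", "escalate_to": "Chief Risk Officer"},
}

def who_is_responsible(stage: str) -> str:
    """Name the owner for a life-cycle stage; a missing entry is itself a finding."""
    entry = RESPONSIBILITY_CHAIN.get(stage)
    if entry is None:
        raise KeyError(f"no owner recorded for stage '{stage}'")
    return f"{entry['owner']} (escalate to {entry['escalate_to']})"
```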
Another thing that I think we will start to see, and maybe 2021 is too early, but it is going to be the beginning of the trend, is that instead of marketing cutting-edge AI and cutting-edge tech, certain companies will start valuing distinctly human products and services. We have already seen it with education: it used to be the case that the better schools were the ones that had iPads and computers and all of that, and now the fancier schools are the ones that can do without technology and actually focus on human beings, on whether they are good teachers, on the content and so on. So, I think we will start seeing some of that in other areas, although maybe 2021 is too soon, but … That is a really good, really interesting point, because I think one of the questions that Europe is putting to businesses is: please legitimise why you are using a piece of artificial intelligence; why is it needed, what is its purpose? If you ask that question and the responses are not convincing, perhaps you should not be deploying it, and perhaps the existing methods might be good enough. Clearly, with a looming or existing recession in many economies around the world, people are looking for efficiency, for profits, for fast growth, and obviously AI offers that. The problem is that that fast growth may, in the medium term, come at the cost of customer trust and longevity within your business.
So I think there are some really big strategic questions to be asked here: do you want to make a quick buck, or do you want a long-standing relationship of trust, where technology is used very transparently and actually wins more customers rather than losing them? I have to say, I really like the idea of these randomised controlled trials as well: stress testing, getting customer reaction. The point about transparency is that you should not mind telling somebody what is happening, because they should be comfortable with it. So why not, as Carissa said, go to your customers and ask them: if we decide whether you get a mortgage in this way, with this type of algorithm and this type of data, would you be happy with that? See how people respond; they might be, or they might not be, and then you can adapt your technology to be sympathetic to that. So, I love that idea; I think it is a great idea. Carissa, there are obviously regulatory sandboxes, just to take one example.
So, you create your plaything and you take it to the regulator and say, "does this work from your point of view?" Now, do you think the focus for companies should be going to the regulator first or, as Jonathan has just described, do you think we need to go to all of us, to get the input and rich perspectives of a really broad, diverse net of people, to be able to make the decision? I think they need a lot of input from a lot of people, from customers and ordinary people to lawyers and ethicists, for a variety of reasons. One is that the regulator is probably overwhelmed and might not have a very careful look at whatever has been done, and might then change their mind in the future and decide that it actually was not a good idea, even though at the time they did not raise an objection. That is one thing, but also, as Jonathan mentioned, we are going to see a lot of new regulation of AI and data in the coming years. So, if your company wants to stay ahead of the curve, you should be thinking in ethical terms much more strictly than current regulation demands. I agree, and I think there is a lot of inspiration to be drawn from ESG and sustainability principles here.
So, if you are buying a product: has it been sourced in the right way, and have the people who worked on it been remunerated and treated in the right way? If we take that into the world of artificial intelligence: if you are building a piece of complex technology, have you sourced your data in the correct way, have you trialled it in the right way, have you thought about safety? If those answers are no, no, no, then as a company do not buy it; go elsewhere. So, I do like this idea of technology ethics becoming almost an extension of ESG principles. I think that is a great way to think about things, and unfortunately, at the moment, technology is often left out of the ESG debate a little bit; I am hoping that this year that starts to change.
That makes sense to me. In the last year we saw that sensitive data, health data, biometric data, all of this extremely and potentially toxic information, was shared with a lot more willingness because we are trying to battle a pandemic. But it feels like in 2021 there might be more of a shift in focus towards: if you are going to partake in this mass data sharing, how are you going to be responsible about it, and who needs to be involved? So all of that resonates a lot. Just to close this discussion: if you could give me one takeaway for 2021, one thing that you feel organisations and individuals can do to preserve their privacy, to protect data and to make this a more hopeful year, I am really interested in your views. Carissa, maybe you go first.
Maybe, in summary: we have been thinking about personal data only as an asset, and we have to start thinking about it also as a liability, as a source of risk, and that is going to change the way we treat personal data, both individually and collectively. Great, thanks. I think, from my perspective, it is about bringing these concepts and theories to life. A lot of businesses have thought about privacy in the context of policies. They have breathed a sigh of relief after 24 months of implementing the GDPR, and they think it will go away. The challenge now will be to take those policies and make sure that what they say, around transparency, security, deletion, retention, all of these points, is actually happening. If you are doing that, as many really good businesses now are, there is going to be no issue; again, it builds up trust. But if your policies say something and you are not actually doing it, I think that is going to be a real issue with regulators. The pandemic has, in a way, slowed down investigations and fining activity in this space, for obvious reasons. As we start to come out of it, that imperative of making sure that businesses are respecting the rights of individuals will just come back.
It will be strengthened, I think, so getting your house in order now is very important. Great, well, thank you both. I feel we have all been enlightened by that discussion, and also left hopeful, so thanks for your time. Thanks so much.
Thank you so much.