Justice by Design: How Emerging Technologies, Privacy, and Social Justice Collide
Hi there everybody, welcome to our CPGE EDI webinar. I am thrilled today to introduce you to my dear friend and world-renowned privacy expert, Ms. K Royal. I guess I could call you the Honorable K Royal since you are a licensed attorney, but K has the entire alphabet soup of privacy qualifications and is a prolific privacy author. She has published over 60 news articles and is a featured privacy tech columnist for the Association of Corporate Counsel. She also initiated the Association of Corporate Counsel's Women in the House programming supporting women in-house counsel, and the ACC elected K as their Member of the Year in 2015 in recognition of her extensive leadership. She is an executive board member for the Center for Law, Science and Innovation at Arizona State University as well as an adjunct professor of privacy law, and she's won just about every award you could possibly imagine for pro bono service and for her outstanding work as an attorney and a privacy expert. I'm super excited to hear your presentation today, K. Thank you. Welcome, K.
I'm wondering if this shows everyone how you supplement your income by taking prolific bribes. No, thank you so much for having me. Darra was one of my absolute favorite law students back when I worked at the Arizona State University College of Law running student life and the pro bono program, and she actually impressed everyone on day one, when one of the most famous and most argumentative professors ran a sample Socratic session for the entire entering class on welcome day. Darra gave it back to him just as good as she got it and caught the attention of everyone, from the professors to the staff. So, I love being here, but okay, this isn't the K-and-Darra-love-each-other show. Let's jump in. I'm going to share my screen, and this is absolutely an open
conversation so if anyone has any questions please please do not hesitate to ask any questions that you might have. Jump in, interrupt us, chat, whatever. Darra please feel free to do the same. I have no intention of killing everyone with death by PowerPoint whatsoever. Okay, so today she introduced me, but my email addresses are on the presentation as well. That way you can make sure you can get hold of me if you have any questions whatsoever.
I also co-host a podcast that is actually very popular in privacy. Today we released an emergency session on standard contractual clauses out of the EU, so I love it when things are published within 24 hours; it means I do minimal editing work. So, let's give everyone kind of a basics in what privacy is. And if everyone wants to speak up and raise their hand and say yes, we already fully understand privacy, then we can skip doing a grounding in it.
Otherwise I'll give you a little bit of an overview, and hopefully this will put it in perspective as to why privacy matters. It's actually the fastest-growing field both in and out of law right now, and it's my understanding I'm speaking to librarians and students, although that's not what you call them. Data custodians. We have library students and information science students. Information professionals is sort of the big term for our field. So this should really absolutely help.
So you may hear the terms personal data, personally identifiable information, personal information. So, PI, PII. What does it all mean? Well, when you look at it overall, all of it kind of falls into one category, although the terms do have different nuances, but essentially it means information that's linked to a person. And there is absolutely nothing in the definition that says it has to be private: it can be business personal information, it can also be publicly available personal information. There's nothing that says it's confidential. So a lot of times when I talk to salespeople at companies they'll say, well, we don't have personal information, so we don't need to do all this.
Well, who do you talk to? Who do you call? Who do you email? Who signs the contract? All of that is personal information, and it doesn't matter if you have the person's name. That is a critical thing to understand. You don't have to have the person's name in order for it to be personal information. It merely has to be information that relates to, or is capable of being related to, an identified or potentially identifiable person, and I'll give you a wonderful example: a very common practice is de-identifying photos of people. Y'all tell me if you can tell who this is. It is personal information, even if it is commonly de-identified.
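To make that identifiability point concrete, here is a minimal sketch of the k-anonymity idea, with invented field names and toy data: even with no names anywhere, a unique combination of quasi-identifiers singles a person out.

```python
from collections import Counter

def k_anonymity(records, quasi_identifiers):
    """Smallest number of records sharing one combination of
    quasi-identifier values; k == 1 means someone is unique,
    and therefore potentially re-identifiable."""
    combos = Counter(tuple(r[q] for q in quasi_identifiers) for r in records)
    return min(combos.values())

# Toy "de-identified" hospital dataset: no names, yet the last
# record is unique on these three fields, so k comes out as 1.
records = [
    {"zip": "85001", "age_band": "30-39", "injury": "fracture"},
    {"zip": "85001", "age_band": "30-39", "injury": "fracture"},
    {"zip": "85004", "age_band": "60-69", "injury": "burn with tattoo"},
]
print(k_anonymity(records, ["zip", "age_band", "injury"]))  # → 1
```

The smaller k gets, the easier re-identification becomes once outside information is linked in, which is exactly the hospital-photo scenario described here.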
And this actually matters when it comes to things like HIPAA, when people say, I'm taking pictures of patients in hospitals, but you can't identify who it is, so it's not really protected under HIPAA. Well, is it a unique injury? Are you at a hospital that only takes certain types of cases? Is there an identifying mark on the individual, such as a tattoo or a particular mole or birthmark? If you can identify and narrow down who that person is with information that is available to you, then it's personal information, and nowadays information that is available to people comes from all over the place. You don't have to just look at the databases within your own company; there's information freely available online or from other companies. So that really impacts it. Now, this is the law, and I promise not to kill you with the legal stuff, but you kind of need to understand that in the United States, the word privacy is not in our Constitution.
And people think it is, but it's not. It was not until the Griswold case in 1965, I believe, that the Court officially established that the penumbra of the Bill of Rights, specifically the First, Third, Fourth, Fifth, Ninth, and Fourteenth Amendments, actually does protect your privacy: you're not being forced to quarter soldiers, therefore you have privacy in your home. And thank you, Darra, just completely keeping us updated with notes, I love it. So these emanations of privacy, thank you Darra, that are in the penumbra have actually given us a constitutional right to privacy. However, there are 11 state constitutions that include privacy
in the constitution, and California is one of them. Although California's was voted into the constitution, Arizona's was actually in the constitution when Arizona became a state. It was put in there deliberately. There are also federal laws: the Electronic Communications Privacy Act, the Video Privacy Protection Act. I love to throw out the video privacy one especially. That's an example of privacy laws coming to fruition because of something happening in the world, and quite often laws are passed in reaction to public outrage over something. The Video Privacy Protection Act came about because video rental information was released, including the video rental history of Supreme Court nominee Robert Bork.
Therefore, it became a law that you have to protect the information. One of the other things to pay attention to: you'll hear a lot from the Federal Trade Commission. They're a US agency that you would not think is actually protecting privacy, but there is Section 5, which prohibits unfair or deceptive acts or practices in commerce. And so what they particularly look at are the online privacy notices of companies, and if companies say they do something and they don't, or they say they don't do something and they do, then it can become a deceptive trade practice. It used to be that privacy notices were very wordy, very flowery. Attorneys
would get involved and say, we protect your data to the highest protection imaginable. Well, as you can imagine, that's not true, because all security can be broken. And so now you need to make sure that your privacy notices are merely factual. One of the interesting things under the California Consumer Privacy Act, and the California Privacy Rights Act that takes effect in 2023, is that companies have to disclose whether or not they sell or share your data. Now, sell is very broad. It does not mean selling your data for money. It can mean exchanging your data for something of value, and a lot of companies say, well, we don't sell the data, so we're not going to put it on there.
They have to put in their privacy notice whether they do sell the data or whether they don't, and if they do, who they sell it to, given the very broad definition of exchange of value, not necessarily money. And as you can imagine, if they get it wrong, the FTC can come after them for deceptive trade practices. So it's a very interesting conundrum. There's also tort law, which I won't get into, but people can sue for invasion of privacy, intrusion upon seclusion, and other highly offensive conduct; there are all kinds of things privacy is related to.
Then of course the main federal acts here in the US: HIPAA, which is the Health Insurance Portability and Accountability Act; FERPA for education; GLBA, the Gramm-Leach-Bliley Act, for finance; and then the Privacy Act of 1974 for government data in the public sector. One of the interesting things to realize is that the very first privacy law, not just in the United States but in the world, is recognized as being from 1970, I believe, in the German state of Hesse, and the first national law was Sweden's, but it was at the same time that the US started passing privacy laws. And so people say the US doesn't have good privacy laws because we're unlike the rest of the world, and that's not true. There was actually just an Italian case that came down about a doctor who published an article about a patient in a medical journal, and the Italian enforcement agency came down on him because he didn't get the patient's permission, even though he didn't identify the patient. Europe is all up in arms about this enforcement action: did the doctor really violate privacy? Here in the US, there would be no question.
So, as a matter of fact, if the doctor had tried to publish it in a journal here in the US, the journal probably would have de-identified it for him and would have asked for signed permission from the patient to use it. So it's interesting to see how US privacy law butts up against global privacy law. Let's move on a little bit; we've talked about this a little already. The GDPR is Europe's General Data Protection Regulation. It is actually driving privacy laws worldwide. Brazil actually modeled theirs after the European Union's GDPR, and now that the UK has exited the European Union, the UK has its own version of the GDPR. Canada has privacy laws, although people like to say they have no teeth because they don't do a whole lot of enforcement; they actually have a proposed privacy law now where they're looking at doing a lot more sanctions for violations, having appeals, and being a lot more formal. And here in the United States, Virginia actually passed a law just this past legislative season that's modeled after the GDPR; it even uses the terms controller and processor.
So with all that, let's start diving into the specific pillars of privacy. What do you need to do? And again, the aim of this is to get to the artificial intelligence and the social justice and how this all works. So if I go too fast, please don't hesitate to tell me to slow down; apparently I'm one of the fastest-talking Southern people most folks have ever met.
So just feel free to say, whoa, stop, explain that a little bit, and we're good. Darra, jump in if you've got questions. So, on to the privacy pillars, and these are pretty standard things that appear in all privacy laws. You might not be able to comply with or know all the privacy laws in the world, much less the ones in the US, but at least you can understand their general principles. So transparency and
notice means exactly what it says: tell people what you're doing with their data. Now, my children, who are much older than I would like to admit, used to ask why we write privacy notices if we know people don't read them. Raise your hand. I don't even have to see you raise your hand if you read the privacy
notices of all the websites you go to, or the privacy notices of all the apps you download. And what if you reject certain permissions in the app you download? Do you have your phone set for automatic updates of those apps? Because typically they go from around 12 permissions you grant when you initially install them to an average of about 120 after updates. So even though you might have said no to some permissions, if you're set to automatic updates, you've probably granted them anyway. So transparency and notice: why do we tell people? Because it's the only way we have of telling people. I mean, I'm sorry, absorb that very simple statement: the only way we have of giving notice and telling people is to give notice and tell people.
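Back to that app-permission point for a second: a tiny sketch (the permission names below are invented) shows how you could diff an app's declared permissions across versions to see what an automatic update quietly added.

```python
def new_permissions(installed_version, updated_version):
    """Permissions present after the update that the user never
    saw at install time."""
    return sorted(set(updated_version) - set(installed_version))

v1 = ["CAMERA", "NETWORK"]                                 # what you agreed to
v2 = ["CAMERA", "NETWORK", "CONTACTS", "LOCATION", "MIC"]  # after auto-update
print(new_permissions(v1, v2))  # → ['CONTACTS', 'LOCATION', 'MIC']
```

Nothing in the auto-update flow surfaces that diff to the user, which is the gap being described here.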
So it's a broken system, but it's one of the only options we have. Consent is very touchy. You have to give people full information in order to get consent from them, and if you're going to gain consent for something, then you also have to make it equally as easy to withdraw consent. Giving consent can be something as easy as checking a box online, but when you want to go back and opt out or withdraw your consent, it's usually write us at this email address, and, I don't know, send a federally tracked package that requires a signature and include three copies. It's not easy, usually. And so you have to make it easy. Individual rights: we've always had rights under HIPAA, ever since it was passed, and a lot of laws now are starting to include them. One of the biggest ones we're starting to see is the right to deletion, so you can generally request that your data be deleted. And this is very important
both to archivists and to data analysts: can you delete data if doing so is going to break the database? No; there are exceptions to deletion, including that if the law requires you to keep the data, you can tell them no, we're not going to delete the data. So a lot of things there. Privacy officers: having someone that knows privacy, whether it's an attorney or a non-attorney. I've been in both
roles; I'm a licensed attorney, but I've been in non-attorney roles as well. Make sure someone actually understands privacy. Vendor oversight: make sure if you're going to outsource either the customer data that you process or your own data, you have oversight over your vendors. Laws are starting to require specific contracts in place with very specific language, and if you've got, you know, five different laws with five different contractual requirements, you're going to have five different amendments on your contract. So why is it hard to get a contract in place with a customer or an entity or an outsourcer? Because of all the laws that are in place; so, you know, darn all those attorneys.
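The deletion exception mentioned a moment ago can be sketched as a minimal request handler; the record types and retention reasons below are invented purely for illustration.

```python
# Record types a law requires us to retain, with the reason we give back.
LEGAL_HOLDS = {
    "tax_record": "statutory 7-year retention",
    "audit_log": "financial-audit retention",
}

def handle_deletion_request(record_type, database):
    """Delete on request, unless a legal retention obligation applies."""
    if record_type in LEGAL_HOLDS:
        # Law requires keeping the data: deny, and say why.
        return f"denied: {LEGAL_HOLDS[record_type]}"
    database.pop(record_type, None)
    return "deleted"

db = {"marketing_profile": ["email prefs"], "tax_record": ["2020 return"]}
print(handle_deletion_request("marketing_profile", db))  # → deleted
print(handle_deletion_request("tax_record", db))         # → denied: statutory 7-year retention
```

The key design point is that a denial still answers the requester with the legal basis, rather than silently ignoring the request.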
Data purpose and collection limitation means only collect data for the reason you need to collect it, and it also means don't use it for any other purposes. A very common example of secondary use of data is when companies collect data and then, as they do software improvements, they test the software on the data they have. Well, the data they have was not collected for purposes of testing software. If they collected the data in order to mail a widget to someone, well, that's why they
collected the data. So purpose limitation would say you can't then use it to test new software unless the second purpose is very closely related to the first, and who in their right mind would think testing software is closely related to shipping a widget? So you might be able to explain it, you might be able to make those connections, but you shouldn't have to. So what do you need to do? You need to disclose in your notice that you're going to test your software on their data whenever you have software improvements, and nobody reads the notices anyway, so there you go, now you have your permission. Data integrity is just making sure data can't be changed as it goes: you keep data logs, you make sure it's protected during transmission and storage. Cross-border
transfers and international requirements cover getting data from other countries, and the data doesn't even have to physically move. If there is a server in Germany and you're in the US and you access that server, even if you don't download the data, you don't store it, all you do is look at it, that is a cross-border transfer of data, because looking at it is considered processing. Processing is anything you do to data, including deleting it or anonymizing it. And then security: you've got to have security with privacy, and you've got to have privacy with security. No security is infallible; it's going to break, and when it does, what's going to get you in trouble is whether or not you abided by the privacy requirements.
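One way engineers can encode the purpose-limitation pillar described above, sketched here with invented class and purpose names, is to tag every dataset with the purpose it was collected for and refuse any processing declared under a different purpose.

```python
class PurposeViolation(Exception):
    """Raised when data is used for something it wasn't collected for."""

class Dataset:
    def __init__(self, rows, collected_for):
        self.rows = rows
        self.collected_for = collected_for

    def process(self, purpose):
        # "Processing" is anything done to the data, even just reading it.
        if purpose != self.collected_for:
            raise PurposeViolation(
                f"collected for {self.collected_for!r}, not {purpose!r}")
        return self.rows

orders = Dataset([{"name": "A. Customer"}], collected_for="order_fulfilment")
orders.process("order_fulfilment")       # fine: the original purpose
try:
    orders.process("software_testing")   # secondary use: blocked
except PurposeViolation as e:
    print(e)
```

A real system would allow closely related purposes too; the sketch just makes the default-deny idea visible.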
Do you have more data than you should? Is it not protected, not redacted, not anonymized? That's where you get in trouble. Think of all the data breaches that you've heard of over the years. You hear of the CEO being fired, you hear of the chief information security officer being fired. You don't actually hear of the privacy officer being fired; that's because most of the time they ignore us anyway, because nobody wants to hear about privacy. It's kind of like insurance: nobody wants it, it's a necessary evil, but it is there for our protection. You just need to be aware of how it impacts you.
And one of the biggest things before we jump into AI is that there is a difference between a privacy breach and a security breach. A privacy breach is anything impacting the data, and it doesn't have to be electronic: if you mis-send an email, if you lose a briefcase with patient records, those are all privacy breaches. A security breach is actually breaching the information security, but it might not ever touch personal data. If you don't have personal data in the system that was breached, it's probably not a privacy breach, but it would be a security breach. When you have them both, that's when you hear about all the millions of records that are
jeopardized and all those free credit checks you get for the next 20 years or something. Okay, so let's talk about artificial intelligence and machine learning. Now, what is AI? Artificial intelligence, or machine learning, is a machine that displays intelligent behavior. It can reason, it can learn, it can process sensory perceptions. It involves tasks that have historically been done by people but can now be done faster, better, and it can actually learn things from what it processes. And there are two types: there is narrow AI, and then there's artificial general intelligence, and they mean exactly what they sound like. The narrow is limited to a very specific application,
and the general has general applications. The problem is, it happens within a black box, and you can't explain it. So if, under the privacy laws, people have the right to ask you questions about how you process their data, or how you came to a specific conclusion, or what exactly you're doing to their data, you can't answer it if it's AI, because you don't know. It happens outside our comprehension; we don't know exactly what the machines are doing. And so that is a very, very touchy area when it comes to privacy and individual rights. The other touchy thing is blockchain. Darra is a blockchain expert, and blockchain is considered one of the most secure ways of processing data; however, you can't change it. So how do you handle individual rights,
deletion, amendments, corrections, if you can't actually change the data because it's immutable? So technology is both our friend and our enemy. It gives us the ability to do certain things, to innovate, to achieve amazing things in the world, but it also creates complications when it comes to very basic privacy protections. And it's no surprise that the first privacy laws were passed in 1970, with some earlier ones in some very specific areas right before that, because the first mass-marketed computer was sold in the 1960s.
And so all of a sudden, everything was different: how you process data, how you compute, how you share data, how data is stored. And that started the whole push for privacy protections, and that was only a little over 50 years ago. So it's something that we're just now starting to get into. So what are some of the ethical issues in AI? Well, some of the general ethical issues people raise: unemployment. This is something we've always wondered about with machines: what happens to the jobs when machines take them away from people? Now, that worry has been around for a long time, and it hasn't really materialized; as machines replace people in jobs, more jobs open up for people dealing with machines. Data analysts are a perfect example of that. Inequality: how do you distribute the wealth created by machines?
Humanity: how do machines affect our behavior and our interactions? And I'll show you a whole bunch of good examples of these. Artificial stupidity: how do we guard against mistakes? That's where a lot of the social justice issues come in, the mistakes that are being made. The common phrase is garbage in, garbage out. But what if it's not garbage that's going in? That's one of the social justice issues: the facts that are actually going into the artificial intelligence are real facts; it's not garbage.
The problem is the societal norms that created those facts. One example: it is an absolute fact that there are more African Americans in prison than there are Caucasians. That's the fact. The question behind it is, why are there more African Americans in prison than there are Caucasians? And that comes down to a whole lot of other social justice issues that we have to deal with and need to fix as a society. But unfortunately, those are the facts that go into the AI.
And so if you've got AI watching (excuse me, I have four dogs, and apparently they decided they wanted to be part of the presentation, sorry). So if you're looking at security in a mall or in a store, back when those were open and people would go to malls and walk around, the AI running the cameras would tell staff to watch the African Americans in the store more closely than the Caucasians, because African Americans are supposedly more likely to break the law. Not true. It just happens to be our societal injustices coming through in the data, and so into the biases that go into AI. And that's where we really need people who carry this sense of justice with them. We need more minorities working in programming and in AI and
in the technology to make sure that we're aware of the biases that go in, because once you train AI according to those biases, you can't really untrain it. So that's the question about how we fix artificial stupidity, and that's one of the social justice issues. How do we keep AI safe from adversaries? That's very hard. Some of the most corrupt countries in the world, and I say corrupt because that's the gauge they're measured on, there are publications that rank countries by their level of corruption, and part of what figures into corruption is that they see the security protections on competitors' or other entities' systems as a challenge, and they want to beat it.
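The "real facts, unjust pattern" problem above can be sketched in a few lines. All numbers are invented, and the model is a deliberately naive frequency counter, but it shows how a system trained on skewed stop records learns the policing pattern, not any true offence rate.

```python
from collections import Counter

# Historical stop records. Assume the true offence rate is identical for
# both groups, but group_a was historically stopped four times as often.
stops = ["group_a"] * 80 + ["group_b"] * 20

def learned_suspicion(stop_records):
    """Naive 'watch list' model: suspicion score = share of past stops."""
    counts = Counter(stop_records)
    total = sum(counts.values())
    return {group: n / total for group, n in counts.items()}

model = learned_suspicion(stops)
print(model)  # {'group_a': 0.8, 'group_b': 0.2}: the old bias, reproduced
```

The input is factually accurate history, yet the output simply replays the bias that produced that history, which is exactly the untraining problem described above.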
And that's one of the legal things that we take into account when we're doing law in China: if we have a law firm in China, we just have to understand that that data will be breached, because that's a cost of doing business in China. It's a way of life there, and so it's just one of the things you take into account, and it's one of the first things Chinese law firms will tell you when you hire them: just know that your data is going to be breached by using us, because that's how it works here in China. And so, how do you protect against unintended consequences?
You have to make sure that you're aware of the biases in the data that's going into the AI. How do you stay in control of a complex intelligent system? That is very difficult, and I'm looking for the experts who can fix that one. And then, how do you define the humane treatment of AI? Which is an interesting question of robot rights; I have to admit that's not one I get into very much. Any questions on this part so far? Darra, does this spark any questions with you?
All the questions. Okay. Well, that was a good grounding for what the conversation is going to be, so okay, 28 minutes, that wasn't bad. I'm really looking forward to hearing you talk about how privacy interfaces with all of this, specifically from that social justice perspective. Then I am going to go straight into, you know what, let's go to the next slide. First, just to give you a little peek at how the world is treating AI, and some of the laws that exist. There are countries that have discussed restrictions on the
use of lethal autonomous weapons. That is a very controversial use of AI. There are regions that have permissive laws in place for autonomous vehicles and how those are used. And then I want to make sure that we talk about biometrics before we actually jump into the AI. So, I love this last one here: that is a face, and if you look at it sideways, it says wire.
And so this is one of the very easy ways to understand that biometrics may not always be what you think they are. So, love that. And there are biometric laws; some of them cover private companies only, some of them also govern public entities, and one of the biggest controversies is law enforcement use of facial recognition. And so, Illinois has BIPA, the Biometric Information Privacy Act.
Washington also has a biometrics law, Texas has a biometric law, and some of the new state privacy laws include biometrics in the definition of personal information, so it is protected that way, though typically limited to biometrics used to identify a person. The controversies with law enforcement use of biometrics, and this is AI recognizing people, go back to the biases we talked about. If the AI was trained using the available student population, and the available student population at the time was overwhelmingly one gender versus another, one race versus another, then your AI is going to be trained on certain things, and so it can misidentify people, it can focus on the wrong people. And when police are using this in real time to identify perpetrators at a scene, that's something you need to be careful of, and we have some great news stories around that, so let me move over to this. One of the professors that I work with,
he and I teach a course on privacy, biometrics, big data, and emerging technology. And we actually did a presentation on AI; just to give you some history here, I'm not going to give you all of this, and I did send this so it can be shared with y'all as well. So AI really started right before the 1960s, but the boom has been this past decade or so, where you've really seen it take off. This is a PDF.
Let's go down to these examples, so here we go. How do you vote? 50 million Google Street View images give a clue. So, is this showing on your screen? I just want to verify that it moved to the right screen. Yes. Okay. Perfect. Thank you very much. So, what vehicle is most strongly associated with Republican voting districts? Extended-cab pickup trucks; for Democrats, it is sedans. All of this came from AI. Your Roomba may be mapping your home and collecting data that could be shared.
So I just got a Roomba, but for months before that I refused to do so, because that's what it does: it maps your house, and that data can be breached and accessed. When AI can't replace the worker, it watches them instead, so cameras watch workers' stations and see if they can meet their quotas. Can they violate your privacy? Everything from speakers to water meters is sending information to the cloud.
There's a murder trial that was testing the boundaries of privacy in the home. Amazon Echo, Alexa, Google Home, all of those are used to actually capture transcripts. People can call the cops using them even if they don't intend to do so, and randomly I will walk through my house, and my husband's name is Tim, and I'll just say, stop Tim, stop Tim, don't break my leg, oh, I'm bleeding, you hurt me, you're killing me. He's like, shouldn't you be saying this with more emotion? No, they just pay attention to the transcripts, not the actual audio recordings. And yes, they have been used in trials.
So that is it. Alexa, call the police was another one. There's AI that can sense movement through walls, so when police are assessing houses and whether or not people are there, they can actually use AI for this as well. It is helpful in good ways too, don't get me wrong here; AI is very, very positive. But for as many positive uses, there are negative uses. And until we can offset
the outcomes of the negative uses, we have to be fair as a society. Hackers are real obstacles for self-driving vehicles; a lot of information comes from there, and they can hack them. By the way, yes, they can hack embedded medical devices, so your heart defibrillators, your insulin pumps, things like that can be hacked and changed. AI also works in a lot of areas such as shopping, so there's an age-old story that people who own Macs would be shown hotel prices and car prices a lot higher than people who own PCs, because AI told the sellers that people who own Macs had more money to spend. Google's Project Nightingale got the personal health information of millions of Americans and used it. Cameras with AI brains, and we'll talk about how that goes into deep fakes, can mine data much more specifically than plain software capturing data, because the AI can actually extract information from it. Your bosses can read all of the company emails, and AI can actually identify people who are potential whistleblowers based on the language they use. It doesn't have to be language that says, I'm going to blow the whistle on this or I'm going to report them to a regulator; it can pick it up many, many conversations before that, just based on the wording they use and the types of emails they send.
You can identify tweets sent under the influence of alcohol, so AI has learned what a tweet looks like, what kind of words are there, what kind of syntax you use if you're drunk tweeting. Facial recognition is what we were talking about before; these are just some of the points on the face that it pays attention to, and how it does facial biometrics. A class action lawsuit was filed against the startup Clearview AI.
And this was in Illinois under the Illinois act I was talking to you about, and it's about how they use facial recognition and whether or not they got people's permission to use their faces for their facial recognition algorithms. A biased algorithm led to the arrest of an innocent person for a crime he did not commit. This was in Michigan, and the facial recognition, based on the AI and the programming, identified the wrong person. So when you talk about social justice issues, that's about as unjust as it gets. Can it predict your personality type by watching your eyes? It can, based on the information it takes in.
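One way to audit for the kind of skew behind that wrongful arrest, sketched with fabricated predictions purely to show the computation, is to break a matcher's error rate out by demographic group.

```python
def error_rate_by_group(labels, predictions, groups):
    """Fraction of wrong predictions, computed separately per group."""
    stats = {}
    for y, y_hat, g in zip(labels, predictions, groups):
        correct, total = stats.get(g, (0, 0))
        stats[g] = (correct + (y == y_hat), total + 1)
    return {g: 1 - correct / total for g, (correct, total) in stats.items()}

labels      = [1, 0, 1, 0, 1, 0, 1, 0]
predictions = [1, 0, 1, 0, 0, 1, 1, 0]      # two mistakes, both on
groups      = ["maj"] * 4 + ["min"] * 4     # the under-represented group
print(error_rate_by_group(labels, predictions, groups))
# → {'maj': 0.0, 'min': 0.5}: same model, very different error rates
```

An overall accuracy figure (75% here) hides exactly the disparity that matters when police act on the matches in real time.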
It can also tell your political affiliation, and then that connects to targeted behavioral advertising, and they can specifically target individual people with the news stories those people are fed through their social media or their newsfeed, stories that would either discourage or encourage them to go vote. If the news stories on your social media or your newsfeed, on your laptop or your phone, show that the polling lines are backed up for three miles and people are miserable and passing out from the heat, are you going to actually go physically check to see if that's true, or are you going to choose not to go to the polls and vote? That actually does happen. I may be paranoid, but paranoid doesn't mean you're wrong. How China's using AI to fight the coronavirus: very interesting ways that they implemented that, very successful processes in most cases. Their high-tech police use facial recognition and a lot of AI in a lot of the activities they engage in. Emotion recognition algorithms: computers know what you're thinking.
Children's emotions as they learn: can you watch children actively learning and use AI to tell whether or not they're paying attention, whether or not they're engaged? Boost lip reading accuracy. Predict your personality, that was another one. Detecting deception may soon spot liars in real courtroom trials. Now take that to the next step.
If you're using AI to detect whether or not a witness is lying, that means you're using real-time AI in a courtroom. What if your court is wrong? What if your AI is wrong? What if they're not lying? Is this information they're going to give to the jury? Is it information they're going to give to the judge if it's not a jury trial? Is it shared with both prosecution and defense, or both sides of a civil trial?
And different issues like that. And yeah, Darra, I think you just brought that up. So this one about children learning and reading: how does that work with neurodiverse children? Again, it's not necessarily garbage in, it's just limited data in. Unless they're actually including neurodiverse children in these AI processes at levels equal to how they're using non-neurodiverse children, you're going to come out with the wrong conclusions, because they're not using a full data set.
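The point just made about limited data can be sketched with a toy example. All the numbers and the "attention score" below are hypothetical, invented purely for illustration, not drawn from any real study or product: a naive attention detector calibrated only on one group of learners ends up misclassifying engaged learners whose signals look different.

```python
from statistics import mean

# Hypothetical engagement scores (say, fraction of time gaze is on screen).
# Calibration data comes ONLY from one group of learners.
calibration_scores = [0.82, 0.78, 0.90, 0.85, 0.88]

# Naive rule: "attentive" means scoring near the calibration average.
threshold = mean(calibration_scores) - 0.15  # roughly 0.696

def is_attentive(score: float) -> bool:
    """Flag a learner as attentive if their score clears the threshold."""
    return score >= threshold

# Neurodivergent learners may show attention differently (e.g., less
# sustained eye contact) while being fully engaged. Hypothetical scores:
neurodivergent_engaged = [0.55, 0.60, 0.48, 0.65]

misclassified = [s for s in neurodivergent_engaged if not is_attentive(s)]
rate = len(misclassified) / len(neurodivergent_engaged)
print(f"Engaged learners flagged as inattentive: {rate:.0%}")  # 100%
```

The model isn't fed garbage; it is fed a partial picture, and every learner outside that picture is scored wrong.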
This one, the mind-reading AI, can see what you're thinking, and it can draw a picture of it. Pretty fascinating, but does everyone's mind work the same? I don't even know what they would use to do that. Surveillance cameras use computer eyes to find pre-crimes. Now, how many of us have seen the movie with Tom Cruise about predicting murders and crimes before they happen and preventing them?
Right. So that's another issue that we've got with the AI and the predictions: is there still always a human element? Brain-machine interfaces, forged authenticity. So now we're getting into deep fakes, and deep fakes are huge. I can't see a show of hands out there, but think about deep fakes and how many of you have had experiences either watching them or even making some yourself. There are very popular apps where you can replace your face in a video clip, put yourself into a famous movie clip or something like that.
And can you imagine the implications of this in courtrooms? If a video is introduced as evidence in a courtroom, do we have the right judicial measures in place to stop false information from being introduced as evidence? There are evidentiary procedures where you can challenge the authenticity of certain things, but this AI issue with deep fakes is pretty big.
And it's pretty significant. You are allowed to produce a copy of something if the original is not available, and I believe you can assess the original to see if there is a deep fake; some very, very smart people can go into the programming and assess it. But if it's a copy, because the original isn't available, can you tell that the copy is a deep fake? Can you prove it?
And if you can't prove it, do you have to let it in, or can you eliminate it from being evidence in a trial? Very, very controversial. And then the rest of these news items are just good things. I'll stop here on this one: AI-powered cameras become a new tool against mass shootings. Yes, AI was one of the things they used after the Boston Marathon bombing; they were able to identify people in the crowd and who was around, and match that with witness reports and security sightings and different things like that, and it was very helpful. It has also been helpful in
the school shootings that we've been having lately. So don't get me wrong, AI is very, very powerful, very much able to be used as a powerful tool. And I will stop the presentation there. Let me stop sharing my screen, and then, Darra, let's talk about some of the questions you got with this really brief but impactful insight into privacy and AI.
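One narrow slice of the authenticity problem discussed above can be made concrete. A cryptographic hash recorded at collection time (as part of a chain of custody) can later prove a copy is byte-for-byte identical to the original; what it cannot do is detect a deep fake when no trusted original was ever fingerprinted, which is exactly the gap the speakers identify. The file contents below are placeholders, and this is only a sketch of the integrity-check idea, not an evidentiary procedure.

```python
import hashlib

def fingerprint(data: bytes) -> str:
    """Return the SHA-256 hex digest of a file's raw bytes."""
    return hashlib.sha256(data).hexdigest()

# Original footage, fingerprinted when it was collected.
original = b"frame-data-of-original-footage"
recorded_digest = fingerprint(original)

# Later, two copies are offered as evidence: one faithful, one altered.
true_copy = b"frame-data-of-original-footage"
tampered = b"frame-data-of-altered-footage!"

print(fingerprint(true_copy) == recorded_digest)   # True: verifiably the same bytes
print(fingerprint(tampered) == recorded_digest)    # False: any alteration changes the digest
```

If the original was never fingerprinted, both outputs are unavailable, and you are back to the hard question in the discussion: can you prove the copy is authentic at all?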
Well, first I want to welcome all the questions from our attendees, because you and I can talk anytime about my questions. Absolutely. Well, you posted one earlier for the archivists out there: are we going to have to evaluate AI-generated records for authenticity before they come into your collections, and how will you do that? That's a question for you: how will you do that? If you're the keepers of history, how do you know that the history is right? Well, of course, as the archival students can tell you, we've had forgeries; that's where our whole field came from, looking for forgeries. But it's going to be a problem for libraries too, and I think it's going to be a bigger piece of the whole mis- and disinformation problem that is so important to the information professions. And then for the data analytics folks here, how are y'all going to be able to make sure that the data you're doing your work on is good? Right. And again, I get it. With the archivists,
and maybe y'all can educate me on this: it's not just written paper like you would think of for a library; y'all are talking about all information and how to archive it, and part of your job is proving it's correct. Does this mean you're getting training in AI and deep fakes and technology? That seems to be where we're moving. So I will leave it to our wonderful attendees now: if you have any questions, feel free to raise your hand, or talk, or type your question into the chat. Yeah, please do. Until I see some questions coming through, I'll just continue talking about some of the ethical issues.
So, on the podcast we once had a guest who is a privacy engineer. She was explaining how it's becoming more common to make sure that engineers are trained not only in privacy and understanding what personal data is, but also in understanding how that feeds into the technology and what you need to watch for. And we've talked about the underlying data used to program AI, and how, if you don't have the right data, you can come out with the wrong conclusions. There are also intellectual property issues with that. Think of different companies hiring the same AI company to, let's say, just do contract review.
That's a good example. If Company A uses it for, you know, 20,000 contracts and trains it, and then the vendor is able to sell it to Company B, Company YY, Company 1000, who is benefiting from the AI trained on that first company's data? Who actually owns the intellectual property for it? There was a company that was just determined to have violated the IP of another, and all of their patents were invalidated, which means all the work they were doing was invalidated, which means all the AI training behind their product development is in question. Where does that go? What happens to that? If the patent was actually owned by the first company and has to go back to them, do they get all the AI training that came from the companies the product was sold to? So there are a lot of ethical issues when it comes to that. The social justice ones seem to be top of mind for most people, and there are some efforts out there around what they're calling inclusive programming.
So that is a movement now: inclusivity by design, built into the technology, making sure that people of all different types are involved in the programming and in the technology. And this doesn't just mean ethnicities and genders; it means neurodiverse people, people with disabilities, people from various locations in the US who don't speak the same as others, people of different heights. If you don't have that kind of diversity, inclusivity, and equality in the programming, you're going to come out with biased data, and implicit bias is a big focus in a lot of fields nowadays. And let's see, we got one. Okay, great.
Librarians are often in the position of negotiating vendor agreements and contracts without having recourse to legal counsel. Oh my goodness, I didn't realize that. What is happening in the field to empower librarians to navigate this, or is this an unmet need? I'm going to say, from my legal perspective, it's an unmet need. Darra, you're much closer to this, being both an archivist and a lawyer. I'm not technically a lawyer anymore, and ethically I have to point out that I let my license lapse when I moved to Canada. Okay, in our state you're still a lawyer, you're just not a licensed attorney.
Okay. But yes, it's absolutely an unmet need, and I think, from the privacy perspective, we've really lost control of the ecosystem in the libraries. And I guess that's a good question, K, to build on Holly's excellent question; Holly's one of my students. We don't control the ecosystem anymore, to a large degree, so how do we meet our privacy obligations in an ecosystem where we only control part of it? That is a growing concern in a lot of fields, because when I talk to people about privacy, you bring up privacy and people go, "that again." But it touches everything.
If privacy is dealing with personal data, it touches everything. It doesn't matter what field you're in; it doesn't matter if you want to hide in the basement typing away all day long; no matter what, even if you just want to be a transactional person, you're going to touch personal data somehow, some way. So how do you educate people in the different fields about how to use personal data? I'll give you a great example of some of the differences.
So, if I were to ask you whether something is public data, confidential data, or sensitive data, where would you put membership in a trade union? Do the librarians have a national librarians association? We have the American Library Association. Perfect. So take out the "American," because we don't want to specify a country, and let's say it's a global librarians association. Would you consider membership in that to be public, confidential, or highly sensitive?
Oh, it is so contextual, only because you understand it. Everyone else, listen to that. What we would probably say is public, because here in the United States you could go pay $5,000 and get a list of pretty much every trade union member out there, right? The American Medical Association, the bar associations, even our voter registration rolls, which really irritates the Europeans. In Australia, that's considered highly sensitive information: membership in a trade union or professional association. They don't even say trade union, because that could have specific interpretations, but membership in a professional association is considered highly sensitive. What about bank account information, your routing number and your account number? Would we say that is public, confidential, or highly sensitive? Here in the USA that's highly sensitive, right? It's not in Europe, and it's not in most countries that define what sensitive data is; finances are nowhere in their definition of sensitive data.
Do you have to protect it? Yes. Is it sensitive data? No. One of the biggest dichotomies I just mentioned was political data. In most countries with privacy laws, political opinions and beliefs are considered highly sensitive data; they're special categories that must be protected to the highest degree. And here in the US?
Good Lord, you can go buy a voter registration roll. Or you can have AI figure it out based on what kind of car you drive, and trust me, there are political groups out there who do buy that data and do use it. So the last one I'll give you is personality.
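The jurisdictional differences just walked through (trade union membership, bank details, political opinions) could be sketched as a lookup table. The mapping below is deliberately simplified and partly hypothetical, for illustration only; real statutes (GDPR Article 9, Australia's Privacy Act, Israeli privacy law, and the US's sectoral laws) define these categories with far more nuance than a set literal can capture.

```python
# Simplified, illustrative mapping of jurisdiction -> "special/sensitive"
# data categories. NOT legal advice; real definitions are far more nuanced.
SENSITIVE_CATEGORIES = {
    "EU":        {"political opinions", "trade union membership", "health"},
    "Australia": {"political opinions", "professional association membership", "health"},
    "Israel":    {"political opinions", "personality", "health"},
    "US":        {"bank account details", "health"},  # sectoral laws, no single general list
}

def is_sensitive(category: str, jurisdiction: str) -> bool:
    """Check whether a data category is treated as sensitive in a jurisdiction."""
    return category in SENSITIVE_CATEGORIES.get(jurisdiction, set())

# The same data point changes classification as it crosses borders:
print(is_sensitive("trade union membership", "EU"))  # True
print(is_sensitive("trade union membership", "US"))  # False
print(is_sensitive("bank account details", "EU"))    # False
print(is_sensitive("personality", "Israel"))         # True
```

The design point is the one the speaker is making: "sensitive" is not a property of the data alone but of the data plus the jurisdiction, so any system handling personal data needs this context baked in.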
Now, I love this; I happen to specialize in definitions of sensitive data. In Israel, personality is considered sensitive personal information. So I'm all kinds of sensitive here, clearly. In order to fully put into context the AI, the technology, the programming, the underlying data, and the social issues it might cause, you actually have to understand what personal data is, and then what the special or sensitive categories of personal data are. And how do you get that kind of understanding out to the people who are creating this technology, reading the contracts to buy the technology, determining whether or not the output is accurate or authentic? You start getting into some very complicated problems that are not easy to solve, and it can't just be one field solving them. If people are relying on the lawyers to solve this, it ain't going to happen. It really does take everyone who has a part to play, from the beginning of creating the data sets and making sure they're inclusive, to how you use the data. There are cities that have decided that their police departments are not allowed
to use real-time facial recognition. They can use it in review, such as reviewing video of all the people who were at a crime scene afterwards, but they're not allowed to use it in real time, because the potential for harm to an innocent person is very large. So unless we're able to get the people involved from the very beginning of the system all the way to the end, how do we address this problem? It's not easy, and all of you absolutely have a role to play in that. I'm including the question of how you read the contracts for it, because y'all aren't trained in law. And most lawyers aren't trained in privacy; let's be clear, this is a very, very small field. Although when the GDPR in Europe
went into effect three years ago (it was passed five years ago), the IAPP, the International Association of Privacy Professionals, predicted that the world would need about 75,000 more privacy people. Now, I've been in privacy a long time. They predicted the world would need about 75,000, and when they went back, I think two years after the GDPR was passed, there were over 500,000 data protection officers registered with the data protection authorities in Europe. That doesn't mean all the people working in privacy; it means the data protection officers, which are like your chief privacy officers. 500,000.
That's a lot. And those are lawyers, non-lawyers, people from every walk of life who, in their company, were chosen as the person who knows how to govern the data. It could be a data analyst; it could be an archivist.
It could be whoever is in charge of the data. So is this a field for our students? Has this lit a fire under anyone? If you found a passion for privacy, if the privacy bug bit you and you became passionate about it, yes. As I mentioned earlier, and I'm not joking,
this is the hottest growing field, both in and out of law. But you do need to make sure that you have the right skill set to fit well into privacy. There are a lot of people who are in privacy, who have the jobs in privacy, and they're not good at it. The reason they're not good at it is that they're either focusing strictly on exactly what they're doing, and so they're very inflexible, or they don't understand all the ways that privacy touches our lives and what it does. In other words, if you can't see that AI is problematic, from the data that goes into it, and before that the social justice issues that shape that data, all the way to the outcomes and the deep fakes and the authenticity, if you can't see that whole roadmap and understand with a very broad mind that this is a problem, then yeah, you're probably not going to do well in privacy. Do you have to be all-knowing and have a law degree? No. As a matter of fact,
a lot of lawyers don't work well as privacy people, because privacy is, and it's kind of questionable for me to say this, all shades of grey; there are very, very few hard lines in privacy. Unlike other areas of law, where you can move a comma and change the meaning of a sentence fifteen different ways, in privacy all of those meanings could still be right. It doesn't make one wrong versus another. You could take out an entire paragraph explaining privacy law, replace it with something completely opposite, and both would still be right.
So you've got to have that flexibility of thinking and be willing to be creative, but then also know where to put your foot down and go, no, sorry, that one is a big hard black bar lying across your way and you ain't passing. We do have a few of those. So we've got about four minutes left with K. Does anyone have any burning questions they want to get off their chest? Now is your chance; we're absolutely happy to hear from anyone. And you're absolutely welcome to contact me afterwards with questions. If you didn't catch my email address, Darra knows how to reach me, trust me.
And in privacy, I definitely believe in the transparency part, so you can find me easily on social media. Just make sure you spell my first name with the one letter K: there's no dot after the K, and there's no "ay." I'm easy to find. Thank you for that. Darra just shared that if you're interested in working on these issues, check out the Library Freedom Project, which is open to students.
Oh, thank you for that suggestion, Holly; love that. So please do check that out, and feel free to reach out to me, to find us. If you want to learn about privacy, you're also welcome, like I said, to go listen to the podcast. This season we shortened it to an average of about 30 to 35 minutes, because people told us that 50, five-zero, minutes was too long to walk their dogs or exercise to. But the one that was released today, on the new standard contractual clauses out of Europe that are hot off the presses, is almost an hour.
Sorry. But absolutely, you can go back through the episodes; we actually talked about social justice and AI in some of them, so please feel free. I'm not just being selfish and plugging here; it's just good information. If you're looking for a quick resource to understand something, if you have a question, that's an easy way of doing it. We don't talk from a very legal perspective whatsoever.
Trust me, nothing with me is going to be a legal perspective. In spring of 2022 we have a new course on privacy, technology, and law in the iSchool. I wrote the course; I don't know if I'll be teaching it. Beautiful. She's actually one of my reviewers: my PhD is on privacy in universities in the US and all the different privacy laws that impact universities. People have no idea that here in the US, you might think it's just FERPA, the Family Educational Rights and Privacy Act.
It's not the only one that applies to universities. Pretty much every privacy law out there would apply if you're doing the relevant activities, and that's another thing to understand about privacy. So with that, I won't make you run to the actual hour. Thank y'all so much for having me. I hope you enjoyed the conversation and have some insight into why AI can be so problematic for social justice and discrimination issues.
Thank you so much, K. It was a real pleasure to have you; we were very, very lucky to have your time and expertise. Thank you, K. And thank y'all for listening to two southerners run the program. Y'all take care. Bye, y'all.