Mutale Nkonde: How to get more truth from social media
Today on The Future of Everything: the future of truth in social media. In the last few years we have seen the convergence of several new and powerful technologies related to the internet and social media. There are many more than this, but let's take just a few.

First, there are systems that use algorithms to make recommendations. They gather data about you and your past decisions, and they may recommend a movie you might like or things you might want to buy. But they are also used to recommend more profound things, like whether you should be extended credit or whether you should be released on bail, so as you can imagine, their uses are controversial.

Second, there are online advertising systems, not unrelated to the recommendation algorithms, but these target individuals based on a deep analysis of their internet habits: the sites they frequent, the things they buy, the likes they post on Facebook or Twitter. These systems create profiles of essentially everyone on the internet, and those profiles can be bought and sold between companies.

Third, and one we are all familiar with, is social media, where voices can be aggregated and amplified. These voices may or may not speak the truth, but they can be loud, they can be persistent, and they can gain a broad following. It has also been shown that disinformation can spread faster than accurate information, and this makes a certain sense: disinformation is attractive, unexpected, and intriguing compared to reality, so people are drawn to it.

So what are the aggregate effects of all these technologies: recommenders, advertising, amplified voices? This is complex. Most of us can think of good things that have happened. We found a great product or a movie we liked, we learned about an organization we admire through targeted outreach, or we follow the tweets of somebody we admire or find entertaining. But it is also not hard to imagine bad things happening with this technology. In fact, you don't need to imagine very much.

Mutale Nkonde is a fellow at the Digital Civil Society Lab at Stanford University. She started as a broadcast journalist and has worked on methods to regulate the use of artificial intelligence algorithms in commerce and in the business world. She is the executive director of AI for the People, a nonprofit that seeks to educate Black people about the social justice implications of the kinds of technologies I just described.

Mutale, you have written that some recommender systems have an almost baked-in bias from the very start. Can you explain how that happens?

Yes, and thank you for inviting me to talk with you, Russ. This is so exciting, and I love reaching out to new parts of the Stanford community. Where I really situate bias within AI technology is at the design phase. What I'm speaking about specifically are machine learning protocols in which we use historic data to make predictions about a desired future, and what's really key is that the desired future is always the future that will optimize monetization, while the historic past, by design, is a past encoded with racial bias, sexism, trans erasure, and the erasure of disabled people and of others who until pretty recently weren't even thought of as people. In the context of voting, we only have a hundred-year history of white women voting in this country. So if you're building an algorithm that is going to look at historic voting behavior in the United States, for example, there are so many people who were left out of that data set. What I argue is that when we develop technologies whose designs take on the oversights of our past, we're encoding that bias.
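To make the dynamic Nkonde describes concrete, here is a minimal sketch in which a model trained on discriminatory "historic" decisions reproduces the discrimination when asked to predict the future. All of the data, feature names, and thresholds below are synthetic and invented for illustration; nothing here is from an actual deployed system.

```python
# Hypothetical sketch: a model trained on biased historic decisions
# learns the bias. Data and thresholds are invented for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

# Synthetic applicants: a legitimate signal (income) and a protected
# attribute (group) that should be irrelevant to the decision.
income = rng.normal(50, 15, n)
group = rng.integers(0, 2, n)  # 0 = majority, 1 = marginalized

# "Historic" approvals: past gatekeepers penalized group 1 directly,
# so the training labels encode discrimination, not creditworthiness.
historic_approval = (income - 10 * group + rng.normal(0, 5, n)) > 45

# Train on the past to "predict a desired future".
X = np.column_stack([income, group])
model = LogisticRegression().fit(X, historic_approval)

# The learned model now penalizes group membership itself.
predictions = model.predict(X)
for g in (0, 1):
    rate = predictions[group == g].mean()
    print(f"group {g}: predicted approval rate = {rate:.2f}")
```

Run as written, the model approves the marginalized group at a markedly lower rate even though both groups' incomes were drawn from the same distribution: the algorithm faithfully optimized against a biased past.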
I didn't think that was going to be really provocative, but people were really interested in it.

That's actually one of the most fascinating things. So tell me: when you deliver this message to businesses, for example, that are building all kinds of AI, that have data and are excited that they can be more successful by using their historical data, and then you tell them about the bias, what kind of response do you get? Are people listening, or are they resisting, or is it both?

It depends on the time. Where are we now, 2021? I started this work about ten years ago, and at that point (I live in New York City) New York was calling itself Silicon Alley. We were the baby sister of Silicon Valley, or actually the baby brother, because there were no women. What we were trying to do was reproduce everything that happened in the Valley, in the Flatiron District of New York. That's where our VCs, the venture capitalists, were, that's where the companies were, and that's where all the quant people were: the math people, as well as some engineering folks, but mostly math people. They were people who had not been educated to think about the social impacts of anything. And I am a Black woman, an immigrant (which isn't surprising in New York; most of us come from somewhere), who was saying: you have to think about social impacts. I'm a sociologist looking at the sociology of science, but beyond being a sociologist I'm also a journalist, so I could potentially tell this story. And they didn't want to hear it. They wanted to hear that technology was good. We were about to elect President Obama, who was in love with the Valley and in love with coding, and my voice was really marginalized until Cathy O'Neil published Weapons of Math Destruction. What made her message so sticky is that she was a fellow quant. She has a PhD in math from Harvard. She unfortunately had ovaries, which meant that she was a woman, so they were a bit hesitant, but she had all the bona fides to say that what we're building in terms of these statistical models is not just math.

This is quite amazing to me, because ten years ago these systems were nothing compared to what they are now in terms of their penetration of both the business community and certainly the internet. So it's quite remarkable, and I guess not surprising, that people like you and Cathy would have been put aside: no, they don't understand how awesome this is going to be.

Right, and also, to use the language the kids use, we were the haters. Nobody wanted to pay any attention to us. This wasn't an industry where ethics, or any of the questions we're asking right now, were active. It was also a time in business culture when Mark Zuckerberg was a genius and we all wanted to be Mark Zuckerberg, Google was the best place to work, and people were laughing at Amazon because it had been around forever, you could only get books there, and it wasn't profitable.
So did things change? You're telling me a very credible story about ten years ago. Did people start listening? What had to happen to get people's attention, or has it not happened yet?

Cambridge Analytica. This is now 2017. I'm in the academy doing these fellowships, and nobody is really paying attention to what I'm doing. Cambridge Analytica happens, but not only does Cambridge Analytica happen; the Mueller Report happens, and the Trump presidency. As I said, I'm here in New York, and you have every person in the city walking around literally crying, because Hillary Clinton was ours as much as Donald Trump is ours, and we couldn't understand it. She won the popular vote; what happened to the predictions? Suddenly people in the news cycle are talking about statistics, because they're worried about predictions and the forecasting of elections. People are asking what happened with social media. The Cambridge Analytica story comes out: they had used these weird things called algorithms to decide which political ads people saw. And what made the story even juicier was that Mark Zuckerberg, our boy god, and Sheryl Sandberg, the queen of all women in technology, behaved poorly. They lied, journalists found out that they lied, and all of a sudden technology could be bad. The final part, in terms of bias, was the Mueller Report: when its first chapter was released, it showed that African Americans had been the most targeted. So suddenly all this work I'd been doing, all of this "look at me, this is really bad," mattered.

Yes, because this had been your focus for several years before it all came together in the story.

Exactly. Social media companies weren't willing to do anything about it, but they hated the heat, so they started to listen. And we had so many other social media events after that.

Right, and it has continued through the most recent 2020 election and even the post-election period. So I wanted to ask you about that. We have companies working on algorithms and AI: recommender systems, advertising, and the like. We also have individuals who take advantage of these, especially on the social media side. You could make the argument that these are two different players. There is the industry, using these algorithms, and what they would say is that they're just trying to run a business. But then there are people who are not technologists, like politicians and leaders of movements, who are using these tools themselves. Are you able to separate out these two effects, or are they inseparable, rotating around each other and mutually dependent? I'm trying to tease apart the different players, your assessment of their motives, and what, if anything, needs to be done to expose this clearly, so that everybody can form their own opinions about what needs to happen.

That's a really good question, because coming out of journalism and then going to the business side of tech, I've always been very passionate about my work, and I've always said the technology is agnostic: the same algorithms we use to decide who doesn't get a bank loan can also be used, in very similar systems, to spot a cancer.
The question is what success looks like and what the model of success is. So I've always tried to let people know that the technologies themselves are agnostic. The engineers building these technologies are building them toward specific specs, and while they do bear some responsibility, the co-opting of that technology cannot be predicted, certainly not by a group of people who are not trained to read societal markers. When I was doing congressional work, I led a team that introduced the Algorithmic Accountability Act. It feels like a lifetime away now: in 2017 Paul Ryan was Speaker of the House, Mitch McConnell led the Senate, and Donald Trump had just come in. We knew it wouldn't get passed, so we entered what is called a messaging bill. Messaging bills, in the context of the federal government, are people staking a claim, saying: this is my issue, and I'm going to work on this issue in the next Congress. It was so difficult to tease out that question, because you had many activists who were mad at the companies, mad at individual people, mad at universities like Stanford, mad at technologies; you had other people who were mad at Trump and mad at Cambridge Analytica. From my perspective, I was saying: let's not get mad, let's try to figure out how we can regulate these things so that we can have an AI economy (because last time I looked, we're still capitalists), but an economy willing to take hits in terms of monetization to make sure these systems don't cannibalize us. And even that met resistance: what do you mean, cannibalize?

Can I ask you about that? The Algorithmic Accountability Act sounds fascinating, and because, as you said, it was a messaging act, and because it may now be of great interest for all the obvious reasons, can you take us into the kinds of things it specified and the kinds of procedures it put in place to try to help with this problem?

Yes. The major driver of the Algorithmic Accountability Act that went into the last Congress, in 2019, was impact assessment. I wanted to encourage lawmakers to move away from this minimum-viable-product way of releasing products into the marketplace, and also to take a really good look at deep learning and explain that when you have a deep learning technology, it is untested. That's what makes it so responsive and exciting, but it's still using that same protocol of looking at how the technology was used in recent history. It's not 1864.
It's: who used this technology yesterday, and what outputs came of that? We were really trying to tease those things out, because my position always was that we do need an AI economy, especially as we sit in economic ruin through COVID. But as well as building out this piece of the economy, we needed to make sure it was responsible and safe. And if we have this impact assessment data, it gets us away from explainability, because at the time the act was written there were so many people saying: well, let's just open up the algorithm and see what's in it. I'm on the TikTok Content Advisory Board; we open-sourced the algorithm, and people don't understand it. Not because they're dumb, and not because they're not geniuses; it's just not their skill set.

You know, in my own work, which is in a whole different area, with drugs: before you release a drug you do a lot of studies on its impacts, but there is also something many people don't realize exists, called post-marketing surveillance, where after the drug hits the market you watch it as it is exposed to millions of people, because sometimes you see unanticipated effects of a drug when it is introduced at scale. It strikes me that these AI algorithms might benefit from a pre-release evaluation where you do your best, but you also have to be open to the fact that you missed something, and keep watching after release. Would that be compatible with the kinds of things that people are going for?

Oh my goodness, and the reason that example is so relevant is that what we proposed was exactly this FDA-type agency to look at the algorithms. We were saying: look, we already have these systems in place; we just need them here. And now we're sitting in a Biden-Harris reality, hoping we get through to the actual inauguration, and one of their priorities is racial equity. One of the things I really advocated for within that bill was that for bias, be it racial, gender, or otherwise (I didn't necessarily care about the lines), we just use the civil rights language of protected classes: if people are in a protected class, we want to make sure these technologies don't entrench bias against them. We knew the bill wouldn't get anywhere, but what did happen is that industry looked at it, and at the Deepfakes Accountability Act that followed it, where we talked about social media marking and truth and that kind of thing. Industry has definitely looked at those acts, and some have adopted their ideas, specifically in the banking industry, which is already regulated. We got feedback from some big players in banking. Silicon Valley, not so much.
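Russ's drug analogy suggests one concrete shape such post-release monitoring could take. The sketch below is an illustration under assumptions, not anything the bill itself specifies: it audits logged decisions from a deployed model for disparate impact across protected classes, using the four-fifths rule familiar from US employment law. The log format, class labels, and threshold are placeholders.

```python
# Hypothetical post-deployment audit: flag protected classes whose
# approval rate falls below 80% of the best-treated class's rate
# (the "four-fifths rule"). The log format is an assumption.
from collections import defaultdict

def disparate_impact(decisions, threshold=0.8):
    """decisions: iterable of (protected_class, approved) pairs.
    Returns per-class approval rates plus any classes whose rate
    falls below `threshold` times the highest rate."""
    totals, approvals = defaultdict(int), defaultdict(int)
    for cls, approved in decisions:
        totals[cls] += 1
        approvals[cls] += approved
    rates = {cls: approvals[cls] / totals[cls] for cls in totals}
    best = max(rates.values())
    flagged = {cls: r for cls, r in rates.items() if r < threshold * best}
    return rates, flagged

# Example: one week of logged decisions from a live system.
log = ([("A", True)] * 80 + [("A", False)] * 20
       + [("B", True)] * 55 + [("B", False)] * 45)
rates, flagged = disparate_impact(log)
print(rates)    # {'A': 0.8, 'B': 0.55}
print(flagged)  # {'B': 0.55}: below 0.8 * 0.8, so it triggers review
```

Like post-marketing surveillance for a drug, the point is that the audit runs continuously after release, so effects that pre-release testing missed can still surface.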
This is The Future of Everything. I'm Russ Altman. More with Mutale Nkonde about social media, fairness, bias, and ways we can address the hidden biases in these algorithms, next on SiriusXM.

Welcome back to The Future of Everything. I'm Russ Altman, speaking with Mutale Nkonde about social media and algorithmic AI. In the last segment we were talking about some of the things you've been doing to get laws in front of Congress, and you mentioned that businesses had noted these things and some are even adopting them. The other way to communicate, though, is through social media, and I know that in the 2020 election you were involved in a movement called Vote Down COVID. Can you tell us about Vote Down COVID, what you learned, and the ongoing implications of this kind of work?

Vote Down COVID was a project that my nonprofit, AI for the People, launched during 2020, because we had been studying disinformation agents who were putting out racially charged messages encouraging Black people not to vote in the election. If we cast our minds back to what seems like thirty years ago, in 2020 the big story was voter suppression, and they had a hashtag, #VoteDownBallot, which was really saying: don't bother with the presidential race; go below the line. We had been looking at the literature on disinformation and how to inoculate against it, and we found a strategy called strategic silence, which had been employed by Jewish groups during the civil rights movement, when they were targeted by the KKK for being in community with Black activists. What they found was that Jewish people owned newspapers in the South, and when those papers didn't mention KKK activity, it disappeared.

This is very interesting, because it runs against an instinct so many of us have. Why are you covering this, when coverage is exactly what some of these voices want? People sometimes don't seem to understand that the coverage itself is a victory for those seeking attention. But please, go ahead.

Amplification, exactly. This theory of strategic silence really fascinated us, because as Black people we had been trained in our own families not to answer questions about our supposed inferiority by proving to white people that we weren't inferior. We had to just be Black and fabulous over here, knowing that being Black would mean we couldn't be in certain spaces. And a colleague of mine, whom I've published with, had found that the KKK said in 1998 that they loved the internet, because they felt it could spread their message. So we asked: what happens if we do strategic silence online, since they're spreading their message through social media? That's what Vote Down COVID was. We did the social media analysis, we identified the key players, and we measured the level of inauthenticity within our data set, because 60 percent of the accounts we were following were not humans; they were machines mechanically amplifying the message. We released our own counter data set, but we coupled it with videos of Black people in Philadelphia (that's where the project was based) talking about COVID and how it impacted their lives, making the connection that a response to COVID is a social justice response and that we were going to use our franchise to advance it. And we were hugely successful. People loved the videos. Questlove was retweeting them; Morgan Fairchild, who wasn't even our target, was retweeting them. The week before the election we got 8.4 million impressions across Facebook and Twitter, while #VoteDownBallot got 2.3 million. So we now have a case study looking at social media silence: what happens to the algorithm?
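As one illustration of what "measuring the level of inauthenticity" in an account data set can involve, here is a deliberately simplified scoring heuristic. The features, cutoffs, and weights are invented for this sketch and are not the campaign's actual methodology.

```python
# Hypothetical bot-likelihood heuristic: high-volume, retweet-heavy,
# young, low-follower accounts look like mechanical amplifiers.
# All features and weights here are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Account:
    posts_per_day: float
    pct_retweets: float      # share of activity that is pure amplification
    account_age_days: int
    followers: int

def inauthenticity_score(a: Account) -> float:
    """Crude 0-to-1 score; higher means more machine-like behavior."""
    score = 0.0
    if a.posts_per_day > 50:
        score += 0.35   # superhuman posting volume
    if a.pct_retweets > 0.9:
        score += 0.30   # almost nothing original, pure amplification
    if a.account_age_days < 90:
        score += 0.20   # freshly created account
    if a.followers < 10:
        score += 0.15   # no organic audience
    return score

accounts = [Account(120, 0.97, 30, 3), Account(4, 0.4, 2000, 800)]
likely_bots = [a for a in accounts if inauthenticity_score(a) >= 0.6]
print(f"{len(likely_bots)}/{len(accounts)} accounts flagged as inauthentic")
```

Real inauthenticity detection combines many more signals (posting-time regularity, content duplication, network structure), but even a heuristic like this is enough to estimate what share of a hashtag's traffic is mechanical amplification.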
election so it almost seems to me like you need a new name like strategic something else but i do understand what you did it was a direction and because we used it was so similar to the original hashtag though our hope was that we capture people that had gone with this original hashtag and didn't pay attention and and use ours but our message and our videos were really about power um and you know power and joy whereas these other people on the other side were so miserable and they released a video the same week that said you know we don't even we don't even mind if we have four more years of trump and that's when traffic to hours went up because i don't think people had thought that we were being political we really weren't we just went to see what the algorithm would do um and so in terms of learning uh you know we used micro influencers you know our theory being if you can sell if kim kardashian can sell lip gloss doing this then we can recruit a bunch of uh people and they can sell democratic messages so we're starting to think about what that economy is yes we also this is actually great to know that there's a market for positivity right because what we had been everything that had been coming out in the literature was people love negativity bad news words better than good news whereas we have this one case study where good news in the midst of what was a horrible cycle um was was really compelling so in the la in the last couple of minutes i did want to ask you about you mentioned ai for the people was part of the organizing for this uh uh vote down kovit tell me a little bit more because it's an intriguing uh the one liner on ai for people is super intriguing because of it mentions the use of um i think popular uh media and popular um uh genres to um educate black people about social justice issues so tell me a little bit more about that organization so we're you know once a journalist always a journalist we're a comms agency we um work in the same way that a photo agency would so reuters would sell pictures to the new york times uh for a coup for example if they had a photographer the times didn't what we do are data journalism investigations and then we use that as a product to inform newsrooms about their stories we're making a film right now about biometrics and facial recognition and how it impacts folks of color in lots of different ways as well as uh you know writing articles and the reason that we did that was there was this gap where so much discussion about bias and ai was going on in the academy and in journals it did get to the new york times but it didn't necessarily get to the daily news right and so we wanted ordinary people on the street to become part of that conversation because we are a technological citizenry and we're not going to be able to really push on the political power we need unless the people who are part of ai for the people understand the stakes so who are the clients for ai the people are you going after news organizations or are you ultimately going direct to consumers so to speak both so in our pilot we worked with moveon.org
which is a large progressive organization, and they were unexpected; they came to us. On the biometrics work we've also collaborated with Amnesty International on a short film, and we're producing a larger film ourselves. The clients are typically big, white legacy organizations that want to understand how to communicate with Black audiences, but in every project we take on, the target is the ordinary person.

Super important. Well, thank you for listening to The Future of Everything. I'm Russ Altman. If you missed any of this episode, listen anytime on demand with the SiriusXM app.