Finding Solutions: Choice, Control, and Content Policies

AMRITHA: Okay. Well, hi everyone. Thanks so much for joining us today. My name is Amritha Jayanti. I direct the Tech and Public Purpose Project, also referred to as TAPP, here at Harvard's Belfer Center for Science and International Affairs. TAPP was founded in 2018 by former US Secretary of Defense Ash Carter to work to ensure that emerging technologies are developed and managed in ways that serve the overall public good.

We have a very exciting session today, the first in a new public events series titled “Finding Solutions,” in which we host practitioners across the public and private sectors, many of whom will be our very own Fellows, to dive into a myriad of potential solutions to address unintended consequences of emerging technology.

We're planning to talk honestly about problem definition, solutions, feasibility, and intentionality of impact across various solution types.

So I'd like to quickly take a moment and thank Karen Ejiofor, TAPP's project manager, for all her work in ideating and planning this series. To follow the series, as well as other TAPP events and publications, we encourage you to follow us on Twitter at TAPP_Project, and sign up for our newsletter on our web page.

And with that, let's kick things off for today. So we're hosting two awesome TAPP non-resident Fellows,

Karen Hao and Joaquin Quiñonero Candela, sorry, Joaquin, if I messed that up. But I'll give them a brief introduction and pass it over to them for a fireside chat style conversation for the first 40 minutes. We'll save some time at the end for audience Q and A. So please use the Q and A feature in Zoom, not the chat box, but the Q and A feature, to submit any questions you may have for Karen and Joaquin.

So with that, I'll go ahead and introduce them. And if you all want to pop into the

screen, you can do that now. So Karen is the Senior AI Editor at MIT Tech Review, covering the field's cutting-edge research and its impacts on society. She writes a weekly newsletter called “The Algorithm,” which was named one of the best newsletters on the internet in 2018 by the Webby Awards, and co-produces the podcast “In Machines We Trust,” which won a 2020 Front Page Award.

In March 2021, she published a nine-month investigation into Facebook's responsible AI efforts, and the company's failure to prioritize studying and mitigating the way its algorithms amplify misinformation and extremism. Her findings were cited in a congressional hearing on misinformation two weeks later. In December 2020, she also published a piece that shed light on Google's dismissal of its Ethical AI co-lead, Timnit Gebru, which congressional members later cited in a missive to the company.

Prior to MIT Tech Review, she was a tech reporter and data scientist at Quartz. In a past life, she was also an application engineer at the first startup to spin out of Alphabet's X. She received her B.S. in mechanical engineering and a minor in energy studies from MIT.

Okay. And Joaquin serves on the Board of Directors of the Partnership on AI, a nonprofit partnership of academic, civil society, industry, and media organizations creating solutions so that AI advances positive outcomes for people and society.

He is also a member of the Spanish Government's Advisory Council for Artificial Intelligence.

Until September 2021, Joaquin was a Distinguished Technical Lead for Responsible AI at Facebook, where he led the technical strategy for areas like fairness and inclusiveness, robustness, privacy, transparency, and accountability. Before this, he built and led the Applied Machine Learning (AML) team at Facebook, driving product impact at scale through applied research in machine learning, language understanding, computer vision, computational photography, augmented reality, and other AI disciplines.

AML also built the unified AI platform that powers all production applications of AI across the family of Facebook products. Prior to Facebook, he taught a new machine learning course at the University of Cambridge, worked at Microsoft Research, and conducted post-doctoral research at three institutions in Germany, including the Max Planck Institute for Biological Cybernetics. He received his PhD in 2004 from the Technical University of Denmark.

So our session today is going to focus on solutions involving social media recommendation systems, particularly some of the experiences that both Karen and Joaquin have thinking about applications, as well as societal impacts. Just reading their bios, I'm sure you all know that we're in for a really amazing discussion. So thank you both, Karen and Joaquin, for joining us today. I'm really excited to see where this goes. So Karen, I'll pass it over to you to get the conversation rolling.

[00:06:36] KAREN: Awesome. Thank you so much, Amritha. Hi everyone. I am super excited to be here. Thank you to the Belfer Center for having both of us. Just for a little bit of background, Joaquin and I met when I started working on my story about Facebook. Joaquin was the Responsible AI lead there at the time. And we spent quite a lot of

time talking together about some of the issues  that we're going to talk about today. And I was   very impressed, throughout my time speaking with  him, about his thoughtfulness, and his really   deep caring for these issues. And so I'm really  pleased that we get this public forum to talk   a little bit about some of the things that  we've been talking about behind the scenes.

So Joaquin, obviously there's this huge ongoing  debate that's happening today about social media   recommender systems. And we're here to talk about  it head-on today, and propose some solutions.   But I first wanted to give you an opportunity  to actually tell the audience a little bit   about your background, and how you  ended up at a place like Facebook. [00:07:44] JOAQUIN: Yeah, thank you, Karen. Hi everyone.  

It's a pleasure to be here today. And thank you so much to the Belfer TAPP team for hosting us. I didn't think I would one day work at Facebook, or work at the intersection of machine learning and social media, if I'm honest. I think it happened a little bit like many things, by accident, through connections, through good friends who had gone to work at Facebook back in 2011.

But one piece of context that I think is really important is that although I was born in Spain and raised there until age three, my parents, my sister, and I moved to Morocco when I was three years old. And so I grew up in the Arab world, surrounded by

many friends of all kinds of origins, but many friends of Muslim and Arab origin. And so when I visited Facebook in 2012, socially, with no idea that I would end up working there, the Arab Spring had been going on for about a year.

And it had made a very, very profound impression on me, because countries like Egypt or Tunisia were seeing massive revolutions, in a way. And I had close friends living in these two countries. For example, even my sister ended up working in Tunisia for a while.

And so I was blown away by the power of tools and platforms like Twitter or Facebook to help people communicate, mobilize, and really change society for the better. So it's in that context that I felt compelled to join Facebook. I felt the mission was simply incredible.

KAREN: Yeah. I kind of want to bring people back to that particular time in social media, because Facebook was quite young in 2012. So could you paint a little bit of a picture of

just what stage, I guess, the company was  in, and when you came to join the company,   what your professional background was, at that  point, and what you were sort of tasked to do? [00:10:10] JOAQUIN: Yeah, absolutely. I   was on a journey. As a professional background,  I was on a journey from almost having been an   academic in machine learning. I was a post-doc,  had my constituent in [00:10:21] Germany doing   pretty, pretty theoretical and abstract  research. But in the intervening time,   I had spent five and a half years at Microsoft  Research in Cambridge in the UK, initially.  

Also doing research, but veering towards applications quite rapidly. And, in fact, at Microsoft, together with some colleagues and now very good friends, we developed the click prediction and ranking algorithms that helped power ads on the Bing search engine.

[00:10:55] So I had a bit of background now, not only in the theory and research of machine learning, but also in its application at scale. And, in fact, I did something considered a bit crazy at the time, which was, I accepted an offer to leave Microsoft Research and join a product team, and become an engineering leader. So I had a little bit of experience on both sides of the aisle, if you will. So that was the context.

And I joined Facebook as Facebook was a pretty young company, like you said. It was growing extremely fast. I remember that it was during the first months I was at Facebook that we crossed one billion users. And that's not even daily active users or anything like that. It was one billion users total. And a lot of things were in their infancy, you know. There were, of course,

a couple of pretty strong machine learning teams in feed ranking, but also in ads. But when you compare the number of people working in ML at Facebook, or at Meta today, to what it was when I joined in 2012, you could fit everybody who was working on ML in the room I'm sitting in here. And I would know them all by name, right.

[00:12:15] So I joined in. And the task that I took on was, well, let me build out the machine learning team for ads, at first. And then, very quickly,

I realized that we needed to invest extremely heavily in platforms, in tools. I used to say wizards don't scale. We need to be able to factor. We can't hire enough ML people to do all the work that needs to be done. We really need to take our tools and our platforms to the next level.

And, to try and make a long story short, that led to the creation of the team that Amritha mentioned earlier, the Applied ML team, which then had the scope to help bring ML to everybody across the company.

[00:13:03] KAREN: Yeah. So one of the things that I was very touched by, when we first started talking about your personal background, was the fact that the Arab Spring was quite personal for you. Because when you talk to people in social media, they often reference the Arab Spring. But it was very different for me hearing it from you,

because it was something that you were literally seeing in your life, through your own friends and your own family. And that was the mission or the vision that you took on when you joined Facebook: this is what social media could be.

And obviously, to skip a lot of things that happened along the way, social media didn't really quite turn out the way that it was originally envisioned, this grandiose ambition to connect people, to create these powerful positive social movements without any of the costs. This past year, there has obviously been a lot of talk, in particular with Frances Haugen and the Facebook Papers, reexamining some of the core challenges with social media platforms, and why we might be seeing some of the adverse effects.

And Frances Haugen specifically pinpointed a  number of the risks to recommender systems. So   what are the risks that you see for social  media recommender systems, in particular, as   someone who, more than most people in the world,  understands how it works, and how it was built? [00:14:35] JOAQUIN: Yeah, no,   it's interesting. What you say is exactly right.  It's been a really interesting journey for me. I   remember the first few years at Facebook,  my obsession was in scaling ML, and making   sure that we can build bigger and better  recommender systems, using larger models,   being able to refresh those models as fast as we  can, et cetera, and make them evermore powerful. And it is true that, along  the way, a lot of issues   did arise which I did not anticipate, and  I think many people did not anticipate.  

I don't claim to have a comprehensive  overview of what the risks are of large-scale   recommender systems, and in particular,  social media recommender systems. But   since our past conversation, and your article,  a year ago, I've been thinking about this a   little bit. And so I have maybe four buckets of  concerns that I'd like to propose. And again,   disclaimer, there may be more. And I'm super  excited. I know that we have, at the Belfer   Center, people who are thinking about these things  very deeply as well. I don't know if Aviv Ovadya  

has been able to join today. But I'm going to put him on the spot and embarrass him: I've been reading his work with a lot of interest. And I know there are many other people as well.

[00:16:05] So let me list out the four buckets. And then we can go into them one by one. The first one would be mental health concerns. The second bucket would be bias and discrimination. And then buckets three and four

are similar but different. So bucket three would be the propagation of misinformation in particular, but of harmful content in general. And then bucket four builds on this, but is much more specific to personalization, and the fact that, if you think about it, everybody has got their own unique personalized feed and newspaper. And so the risk in bucket four is polarization and divisiveness. So these are my four buckets. Should we –

[00:16:57] KAREN: Yeah. Let's go one by one. I mean the mental health one is huge, because probably the most explosive Facebook paper from the Wall Street Journal was the fact that there was research done internally at Facebook, at Instagram, showing that there were some adverse effects to mental health for teenagers on the platform. And I know that you have three kids yourself. And you sort of see some of this play out. Or you have concerns as a parent, when you're watching them engage in different social media platforms. So talk a little bit about that.

[00:17:36] JOAQUIN: Yeah.   Well, to make this very personal and down to  earth, like you said, I have three kids. Youngest   one is almost 12. Middle one is 15 ½, and our  oldest is 19. And one common pattern that I see   that does concern me, is that often you'll see  the day pass. And then I'll sit down and say,   “Well, what happened the last hour or two?” And,  “Oh well, I've just been on TikTok or on YouTube   or on Instagram or on SnapChat.” And I go, “Okay,  well what came out of it? What can you remember?”  

It's like, “Oh, I don't know.” But two hours have passed, you know. What happened there?

[00:18:18] And so I am concerned about addiction to technology. And I am concerned about that time spent not being valuable. In addition to that, of course, there has been a lot of research about things like comparison. There have been some

really interesting ethical debates about things like whether you should turn beautification filters on by default or not, which I know are not specific to an AI algorithmic ranking system.

But one point that I would like to make right now, and you're going to hear me repeat this again and again, is that when we think about these issues, we should not only think about them as an AI issue; we need to think about the end-to-end design of the system, right. And say, if I'm optimizing for engagement, or time spent of any kind, some of it is going to be achieved through a ranking algorithm, trying to maximize some metric that relates to engagement. But some of it is also going to be achieved through

UI, user interface decisions, right, whether  it's making it super easy to share stuff,   or whether it is to have, like, beautification  filters on by default and things like that. KAREN: Yeah. Why don't we continue going down   the rest of your list. So the next one, I think  you spent the most time at Facebook thinking   about. This is the bias and discrimination  bucket. So talk a little bit about that. [00:19:52] JOAQUIN: Yeah.  

To give a little bit of historical perspective, back at the end of 2017 or 2018, I had just spent a good five-plus years at Facebook. I keep saying Facebook because I never worked at Meta; I left right before the rename. So bear with me on that. I had spent a good five-plus years really focused on scaling ML. And then, at the end of 2017, I started to really think about what I should focus on next, right. I think of myself very much as a zero-to-one person.

Once a team or a platform is working, I get itchy feet, and I sort of go look for the next thing.

[00:20:43] And I became very attuned, again, to the fairness, accountability, and transparency community. In early 2018, there was the first dedicated conference, which was no longer part of the NeurIPS conference; this forum had been a workshop at NeurIPS in the past. And I thought, okay, I really want to devote my next couple of years to responsible AI in general. I found it to be such a vast area, a bit daunting in many ways. And I got very drawn, like you said, to the question of fairness.

I have a very funny story. I'll just drop it, and  we don't need to spend time on it. But we can go   back to it. It's a little bit like a philosopher  and a computer scientist walk into a bar. So   literally, that happened in New York. And  I had the pleasure to meet a philosopher,  

a moral philosopher, who was half affiliated with industry, half affiliated with academia. And I was so excited, because I said, “Oh, you know what? I figured this out. We're going to build this tool that's going to de-bias data and models. And then,

you know, we're going to make it available to the entire company. And we're going to get rid of bias and discrimination in AI models within, like, this year.”

[00:21:57] And then the person slowly, in slow motion, turned their back to me, and didn't talk to me for the entire evening. So it was pretty interesting. And, of course, the reason is that fairness is not one of those things that you solve, right. Like, there is no such thing as half-solved fairness.

And to an engineer like me, with an optimization mindset—and I'm cueing this up for later, we'll talk about the optimization mindset, I hope—that's very hard to grasp, right? Because as an engineer, you think about the world as having states, right. And so something is either solved or it's not solved, it's broken or it's not broken. And I got so fascinated by fairness, because it's one of these places where there are just many answers. And all of them are good. It's just that the context is going to dictate which one you should choose. And even more, it's not even clear who should choose, right. And it's very clear that I shouldn't choose. So anyway, it's such a rich area.

But bringing it back to the topic, in the context of recommender systems and discovery and suggestions, you can think about concrete examples. You can think about building a tool that helps people connect to

job opportunities, for example. And then you can  think about the bias that exists in real world   data, right. So there are many studies that show,  for example, that if you look at a certain job   opening, that has a certain description and some  requirements, and a certain salary, right, there   is a bias by which men tend to apply, even if they  have lower qualifications than women for the same   job. So, if you're not careful, and you're trying  to learn from the data, your AI might learn that  

it should prioritize men for higher-paying jobs, for example, right. Which would be pretty terrible, because it would reflect and cement, right, and reinforce, biases that exist in society.

[00:24:04] Other forms of bias that have been very much discussed are biases in content moderation. For example, in the US, in 2018, there was a lot of discussion around anti-conservative bias. And so that led to a lot of interesting discussions too, right, which is like, well, should you sort of suppress a comparable amount of content from conservative versus liberal outlets? Or should you instead apply equal treatment, and have procedural consistency, where you say, “No, this is the bar.”

And then, if one outlet  produces more misinformation, or   violating content, then there would be a bigger  chunk of their content removed. And then,   different people would have different opinions,  right. Different people would say, “Well, I   want equal outcomes.” And other people would say,  “Well, I want equal treatment.” And then you have  

to sort of explain, “Well, you cannot have both,” right. And then the question is, who decides?

[00:25:06] So in computational advertising, there have been some brilliant papers on the ways in which algorithms can discriminate, again, if you're not careful when you're showing people ads, in particular sensitive ads about employment, or credit, or education, things like that. So anyway, I could go on. Like you said, fairness is the area that I spent a ton of time thinking about. And maybe we need a dedicated session to talk about that in depth, yeah.

KAREN: I think we do. Yeah. I think we do. But we'll move along to the last two. And I think I'm going to put them together, because they are very interrelated. And the question that I sort of [00:25:53] is: how do recommender systems end up propagating misinformation and harmful content, or polarizing people?

are interrelated, I agree with you. I'll still  try to break it down. But it's like two sub-bullet   points, or something like that, if you will. In  a very naive way, imagine that you were to build   an algorithmic recommendation system for social  media. Now take a moment to think about how you  

would do that. The first thing that you need to do is you need to figure out, well, I need to train my algorithm, I need to give it a goal. So what's that goal going to be?

Well, one goal could be, I want people to be engaged. Because if they're engaged, that's a good thing. Well, what does it mean to be engaged? Well, it can mean I click on stuff.

Or I like it. Or I emit any of the reactions—different platforms allow for different types of reactions. You figure it out. Maybe you give a bigger weight to a love reaction, and a smaller weight to a sad reaction. And there are many ways to calibrate these things, right.
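The goal-setting step Joaquin walks through (pick reaction types, assign each a weight, rank by the weighted sum) can be sketched in a few lines of Python. The weights, post IDs, and reaction counts below are invented for illustration; they are not any platform's actual values.

```python
# Hypothetical engagement-weighted ranking. Each reaction type gets a
# hand-tuned weight, and candidate posts are ordered by their weighted
# engagement score. All numbers here are made up for illustration.

REACTION_WEIGHTS = {"love": 2.0, "like": 1.0, "comment": 3.0, "sad": 0.5}

def engagement_score(post):
    """Weighted sum of a post's reaction counts."""
    return sum(REACTION_WEIGHTS.get(kind, 0.0) * count
               for kind, count in post["reactions"].items())

def rank_feed(posts):
    """Order candidate posts by descending engagement score."""
    return sorted(posts, key=engagement_score, reverse=True)

posts = [
    {"id": "wedding", "reactions": {"love": 40, "comment": 10}},
    {"id": "news",    "reactions": {"like": 50, "sad": 20}},
]
print([p["id"] for p in rank_feed(posts)])  # → ['wedding', 'news']
```

Note how the choice of weights is itself a product decision, which is exactly Joaquin's point: the "goal" is baked in before any learning happens.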

on the positive side, you might see more content  that you care about and you react to. So you might   see, I don't know, a friend who got married, and  you might have missed it, right, if the ranking   system hadn't caught it. And that's good, right.  Or, because I'm balanced, I try to be balanced. [00:27:42] Another positive example, you know,   I'm super excited about my guitar teacher, James  Robinson. Hey James. I don't know if you're   listening. Probably not. My guitar teacher,  like many creators, builds their livelihood   by trying to reach the relevant audience on social  media. Not only Facebook, but many others, right. On the flip side, you can sort of see that  if I am either a bad actor, or I'm trying   to game the system, I'm going to call out Aviv  one more time. I don't know if he's there. But  

he makes this very clear statement, which is  that the way we design our recommender system   actually defines the rules of the game, right.  If you sit down to play Risk, or Monopoly, or   Settlers of Catan, or whatever game you like  playing, there are some rules. And you know   that those rules are going to  incentivize a certain behavior, right.

[00:28:34] If I figure out that comments are rewarded, I'll be tempted to write inflammatory comments, you know, maybe something that will trigger strong reactions. And I'll get, like, a ton of comments, right. And then it's almost like the—I don't know how to pronounce it—the ouroboros serpent that eats its own tail, right?

Like you get this sort of [00:28:53]—look at what Chip Huyen, who is an incredible ML researcher by the way, has been talking about. She calls these degenerate feedback loops, right, where people then get shown more of that content and less of other things, right. And then you get—people also act by imitation a little bit, right. If you see more of that, you might be tempted to create more of that conflict, right. Well,

until everybody leaves the platform,  which is sort of one mode of failure. [00:29:21] So there is a danger that optimizing for   engagement will, one, incentivize people  to create content that is inflammatory   in many ways. But also, the problem  is, that you get this “winner take all”   phenomenon, where some of that content can,  as you were saying, go viral and dominate.
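The "winner take all" dynamic Joaquin describes can be shown with a toy simulation (the update rule and numbers are invented for illustration): the item ranked highest gets shown most, harvests the most engagement, and so climbs even further ahead.

```python
# Toy degenerate feedback loop: each round, the ranker shows the
# top-scored item, and that exposure earns it +1 engagement, which
# feeds back into its score. Numbers are made up for illustration.

def run_feedback_loop(scores, rounds=20):
    """Repeatedly show the top-scored item and reward it with engagement."""
    scores = dict(scores)  # don't mutate the caller's dict
    for _ in range(rounds):
        top = max(scores, key=scores.get)   # ranking step
        scores[top] += 1                    # engagement reward
    return scores

start = {"calm_post": 10, "inflammatory_post": 11}
end = run_feedback_loop(start)
print(end)  # the slight initial edge compounds into total dominance
```

Even a one-point head start means the leading item absorbs all subsequent exposure while the other item never moves, which is the concentration effect behind virality.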

And one thought that I wanted to plant here—I know that I might be jumping ahead a little bit, because I know we want to talk about solutions—but you can think that you can police these things after the fact. You can think, well, this is not a problem with the recommender system itself. It's just bad behavior. And then we'll add, you know, some integrity or trust or health team—different companies call this different things.

But it's policing afterwards. I think, of course, you should do that. But I think the more you can frontload incentivizing the right behaviors, the better.

[00:30:24] So now the second sub-bullet point, which is polarization. Well, one of the key challenges here is that

the recommender systems are highly personalized, right. And that, again, can be a good thing. That helps James, my guitar teacher, reach the right audience. So that's awesome when it works. But it also means that you get phenomena like what Guillaume Chaslot has studied, which is how, if you create a blank YouTube profile, and you start clicking on the next suggested video, and you start to go a little bit more towards either liberal or conservative content, it tends to drift a little bit towards the extremes, moving away from the center.

So one of the things that is also heavily being studied is this idea of the disappearing common ground, right. And this is interesting, because people have been talking about this since before social media. People talked about this already with the emergence of cable TV, where, you know, once

you have hundreds of channels, it's easier to sort of live in your own little silo and only see your own media. And then, if you sit down with other people, you might not have ever watched any piece of news in common, right. So it's difficult to have any kind of civic engagement in that context.

[00:32:01] So there is a risk, as well, that people get polarized, or even radicalized. There are some articles that study preference amplification, which is kind of interesting, which is this idea that there is actually an interaction between the recommender system and a human being, right. So I might come in with a certain set of beliefs, but then through prolonged exposure—and this is very important—even if no individual piece of content violates any community standards or policies, right, if I'm only exposed to a certain type of content, right, over time, I can become radicalized. And I can become polarized.

And I'd like to cite some work that is not from the AI community, but from architecture. There is a brilliant architect called Laura Kurgan at Columbia, who has been writing beautiful articles on the analogy between urban planning and the design of social media recommender systems. So where is the connection? The connection is this: whether neighborhoods are heterogeneous or homogeneous in the cost and size of housing and so on, and what the factors are that increase or decrease homophily.

[00:33:29] And so homophily is a mechanism by which you will feel even more connected to people who are similar to yourself by specific demographics. This could be, obviously, income or education, but it can also be perceived race or others, right. And so in her work, she references a lot of studies over the years, in connection to civil rights, for example, that showed that more mixed neighborhoods ended up resulting in people being less racist, right. And so I find her work very, very interesting, because it points to this question of how do we inject diversity into recommender systems? Is that even possible, right? I think it's very hard, and we should probably talk about that in a bit.

[00:34:31] KAREN: Yeah. I really love that example, by the way, the urban planning one. And someone asked in the

chat who that was. And it's Laura Kurgan. So I'll just type that in the chat. But, so we sort of talked about these four different buckets of risks that are coming out of recommender systems as you see them. And I want to start jumping to the solutions, seeing the time that we have right now.

You've touched both on the benefits and promise of recommender systems and the risks. So obviously, we can't really just throw the recommender systems

out. That's not necessarily the solution that I think we should be spending time talking about. So what are the possible solutions that we should be thinking about? Let's start with: what could companies be doing differently? What could people within companies actually be studying? Or what could different teams be outputting, to actually facilitate better public understanding, or actually change the way that recommender systems work today?

[00:35:44] JOAQUIN: Yeah. I think—again, another disclaimer—I don't have the solutions. But maybe there are four—and this is a coincidence, the fact that it's four—four thoughts here, in no particular order. Because many of these phenomena are so new, I think voluntary transparency and accountability

as well. But let's just begin with transparency. I think it's extremely important, right?

Again, what could this look like in practice? This could look like voluntary daily reports, or public dashboards, that show you what content is going viral where across the world, right. And, if you think about it, on most platforms, whether it's, again, YouTube, TikTok, Facebook, Instagram, and so on, stuff that goes viral is, by definition, public. It's not a private message from me to you, right. Like, that'd be a problem if that went viral, right. And so I think the privacy concerns can be addressed

in that context. And I think the value to society would be tremendous, right. It's almost like, at a certain level of distribution and size and impact on the world, you're almost like a public utility, right. And you almost owe it to society to report back how things are going, right.

[00:37:10] And so one would be to report on what's going viral where. But also, though it would be difficult to implement, I think it's worth trying to

also segment the population in different  ways, by age, socioeconomic status, language,   region, other attributes, and try to also report,  you know, what kind of content are different   groups exposed to? And how heterogeneous  versus homogeneous that content is, right? We won't have time to talk about something I've  talked about in public in the past a lot, which   was the 2019 India elections, and some fairness  concerns there. But, you know, there, for example,   region and language in India are two very clear  indicators that correlate with other things,   like religion and caste. And you would want to  know, what are people seeing, especially if there   was harmful content going about? So that's  the first one, right. I think transparency. [00:38:05] There are some good examples. Facebook/Meta   publishes these community standards enforcement  reports, which are public. If you Google that,  

you will sort of see the latest one. I  think it's a good step. It sort of shows,   hey, what harmful content, you  know, is going on? It's not a   realtime dashboard. It doesn't get into  the specifics. You don't get to see,   you know, what exactly is going on. I think  we need a lot more of that. So that's one. KAREN: I want to briefly follow up on that.  I mean, one of the things that Facebook has  

been criticized for is the fact that it sort  of games its metrics on those reports. And   the realtime dashboards that we do have  available, which is through the CrowdTangle tool,   is something that Facebook has  deprioritized and underfunded and   started shutting down. And there have been  other examples of, you know, data that used   to be available to researchers that was meant  for transparency, that it's now been revealed   that Facebook is either denying access to these  researchers, or it's giving them incomplete data. So how do we actually—I   agree with you, that that's a great idea. How  do we make Facebook do it properly? Like who   are the people involved that should be holding  Facebook accountable to transparency standards? [00:39:25] JOAQUIN: Yeah. Facebook, and  

everyone else I would say. I'd love to see TikTok do this. I'd love to see YouTube, everybody. My intuition is that there would have to be regulatory pressure for this, which is maybe the second bucket here, right, is accountability mechanisms or accountability infrastructure. I am not an expert in freedom of speech. It's a fascinating topic, and especially in the US, and especially coming from Europe, where the perspective is a bit different, especially now that I have close friends who have gone back to China. And we talk about the cultural differences. And again, it's like fairness. There's no right answer, right, to freedom of speech. [00:40:19] But, when you look at Section 230 of the Communications Decency Act, I think it's reasonable to ask oneself, well, in what circumstances does it make sense to uphold that? And in what other circumstances should we be actually asking for more accountability, and for an obligation to actually report on what's going on? So that would be the second—yeah, the second thing.

KAREN: Yeah. You've also talked to me about this idea of having participatory governance, this idea that it is rather concerning to ask platforms themselves to be deciding some of the things that we sometimes impose responsibility or burden on them to decide. So could you talk a little bit about what participatory governance is, and how you could see that actually functioning in practice, given that Facebook already has an external oversight board, but it's not necessarily working the way that it was originally envisioned? JOAQUIN: What I—I don't know whether the external oversight board is working or not the way it was originally envisioned. For me, I view it as a very good proof of concept that may not scale, that may not satisfy the—it may not go as far as we need it to go. But let me just give you an example. So in May, 2020, back then President Trump posted, both on Twitter and on Facebook, and probably in other places too, a message that contained the statement, “When the looting starts, the shooting starts,” right.

And not only Facebook, but most  media had intense debates on like,   okay, what do we do? Do we keep  this up, or do we take it down? [00:42:18] And at the time, the   external oversight board was not operational. And  I remember that it was extremely painful. Because   I was dying for it to be there, and to sort of  say, “Hey, here you go. Here is an interesting   and extremely difficult example where Facebook  should not be making the decision on whether   that content should be up on this site or not.”  And I say the site, it's an app. It's a platform.

Another example would be,  right after the Capitol riots,   again, the same, I guess—I don't know whether  he was formally former President or not. But   sort of said things like, you know, “We love you.  You're very special. Great patriots,” and so on,   you know, talking to the rioters. And then,  that was like the straw that broke the   camel's back, right, in a way. Then Facebook  indefinitely suspended Trump at the time. [00:43:16] And then, I think   the fascinating thing is that then, Facebook  still did, what I think is the right thing,   and sort of went to the external oversight  board, once it was stood up, right, in 2021,   and said, “Hey. This is what we decided on. Here  is the data. Did we do the right thing?” And the   way the external oversight board came back, was  really interesting. They said, “Well, yes and no.  

Yes, that content was unacceptable  and should be taken down.   But no, you cannot indefinitely suspend anyone,  because you haven't defined what that means,   and in what conditions you would do that, right.  So you've got to clean up your rules, right.” So I think what was really interesting, is that   the external oversight board provided feedback in  two different ways. One was very specific, right,  

on a piece of content and a behavior. But the  other one was feedback, even about the governance   itself, right, saying, like, “Hey, improve  your rules.” And I think that is us, right. [00:44:10] And so I think we need to see   a lot of that. But what I don't know, and  another sort of plug here, is for Gillian  

Hadfield, who is an amazing researcher in Toronto. She has been working a lot on regulatory marketplaces, and this idea that we need to—we need to find other ways to create regulation, because the old ways are too slow. And the reason I'm interested, the angle that fascinates me, is one of like, how do we bring democracy into the process of deciding the rules that big tech companies need to create, in order to operate? KAREN: Yeah. JOAQUIN: That was three. So I have one more. [laughter]

KAREN: I'm going to interject right before you  get to that last one, and just remind people that,   if you have questions for the Q and A portion, you  can put it in through the Q and A feature on Zoom.   And I see that there's already one question. And  so I'll get to that shortly. But if other people   want to pop that in while Joaquin talks about  the third one, I can [simultaneous conversation] [00:45:18] JOAQUIN: The fourth, the fourth, yeah, yeah, yeah.   So I wanted to have a prop, and I didn't get  organized. My prop is a book. And the book is  

called System Error: Where Big Tech Went Wrong and How to Reboot, by three phenomenal authors. They teach the CS-182 course at Stanford called “Ethics, Public Policy, and Technological Change,” I think. I might misremember. But the important thing is that Rob Reich is actually a philosopher. Mehran Sahami is a computer scientist. And Jeremy Weinstein is a political scientist. [00:45:58] And the reason I'm bringing this as an example, is that book talks about the dangers of the optimization mindset, which is, “Hey. The mindset I grew up with. I'm an engineer, right. I like to optimize things.”

And the problem there is that you might  incur failures of imagination. And one   of the things that has been very painful,  for me, in my last years at Facebook, was   people would tell me, “Why are you working  at Facebook? It's an evil company.”   And I would just not understand, because that's  not the reality I was living on the ground.   I'm like, “Listen. I can give you  countless examples where we've done  

risk assessments and have not shipped something because it wasn't ready.” And zooming out, when you think about what might be happening, is less about anyone being evil, and it's more about systemic issues, where the culture, the approach to things, is just not diverse enough. Like you can't only have engineers making the most consequential decisions. You need the Rob Reich philosophers. And you need the Jeremy Weinstein political scientists. And you need diversity across not only, you know, your education and background, you need to really sort of embrace diversity and inclusion, and make it count. You need to give it teeth.

[00:47:21] KAREN: This is   something that we've talked about a lot,  that this system optimization mindset,   and how—Because I was also trained as an engineer,  and how I sort of went through a similar journey,   as starting to realize the gaps in that. And one  of my favorite quotes is like, “Engineering is   all about the how. And humanities is all about  the why.” And that's part of the reason why you   need interdisciplinary teams to talk about these  issues, because you need to figure out whether   or not the question you're even asking, or the  problem you've scoped, is even the right problem. We have a bunch of questions that are starting  to come in. So I'm going to start taking some of  

them. And then I have a couple other questions as well, that I'm going to try to weave in. But the first question from Hamid is, recent events in Ottawa—so this is referring to the trucker convoys—and elsewhere have shown that extremism has found its way into the virtual world. Given the recent revelations about Cambridge Analytica, in what way could democratic societies protect citizens and their democratic systems? Will it be through more public oversight and regulations of the algorithms, and/or metrics, or better enforcement of corporate tax systems, et cetera? [00:48:34] JOAQUIN: I think it's all of the above. I don't think that I have one single answer to this. Maybe I'll bring up—So I think a lot of what we've discussed already addresses the question, like the four buckets of solutions that I mentioned are the ones that come to my mind. There's one more that I just thought about. In the same way as sometimes we have public health education campaigns that

say, “Wash your hands. Because if you do, then you'll kill germs.” And I think—Oh, my memory is—There is another Belfer Fellow—oh, who used to be Chief Technology Advisor to Obama. Damn it. You know who he is. It'll come back to me. Patel, yeah. He used to talk a lot about the example of the impact of people starting to wash hands in hospitals and so on, or like in the medical profession.

Believe it or not, there was a time when doctors and nurses wouldn't wash their hands. [00:49:44] And so I think that we need to invest a lot in the public understanding of social media and educating people. Just to bring in a bit of optimism here, in the middle of this very stern conversation, I am actually encouraged sometimes. The flip side of my kids spending so much time on social media, is that we have these amazing breakfast conversations, and also it's amazing to see them educate their grandparents. Like, “Hey, grandma,” you know, in German to their German grandma, or Abuela to the Spanish one—or Abuelo, you know, the grandpas—“Like, by default, you shouldn't believe it when you see people share something on WhatsApp with you. By default, don't believe it. It's like spam.” And they're like, “Why would people do that?” And my kids kind of like explain it to them.

[00:50:30] So I think—I think—I'm hopeful that, at least in some of the younger generations, people are developing some sort of an immune system of sorts. But again, like I'm an optimist. But yeah. I think investing very heavily in public education, in addition to all of the regulatory steps, from transparency, accountability, and participatory governance, I think, is necessary. Will it be sufficient? I don't know. KAREN: There's another question here. It's a really hard one. So it says, it's from Derek.

It says, you pointed out the Facebook oversight board model of accountability doesn't scale well. So are there accountability models that you think do have the potential to operate at scale? [00:51:13] JOAQUIN: I don't know. I came across papers on, I think it was called fluid democracy, or something like that, which, to my computer scientist brain, translates to a tree. And trees can be efficient constructions, right, where you could imagine—Well, going back to social structures that, through the centuries have ensured that social norms sort of prevailed, right, like you have maybe the elders in the village. And then the elders in the county. And then the elders in the region, and whatnot.

It has to be localized and in context, right. I  was saying earlier that freedom of speech means   something very different in China, in the US, and  in Germany, and in Spain, right. And there's no   right or wrong. But you need to localize it,  right. So the question is, it's again, like,   do we need to sort of reinvent democracy in a  certain way, make it very fluid, and figure out   how do people elect their representatives  that will actually make those decisions? KAREN: I'm going to ask you a question  before I move on to more audience questions.  

So one of the things that you invested a lot of  energy in, at the end of your time at Facebook,   was diversity and inclusion. And  that's sort of part of, I think,   it folds very much into this conversation of  finding solutions, and how companies need to   shore things up internally, to actually  tackle some of these issues head-on. So why   was that important to you? And how do you see that  fitting into this conversation that we're having? [00:53:00]  JOAQUIN: NeurIPS 2019. So, for those of you who  don't know, NeurIPS is one of the main machine  

learning and AI conferences. I've attended, I think, all of them since the year 2000. Yo-Yo Ma, the famous cellist, was invited to a workshop. He came. We chatted. And then, I was lucky, we bumped into each other. I had a few minutes

to talk with him, which was incredible, one-on-one. And I told him I was working—I was starting to work on responsibility. I was starting to understand how do you help people trust AI? So I asked him that question. I said, “Well, what would make you trust AI, Yo-Yo?” And he said, “Well, the most important thing is I need to understand who is behind the AI. Who built it? What are their motivations, their concerns? Who are they? What are their lived experiences?” And I thought, okay. It's not looking super good, because it's a bunch of people like me. [laughter] [00:54:04] And, of course, every company has been talking a lot about diversity, equity, and inclusion, D&I, inclusion and diversity, you know, the order and the acronyms change and so on. And I think I was frustrated by the fact that it was a lot of talk. But I wasn't seeing the

results. And so, I decided to dive in and try to understand, well, systemically, what is going on? And I realized that the problem is that I didn't think leaders were being held accountable for creating an inclusive culture, or for evolving recruiting, to really create equal opportunity. Although I need to emphasize, the most important piece in diversity and inclusion, in my opinion, is actually the inclusion part, so actually what happens with the people who are on the team now. You don't address diversity and inclusion by hiring a black female into your team, or by hiring an Asian transgender person into your team, or a veteran, or whatever it is. That's great if that means that you have a consistent process that gives equal opportunity to everybody, and these people were great. Super. That's really good.

on hard things. You know, we teamed up with HR, with legal, and said, “Well, how do we change our performance review system, so that, how much we pay people, whether we promote them or not, and so on, actually depends on very clear and concrete expectations?” So again, we did another session for this, to go into the details. But we did this. We shipped it. And I feel very happy about that step. That doesn't solve the problem. But I think creating hard accountability, right, and making what people get paid depend on clear diversity and inclusion expectations, is essential. [00:56:01] And then people sometimes say, “Well, but how does this help the business?” Look. It's very simple. I think you make better business decisions by having a much more diverse and inclusive workforce. And you can anticipate problems that you didn't see, right. And going back

to some of the things that we discussed earlier, and maybe the Ottawa question. I think that technology companies need to have philosophers, more philosophers, political scientists in the leadership team, at the very top. And that they need to be put in roles that have teeth on an even level with engineering. And obviously, those are functions and disciplines. I think that diversity and inclusion needs to sort of extend beyond there. KAREN: Yeah. I mean I think one of the things that we've talked about before is that the side effect of bringing more diverse lived experience is that you also have more diverse expertise, or whatever you want to call the expertise. Because a lot of the people who

are from more marginalized backgrounds that  don't typically appear in these types of roles,   are actually in the other disciplines. And they're  like—it's because they're, for whatever reason,   they're pushed away from tech into perhaps  a social science, or from AI into AI ethics.   And so there's like a really complementary  effect that happens when you open up   the table, I guess, to all of  these different perspectives. So I'm actually really curious, because  we've never talked about this, is like,   what actually happened once you implemented  this at Facebook? Like have you seen noticeable   differences yet? Or do you  think they're still to come? [00:58:08] JOAQUIN: I think they're still to come. These   things take a long time. What I can say, is that  the level of awareness increased dramatically,  

right. And people would actually discuss,  systematically, diversity, equity, and inclusion   during performance reviews, which is something  that happens at least twice a year. And I say   at least, because you have the big ones, and  the small ones, and the informal ones. And  

if you're a manager, you spend your time doing performance reviews in one shape or another. Making something top of mind, I think, is a crucial first step. And that did happen. [00:58:48] There were a lot of pretty awesome improvements to recruiting processes, which were already pretty good. And I also saw a lot of—I saw a change towards really acknowledging and rewarding people who were investing in building community. And you and I have talked about this, in some of our conversations, this triple whammy of sorts, right, where I would be in meetings, working on diversity and inclusion and incorporating that into performance review. And I would look, and I'd realize that I was the only white male in the meeting. And so everyone else

was donating some of their time, right, to work on something that might not even be rewarded, right. So first of all, working on this has an opportunity cost. You could be working on something you know is rewarded, right. Second, well, this work you're doing doesn't get rewarded. And third, you might even be

perceived to be annoying, right, like a fly  in the ointment or something like that, right.   And sort of say, like, “Listen. You're  annoying. Can you get away? Like we have   some business goals here, right, to achieve.”  I lost my train of thought. [laughter] [01:00:12] KAREN: Yeah. Well no. I mean the thing   that I love about—The thing that I love about this  particular anecdote that you're talking about, or   these reflections, is the fact that specifically,  that you decided to take on that triple whammy   as a white man. Because typically, the reason  why women or people of color are the ones that   take up the triple whammy is because, for them,  it's an existential thing. Like they have to do  

it. Otherwise, they're not going to survive at the  organization if it doesn't get better on certain   fronts, or they're not going to progress in  their career if these things don't get better.   But for someone where it's not an existential  crisis, to actually put their weight behind that,   and lend that, I think that's—yeah,  it's really incredible. And it's a   very good demonstration of how to  be a good ally in those situations. We've spent most of this time talking  about like what companies can do. I  

do want to just briefly touch on, what  are some of the things that people who   are not inside of these companies can do,  whether that's civil society, or the press,   or regulators? We've already touched on some  of the things that you think regulators can do.   But what are some of the things that  should be happening outside of companies,   very explicitly, to help us push and topple these  problems around social media recommender systems? [01:01:30] JOAQUIN: Yeah, a couple of things.   Let me start with the job seekers, right. We are  seeing a pretty dramatic revolution, I would say,   in the labor market, where I guess people talk  about the great resignation. Others talk about  

the great reshuffle. People are choosing to leave their current job and thinking about where they want to go. Well, one of the first things to do is inquire about the values and principles of the company you are going to, right. Like job seekers have more leverage now than

probably ever in history, right. So I  think this is a great time to do this. [01:02:14] And it's working,   right. LinkedIn published this really interesting  study, which is, within 12 months—and I'm sorry if   I'm misquoting the numbers—but they approximately  go like this. Within 12 months, we've gone from  

one in 67 jobs offering hybrid employment, where you could work from home for some part of it, to one in six, right. That's crazy. It works, right. So markets are very powerful, for sure. Speaking of markets, I forgot to mention this. You know, I guess another one is to further encourage competition, right. There is something we will have to talk about some other

day, but I'll just plant it here in case people want to look it up. And probably most people at the Belfer Center will know about this work. Stephen Wolfram proposed this idea of saying, “Hey, listen. Maybe we should force companies to open up, to a market of ranking providers, right, where maybe you could have”—And, by the way, the engineering design of this would be extremely difficult, right. And there's a lot of privacy concerns. But bear with me for a second. [01:03:28] I know Twitter is pretty excited about this. I have good friends there who have been vocal about this, right. So where you could almost have

like an API or way to plug in your own ranking  provider. And again, right, I don't know. I   love the outdoors. Maybe there's even like the REI  ranking provider. I'm kidding. But, you know, you   could choose like the Fox News ranking provider,  the NPR ranking provider, right. Like I don't  

know. I listen to German radio, Deutsche Welle ranking provider, whatever it might be, right. So I think competition on marketplaces could play a big role here. There's probably more ideas. Maybe I'll throw one more, which is, let's have more courses, like CS-182 at Stanford. And I know, with all respect, I know there's the equivalent in most schools these days, where you see philosophy, political science, and technology converge, and really create a forum where we just really imagine how society should function.

KAREN: And CS-182, it's their  ethics and computer science class? [01:04:47] JOAQUIN: Yeah, that's   the ethics, public policy, and technological   evolution. I don't want to get derailed and  start typing. It's almost that, the title. KAREN: Yeah. This is a good segue into this  last question, because you were talking about   some studies on just the future of work. And  we started on a personal note about how you   made your way to Facebook. So I just wanted to  end on a personal note, of where you're going  

to head next. Because you've left Facebook.  And you recently went on your own journey   to figure out what you want to do, continuing  to work sort of in this responsible technology   space. So what was the process that you went  through? And tell us where you've landed. [01:05:25] JOAQUIN: Yeah, yeah. Well,   I've just joined LinkedIn, as it happens.   And this was—And I'm super excited. At LinkedIn  I'm going to be a technical Fellow focused on AI.  

But what really excited me was, on one hand, the mission of the company—
