Economics and Ethics of Misinformation


I'm going to hand things over now to Mr. Pat Sorek, who's been collaborating with our team here at Duquesne in the launch of the Grefenstette Center. Pat, please take it away.

Thanks Darlene. It's my great pleasure to introduce the next panelists. First off we have Jane Campbell Moriarty.

She is the Carol Mansmann Professor of Faculty Scholarship and holds a chair at Duquesne Law School. She has risen steadily up the ladder of leadership positions at Duquesne Law School and other law schools. She's an expert in areas of scientific evidence and law and neuroscience, among others. She has won awards for her scholarship, and I know she has made singular contributions to excellence at Duquesne University Law School. She's being joined in this panel by

Dr. Michael Quinn. Dr. Quinn is the Dean of the College of Science and Engineering at Seattle University, in another one of the hotbeds of software engineering besides Pittsburgh. His work likewise has taken him to positions of academic prominence. He covers an incredibly diverse array of subjects in programming and now has lent his expertise and mastery to ethics in the information age. Jane and Michael, it's great to have you here, and

please proceed.

Thank you Pat for the kind introduction. It's a pleasure to have the opportunity to speak to you today about the Economics and Ethics of Misinformation. A few weeks ago the Pacific Coast was ravaged by unprecedented wildfires. More than 5 million acres burned in California, Oregon and Washington, and several small towns were consumed by fire. False reports that anti-fascist groups were setting the fires ricocheted across social media.

Lonnie Sargent, from Woodland, Washington, posted on his Facebook page tweets from Scarsdale Antifa and Hamptons Antifa. Some pro-Trump websites featured these stories to promote their candidate. Misinformation on social media was also an issue during the last Presidential election. In 2016 a group of entrepreneurs in Veles, North Macedonia, found a way to profit from the dissemination of misinformation about the candidates. The most successful of these teenagers were able to use websites, Google advertising and Facebook to earn five thousand dollars a month, more than ten times the average income for that country. Here's how they did it. They created a website with a reasonable-sounding name like USA Politics, and they signed up for Google AdSense, which lets Google auction off advertising space on their web pages. They created fake Facebook user accounts pretending to be Americans. Every day they found

an outrageous story, copied it to their website and inserted advertising spaces into the story. After that they logged into Facebook, found groups that would be interested in the story and posted links to it. The entrepreneurs were apolitical, but they quickly found they could make more money by posting pro-Trump stories than pro-Clinton stories, because pro-Trump groups had more members and pro-Trump stories were shared more widely. This diagram shows how it works.

First you create a website. Then you post a provocative pro-Trump or anti-Clinton story on the website, with blank ad space that can be auctioned off by Google. You log into Facebook and post a link to the story in pro-Trump affinity groups. Other Facebook users read the post. Some of them will click on the link and go to the website,

and a few of them will click on an advertisement. When users like or re-post the story, that's when the magic happens: Facebook alerts the user's friends. Some of them will follow the link to the website, and a few of them will click on an advertisement. When they like or re-post the story on their own Facebook page, their friends are alerted, and so on. Periodically Google sends a payment based on the number of ad impressions and click-throughs.
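To make the economics concrete, here is a minimal back-of-the-envelope sketch in Python of how impressions and click-throughs turn into revenue. The traffic and payout figures are illustrative assumptions, not actual AdSense rates or numbers from the talk.

```python
# Rough revenue model for an ad-funded fake-news site.
# All constants below are illustrative assumptions.
IMPRESSIONS_PER_DAY = 50_000       # assumed page views driven from Facebook groups
CLICK_THROUGH_RATE = 0.01          # assume 1% of visitors click an ad
REVENUE_PER_CLICK = 0.30           # assumed average payout per click, in USD
REVENUE_PER_1K_IMPRESSIONS = 0.50  # assumed payout per 1,000 impressions, in USD

impression_revenue = (IMPRESSIONS_PER_DAY / 1000) * REVENUE_PER_1K_IMPRESSIONS
click_revenue = IMPRESSIONS_PER_DAY * CLICK_THROUGH_RATE * REVENUE_PER_CLICK
daily_revenue = impression_revenue + click_revenue

print(f"Estimated monthly revenue: ${daily_revenue * 30:,.2f}")
# With these assumptions: (50 * 0.50) + (50,000 * 0.01 * 0.30) = $175 per day,
# or about $5,250 a month, in line with the figure quoted above.
```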

The work of these Macedonian teenagers was overshadowed by the Russian effort to spread disinformation during the 2016 Presidential campaign. Given the closeness of the election, some people have speculated that this effort was enough to swing the election to Donald Trump, but researchers have not found strong evidence that this was the case. A study by Andrew Guess, Brendan Nyhan and Jason Reifler reached the following conclusions: about a quarter of adults visited a fake news site at least once; even though they were exposed to fake news, these adults still saw plenty of hard news; and fake news did not influence swing voters. The harm of fake news is that it creates echo chambers for extremists, as Kathleen Carley explained earlier. Facebook is often the vehicle for the dissemination of fake news. Twenty years ago Cass Sunstein, in his book Republic.com, warned that information technology might weaken democracy by allowing people to filter out news that contradicts their views of the world. The situation now is even worse than he imagined. It's not just that people can find media sources

like cable TV channels that align with their views; now we have platforms like Facebook actively pushing content at people. Facebook's goal is to keep people engaged on its platform. It does this by building profiles of its users and feeding them the stories they are most likely to find engaging, meaning the stories that align with their world views, as Michael Colaresi talked about. Over the past 25 years Democratic attitudes toward the Republican party have become much more unfavorable, and vice versa. This is an example of the political polarization that David Danks

described. To the extent that unfavorable views are based on falsehoods, that's harmful to our democracy. Let's return to Lonnie Sargent. If you visit Mr. Sargent's Facebook page you'll see lots of posts about trucks and hot rods, but he also re-posts stories like this one. Is Mr. Sargent ethically responsible for the misinformation he spreads? You might argue he was acting out of good will, trying to warn his friends about a legitimate danger, but a simple Google search returns many sites revealing that posts from Scarsdale Antifa shouldn't be trusted. Instead, Mr. Sargent uncritically accepted evidence that affirmed his worldview,

which is an example of the well-known phenomenon of confirmation bias. This is a growing problem. As Pamela Walck pointed out, there's been a decrease in the public's ability to filter out misinformation. At the very least, Mr. Sargent spread a false rumor; his actions may have panicked some people and encouraged others to take violent action against innocent strangers. In the meantime, social media sites are feeling the heat for being conduits for false stories. They are stepping in to stop the spread of misinformation

and disinformation. Twitter has suspended the account of Scarsdale Antifa. If you visit Lonnie Sargent's Facebook page today, you'll see a gray window covering the Scarsdale Antifa post; a message warns that there is false information behind the window. If you click on the white See Why box you'll find a reference to a USA Today story refuting the rumor. What are our personal responsibilities as consumers of information? First, we need to understand that Facebook constructs profiles of its users and attempts to keep them engaged by feeding them content it thinks they will like. That business model leads to the creation of ideological echo chambers. Second, we need to understand confirmation bias. Our brains are pre-wired to uncritically accept

information that conforms to our views and filter out information that contradicts them. Third, we need to be skeptical. All information is not created equal. Has the author identified himself or herself? What are the author's qualifications? A website may look neutral, but that may be deceiving; if the website is affiliated with a particular cause, then you should look at it more skeptically. Have fact checkers weighed in on the story? What do fact-checking sites like Politifact.com, Factcheck.org or Snopes.com say about the story? Are the images authentic? Is the author making logical arguments? Before reposting a story, you should deliberate, particularly if the story affects you emotionally.
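As a concrete illustration of the fact-checking step, here is a minimal sketch of querying published fact-checks programmatically. It uses the Google Fact Check Tools claim search API; the API key and the sample claim text are placeholders, and the response fields shown are based on that API's documented schema.

```python
import requests

API_KEY = "YOUR_API_KEY"  # placeholder; requires a Google API key

def search_fact_checks(claim_text: str) -> list:
    """Look up published fact-checks matching a claim."""
    resp = requests.get(
        "https://factchecktools.googleapis.com/v1alpha1/claims:search",
        params={"query": claim_text, "key": API_KEY},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json().get("claims", [])

# Hypothetical claim text, echoing the rumor discussed above.
for claim in search_fact_checks("antifa setting wildfires in Oregon"):
    for review in claim.get("claimReview", []):
        publisher = review.get("publisher", {}).get("name", "unknown")
        print(f"{publisher}: {review.get('textualRating')} - {review.get('url')}")
```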

Make sure you take the time to be a smart consumer of information, as described on the previous slide. You should also reveal the sources of the information, ensure your own claims are based on sound, logical arguments, and hold yourself accountable by revealing your identity and qualifications. Following these standards would be characteristic of a person who is responsibly consuming and sharing information on the Internet. We can use these standards to examine Mr. Sargent's posting of the Scarsdale Antifa tweet from a virtue ethics point of view. Mr. Sargent didn't check out the Antifa tweet; he didn't discover that no one claims authorship of the site and that it contains parodies rather than substantiated stories. He didn't visit Snopes.com or another

site to fact-check the story. In short, he didn't deliberate before re-posting the story. To his credit, he put his name on his Facebook page and shared the source of the information. Mr. Sargent certainly didn't appreciate having one of his posts called out by Facebook fact checkers. This is what he posted after Facebook flagged the Antifa story. Okay, it's safe

to say Lonnie Sargent isn't the most sophisticated consumer of information on the Internet, and you may have told yourself you never would have fallen for that story. But before you get too smug, consider these examples from the mainstream media. Fortune reported that C-SPAN was hacked by Russian television. The Washington Post reported that Russian hackers penetrated the US electricity grid. CNN reported on a meeting between Donald Trump's Presidential transition team and a Russian investment bank before President Trump's inauguration. NBC News reported that Russia was behind sonic attacks that sickened 26 U.S. diplomats in Havana, Cuba.

All of these stories turned out to be false, but they generated a lot of buzz on social media before they were retracted. These stories should serve as cautionary tales for all of us: we need to slow down, take a breath and spend some time triple-checking the facts, as Michael Colaresi said, before spreading juicy stories throughout social media. And I agree that we need to emphasize the development of media literacy skills in our schools. Thank you very much.

Thank you very much, Dr. Quinn, that was terrific.

Let's have a conversation about some of these issues. The title of your presentation focuses on the economics and ethics of misinformation. We've heard a lot this afternoon about misinformation and disinformation, and we've heard a little bit about the difference between the two. Is one more problematic than the other, and does misinformation spread more easily than disinformation?

I don't think you can make a generalization. I think the impact depends more on the content than the intent.

What were we looking at in the examples from your last few slides? Were those misinformation, disinformation or both, and how can we tell?

I suppose it's a judgment call, because disinformation means information sent with the intention of swaying public opinion, or propaganda, and so to some extent it means understanding the motive of the sender, and I would say that's typically a judgment call. Although if you can trace it back to Russian interference in an election, I think it would be fair to say that would be a disinformation campaign. But if you're talking about particular individuals,

I'm a little bit uncomfortable saying for sure what's in that person's mind when they're re-posting a story.

With the exception of the Russian bots, where we're pretty sure. Exactly. So let's get back to the economics piece of this discussion we're having. Who else is making money off of this? Are social media giants making money off of this? Are influencers? We're all familiar with people like the Kardashians, who apparently influence millions. Who's making the money off of misinformation and disinformation?

Well, I think certainly the two biggest money makers would be Alphabet, the parent company of Google, and Facebook. They're selling the advertising, and they're making their money by keeping people online, engaged and exposed to advertising, so that they can make money either off the impressions or off the clicks. Those are the two biggest money makers, although there are certainly plenty of private individuals who are also making money off of it even today, as we saw with the Macedonian entrepreneurs in the last election.

So let's assume Facebook, and we don't have to assume, we know, Facebook's making money off of misinformation and disinformation, as is Google, or rather Alphabet, since Alphabet owns Google. How do we stop that, and furthermore, should we stop that? Is that up to Congress, society, the people who use it? What do we do about this? Who stops the flow of money?

There was a Stop Hate for Profit movement in July: more than a thousand organizations said they would stop advertising on Facebook during the summer. In the end it didn't put a very big dent in Facebook's revenues; I think their quarterly revenues were down maybe one half of one percent as a result of the campaign. But I think Facebook understands that

it has some responsibility, that it can be bad for the brand if it is seen as a conduit for fake news or misinformation. So even if their motive is simply to protect the brand or to keep people on the platform, I think they have had some motivation to do some work. Consider what they're doing to reduce the economic incentives. I know this contradicts what an earlier speaker said, but Facebook thinks that a great deal of this information is pushed by an economic incentive, so if they can remove the money-making from the process, then they can reduce the spread of the information. They're trying to reduce the creation of fake accounts,

so you can't have Macedonians pretending to be Americans. And they're now using fact-checking groups to go in and actually suppress stories. They suppressed 40 million posts in the spring with false information about Covid-19.

So are they doing a good job?

I don't know; I'm not convinced they're doing a good job. I think they're working the problem, but it's a very big problem. It really is remarkable how many people believe a lot of false information and simply cannot be shaken from those opinions, no matter how many fact-laden stories you provide them with.

Which brings us to the ethics part of your title, Economics and Ethics. Are these compatible, or are they utterly incompatible? There's a joke about business ethics and legal ethics, that these are oxymorons. As a professor of legal ethics I like to think that's not true, but is economics inconsistent with ethics or not?

I think if your goal as a company is to maximize profit, then you're going to run into more ethical issues for sure. You're going to be crossing some ethical boundaries,

and this example was brought up earlier, but if Facebook can make more money by feeding people stories that they like, because that keeps them attached, then why should they feed people a variety of stories, even though that might create a social benefit by giving people exposure to a greater diversity of political views? If Facebook were really interested in social good, they might be trying to make sure that people encounter ideas they disagree with. But that's not their model; their model is to build a profile of a user and then to try to feed the user things they're going to enjoy seeing.

So are we perhaps expecting too much from a social media platform to have really strong ethical guidance governing what they decide to post and not post?

I think different companies have different philosophies. In general it's fair to say that the typical corporate vision of success or social responsibility is different now than it might have been 50 or 60 years ago, when Milton Friedman was talking about maximizing profit as the only goal of a corporation. Many corporations are looking to do more than just return shareholder value; they're thinking about making the world a better place. So I think you're going to find a spectrum

of views from a variety of companies. I think Facebook is fairly notorious for being out there on the side of "let's try this and we'll pull back if there's a big public protest." I'm thinking about their Beacon campaign, way back in 2007, which was one of their earlier advertising campaigns. There were such howls of public protest that they withdrew the Beacon offering, because it just clearly crossed the line. It was not something that was going to keep people on the platform.

Well, if you remember, the origin of Facebook was

ranking women's looks at Harvard, so I guess it's not a big stretch to see where we are. You and I talked before this, and we discussed the famous New Yorker cartoon about the dog sitting in front of the computer who looks at his little dog friend to the side and says, on the Internet no one knows you're a dog. This is a really big problem for two reasons. The first is of course anonymity: you can be anybody you want on the Internet. And the second is expertise. Unlike books, for example, which have publishers, editors and hopefully fact checkers, we often have no idea where information originated, and we often don't know who is and who is not an expert. I think I'm a very good consumer of social media information,

but I'm sure everyone thinks that. And I've noticed that at times I don't know exactly whom I'm following on Twitter for health information. I usually try to dig deeper, but often it's hard to tell. Is this concept of expertise simply too old school to survive in the social media world? Does everybody have an opinion, and is everybody's opinion valid? I teach expert evidence, so I immediately think no, it's not. But what do you think?

Well, of course there are people with more expertise than others, and we still need experts. I agree with you, it can be harder at times to

identify the experts, and it really comes down to being skeptical, particularly if it's a person that you don't know well, or who doesn't have a track record or a reputation. If you see some information, it's so important to try to chase it upstream: where did it come from? One of the tools that I think is just amazing, and this is an example of computer scientists perhaps helping address the problem, is that Google has a reverse image search. You can click on an image you see in a web page,

get the URL of the image and feed it into images.google.com, and it will show you other places where that image has occurred, so you can find the earliest use of that image. That's a great way to find out where people have taken an image from an older event and are using it to characterize a new event. For example, some of the stories about the rioting in Portland, Oregon are using pictures taken from natural disasters from years ago, but they're using those images to give the idea that the city is in chaos and the entire city's being burned down, or something like that.
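For readers who want to automate that first step, here is a minimal sketch that builds reverse-image-search URLs to open in a browser. The query formats for Google Images and TinEye are assumptions based on their publicly used endpoints and may change; the sample image URL is hypothetical.

```python
from urllib.parse import quote

def reverse_image_search_urls(image_url: str) -> dict:
    """Build search URLs for tracing earlier appearances of an image."""
    encoded = quote(image_url, safe="")
    return {
        # Assumed endpoint formats; both services may redirect or change these.
        "Google Images": f"https://www.google.com/searchbyimage?image_url={encoded}",
        "TinEye": f"https://tineye.com/search?url={encoded}",
    }

# Hypothetical image URL for illustration.
for engine, url in reverse_image_search_urls(
    "https://example.com/photos/city-fire.jpg"
).items():
    print(f"{engine}: {url}")
```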

Let's look at the questions on the chat line. One is from Leela Toplek: what's the responsibility of tech companies that are not social media networks but whose consumer or enterprise technologies may be used to amplify or share misinformation, for example marketing clouds, CRMs, etc.?

I had a conversation last year with Brad Smith, the President of Microsoft, and he was talking about the role of companies. Of course Microsoft is really working hard to be seen as one of the good guys, promoting values in technology, and he compared a lot of what they do to a public utility, in the sense that you can't simply withdraw a service because some people might be using it poorly. So it is a difficult issue to think about. Part of me feels like saying that if we could just slow down a little bit, so there could be more critical thinking before things get passed along, that would help a lot. Should technology be used to slow things down? But of course news means

current, right? And if you slow down something too much, then it's not news anymore. So there's a tension between wanting to know what's happening, wanting to know what the news is, and this competition to be out there first, to break the story, which pulls in the opposite direction. It's really a case, like so much in human life, where we have to try to hold the tension and find the middle ground between one extreme and the other.

One of the problems, of course, is that if we slow down one piece of social media or the Internet, another one arises to take its place. It's a whack-a-mole problem: when you start trying to push one site down, another one's going to pop up more quickly.

We're in a largely unregulated environment, and that's why a lot of this is happening. It's been interesting to see in the state of Washington how Microsoft has stepped forward and proposed certain regulations around the use of artificial intelligence, facial recognition and other things, simply, as they put it, to serve as guard rails: we don't want to go off the road with this thing. So some companies are advocating regulatory guard rails to at least keep the behavior within some norms of reasonableness. But then the question is whether that's the way our legislators want to go, or the public will support.

Yeah, well, thank you very much. I think we're running out of time here. We're at the end, and I think we're very close to keeping the schedule. So I

think I will just turn it back over to Darlene. We've been great about keeping to the schedule, and it would be great if we continue to do that. Thank you Jane and Michael for some really topical and highly informative exchanges on those issues, especially about Facebook, which as we know covers 1.8 billion people and crosses more state borders than any other feature of human life. So thank you very much for that. Darlene, you've got it now?

Yes, thank you so much, and thank you Dr. Quinn and Professor Moriarty for that wonderful session.

