Hello, everyone. It's now the top of the hour, so I'm gonna go ahead and get started. Thank you so much for joining us today for this exciting webinar.
I do have a few housekeeping items to cover while folks continue to filter in. The closed captioning link has been pasted in the chat box. We are recording the session today, and you will be emailed a link to the recording once it's processed and uploaded. All of your microphones are muted, and we'll be taking questions at the end of the presentation.
You can type any questions you have in the chat box; please make sure that you're sending to all participants when you do so. I will do my best to collect all questions asked in the chat box throughout the presentation, so feel free to type them at any time. You don't have to try to remember them until the end of this webinar. This webinar does grant one MLA CE. To claim it, please fill out the survey that will pop up in your browser after you exit the webinar. We will also share the link to the evaluation in the chat box at the end of the webinar.
If for some reason you do miss the link, that's no problem at all. Just send me an email; my email will be shared towards the end of the presentation, and I can share the direct link to the evaluation. Before we dive in, in case we have audience members who aren't yet familiar with the NNLM, let me give a quick overview of who we are and what we do. The Network of the National Library of Medicine, or NNLM, serves as an outreach and engagement arm for the National Library of Medicine, which is part of the National Institutes of Health, or NIH.
NNLM provides training, funding opportunities, and more to a wide variety of member organizations to help support our mission of advancing the progress of medicine and improving the public health. Institutional membership is free of charge, and joining the network is very easy. For more information about NNLM, please visit nnlm.gov, or connect with your Regional Medical Library at nnlm.gov/region. Now, onto the fun stuff. Again, welcome to the second installment of NNLM's new fall webinar series, Identifying and Combating Health Misinformation.
My name is Kelsey Cowles, and I'm the academic coordinator for the Middle Atlantic Region of NNLM, which is based at the University of Pittsburgh. I'm one of the organizers of this webinar series, which is hosted by NNLM's Wikipedia Working Group. The Wikipedia Working Group, which is composed of NNLM staff from across the country, organizes biannual Wikipedia editing campaigns aimed at improving the health information on Wikipedia, which is one of the world's most widely used health information resources.
And I just have a few items that I'd like to plug that might also be of interest to you since you're here today. First, you can view the slides from the first misinformation webinar installment on the course page, which will be linked in the chat box. That webinar discussed health misinformation, particularly surrounding COVID-19. The course page is also where you can download the slides and resource guide for today's webinar: just go to the course page link and scroll down to course materials, and there you'll find the links to these documents. Next, please join us for our third and final webinar this fall on October 14th, Evaluating Health and Medical Information on Wikipedia, with Wikipedian Dr. Monika Sengul-Jones.
A link to register will be in the chat box for you momentarily. October is a really busy month for the Wikipedia Working Group. In addition to our final misinformation series webinar, we're also running a month-long Wikipedia editing campaign surrounding the topic of maternal and child health. For more details, please see the link in the chat box.
No editing experience or technical expertise is necessary to join us, and you can commit as little as a few minutes to contributing. Or you can join us for a virtual editing event on October 29th, where you'll get a brief training session and then join colleagues, students, and NNLM staff in collaborative Wikipedia editing. A registration link for that event
will be in the chat box. And finally, if you're interested in gaining a deeper understanding of Wikipedia, its health information, and how you can utilize it in your library, we're offering a free four-week asynchronous online course, Wikipedia + Libraries.
It begins October 5th, and registration is still open. Again, the registration link for this will also be in the chat box. Now, without further ado, allow me to introduce today's guest speaker, Dr. Leticia Bode. Leticia is a Provost's Distinguished Associate Professor in the Communication, Culture and Technology master's program at Georgetown University.
She researches the intersection of communication, technology, and political behavior, emphasizing the role communication and information technologies may play in the acquisition, use, effects, and implications of political information and misinformation. Leticia, thank you so much for joining us today. We are thrilled to have you with us, and I'm gonna go ahead and give you the presenter ball so that you can take over the slides and go ahead with your presentation.
Thanks so much, Kelsey, for having me. Yes, we've made it through the requisite teleconference questions that you have to start with. I'm really excited to be here today.
And by here, obviously, I mean my basement. It is a strange world that we live in. I hope that you will not hear my four-month-old, who is upstairs with my husband, and I hope that you will not hear the construction that's happening on our house right now.
And you will just hear me. I just saw somebody in the chat saying they're not hearing anyone. OK, other people are hearing me OK, so I'm just going to go on.
Everything is working. So, I am a political scientist by training, but in the last five or so years I have started working more in the area of health communication. Most of my research in political science is in political communication. I got more interested in misinformation, and I moved from political misinformation more towards health misinformation, for reasons that I'll outline a little bit later.
But I work in a variety of areas that are informed by technology and society. And today I'll be talking about correction of health misinformation on social media. This is something that I've been working on for five or six years now, and there are a whole bunch of articles listed in the resource guide where you can learn more about this, including a Washington Post article if you just want to get kind of a bird's-eye view. So to start, I want to talk a little bit about the challenges of misinformation and correction on social media.
So starting with the challenges of misinformation: you may have heard that misinformation on social media is a problem. Social media is often criticized as a source of misinformation. There aren't a lot of established gatekeepers on social media, by design. Right. The whole idea of social media is that people can share with one another without any kind of limitations. So that functions really differently than traditional media and traditional journalism.
There's also a really strong ability, again by design, to self-select into communities. The idea being that you can find people that share your interests, learn from them, and share ideas with them. Obviously, this works for good, in that people with really weird, rare interests can find one another. But it also works for bad, and we've been hearing more about the bad lately.
Right. So not just misinformation, but also kind of harmful speech and harmful interests. If you are interested in how to hurt people, you can find other people that are interested in how to hurt people on social media, in a way that maybe you can't on other platforms. And finally, virality. Again, by design, social media works really fast. Things can be shared really quickly, and that can make it really challenging to get a handle on things like misinformation when they spread so quickly. And people are worried about this issue.
People are worried about misinformation circulating on social media. So if you look on the left here, both of these are from a Pew study from 2019. The percentage of people saying that fake news is a very big problem in the country today is 50 percent, which may not sound like that much. But look and see that it's above things like violent crime, which is getting a whole lot of attention right now, and climate change, racism, and sexism, which are both having kind of a resurgence in the last couple of years. So this is definitely something that people are worried about, and they think that it affects things that they care about.
So it affects confidence in government, confidence in one another, and the ability to kind of get things done. That is part of why we care about misinformation. Now, I want to take a moment here and emphasize that misinformation on social media is something we should definitely care about, but it is not as rampant as people may think it is. This is from a study from a couple of years ago looking at people's Facebook sharing data. On the left here is a graph showing the number of links that people shared on Facebook. So lots of people are sharing links on Facebook; in the time period that they're looking at, the scale here is in the thousands. One thousand, two thousand, three thousand: tons of links being shared. Now, if we were in person, I would pause here and ask what percentage of those you think are misinformation, and get answers from all of you. But we don't have that opportunity today.
So I will just show you the other side of the graph, and then, you know, hopefully it will still resonate. But we are talking about a totally different situation.
So people share lots of links; it's not that something is keeping them from sharing misinformation. But the number of links that the average person shares that are misinformation is extraordinarily low.
So you'll see the mode here is zero. Most people are not sharing misinformation at all, and those that do are probably sharing one story and then never again. And you see this
happening among different parties as well; the data is broken out by the two different partisan groups. So that's the first caveat that I want you to take away: it's not that we shouldn't care about misinformation on social media, but there may not be as much of it as we're led to think, based on the kind of publicity it has gotten over the last few years in particular.
So today we're talking about health misinformation on social media. I focus on health misinformation in my research. Usually I have to do a whole lot to explain why I look at health misinformation rather than other types of misinformation, because I'm often talking to political scientists or other people who think about different types of misinformation. I think this audience is a little bit more willing to accept that health
misinformation is something that we need to care a lot about. So I won't spend quite as much time on this, but I'll give you a little bit of insight into why I think that this is a useful way to think about misinformation on social media. The first thing I'll say is that this is the definition of misinformation that I use primarily in my work, which comes from Brendan Nyhan at Dartmouth and Jason Reifler at the University of Exeter.
They've done a lot of work on political misinformation, and they define misinformation as people's beliefs about factual matters that are not supported by clear evidence and expert opinion. And I really want to emphasize here the clear evidence and expert opinion elements of this. This is what my coauthor, Emily Vraga at the University of Minnesota, and I really focus on when we're deciding what types of misinformation to consider. And it turns out to be fairly complicated, because there aren't actually that many factual matters that are clearly supported by clear evidence and expert opinion. This is from an article that we published earlier this year that establishes a kind of hierarchy of information issues. Most issues do not have clear evidence and clear agreement from experts; they are not what we call settled issues.
What we try to focus on in our misinformation research is that tiny top of the pyramid up there, that little triangle at the top, because that allows us to say really clearly that there's a right and a wrong here, there's a false and a true, and we are going to offer corrections along those lines. But what I want to emphasize is that the vast majority of issues, even within health misinformation, are not settled science.
So the exceptions here are things like vaccines. Right. A couple of years ago, the social media platforms got some really good publicity because they were cracking down on anti-vaccine misinformation. That's really the easy case. The vast majority of cases, including some that we've looked at in our own research, are not settled, even when you think they are. We did a study about a year ago looking at sunscreen use.
And right after we did the study, new information came out about sunscreen absorbing into the skin more than they thought it did, and for longer than they thought it did. That doesn't necessarily change the science, but it still shows that the science is in more flux than we thought it was, whereas before that, you might have thought everybody agrees you should wear sunscreen and it's safe and effective. So all I want to emphasize here is that part of the reason health misinformation is an interesting type of misinformation is because you do have a few more of these settled issues than you have in other places, like political misinformation, where
almost everything is emerging or controversial. So that gives us at least the opportunity to have a really clear dichotomy of true and false. But even there, those issues are few and far between. Another reason to look at health misinformation on social media is that people see a lot of science and health news in their social media feeds. This is another Pew study, from a couple of years ago.
And you see that about half of people are saying they're seeing science and technology or health and medicine news in their feeds. So that's pretty frequent as far as news goes on Facebook and Twitter; both are a little light on news, Twitter less so than Facebook. But the fact that people are seeing this type of news means it's important to know the types of content they're getting on these subjects and what to do about that content. And along those lines, there is frequent misinformation on the topics of health and science on social media.
This is something that platforms have really been struggling with in recent years. There was a wave following the 2016 election of clamping down on political misinformation, and then there was kind of a subsequent wave of trying to clamp down on health misinformation. They got some really bad press about bogus cancer cures that were proliferating on social media and possibly killing people, telling them not to seek traditional treatment, just, you know, drink cabbage juice and things like that. So that's a nice little tease: you're going to hear about cabbages. This is what happens when I don't have people in front of me to remind me to stay on track. So the platforms have been working on this, but a recent report suggests that they still have a ways to go in terms of curtailing these kinds of health misinformation. These are all tiny, tiny words, and I don't expect you to read any of them. What I want you to take away here is just that the yellow bars that you see are bigger than the red bars.
The yellow bars in this case are misinformation website views, and the red bars are quality health information views, things like the CDC and the WHO. This is from a report that just came out about a month ago, and they found that, based on their estimates, there were 3.8 billion views of misinformation related to health issues on Facebook in the last year. And finally, a reason to look at health misinformation is that it has a significant public impact. I probably don't have to tell anyone on this call, but if you have better, higher-quality health information available to you, you make better health decisions for yourself.
And that affects both your individual health and the health of your community. So things like eating more vegetables, getting more exercise, getting vaccinated. And I think that this has been made particularly salient in the time that we're living in right now, when a lot of the individual choices you make also have a community impact. Right. So if you're social distancing, if you're wearing a mask, all those things are going to affect not just you, but the people around you.
So that is kind of my overview of misinformation and health misinformation. There are various approaches to dealing with misinformation on social media, so I'll go through a few of those and talk about some pros and cons of them, and why I focus on the areas that I do in terms of how to approach misinformation. The first is government regulation of social media.
And this, I think, has a lot of appeal for a lot of people. It seems simple: the government regulates other media platforms fairly effectively, at least for some outcomes, so maybe we can just have the government clamp down on social media and hold the platforms responsible for their misinformation, and that will fix everything. However, countries that have done this have done it primarily in a way that's been criticized quite a lot for essentially resulting in censorship, and often censorship in a way that benefits the government rather than other actors. So a lot of governments are basically using fake news laws as a way to quell dissent in their countries, which is probably not what we're really going for here.
So there may be unintended consequences, or possibly intended consequences, depending on how cynical you are, of these types of laws. So that's not necessarily the best approach. Another approach is asking, or hoping, that the platforms themselves will engage in some of these behaviors to curb misinformation: having social media label content as misinformation, or take down content that they have decided is misinformation. Here are a couple of examples of what that looks like. The one on the left is from Twitter.
They have a variety of approaches to misinformation. They actually have a nice little table where how severe the misinformation is and how likely it is to cause harm determine their response to it. And then you see Instagram's response to false information on the right here.
They have an interstitial that covers up misinformation. You can still see it if you want to, but you can also learn more about why you're seeing this kind of false information label. So this is an alternative approach which certainly has some promise. There are two big negatives that I see with this approach. First of all, I'm not sure that we want social media companies, private or public, media companies of any size, making the decision of what is true and false.
Now, some of them, Facebook notably, and Instagram as a result, since Facebook is their parent company, have tried to get around this by using third-party fact checkers, which I think improves the situation. But it's still a major corporation deciding what gets counted as true and false. And that, as we just talked about, is really complicated, because most of the time there isn't a very clear true and false on any given issue.
The second downside is just one of scale. Social media are huge, and we're talking about millions of posts every day. So even if you have a really well-trained classifier to find misinformation, and you have appropriate armies of fact checkers to identify it and debunk it, there is still going to be a lot of misinformation on platforms that doesn't get those labels and takedowns, that doesn't get checked by third-party fact checkers. So there's a real limitation of scale
anytime you're dealing with social media, just because it is so big. And that same report that I showed you, the one with the red and yellow bars that just came out last month, found that according to their definition of misinformation, only 16 percent of misinformation posts on Facebook were labeled as misinformation. So the vast majority were not. Now, Facebook would obviously say that number is not right and that they're using the wrong definition of misinformation. But even if the numbers are off, in any case there's going to be some content that gets through. A third approach to dealing with misinformation is kind of a big umbrella category of media literacy efforts, and in here I include correction, which is what I focus on in my research. This includes, you know, getting people to think more critically, giving them higher-quality information, and giving them corrective information as well.
My cat is on the keyboard; there we go. Now, there are a lot of challenges to this last category that I mentioned.
So, the challenges of correction. I don't want to gloss over the decades of research on correction and how difficult it is to change people's minds about things. One of the main reasons for this is that people like to be told that they are right. They don't like to be told they're wrong. They like to have information fit into what they already believe about the world, and it's difficult to challenge that.
I'm not a psychologist, but the reason that it's hard to correct misinformation is a series of psychological biases that we all have: things like familiarity bias, motivated reasoning, schemas. All these different sorts of things make it so that misinformation is easy to remember, and correction is more difficult to remember, or more difficult to accept, or some combination of the two. The misinformation is often simpler than the correction. You can tell a very simple lie, and often the truth is more complicated and harder to communicate. So that gives misinformation a leg up.
There's definitely a familiarity bias. Researchers have found that if you see a piece of misinformation once and you say, oh, I don't think that's true, and then you see it again and again and again, you start to believe it to be more plausible the more you see it, even if you started off thinking that it was not true.
Misinformation is also sticky, so it is difficult to correct because it kind of sticks with you. If you see a piece of misinformation initially and then someone corrects you, you may state, if you were asked, that you believe the correction, but the misinformation continues to affect your attitudes and behavior. There's a researcher at Syracuse named Emily Thorson who calls this belief echoes: the misinformation continues to affect you even after you've updated your communication about it. And a great example was found in the 2016 election: people will update whether they think Donald Trump is lying or not lying, which is to say, if you give them corrective information to show them that Donald Trump is lying, they'll say, yes, OK, he's lying.
But it doesn't make them any less likely to vote for him or support him as a candidate. So there are various elements of misinformation that stick around in that regard. And as I mentioned, misinformation tends to be harder to correct to the extent that it fits into your existing worldview and identity. A shorter way of saying that is that people don't like to be told that they're wrong, and this is just another cartoon emphasizing that.
So my research focuses on what I call observational correction, which is a really unsexy term for it. And I'm always looking for a better way of describing this. So if you have one, please let me know.
So the idea behind this is that social media might not just be a source of misinformation; it might also be a source of correction. And there are various reasons that this might be a promising direction. First of all, the ties that you have with people on social media are weaker than the ties that you have offline, as a general rule. But that means that they tend to be more heterogeneous, which is to say your contacts are more different from one another online than they are offline. And that's because you keep in touch with a lot more people online than you possibly could offline.
So in a given month (right now is not a good example, because I don't see anyone during a pandemic) I might see, you know, less than two dozen friends, family, and coworkers, whereas I have something like seven hundred Facebook friends. That includes people that I worked one summer with, people from where I grew up in Texas, where I went to school in Wisconsin, and where I live now in Washington, D.C. All those people are going to be much more different from one another than the people that I see on a daily basis, who, as a general rule, are going to be very similar to me. So what that means for correction is there should be more opportunities for it, because people with different information flows are exposed to one another more often on social media than they are in real life.
In addition, there's this element of what I call observational correction. This is my very fluffy example of observational correction at work, from my actual Facebook feed. A friend of mine from Wisconsin shared this story that Sharpie is giving away free markers if you share this post. This is the sort of thing that doesn't really matter that much as far as misinformation goes, but you see it a lot.
And then you see underneath there, somebody has said, sorry, dear, that's not true, and shares a Snopes article, from the fact-checking site, that says it's not true.
So, the benefit here; let me walk through the benefits. First of all, I'm watching someone else get corrected. I watched my friend get corrected for sharing the Sharpie story, so I knew that the Sharpie story was not true before I even saw it in my own feed. Right.
So immediately, as soon as I see the claim that Sharpie is doing this thing, I also see that, no, they're not doing that thing. And because they're correcting my friend and not me, I'm not offended, because I'm not the one getting corrected. There are potentially lower barriers to correction on social media because you're kind of not in defensive mode here: you're looking at cat videos and pictures of babies, you're just in scrolling mode, and you're not preparing yourself to fight or to defend yourself in the same way. In addition, there are immediacy and proximity benefits, which is just to say that as soon as I see the misinformation, I see the correction.
So that stickiness that I talked to you about a couple minutes ago is less likely to take hold here, because I don't even absorb the misinformation before I see the correction. It's kind of preventing me from believing the misinformation in the first place. And then finally, related to what I was saying about what platforms are trying to do about misinformation, there's potential for huge scale here. So I have like 700 Facebook friends; if I post a correction of someone, not all seven hundred of them are going to see it, because of Facebook's algorithms.
But there's at least the potential for lots and lots of people to see it who are a part of my network and part of the network of the person whom I'm correcting. So there's a potential for a lot of people to be corrected, even if individual correction effects are relatively small. So if you don't take anything else away from this talk, let me tell you that this works. We've done a bunch of studies of observational correction with a bunch of different issues.
Here are some of the issues that we've considered: the origins of the Zika virus, whether genetically modified food is safe to eat, whether raw milk is safe and nutritious, whether you should get the flu vaccine, and several others at this point; I mentioned sunscreen earlier as well. We've done this on Instagram, we've done it on Facebook, we've done it on Twitter, we've done it on video sharing platforms. And consistently we find somewhere between 7 percent on the low end and something like 25 percent on the high end decreases in misperceptions when this correction of other people happens.
And there are several ways that we've tested this. The first and maybe most obvious is what we call expert correction. This is some kind of expert health organization inserting itself onto social media to correct the record when someone is sharing misinformation.
This is an example of what that might look like. This is a simulated Twitter feed; all of the studies that I'll show you today work this way, where you basically just see a screenshot of a social media feed, and some people see the screenshot with a correction and some people see it without a correction.
And then we're comparing between those two groups, so it's experimentally manipulated. What you see here is a Twitter feed where the CDC Twitter account is telling this person, in this case named Tyler Johnson, that this is not true, this has been discredited by world leaders in disease control and health, and they provide a link to their own website. Not surprisingly, this is very effective.
So the CDC is a high-credibility organization, or it was before the pandemic started and they kind of flubbed their communication strategy. But polls from before the pandemic showed them to be somewhere around 80 percent high trust across party lines, which is unusual at this point, that any institution has that high a trust level, and especially across party lines. And what you see here, compared to the control, which is the white bar on the far left of your screen: anytime the CDC is correcting, we see a pretty dramatic drop in misperceptions.
The reason there are three bars is that we were also seeing what happens if another user, a regular user, is correcting in addition to the CDC. That's a little bit more complicated than I wanted to get into. So the takeaway here is that the CDC is an effective corrector; expert corrections are effective, and we've tried this with several different expert institutions as well.
So that's maybe a little bit obvious. And also maybe not particularly helpful, because we don't really expect institutions to engage in these types of behaviors. Right.
The CDC has limited resources. They're not going to go track down everybody that's wrong on the Internet and tell them that they're wrong, and that's probably a wise decision for them. So along the lines of platforms dealing with misinformation, we also tested whether getting a correction from a platform itself is effective in helping to reduce misperceptions.
To do this (this is actually the first study that we did, back in 2015), we used Facebook's related links, which Facebook has alternatively over the years called related links, related stories, or related articles: a function which, when you see a link, suggests other links that you might be interested in based on the content of that link.
So in this case, we're showing people fact checks that rebut the original post and seeing if that effectively reduces their misperceptions, with the correction coming directly from Facebook. And I want to start by showing you what doesn't work. On the left side of the graph here are people that, when we asked them about genetically modified foods and whether they were safe to eat before they saw any of the experimental materials, said correctly that genetically modified foods don't cause illness. The right side of the graph is the opposite: those are the people that were misinformed to start with.
And what you see among the people that were relatively well informed to start with is nothing. There's no statistically significant difference depending on whether we showed them a fact check, a confirmation of the misinformation, or a control condition. There's no difference at all. When we look at the people that were misinformed,
we see a pretty significant drop in misperceptions when we show them a corrective article. So this is similar to what we see from the expert correction. And this is reassuring, because this is sort of what some platforms are trying to do. Facebook actually started doing this exact thing after we published our article: they started surfacing fact checks from their third-party fact checkers, from the International Fact-Checking Network, in their related stories.
We showed that this was effective. Now, a big caveat here, and probably the most important one to tell you, is that we found this is true only for one of the two issues that we tested. We tested this with the issue that I just showed you, genetically modified foods and their safety, and we also tested it with whether the MMR vaccine causes autism.
So, a very well debunked claim. And we found effects for genetically modified foods and not for the vaccine-autism link. Now, I don't have an explanation for why that is the case. I will say, of all of the issues that we've studied over the last five years, this is the only issue where we've ever found no effects, which is reassuring, because it means in general this type of correction works.
But it's also frustrating, because it means there's nothing we can necessarily point to clearly to say, this type of issue you should correct on and this type of issue you shouldn't, because we don't know exactly what it is about that issue that made correction less effective. Our speculation is that the MMR-autism link is a kind of ingrained issue at this point.
So people have made up their minds about it, and it is increasingly an issue that's tied to people's identities. The way that you think about yourself as a person, as a parent, is going to be influenced by what you think about that, and that's going to mean that it's harder to change your mind about it. In general, anytime we see politicization of issues (climate change is a classic example of this), it makes it harder to change people's minds.
And finally, we tested user-to-user correction as well. This is, personally, I think the most important, because I think it has the most real-world implications for individuals, and for the people on the call today; I think it potentially has the most impact for you in thinking about how you can facilitate user correction. This is an example of what user correction might look like. Down here we just have individuals; in this case, we've blurred their user names.
In other cases, we've had user names, but with gender-neutral profile pictures and names. As far as the person engaging in the experiment knows, these are just random Facebook users; they're not people that he or she knows. It probably increases the impact if you do know the people, so our estimates are probably on the conservative side. And the users are just basically saying this is false and sharing a link. Importantly, what you see here: the red is the control condition, the blue is the condition that I just talked to you about, the algorithmic correction through related stories, and the social correction is the one that I just showed you, with users doing the correcting. And what you see is that, roughly speaking, the social correction is just as effective as the platform correction. So two random users doing the correction is just as effective as
two related stories doing the correction. A couple of caveats here. First of all, you need multiple corrections from users to be as effective as a platform or an expert.
Repetition in persuasion is kind of a classic known effect, right? It's almost like putting a tally on one side of the equation versus the other. The person shares the misinformation, that's a tally on one side; somebody corrects them, that's a tally on the other side; somebody else corrects them, that's another tally on the correction side. So that's one way of interpreting it.
In addition, you have to provide a link to a credible source if you're a user. So we tested this with those links and without the links. And I should note that you can't actually click on the link, so this is really just a cue that I've done my research and know what I'm talking about by including a link; it's not that they're actually getting additional information from the link itself. What you see here is that correction without sources, the 3.84,
is not statistically significantly different from the control condition. Only if you include that source does it become a significant drop in misperceptions. So how can we relate this to this particular audience? How might this be useful to you? I want to talk a little bit about providing best-practices-based support for correction on social media.
To do that, let me first talk about what the best practices for correction are. So the first one I already talked about: including a link to an expert organization. Again, it seems like this functions as a credibility cue, showing that you know what you're talking about and that you have additional information even if you're not sharing it. The second one we also just talked about, which is repeating the correction.
Whether that comes from multiple users engaging in correction or from a single user repeating the correction in different ways, either can be effective. And there's also a great article by Lewandowsky and a bunch of colleagues, published in 2012, that really emphasizes offering an alternative explanation. This goes back to that idea of the stickiness of misinformation: if you can replace the misinformation with some other explanation, then that alternative sticks in someone's head instead of the original misinformation. So those are the three things that I emphasize to people about how to do correction well. And I'll give you a couple of examples,
and walk you through what this can look like. This is one of my favorite examples of correction in the wild. This is a guy that tweeted, towards the beginning of the pandemic, that he was making his own hand sanitizer out of vodka because
he could not find any in the store. And the vodka brand that he tagged in his post actually responded to him and said that he shouldn't do that, because the CDC says that hand sanitizer needs to contain at least 60 percent alcohol, and vodka only has 40 percent.
And this is 100 percent best-practices correction from the vodka company. First, they include a credible link at the bottom of their correction; they include a source, the CDC, which, again, at the beginning of the pandemic was a very high credibility organization. So that's a great call.
They repeat the correction: they have it both in text and in this image that they share. That's also good practice from a social media perspective, to share an image, because it gets more attention than text; eye-tracking work shows that if you show an image, people pay more attention than to text alone.
And finally, they provide an alternative explanation: the CDC says it needs to be 60 percent alcohol, and we are only 40 percent alcohol. So it gives you a different piece of information to keep in your mind.
Here's another great example, from the WHO. They have this series of graphics that they call Myth Busters, and this is one example. Apparently, there's a fairly common misconception that adding pepper to your food will prevent or cure COVID-19, which, if you were wondering, is not true.
Peppers are delicious, but they will not cure you. So they, again, go through the best practices. They include an expert organization; it's not actually a link, because this is already coming from the WHO itself, but there is a credible organization there. They repeat the correction. And another thing you want to do is really not repeat the myth itself.
So they're leading with the correction on both the left and the right of their graphic here. And finally, they provide an alternative explanation: you shouldn't, I mean, you can eat hot peppers, but you shouldn't count on them to cure you. Instead, you should stay apart from people,
you should social distance, and you should wash your hands. So, again, giving somebody something else to kind of grab on to instead of the myth. So how can you support these best practices? I feel a little bit like I'm preaching to the choir here, because this is kind of what you do for a living, but I have a few suggestions. First of all, making easily accessible, high-quality information available: breaking things that are complicated down into simpler terms, making sure that people know how to distinguish between a high-quality source and a low-quality source.
Making high-quality information available, right? So it's not behind paywalls, it's not difficult to find, all these sorts of things.
Essentially, being a library. The next is creating norms of citing sources. So, again, I think this is something that academia and the library sciences do really well; it doesn't necessarily happen in other places, though. So something that I started consciously doing is asking people on social media, when they share something:
What article did you find that in? Or if it's a meme with no source whatsoever, saying, you know, that's really interesting, but have you checked on the numbers? Something like that. And then, on the flip side, if people are citing their sources, being really deliberate about emphasizing that you appreciate that they did that. I actually had a friend share something today about the Biden campaign.
President Trump was saying that Biden should take a drug test before the debate, and the Biden campaign came back and said something about him wanting to decide this debate based on urine. And it was like a whole thing, and I couldn't believe that they said that.
And my friend literally just shared the quote from the Biden campaign. So I went and looked it up in a story and then shared the story with the quote, saying: I didn't think this was true because it sounds too crazy to be true, and here it is, she did say something about urine. So, kind of creating those norms of what we should hold one another to, that we should be citing things based on facts and information. Along those lines, reinforcing norms of correcting other people.
Norms are really powerful things, and people claim, according to my research at least to date, that they think correction is good. Now, they may not be thinking about being corrected themselves when they say that. But 56 percent of people say they like it when others correct people that are wrong on social media.
Sixty-eight percent say people should respond and correct when they see misinformation, and 67 percent say it's everyone's responsibility to do so. So the more we can invoke those norms, encourage those norms, and remind people that they think that facts are important, and that correcting is an effective way to provide those facts, the better. And then finally, creating shareable graphics using best practices.
So, kind of along the lines of the first bullet point, making it easier for people: you're doing part of the work for them by finding these good information sources, summarizing them, and making them easy for other people to understand, along the lines of the examples that I showed a couple minutes ago. So I'm going to stop there and see what questions you guys have, because I'm sure there are things that you're interested in that I can talk about that I didn't think to include in the presentation.
So I hope that the chat has been active; I think Kelsey is going to let me know. Yeah, I've been collecting questions. Just to let you know, we have about six right now, so that can give you a baseline on how much time you have for each question. So the first question we have is: what are some examples of popular low-quality publications or websites where people are encountering health misinformation? Popular low-quality health misinformation, OK. So there's kind of a short and a long answer to this.
Or maybe I should call them the academic and practical answers to this. There are a bunch of academic articles that try to basically identify low-quality information sources, usually at the domain or URL level, and those are publicly accessible, so that's something that I could add to the resource list if people are interested. The tricky thing about misinformation is that particularly when the platforms try to shut it down, and they do so at the URL level, the misinformation jumps to another URL. Essentially, the people that are creating misinformation intentionally are doing it in order to make money off of it.
The way they make money off of it is to keep sharing it and have people keep clicking on it and generating ad revenue. So they have an incentive to move it from domain to domain, from website to website, in order to keep it circulating. And that makes it really hard to generate a complete list of these types of things. In terms of popular sources where people see it, it can be anything. So let me jump back to this slide. Yes.
This is an example of a study that was done of the ten most shared health articles on social media in a given year. And a lot of these, you'll see, are color-coded: the redder the color, the more the article is misinformation, or the less credible it is. And this was actually evaluated by scientists.
Anyway, you'll notice some familiar URLs here, right? Huffington Post, Time, even The Guardian can have poor information. Anybody can do it on kind of a one-off basis. In general, more credible organizations do it less.
So I guess I don't have a great answer to that, except to say there aren't a lot of popular publications that are notorious for doing this that are somehow just slipping under the radar. Most of the misinformation that you're going to see is coming from lesser-known sources. Great. For the next question, I'm going to give you one that several folks had, so I'm going to try to combine them all into one big question. I think it's a really great question.
Does what counts as a credible source vary among users? And what thoughts do you have about reaching people who double down when corrected with a link to a credible source, because they don't consider that source credible at all? So this is maybe the most consistent question that I get when I present on this, and right now, at least, it's one that I don't have a great answer to. So, 100 percent, there are different opinions about what constitutes a credible source. And to me, that's actually one of the advantages of users engaging in this, because you might be able to gauge what the person you're correcting would consider a credible source or not a credible source. So it gives you more flexibility in speaking to them on their own level, on their own playing field.
Having said that, we don't have great information about this. There are some organizations that are very high credibility, and, as I mentioned, even across party lines. But there aren't a ton of them. And we've also found that there isn't a lot of polling research on a lot of organizations
you might think there should be. We've used the American Medical Association as one of our credible organizations; we couldn't actually find polling data saying how many people think that it's credible or not credible.
We've used the Pew Research Center as one of our credible organizations; again, we couldn't find polling data on that. So it's very much something that individuals may disagree on. In terms of how to react when they double down,
I don't know. I think you can choose, as a user,
how far down you want to go. Right? The kind of logical response would be explaining why it is a credible source. But at some point you have to decide whether this is worth your effort anymore, if you're not going to convince that person. Although, what I want to emphasize is that you're not convincing just that person.
You're convincing all the people that are watching. And sometimes thinking about it that way can also help you be a little bit more dispassionate. So if somebody comes back with a lot of emotion to a correction, it can make it easier to pull back from that emotion if you're thinking about the audience rather than the actual engagement with that person. So that's kind of a very wishy-washy answer to a very good question.
All right, the next question is: has your research looked at the reverse of observational correction, where someone posts misinformation in reply to something that's accurate? And does this have a significant effect in misleading people? And someone actually commented an interesting reply that I wanted to share, which was: I came across a Facebook post that wrongly corrected true information about COVID. Clicking on the fact-check link led to a scientific study that has since been withdrawn, and there's no way to report the problem. Oh, that's terrible, particularly the there's-no-way-to-report-the-problem element of it. So the short answer to this is yes, which is to say we have done the reverse. We're actually working on publishing that study right now, and the mechanism works exactly the same.
So essentially, what people are cueing on is that somebody is willing to go out of their way to correct, and that they have a source to back themselves up. They are not necessarily, you know, somehow intuiting the inherent truth of the thing. Which means you can use it for evil, right? You can be a corrector that goes around sharing misinformation as, quote unquote, corrections, and that will effectively make people less well informed. The flip side of that, the good news of this, I guess, is that there's research that suggests this does not happen very often.
So if you ask people if they've ever shared misinformation, and if they've ever been corrected, you can think about it like a two-by-two table. You've either shared misinformation or you haven't,
and in either case, you've been corrected or you haven't. The cell where you have not shared misinformation but you have been corrected is very, very small. It's single digits,
usually something around three to five percent. So there are not a lot of people saying, I shared true information and somebody said that it wasn't true. It doesn't seem to be happening that often, but when it does happen, it has the same effect as the opposite.
But yeah, there needs to be a way for people to report that when it does happen, for sure. All right, I think we have time for maybe one more quick question.
The question is: can you explain or talk about a possible role for influencers on social media with regard to misinformation and observational correction? That's a great question. Basically, since we started doing this research, this has been something that I've been interested in. Being less hip, or whatever, I would have called them celebrities rather than influencers, but the same thing applies, for sure.
And one of the notorious examples of celebrity misinformation with regard to health is Gwyneth Paltrow, right? She has her whole wellness line called Goop, which for some reason shares all sorts of misinformation related to health and well-being. And clearly that influences people's opinions about things, at least to the extent that they are willing to purchase these things based on the information associated with them.
So I think that it's really interesting when misinformation comes from celebrities. I think that, more than anything, the amplification that happens from a celebrity sharing misinformation is really interesting. Jenny McCarthy and vaccines and autism is a really interesting example of that from ten years ago, but you see all sorts of that happening even with YouTube influencers and people like that today as well. On the flip side, I think that celebrities can do a lot to correct, because they are so visible and influential with their audiences, and even with creating those norms that I was talking about: making it OK to admit when you don't know something, to correct other people, to cite sources, and to rely on credible information, those sorts of things.
I think there's a lot for celebrities to do on that front. Media literacy projects have tried to do some of that, with celebrities emphasizing thinking critically and checking the source and things like that. And John Oliver had a great segment on that recently, too, where he was getting celebrities to do PSAs about checking your sources.
So I definitely think there is a role to play. It's not something that I've looked into yet, but it seems really promising. All right, I think we're just about out of time for today. Can you go back to the slide that has our contact information on it, just so that can be up at the very end in case people want to get in touch with us with further questions? So, thank you so much for joining us this afternoon, everyone.
And thank you, Leticia, of course, for that excellent talk. Please complete an evaluation form to claim CE or just to share your feedback with us, and please feel free to continue the conversation with us via email or social media. I hope you all have a wonderful rest of your day, and we hope to see you at future NNLM course offerings.
Thanks for watching. This video was produced by the Network of the National Library of Medicine. Select the circular channel icon to subscribe to our channel or select a video thumbnail to watch another video from the channel.