USF Muma College of Business Certificate: Session 3: Understand Your Organization


Good evening, good day, and welcome back to the Diversity, Equity and Inclusion in the Workplace certificate. This is module three, and let me tell you, for all 130,000 of you, we're just so happy and thrilled to see you again. We are also extremely honored and excited to see that this certificate is generating amazing conversations. Many companies and groups are reaching out to us; they are organizing watch parties and discussions, formal and informal, and there are so many informal groups on social media discussing this certificate and its topics, and following up on this great idea of being comfortable with the uncomfortable. We could not be happier. But please understand that we cannot moderate each one of these discussions or groups, and we cannot be held responsible for what is said there, so we appreciate your understanding.

Also, as a follow-up on my comments last week: as you remember, we already have auto-captions on LinkedIn, and as we promised, I'm happy to report that starting this week we will have live captions on YouTube. We are just delighted to be able to do that. Complete transcripts of session one and session two are now available on the certificate website; you can go there and follow along at any time.

Now let's move on to module three. I know you cannot wait after the just incredible module two; you're asking yourself, how can it get better? I think this certificate is getting even better every week. Remember, the overall objective is to help you throughout your journey of DEI in the workplace. This week's module three will be facilitated by the wonderful Terry Daniels, and she will talk to us about understanding your organization. Terry will provide us with tools and know-how to analyze your current organization's progress in terms of DEI, and also how to be aware of customers' and vendors' expectations and different DEI policies, so that starting next week we can move on on that journey.

It is just a great session today. Before the instructional sections, we have two segments. In the first, I am so happy to welcome one of our best academic leaders and a great friend of mine, Dean Eric Eisenberg, the dean of the USF College of Arts and Sciences, with three of his incredible colleagues, who will continue our discussion about microaggressions and provide us with insights on how to deal with them. I think you're going to really enjoy that first segment. The second segment is also really exciting; I don't think you will see this topic addressed on many platforms. We can address it because we have one of the world's most influential scholars in this area, my colleague and good friend Dr. Balaji Padmanabhan, who is an expert in this field. He will help us understand: can artificial intelligence be racist? He will talk to two amazing scholars to help us detect some of the biases embedded in artificial intelligence and different models, and also provide us with insights on how to deal with those biases. As you can see, we have an amazing session for you today. I hope you will enjoy it, and I cannot wait to see you next week. Thanks again, and please enjoy. My good friend Eric, the floor is yours.
Thank you. Hello everyone, and thank you for that kind introduction. Welcome to the next installment of our certificate program on diversity, equity and inclusion in the workplace. My name is Eric Eisenberg, and I'm both professor of communication and dean of the College of Arts and Sciences at the University of South Florida. I feel so privileged to speak with you today, and I want to start by congratulating each one of you in attendance for seeing the importance of this topic and for actually taking the time to learn more about how a deeper focus on diversity, equity and inclusion in your organization can positively affect both your workplace culture and your business results. As you're discovering, each installment of this certificate program focuses on a different aspect of why diversity and inclusion matter in organizations and how you can best improve. I understand that last week you had a terrific session focusing specifically on racial and ethnic diversity. The specific purpose of today's panel is to share with you some key insights from the latest research on diversity, equity and inclusion at work, and to provide you with emerging best practices that you can try in your own organization.

Toward this end, I have invited three of the most talented people I know to share their practical wisdom on this topic. You can read more about them in their bios, but very briefly: please meet Dr. Patrice Buzzanell, professor and chair of the Department of Communication at USF; Dr. Diane Price Herndl, professor and chair of the Department of Women's and Gender Studies at USF; and Dr. Michael DeJonge, professor and chair of the Department of Religious Studies at the University of South Florida. You're going to hear a lot from them in just a couple of minutes, but let me give you a little bit of framing now.

Our plan here is to have a deep, honest and frank conversation about what the research says are the best ways to approach diversity and inclusion in your organization. So let me set the stage with a few high-level findings from research in communication and psychology. Research has shown that while it seems we all live in the same world, our perception of that world (our worldview, our perceptual set, there are lots of different names for this thing) varies depending on many life factors: where we're born, our position in the world, our finances, our race, our religion, our gender, our bodies, our life experiences. All of these shape our worldview, or how we see the world. Human perception is wildly selective, and people with different backgrounds and experiences literally notice, focus on, and attend to different worlds. It's not just that they interpret the world differently; they see the world differently. If you've been watching basketball and March Madness, you see people who are seven feet tall, seven and a half feet tall; if you meet someone who's very tall, they're always on the lookout for whether they're going to bump their heads. Depending on your background, how you look, your ethnicity, your religion, your body type, your profession, you have a different worldview, and the messages we receive when we're very young about who we are and how we should see others can have a big impact as well.

Now, you all know this from your workplaces. In the world of work, athletic trainers and physical therapists notice people's fitness; hair stylists notice people's hair; realtors notice qualities of neighborhoods that other people don't notice. In your organizations, the sales and marketing people are a different breed from the engineers; the physicians and nurses are different from the hospital administrators and the lawyers.
And again, it's not that they interpret things differently; it's that they actually see a different world. They have a different worldview. So what happens when people with different worldviews, different experiences drawn from their own lives, interact and come into contact with each other? Problems arise in communication only when people confuse their worldview, drawn from their life experience, with the true, right and only worldview. Psychologically, it can be so easy to think that what you see is right and what others see is either crazy or unreasonable, and all you have to do is look at politics in America right now, and the polarization we're experiencing, to see how difficult it is for people to see the world from another person's perspective. But if a leader thinks that they see the true world and everyone in their organization is crazy or doesn't get it, two things happen. First, other people feel alienated, excluded and not heard. Second, the leader is robbed of the opportunity to learn from others' diverse experiences, what I call learning by expanding your worldview or your perspective.

So what's a better way? A better way involves establishing an inclusive culture and inclusive dialogue across differences in worldview. How do you do that? When someone you work with says something that doesn't fit with your perceptions, ask yourself whether you're reacting with judgment or with curiosity. Reacting with judgment is: why would you think that? That's crazy; I can't believe you have that idea. Reacting with curiosity is: wow, I would never have thought of that; tell me more; why do you think that? Research clearly shows that if you see differing worldviews in an organization as a source of information and strength, you will have more effective teams, more effective solutions, and a more inclusive, happy and satisfied culture. So at work we must begin with the idea that everyone has a somewhat different worldview; that's what diversity is. And if we can approach these differences with appreciation and curiosity, not judgment or denial, we will create inclusive, better workplaces and make better decisions.

So that's the big idea from my perspective, but let's get my colleagues' take on this important topic. Patrice, Dr. Buzzanell, I'd like to bring you in here. I know you've done some work on cultures of inclusivity at work; can you talk a little bit about what that's all about?

Thank you, Eric, and thank you for a great lead-in to what we're going to be talking about today: your take on leadership, perceptions, worldviews and curiosity, just a lot of complex ideas in a short period of time. Some of my work has been with colleagues at Purdue University, through a provost-led, institution-wide diversity and inclusion initiative of over a million dollars, and through an NSF grant for the college of engineering on the professional formation of engineers.

It's something that leaders have to care about, right? Let's talk about microaggressions; to some people they sound relatively minor, and I think there's some confusion about what a microaggression actually is.
Right, on the surface it sounds like something that's relatively minor, and you're right, Eric, there is confusion about what a microaggression is. We've been using the work of Derald Wing Sue and others, who define microaggressions as everyday slights, insults, put-downs, invalidations and offensive behaviors that arise interpersonally in everyday conversations and regular interactions. Often the people who engage in these microaggressions, who speak and interact in ways that other people find invalidating, don't realize that they are doing something offensive or demeaning to others. And it's important because what they're doing is demeaning or offensive to others as members of particular kinds of groups. For example, you can have microaggressions that are oriented racially or ethnically; they can be gendered, or directed toward people with disabilities, people who are immigrants, people who self-identify as LGBTQ+, and so on. There is a variety of differences around which microaggressions can happen during interactions. For example, asking someone who looks Asian where they are from, or remarking that they speak English well, can be demeaning. You may think that you're being curious ("where are you from?") or intend to be complimentary ("wow, your English is so good"), but the assumption is that they don't belong here, that they weren't originally from the United States, and the comments immediately set them apart from other people in the workplace.

Yeah, it's interesting. We see from the census data that more and more people are describing themselves as biracial or multiracial, and something I've heard over the years from some of my friends and colleagues is that it's pretty insulting when people come up to you all the time and say, "what are you? what really are you?", not showing a lot of sensitivity to the fact that you shouldn't have to be fielding those questions all the time. Are there other things besides microaggressions? One of the things that I know has already come up in this certificate program is the idea of pronoun use. I think, again, to people my age and older, the idea that we're going to move away from what were our traditional pronouns in the United States is confusing. So can you give a shot at explaining why pronoun use is important? And maybe, Diane, you want to jump in here too if you have thoughts about it.

It would be great if Diane would jump in too, so let me bring a specific example from our own workplace. We certainly are grappling with pronoun use as we want to affirm particularly our younger departmental members, and I don't want to assume it's always a generational difference, although, Eric, you're right that it is difficult for some people who've grown up in a particular way to change and think about their pronoun use differently. But in our workplace the younger members are figuring out who they are and what they want their gendered identities and sexual orientations to be, and they want them to be recognized and affirmed. It's not that they're unwilling to correct pronoun use; they may have moved from "she" to "he", or maybe from "she/they" and have now settled on "he". But they are very frustrated when particular people repeatedly misuse their pronouns. So it's not something, like I've heard people say, oh, it's not in my workplace, or
oh, it's something that's cute, right? And admittedly it takes some getting used to, especially if you've known somebody previously with one pronoun and then they shift to another. But in this case, working to affirm identities, recognizing when you've been corrected, or working toward self-correcting and practicing that, will go a long way toward creating a more inclusive workplace environment. And keep it up: don't lag behind in using the pronouns somebody else wants to affirm their identity. Diane, do you want to throw something in?

I'll throw in a couple of things. One is that some research has shown, especially for young people, that people who do not identify as either male or female are at very high risk of suicide or suicide attempts, because it's a very difficult way to get around in the world, and studies have shown that if people make an effort to use the pronouns that they recognize, the suicide attempt rate for those folks drops by 40 percent. So this is really profound. One way that I think about microaggressions, especially as they relate to gender-affirming pronoun use, is the death of a thousand cuts. That one little insult is not the thing that's going to trigger somebody; it is the fact that they're getting little things all day, every day, and eventually they just get worn down by it. And that happens with people and their pronouns. It's not just that you do it, or that the guy in the store says something ugly about their clothing choices ("why are you wearing that skirt? why don't you wear your hair shorter?"); it's that that has happened to them all day, day in and day out. You feel like you're not doing anything bad, and you're really not doing anything all that bad; it's just that when it piles up it becomes a terrible burden for people.

Absolutely. You stumbled across a word there; I think, especially outside of academia, people are just getting used to this idea of triggers and trigger warnings, and obviously there's a whole big cultural debate about that. I wonder if you, Diane, or actually any of you, could talk about the logic behind what it means to be triggered, and why this isn't just a matter of people being overly sensitive. Because I think you were getting at it right there: it's a matter of the constancy of it.

Right, and I'll pick up here, because Diane said something important that you've just remarked on, Eric: it is consistent. It's consistent challenges to people's right to be in places, to do particular kinds of work, to be identified as they wish to be identified. We see this very prominently with people of color, who are questioned constantly about why they may be in particular places, or whether they're really supposed to be somewhere else: they couldn't be an engineer, they couldn't be the manager or the executive, that couldn't be your house, you shouldn't be in this neighborhood. Yes, it's the consistent challenges to their competence that are so demoralizing and so absolutely exhausting. And some common responses when people engage in microaggressions (racial ones, gendered ones, pronoun misuse, a variety of them) are that the person who engaged in them dismisses them and goes, oh, you know, it wasn't really anything much,
or denies it, or just says, oh, you're overly sensitive, or overreactive, or you're just really defensive. But what we really can do is own up to it. We all engage in microaggressions; we have done it before and we will do it again. The thing is, we need to constantly learn what other people would consider to be microaggressions, and then thank the person who corrects us, try not to be defensive, and engage in more appropriate behaviors the next time. So when you talked about curiosity, and we've been talking about affirming and inclusive workplaces: these look like little things, but they really help to build an inclusive environment and culture.

Yeah, that's beautiful, Patrice. Another way to think about it is what Dean Limayem talks about: diversity and inclusion is a journey, a learning journey. If your stance is, I don't know everything about how people come to their work experience, but if somebody says something that I don't understand, I need to learn and understand where that's coming from, as opposed to shutting down and getting defensive, all the things you were just talking about.

Well, if you don't mind, I'd like to pivot a little bit to a specific set of examples that Diane and I have been talking about. Probably the most common activity in most of the organizations of the people on this certificate is meetings, and there's a lot of humorous work that's been done around meetings; people love to hate meetings. Of the ten meetings that you have in a day, probably one of them is really valuable, and in the other nine you're saying, you could have sent me an email. But there's another thing about meetings, which is how the issues we're talking about here, diversity, equity and inclusion, show up in them. Diane, can you talk a little bit about the barriers to inclusion that people face when they come together and have meetings, which they do throughout the day and throughout the week?

So it's interesting. There's been a lot of research, especially on gender (the research on ethnicity and race is not as easy to come by), and the research on gender is really clear that no matter the ratio of men and women in a room, unless it is 75 percent or more women, the talk time in the meeting will be 75 percent men. If anybody is interested in tracking this in their own meetings, there's an online tool called arementalkingtoomuch.com, which is a little timer that you can run on your screen while you're in the middle of a meeting; you just click it according to who's talking. It might shock you to run it during your meetings. I've run it during several of mine, and the only meetings where men talk less than 75 percent are the meetings that are all women. So that's an interesting thing, and it becomes a problem, because when men dominate the conversation you're not hearing all the points of view, and when you're not hearing all the points of view, there are things that are going to go unacknowledged or unconfronted that might be useful.
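A tool like the one Diane describes is simple to sketch. Below is a minimal, illustrative talk-time tally in Python; it is not arementalkingtoomuch.com's actual code, just one way such a timer could work, with the category keys and the percentage report as assumptions:

```python
import time

# Minimal talk-time tally in the spirit of the timer Diane describes
# (an illustrative sketch, not arementalkingtoomuch.com's actual code).
# Press a category key when that person starts speaking; 'q' quits.

def run_timer(categories=("m", "w")):
    totals = {c: 0.0 for c in categories}
    current, started = None, None
    print(f"Keys: {categories}, 'q' to quit.")
    while True:
        key = input("speaker> ").strip().lower()
        now = time.monotonic()
        if current is not None:
            totals[current] += now - started  # credit the outgoing speaker
            current = None
        if key == "q":
            break
        if key in totals:
            current, started = key, now
    talked = sum(totals.values()) or 1.0
    for c, secs in totals.items():
        print(f"{c}: {secs:7.1f}s ({100 * secs / talked:.0f}% of talk time)")

if __name__ == "__main__":
    run_timer()
```

Running something this simple during a few meetings is enough to see the 75 percent pattern Diane mentions, if it's there.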
There is also another, related phenomenon, which is called misattribution. This is when a woman or a person of color in a meeting comes up with a really good idea and actually voices it, and the meeting just kind of moves on; the suggestion doesn't really get acknowledged. Then, about five or ten minutes later, one of the guys brings up exactly the same idea as his own, and suddenly it is a really good idea, and it gets legs and it gets traction, and the woman or person of color who made that suggestion is sitting there thinking, I thought I was the one who said that. Studies show that 38 percent of women say this has happened to them in meetings.

There are a lot of things you can do about these two particular kinds of activity. One is, if you're the facilitator of a meeting, it's your job to make sure that nobody is dominating the conversation unduly, and it may take training; you may actually have to work at it; you may have to run one of those talk-time tools yourself, because it's easy to get caught up and not recognize what's actually going on in the dynamics. The second is, if you hear somebody misattributing, you probably need to say, I am pretty sure that's what Barb over there said several minutes ago; maybe we should circle back to Barb. And these are really useful things. You don't have to be the one running the meeting, either. In feminist and especially women-of-color circles we talk about amplifying: when you hear one of your colleagues make a really good suggestion, you leap in, like, what a great idea; Barb had a wonderful idea there. That is a way of making sure that someone gets credit for their idea.

Well, thank you, Diane. It strikes me that if you're a white man, if you're part of the majority culture, it's so easy to be blind to these kinds of things. So step one, and that's why I love the idea of this timer, is to get outside of yourself and see what the patterns of interaction look like from a different perspective; that then allows you to address things that otherwise you might be blind to. That's terrific.

If you don't mind, I'd like to extend this just a little bit and bring Michael into the conversation. We've talked a little bit about race, we've talked about microaggressions, we've talked about gender and sexuality, but we know that organizations today are struggling with the diversity of religious perspectives. So how does religion fit into all this? Do you see a connection between what Patrice and Diane are saying and the work that you've read and done?

Yeah, absolutely; thanks, Eric. The first thing I want to say when we talk about religious diversity is that our concern is not just with people who identify as religious but also with people who don't. Issues of religious diversity, inclusion and equity, to my mind, incorporate all the different ways that people relate to religion, and that can be positive, negative or neutral. So everybody needs to be aware of this, even if you're not particularly religious. Having said that, one thing you want to know in the workplace is that religious diversity requires special attention, I think, because of the privileged position of religion in American law. There are cases where an employee might have legal standing, or an exemption to a particular company policy, on the basis of religion that they simply would not have on the basis of their age or gender or sexual identity. But despite this legal position of privilege regarding religion, studies have shown that college students spend significant time learning about people of different races, political affiliations and sexual orientations,
but much less time learning about people of different religious groups. This suggests that while issues of religious difference are central from a legal perspective, they might be marginal from an educational or training perspective. So it may be necessary to give explicit attention to religion as a potentially neglected aspect of diversity in the workplace.

That's so interesting, Michael, and I could hypothesize about why people are afraid to take it on. But what do you see as the challenges in dealing with religious diversity?

Great question. I tend to consistently see two. The first is simply a lack of education: if we aren't learning about religious diversity in educational or training settings, that means we're getting our information about religion from the media or from social media, and that information might be sensationalized or even untrue. So the first goal has to be to get some basic information and education. The tricky thing, though, is that the second problem follows quickly when we solve the first: a little information can be counterproductive if we use it in a one-size-fits-all way. As with all issues of diversity, we need to remember we're dealing with individuals who, as you said, Eric, are shaped by the groups they identify with, but who remain individuals. We don't want to inform ourselves about, for example, certain Muslim practices and then leave that experience thinking, well, all Muslims do X, because we know there will always be people who identify strongly as Muslim who nonetheless don't do X. So that's the challenge, I think: to inform ourselves about religion as a kind of group identity without losing sight of the individuals who participate in that group in their own individual ways, as with any kind of individual identifying with a group.

Yeah, that's such an interesting and subtle point. In the world of academia we call this essentializing: when somebody's part of a group, don't make the mistake of assuming all members of the group are the same. You and I have discussed this, Michael; we remember that when hospitals got sensitive to religious differences, they started putting posters up (this is years ago) saying, if you have this kind of religion, here's what the person is going to want. And that may be true, and it may not be true. So do you have any concrete suggestions for organizations on how to attend to religious differences?

Yeah, I think one relatively easy place to start is with the calendar or the schedule. If you think about it, in the United States we operate with a holiday schedule that is Christian, or at least of Christian origin and now secularized: there are standard breaks built into the academic and work calendars around the major Christian holidays of Christmas and Easter. For those who don't identify with the culturally dominant religion of Christianity, the work calendar itself can be a source of microaggressions; it's a subtle, constant reminder of religious marginality. So I would encourage managers to know at least the major holidays of various religious traditions: inform yourself about when Ramadan, the month of fasting for Muslims, falls; inform yourself about the major Jewish holidays. But this gets back to the point about essentializing: remember to be alert for those individual cases that inevitably fall through the cracks, those people who perhaps identify, say, as Jewish but who
don't see holiday celebrations as a crucial part of that identity. And especially, I would say, be aware of the possibility that there might be individuals who identify with little-known religious traditions. So, to emphasize a theme that I think all of us are returning to: in general, be curious, be informed, but remember to be flexible.

Well, that's great, and we're getting toward the end of our time, but there's something I think Patrice and Diane need to address, which is the difference between diversity and inclusion, inclusion having to do with the structural changes that would support the life of a diverse group. Would one of you take a crack at that, in terms of how we address these things for the longer term?

Sure, I'll take it and try to leave some time for Patrice. One of the things that we have to remember is that simply diversifying the workforce, while expecting everyone to still be exactly the same as the white men they have been working with, is not genuine diversity. Diversity means actually letting people be different, letting them live their differences, and this sometimes means that you have to make structural changes to workplaces, around parenting perhaps, or around religious traditions, or it may mean that you need to understand different ways of communicating with each other. Patrice, anything to add?

I think Diane just did a great job, and certainly brought in what Michael has been talking about as well, in terms of understanding the logics by which people live their lives and structure their worlds. That's inclusivity, and when we start building that kind of inclusivity into our workplaces, we often find that everybody benefits; it's not just the people who happen not to fit the standard ideal-worker male mold that we've had for decades. So thinking about it as a way to make the workplace better for everybody is just a great way to look at inclusivity.

Well, we've come to the end of our time, and I really want to thank the three of you for these wonderful insights and recommendations. Even more so, I want to thank everyone watching today for your continued interest and engagement. To sum up, the research on diversity, equity and inclusion tells us that we must always be on guard, personally, against the human tendency to confuse our experiences and our worldview with the correct or only right worldview, and the way to do that in your work interactions is by remaining humble, remaining open, and remaining curious about others' perspectives that are different from your own. And just to draw a line under it: this is not just about making people feel good. Inclusion is not just about building cultures of engagement; it's also about effectiveness, productivity and organizational success. I would recommend to all of you a wonderful new book by Scott Page called The Diversity Bonus, in which he studied hundreds of teams and discovered that it was the diversity of teams that led them to be more productive in the end; it wasn't just about feeling better or more connected, although those things are certainly important. So I want to thank you all again, and thank you all for listening. If you want more information about any of the topics you heard about today, feel free to email me at eisenberg@usf.edu and I will get your request to the right place. We wish you the best as you complete the other parts of the certificate program. Take care.
Hello everyone. I'm a professor at the USF Muma College of Business, and for the last two decades I've worked professionally in the machine learning and data science areas. Now, what is machine learning, and why do we need to talk about it in the context of diversity, equity and inclusion? The easy part first: machine learning refers to learning from massive volumes of data. In the early days we took the perspective that data was this ultimate, pristine thing that had all the hidden truths in it that we needed to discover and use for business value, and that worked great for a while. Then we realized that data is not necessarily this ultimate, pristine thing. It also has embedded in it the actions that we took, along with the state of the world at various points in time, and all the hidden biases that may have come along with them. This led to what I think of as machine learning 2.0, where we understand that algorithms are very useful and can do a lot of good things, but left alone they can also do things they were not trained to do, and there can be unanticipated consequences that we have to manage. These algorithms can also be put into systems that may do unexpected things that we should start looking for. In this segment we'll hear from two amazing people whose insights will help us understand how to leverage and optimize the promise of these algorithms while at the same time mitigating the potential harms, particularly in the context of fairness and equity, which are not just important but, in a sense, the call of our times. Thank you, and I hope you enjoy these interviews.

Dr. Obermeyer has a very interesting background. He is a professor at the University of California, Berkeley. He is also an attending emergency room physician who continues to practice. He has an MD from Harvard Medical School, and, most interestingly and most relevant to our conversation in this program, his research is at the intersection of machine learning and AI and its impact on health, with a focus on policy as well. So thank you, Ziad, for joining me today. Very happy to be here.

One of the things that I know you for is your recent work looking at AI and machine learning in the context of health, specifically highlighting issues of bias that can arise if you're not careful. Can you tell us a little bit about what you found in your research?

Sure. I came into this research because I really think that algorithms are going to be very positive and transformative for medicine, for decision making in hospitals. One of the cases we were studying, with a view to making algorithms work well, was the decisions that hospital administrators, the people who do public health and population health management, make when trying to find patients who are going to get sick, so that we can help them now. Every hospital, every health system, has this population of patients, and some of them are headed for deterioration, and we need to find those people today so that we can get them the resources they need for their health needs and their chronic illnesses. This is a perfect use case for algorithms; it's something we really want algorithms to help us with: to look into the future and figure out which patients are going to do poorly so we can help them now. So a lot of health systems are using algorithms to target what they call population health management resources, but you can think of these as just extra help for people who need it.
What we studied was a particular algorithm, a piece of software used to make decisions for about 70 million patients every year in the US; the family of algorithms that behave just like it is used, by industry estimates, for 150 million people. So the majority of the US population is being fed through one of these algorithms via the health system that takes care of them. And what we found is that those algorithms were biased, in the sense that as they rank-ordered people in priority for who gets extra help, they were essentially letting healthier white patients cut in line in front of sicker Black patients. By our estimates, if you look at one particularly high-risk group that was fast-tracked into these extra help programs: the overall population we were studying was about 12 percent Black, and the high-risk group that got preferential access to resources was 17 percent Black. So you might think, great news, Black patients are overrepresented in this group; how could there be bias? When you look at the health and needs of that population, though, by our estimates it should have been about 47 percent Black. So there was this enormous bias built in, one we might have missed had we just looked at the fraction of that high-risk group that was Black. There's a really large amount of bias that was affecting millions and millions of people across the country.
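Ziad's point about baselines can be made concrete with a few lines of Python. This is a hedged, synthetic illustration in the spirit of the numbers he quotes; the 12 percent population share and the use of chronic-condition counts as "need" are stand-ins, not the study's actual data:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Synthetic population: 12% Black, as in the population Ziad describes.
black = rng.random(n) < 0.12

# Assume (illustratively) that Black patients carry more unmet health
# need on average, e.g., more active chronic conditions.
need = rng.poisson(lam=np.where(black, 4.0, 2.0))

# Suppose a program fast-tracks the top 3% of patients.
k = int(0.03 * n)

# Baseline 1: population share (~12%).
print(f"population share Black: {black.mean():.2f}")

# Baseline 2: need-based share, i.e., who *should* be in the top 3%
# if the program targeted actual health need.
top_by_need = np.argsort(-need)[:k]
print(f"need-based share Black: {black[top_by_need].mean():.2f}")
```

The need-based share comes out far above the 12 percent population share, which is why "17 percent is more than 12 percent" can still hide a large shortfall against the baseline that actually matters.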
Two things in what you said stand out. The first is just the baseline to think of: when people think of de-biasing, sometimes the baseline they have in mind is just the population distribution, but you raise a very important point, that sometimes that's not the right baseline to look at at all. I'll file that away. The second thing you brought up is why this happens. The naive thought is, oh, people are engineering these things to do nasty stuff, but we know that's not the case at all; problems like this are not happening because people are consciously engineering algorithms to be bad. They're happening because people are not thinking about doing it the right way, and are not aware of some of the issues that come up if you're not careful. In your own work, I know you did a lot of thinking on why the algorithms did what they did. What did you find about the why, the reasons for this happening?

Yeah, I first want to reinforce your point: what we found, and what I'll tell you about, is that this wasn't a case of people doing bad things or being biased. It was a case of a subtle technical problem with an algorithm, one that lots of people were introducing, not just the company we studied but every other company making these algorithms, and not just companies but also academic medical centers and even government agencies. So this was not a bad-apple story; this is a technical problem in a space where we're only just learning how to diagnose and address bias. Here's the essence of what we found. Let's look at two patients, one Black patient and one white patient, who were scored the same way by the algorithm: they had the same risk score, so they went on to have the same likelihood of getting extra help, because the algorithm was used to help allocate those extra help programs. What we found was that the Black patient, on average, would go on to be sicker: to have more flare-ups of chronic conditions, worse blood pressure, worse diabetes, worse kidney function than the white patient. Their health needs were greater at the same algorithm score, so they were being treated the same, but the Black patients needed more help, as measured by how their health turned out over the next year.

Why was that? Well, the algorithm was supposed to be finding people who would go on to get sick. The subtle technical problem I alluded to is that when we train the algorithm, we have to figure out how to measure "going to get sick". When we're talking one human to another, you know what I mean by that, but there's no variable in any data set called "going to get sick". So you need to decide: what is the particular variable I'm going to train this algorithm to predict? And the variable that this team chose, and that lots of other companies, government agencies and academic groups choose, is how many dollars that person goes on to cost the health care system. It's not an unreasonable choice, because sick people do cost money to take care of. The problem is that not every sick person costs the same amount of money. You can imagine: if you or I have a heart attack, we're going to get good care; we're going to call an ambulance, head to the ER, and get all the care we need. If you're a poor patient or a Black patient, you're going to be less likely to get the care you need, both because of barriers to accessing the hospital and because the health care system just treats you differently; there's a breakdown of trust in the relationship between doctor and patient that makes the doctor less likely to run diagnostic tests and get to the bottom of the problem. All of those things add up and mean that Black patients on average cost less when they have the same medical problem as a white patient. So when we train an algorithm to predict cost as a proxy for those health problems, we introduce that bias. Again, as you said earlier, this is a very understandable thing, and it's a subtle technical issue, do I train on this variable or that variable, but those small technical choices can have huge implications for bias.

It's such an easy mistake to make, because in many institutions the people building the algorithms are the data scientists, optimizing accuracy and so on, but when it gets deployed, somebody else figures out how to use the algorithm, and if it gets deployed with a different objective in mind, it doesn't make sense to expect the algorithm to optimize the objective you have in mind. And I like the title of your paper too: if you're predicting A, how do you expect to achieve B? It's a version of something many people are familiar with: we get exactly the thing we incentivize. When we incentivize teachers based on test scores, we get good test scores; we don't necessarily get well-rounded, educated students, unless that's a byproduct of what we're incentivizing. Just like people, algorithms are incentivized to predict exactly the thing we tell them to predict, not the thing we mean but don't have the data to express.
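The label-choice problem he describes is easy to reproduce on synthetic data. In this hedged sketch (all distributions, coefficients and variable names are illustrative assumptions, not the studied algorithm), two identical models are trained on the same features, one to predict cost and one to predict health need, and the cost-trained model flags noticeably fewer of the sicker-but-cheaper group:

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(1)
n = 50_000

# Synthetic patients: group and true underlying sickness (independent here).
black = rng.random(n) < 0.5
sickness = rng.gamma(shape=2.0, scale=1.0, size=n)

# Assumption from the interview: at the same sickness, Black patients
# generate lower costs because they receive less care.
cost = sickness * np.where(black, 0.6, 1.0) + rng.normal(0, 0.5, n)

# Features the model sees: a noisy clinical signal plus prior utilization.
# Prior utilization tracks cost, so it quietly carries the care gap.
clinical = sickness + rng.normal(0, 1.5, n)
prior_util = 0.8 * cost + rng.normal(0, 0.3, n)
X = np.column_stack([clinical, prior_util])

flag_rate = 0.10  # flag the top 10% for extra help

for label, y in [("cost proxy ", cost), ("health need", sickness)]:
    score = LinearRegression().fit(X, y).predict(X)
    flagged = score >= np.quantile(score, 1 - flag_rate)
    print(f"{label}: share of flagged patients who are Black = "
          f"{black[flagged].mean():.2f}")
```

Nothing about the model class changes between the two runs; only the target variable does, which is exactly the "small technical choice with huge implications" in the interview.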
So on that note, I'm sure a lot of our hospitals and insurance companies are now doing exactly this: they're using their own data, they have access to machine learning algorithms, and they're building all kinds of models, patient-related and even operations-related, so this will be important for them to reflect on. But let me ask you more specifically: from the perspective of a hospital system or an insurance company, what should I be doing today, based on what you know?

Yeah, maybe I'll start by saying what I don't think you should be doing, which is paying a lot of attention to the things a lot of people talk about, like, do I allow the algorithm to use the race variable, do I need to make sure I'm curating the inputs to the algorithm. What we found in our research is that the inputs are far less important than the output; in other words, what is the exact variable that I'm training the algorithm to predict? That decision is really the most consequential one we make when training an algorithm, but it's often treated as an afterthought. So pay much more attention to that decision: what am I telling the algorithm to do? What is the value system I'm putting into the algorithm by training it to predict this variable? And critically, this is not a pure data science decision. This is a business strategy decision that needs to be made at very high levels of the organization in very deliberate ways; it cannot be pushed down to some lone technical person who's building an algorithm. These are deeply consequential decisions for the business and for strategy that happen to manifest as data science problems. So really putting a lot of thought into those decisions is important. And as we showed, we actually worked with the company that made that biased algorithm, and by retraining their algorithm, with their support and cooperation, to predict a set of variables that were more related to health and less related to health care costs, we were able to dramatically reduce the amount of bias in the algorithm and, just as importantly, get those extra help resources to the people who need them. What we were doing when we fixed the algorithm wasn't affirmative action; it wasn't saying, we have some cutoff where we want Black patients to get more resources. No: we want the resources to get to the people who need them. Those people just happen to be disproportionately Black, and they happened to have been excluded from the earlier version of the algorithm, which was trained on what we viewed as the wrong variable. So fixing these algorithms is also possible, but it all starts with that very important decision: how do I train the algorithm, and what am I incentivizing it to produce for me?
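One concrete form of that advice is the audit that surfaced the bias in the first place: at equal algorithm score, compare the realized outcome you actually care about across groups. Here is a minimal sketch, with hypothetical column names, assuming you have last year's scores and a realized health measure such as a count of active chronic conditions:

```python
import pandas as pd

def audit_by_score(df, score="risk_score", outcome="chronic_conditions",
                   group="race", bins=10):
    """At equal algorithm score, does realized need differ by group?
    Large within-decile gaps suggest the training label was a biased
    proxy for the outcome you actually care about."""
    deciles = pd.qcut(df[score], q=bins, labels=False)
    return (df.assign(decile=deciles)
              .pivot_table(index="decile", columns=group,
                           values=outcome, aggfunc="mean"))

# Usage on hypothetical data:
# table = audit_by_score(patients)  # rows: score deciles; cells: mean need
# print(table)
```

If one group shows systematically worse realized health within the same score decile, the score is under-serving that group, regardless of whether race was ever an input.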
That's outstanding, and very actionable. Now, on a final note: we've been talking about some of the biases behind machine learning algorithms, but you've been working in this space long enough to see that there are lots of promises and positives that can come from the use of AI and machine learning in health. How confident are you that we'll be able to squeeze out all the goodness from AI and machine learning while making sure it doesn't go crazy on us, from the perspective of equity in particular, in the health care context?

Yeah, thanks for asking. As I mentioned at the beginning, I'm basically optimistic about the role algorithms can play in medicine. As a doctor myself, anyone who's practiced medicine knows how many mistakes you make when you're practicing and how much you don't know, so for me as a practicing physician it's really exciting to think ahead to all the things algorithms can do for us, and my work on where algorithms go wrong is fundamentally motivated by that attitude: we need to fix these problems, but we're heading in a basically very exciting new direction. I'll give one example from my work that's maybe more positive, of an algorithm that can reduce bias. The normal approach when we're, for example, teaching an algorithm to read x-rays is to teach it to read x-rays like a radiologist. But part of the exciting thing about machine learning is that we might be able to do better than radiologists, not just teach the algorithm to mimic what the radiologist says. So instead of training the algorithm on the radiologist's opinion about the x-ray, we trained it to predict (we were studying knee arthritis) which knees patients were going to report as painful. And in so doing, we actually discovered a number of new things about x-rays, things radiologists don't currently look at, that better explain the pain patients were experiencing. Which is not surprising: when we first developed our medical knowledge of knee x-rays, we were studying coal miners in Lancashire, in England, so it was a very specific form of knowledge that happened to give us some general insights about human physiology, and it's no surprise that supplementing that knowledge with new populations in new time periods yields additional insights. So not only did that algorithm better explain the pain that modern populations in the US were experiencing, it also had this de-biasing effect: the people who benefited from the additional pain-explaining power of the algorithm, relative to the radiologist, were disproportionately not just Black but also poor and less educated, because those people were different from the original populations that were used to build up our medical knowledge about arthritis. So I think algorithms can have this really powerful role in pushing forward our knowledge. That will have an equitable component, but it will also just push forward knowledge, which is also good.

Thank you, that's wonderful. And thank you so much, Dr. Ziad; we'll let you go, and we look forward to seeing your work in this area continue. Thank you so much for your time. Okay, bye-bye.

It's a pleasure to talk to Dr. Nicole Turner Lee. Nicole is a senior fellow at the Brookings Institution and director of the Center for Technology Innovation at Brookings. Dr. Turner Lee has degrees from Colgate University and a PhD in sociology from Northwestern, and since then has done incredible work at the intersection of technology and public policy, so she has probably spent more time in it than most people have, with incredible work on the digital divide and, broadly, in the last few years, on diversity and equity and the role that algorithms and technology can play. So thank you so much, Nicole, for joining us. Well, thank you for having me.
So, recently your group at Brookings put out what I think is a very high-profile report on algorithmic bias mitigation and detection, based not just on your own research (you've published extensively in this area) but also on a wonderful focus group where you brought in, I think, close to 40 leaders, which is a great thing to have done. What were some high-profile examples of algorithmic bias that came out of it?

Thank you for having me for this conversation; I think it is so important for us to talk about algorithmic bias. We often think, when we talk about computers and machines, that there's just no way bias could actually be placed into discrete mathematical models. But when you place those models within social contexts, and I'm a sociologist by training, there are socio-technical implications of the model that have to be explained. In the case of that particular paper and focus group, which is available on the Brookings website (it's called "Algorithmic bias detection and mitigation"), what we found is that there are a series of examples that could be perceived as innocuous, and then examples that become much more consequential when you look at the populations that will be affected.

An innocuous example is the type of credit card offerings presented to different populations. Latanya Sweeney at Harvard suggests that Black-sounding names tend to get higher-interest credit card offerings, or more predatory financial offerings, when they are online, and when they click those offerings they get more of them, because the algorithm is trained to pick up on that behavior. I say it's somewhat innocuous because it doesn't necessarily mean that they apply, but it's something that is delivered to them in the form of an ad, and it reflects the external systemic inequalities that we currently experience. The more severe examples come when we start looking at the application of algorithms in criminal justice: the use of algorithms to determine whether or not someone should be released or detained, where the data on which the algorithm is trained is based on collected arrest records. I don't know how many people are currently watching television, but what you're seeing, for example, in the George Floyd trial, is that African Americans tend to live in communities where they experience disproportionate numbers of arrests, at higher rates when compared to white suburban communities. So you can only imagine what data is being used to train that algorithm, and that data therefore replicates, through these machine learning models, the types of inequalities we see externally. A lot of that is my work, and I could go on and on: from employment algorithms that are trained on male data and kick out women's names and women's colleges, to facial recognition technologies that misidentify people because they cannot pick up on the complexion of their skin if it's too dark or the lighting is not right. At the end of the day, if left unchecked, these algorithms have the potential, I think, to deepen the systemic inequalities that already exist for certain populations.

Yeah, it's just incredible work, and it's amazing that people like you, with a scientific background, are working in these areas, because now you can actually come up with solutions. You talked about the range from innocuous to consequential, and in the middle you also have the everyday-life examples.
Some time back there was an online recruitment example, where a company processing job applicants' resumes figured out that its system was selecting people in a very biased manner too. So everyday, simple things, applying for a job, being selected, are algorithmic in many cases, and we are starting to see problems there too, right?

Yeah, we are, and it's one of those cases where, when we begin to look in this gray area, it's really interesting. One part of it is that there are discrete demographic variables about populations that we know of, but the internet doesn't tell you specifically that I'm a Black woman, for example, of a certain height or weight, or that I wear glasses. What you find out is that I may be a person who has an affinity toward other members who look like me on Facebook, or I may read certain journals that might suggest my race, or my zip code may act as a proxy for the type of community in which I live. And I think we have to be careful, because combined, these types of variables actually generate a kind of hidden bias, because at the end of the day the internet is so opaque, and it's this web of correlated relationships that actually exists. Which gets at your question: why would a sociologist care about this? We care because we care about systems, we care about structures, and we care about how systems and structures interact with people. At the end of the day, if you are a machine learning scientist, or an engineer, or a computer scientist in general, you may not understand that these things actually exist. And that's where, when we start seeing these algorithms applied to everyday decision making, we have to step back and say, do we need a human in the loop, or maybe even a sociologist in the loop, who can make sure we're not having this model create more bias?

You brought up an incredibly important point. When computer scientists and machine learning folks look at data, they tend to see an objective thing they want to learn from and build models on. But when you bring in the human perspective, the data is not an objective thing: the data is a combination of things that happened and things that we did, showing up in the data. And if you're not conscious that the data reflects the actions of people, then you run the risk of learning from the actions of people in ways that you may not want to learn.
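Her point about proxies, that a model shown no race column can still encode race through correlated variables such as zip code, can be demonstrated on synthetic data. In this hedged sketch (group shares, zip ranges and the bias coefficient are all invented for illustration), historical decisions were biased, the protected attribute is withheld from training, and zip code carries the bias through anyway:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)
n = 20_000

# Protected attribute; the model is never shown this column.
group_a = rng.random(n) < 0.3

# Residential segregation: zip code correlates strongly with group.
zip_code = np.where(group_a,
                    rng.integers(0, 20, n),   # group A mostly in zips 0-19
                    rng.integers(15, 50, n))  # group B mostly in zips 15-49

# Historical decisions were biased against group A at equal merit.
merit = rng.normal(0, 1, n)
hired = (merit - 0.8 * group_a + rng.normal(0, 0.5, n)) > 0

# Train without the protected attribute, but with zip code.
X = np.column_stack([merit, zip_code])
model = LogisticRegression().fit(X, hired)

# Zip code lets the model reconstruct the historical bias anyway.
scores = model.predict_proba(X)[:, 1]
cutoff = np.quantile(scores, 0.8)  # select the top 20%
for name, mask in [("group A", group_a), ("group B", ~group_a)]:
    print(f"{name} selection rate: {(scores[mask] >= cutoff).mean():.2f}")
```

Dropping the race column was not enough: the biased labels plus a correlated feature were sufficient for the model to reproduce the disparity, which is why auditing outcomes matters more than curating inputs.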
And I think that takes us in a different direction: the reasons why these algorithmic biases exist. There are many, and in your focus group part of what you did was talk about why. It's usually not that people start off saying, I want to build a biased algorithm; it never happens that way, yet somehow the result turns out to be biased. What are one or two reasons you think these algorithms tend to turn out this way if you're not careful?

Well, I think it goes back to a couple of things. First and foremost, we have to think about whether or not everything that we do should be automated. There are going to be instances where we think we should automate a particular function because it may be more efficient: for government to institute a child abuse hotline that allows them to screen calls right away, or because we believe we need a health care algorithm that lets us determine eligibility when it comes to chronic disease. But for some of those things we may want to step back and ask, and I use this analogy: is it responsible, is it trustworthy, is it lawful, is it inclusive? The suggestion is that there are going to be some functions where you may want to sit back and say, this really requires greater scrutiny before we automate it. What are the assumptions, the norms, the values of the developers? What are the values, the normative functions, that actually exist in the world? And whiteboard out what outcomes may actually happen; that may have you thinking about whether this is something you want to apply a model to at all. When it is, I always tell people it's also important to understand what data you are using. If your data under-represents or over-represents the populations you're trying to serve, then you need to go back and fix that data. As a researcher with a PhD, I can't just go out and interview populations without some type of IRB or human-subjects statement; I think we need to see that kind of care in cases like financial services, housing, and employment, where opportunities will be foreclosed on vulnerable populations because we didn't take the time to really think about and evaluate the outcomes of the model. And then there are going to be decisions that should be appealable; there are going to be some decisions that are just not right. Unfortunately, when I use facial recognition technology and it doesn't recognize me, because I change my hair often, I want to be able to share that with the designer and say, hey, this is not optimized for me as a Black woman; it's not optimized for the lighting I'm using; it's not optimized for which side of the bed I woke up on and the way my hair looks today. That type of feedback loop is not often in place, and that also contributes to these biases.

I should add: I'm working at Brookings on something like an Energy Star rating for algorithms, which basically takes the same model as the Good Housekeeping seal. When I was working on the project you're referencing, I had the opportunity to work with Paul Resnick of the University of Michigan and a woman by the name of Genie Barton, who worked at the Better Business Bureau, and it was the longest paper we ever did, because it was a sociologist, an engineer and a lawyer, and when you put those three in a room you don't get easy compromise. But what it taught me is that you have to look at algorithmic bias with a three-legged approach: what are the best practices when it comes to technical cadence; what policy prescriptions are necessary to ensure that algorithms are lawful, responsible and trustworthy; and how do you involve civil society and consumers to give you the kind of feedback that lets you be sure the algorithm is optimized to perform the same across a variety of different contexts? And that's how we define bias: when similarly situated people, places and objects receive differential treatment (and not just because I like blue dresses compared to someone who likes red), and also where people suffer disparate impact, where that credit card offering becomes part of the widening wealth gap that keeps people out of the financial market, or keeps women out of careers in engineering. Those are the disparate impacts we need to be most careful of when we're looking at algorithmic bias.
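That treatment-versus-impact distinction maps onto checks you can actually run. One conventional screen for disparate impact is the "four-fifths rule" ratio of selection rates, shown below as an illustration (a convention from US employment guidelines, not legal advice; the data is made up):

```python
def disparate_impact_ratio(selected, group):
    """Selection-rate ratio of each group to the most-favored group.
    Ratios below ~0.8 are the conventional 'four-fifths rule' red flag."""
    rates = {}
    for g in set(group):
        members = [s for s, gg in zip(selected, group) if gg == g]
        rates[g] = sum(members) / len(members)
    best = max(rates.values())
    return rates, {g: r / best for g, r in rates.items()}

# Hypothetical screening outcomes: 1 = advanced, 0 = rejected.
selected = [1, 0, 1, 1, 0, 0, 1, 0, 0, 0, 0, 1]
group    = ["a", "a", "a", "a", "a", "a", "b", "b", "b", "b", "b", "b"]
rates, ratios = disparate_impact_ratio(selected, group)
print(rates)   # raw selection rates per group: a = 0.50, b = 0.33
print(ratios)  # group 'b' at ~0.67 falls below the 0.8 threshold
```

A screen like this says nothing about intent, which is exactly the point: disparate impact can show up in the outcomes even when no one set out to discriminate.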
Thank you. Now, obviously, the good thing is that we know to look for these things; many years back we didn't know, and now we do. So if you had a magic wand and could make every company in the world start doing something now, what would that be?

Well, first and foremost, they should come and talk to me, and read the paper; make every person read it and make sure they get it. But I would actually say this to every company that's thinking about this: a lot of companies are putting together fairness models and trying to put together fairness teams, and I would tell companies, when they think about concepts like fairness, to remember that these are elusive terms with trade-offs; what is fair to you may be different for somebody else. And I think it's important for companies to remind their quantitative modelers and engineers, when they're developing these models, that, one, they bring a set of unconscious or explicit biases; two, no model is going to perform the same under different contexts; and three, when you're defining fairness, or trying to devise algorithms so that they can at least be democratized, you have to remember that you're placing the model within an already flawed system. And I wish I could get more companies to hear that message. Yes, I am all for computers and technology. When I grew up, I used to run home and watch three cartoons: The Jetsons, where George Jetson rode around space in an autonomous vehicle; The Flintstones, where Fred Flintstone drove a car that had rocks as wheels; and Fat Albert, because that's what every African American kid did when they ran home. But George Jetson just did not survive, and Fred Flintstone didn't, and it's important for us to understand the consequences…
