Ethical Impacts of Artificial Intelligence on Society | BG Ideas podcast S7 Ep4

From Bowling Green State University and the Institute for the Study of Culture and Society, this is BG Ideas.

"I'm going to show you this with a wonderful experiment."

Jolie Sheffer: You're listening to the Big Ideas podcast, a collaboration between the Institute for the Study of Culture and Society and the School of Media and Communication at Bowling Green State University. I'm Jolie Sheffer, professor of English and American culture studies and the director of ICS. As always, the opinions expressed on this podcast are those of the individuals involved and do not necessarily represent those of BGSU or its employees.

Bowling Green State University and its campuses are situated in the Great Black Swamp and the lower Great Lakes region. This land is the homeland of the Wyandot, Kickapoo, Miami, Potawatomi, Ottawa, and multiple other Indigenous tribal nations, present and past, who were forcibly removed to and from the area. We recognize these historical and contemporary ties in our efforts toward decolonizing history, and we thank the Indigenous individuals and communities who have been living and working on this land from time immemorial.

My guest today is Dr. John Dowd, who joins us as an ICS faculty fellow in fall 2022. John is an associate professor in the School of Media and Communication here at BGSU, where he teaches courses in communication, specifically communication ethics and rhetoric. His ICS fellowship project was titled "AI and Everyday Life: Finding Our Footing in Contemporary Digital Society." Thanks for joining me today, John.

John Dowd: Thanks, Jolie.

Jolie Sheffer: Will you describe for us some of the research you've been working on during your fellowship period with ICS?

John Dowd: Yeah. I've been looking a lot at the role of artificial intelligence, or AI, in everyday life, and the ways in which it has increasingly been, I hate to use the word "infiltrating," but slowly creeping into more and more aspects of our day-to-day experiences.

Jolie Sheffer: Could you explain a little about what we mean by AI, what's included in that term, and some of the ways in which it is in fact shaping our experiences?

John Dowd: Absolutely. When I started looking at that and thinking about it, I was quickly made aware of how complicated that question actually is. I assumed that I knew what AI was, and the more I started reading about it, the less I knew. I think this is a helpful way of parsing it out: there are a few different definitions we can look at and unpack. The typical industry definition would be that AI is a set of computer technologies that attempt to mimic the problem-solving and decision-making capacities of humans. That can be further broken down into both strong and weak AI. To be clear about those two: what we experience when we talk about AI, or when we're interacting with AI currently, is all weak AI. What we refer to when we talk about strong AI is what people also call AGI, or artificial general intelligence, and that would be the science fiction version of AI: Skynet, HAL, the Ex Machina robots that would be running around terrorizing us and taking over.

Jolie Sheffer: And by weak AI, do you mean things like the chatbots you might use on a website to answer FAQs, that sort of thing?

John Dowd: Yeah, and we can unpack more examples of this as we move forward, but everything from search queries to chatbots, to when we access our entertainment on Netflix and Amazon, to when we go online to shop.
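To make the idea of weak AI concrete, here is a minimal sketch in Python of the kind of narrow, single-purpose system just described: a toy FAQ chatbot. The questions, answers, and matching rule are all invented for illustration; production chatbots use statistical language models rather than word overlap, but the point stands either way: a system built for one narrow task, with no general intelligence behind it.

    # A toy "weak AI" chatbot: matches a question against a small FAQ
    # by word overlap and returns the closest canned answer.
    # The FAQ entries are hypothetical examples.

    FAQ = {
        "what are your store hours": "We're open 9 to 9, Monday through Saturday.",
        "how do i return an item": "Returns are accepted within 30 days with a receipt.",
        "do you ship internationally": "We currently ship only within the US and Canada.",
    }

    def answer(question: str) -> str:
        words = set(question.lower().strip("?!. ").split())
        best, best_overlap = None, 0
        for known, reply in FAQ.items():
            # Score each known question by how many words it shares with the input.
            overlap = len(words & set(known.split()))
            if overlap > best_overlap:
                best, best_overlap = reply, overlap
        return best or "Sorry, I don't know that one. Let me find a person."

    print(answer("What hours are you open"))   # matches the store-hours entry
    print(answer("Tell me about gift cards"))  # no overlap: falls back to a person

However simple, this is structurally what a website FAQ bot does; nothing in it generalizes beyond its one task.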
John Dowd: There's a whole variety of ways in which we interact with these technologies, all of the weak variety. The alternate definition that some people are pointing out, and that I think is important to consider, is that AI is a marketing concept used to push and hype technologies and data-centric techniques in order to sell a whole range of products and services that are obscured from transparency and scrutiny.

Jolie Sheffer: What drew you to this topic as an area of interest? How did this evolve out of your previous research?

John Dowd: I had always been interested in science fiction, so there was that natural pop-cultural interest, and that was a more positive interaction with the idea or concept of AI. The more I started to learn, read, and study, the more I learned about the harmful impacts, some of the problems within the industry itself, and some of the histories of Silicon Valley, and the racist histories, to be quite clear. So I started looking more closely and deeply into those issues and questions.

Jolie Sheffer: Some of the things I think you're referring to, and maybe you could speak a little more to this, are the ways in which we have this fantasy that AI is an objective and neutral alternative to human bias, when in fact it reproduces the biases of humans. We've seen some egregious examples where the calibration of image recognition sees dark skin and doesn't recognize it as fully human, or things like that. Is that what you're talking about?

John Dowd: Absolutely, and I'm happy you brought that up; we can talk more deeply about it as it arises, or we can touch on it now. When we start to talk about the language of AI and some of its concepts, part of the roadblock we run into is getting bogged down in technical discourse. I think it's important to be able to make distinctions between some of the concepts and tools we're talking about, but do I think it's necessary for people to know how to code in order to critique the systems they're experiencing on a day-to-day basis? I don't think that's the case. We should be able to think about it in a broader sense and be open to different ways of talking about it, rather than just in the technical domain. Two discourses in particular broadly frame the conversation: the discourses of neutrality and determinism. The first, which you just raised, suggests that these are simply tools, that they're not harmful in and of themselves, that they can be used for good or for harmful purposes. But there's a little more nuance to that: there are biases both in terms of the negative racial and gender-based biases we talked about, and also biases from a media-ecological perspective that we can touch on. The other discourse is determinism. Discourses of determinism have long been closely correlated with discourses of progress: the idea that technology is naturally and inevitably marching in a particular direction, and there's no point in trying to stop it or put the genie back in the proverbial bottle.
That can instill a false sense of a lack of agency, which we want to push back against.

Jolie Sheffer: Your work highlights the importance of having accessible language surrounding AI and data analytics. Since AI's recent rise in popularity, we've heard terms like neural networks, deep learning, and machine learning, but these are really advanced ideas, and it may not always be clear to people exactly what they mean. I think this gets back to what you were saying: the more jargony or tech-focused we are, the more people just opt out, or feel like, "Oh, that's not for me, that doesn't affect me, that's for a coder to figure out." So what would you like to see happen around the language we use to talk about AI, algorithms, and these concepts?

John Dowd: That's a great question. One of the advantages I had when coming to this issue is that I was coming to it as a non-specialist, a non-expert. I experience a lot of the imposter anxiety, the trepidation, and the feelings of insecurity in approaching these oftentimes complex and technical ideas, the same things the general population often experiences. That helped me gain a kind of perspective: okay, how can I get a handle on this? I'm supposed to be talking about this to others; what are some ways of thinking and talking about these ideas that will help me understand them, and, in that process, could potentially help my students or other people understand them? Media ecology has been central to that process for me. To take one term in particular: you mentioned early on in the discussion that these technologies are extensions of us, and that is a very media-ecological idea. They are extensions of some human process or capacity, but they are also extensions of existing ideologies and ideas. Bias is another important term, and one I use quite often. We have bias in the traditional sense, these ideas of implicit bias, but in media ecology we also think about bias as a set of characteristics that are somewhat innate to a technology or medium itself. Those characteristics don't overly determine how a technology is used or the effect it has on an environment, but they make certain behaviors make more sense than others. If I could provide a quick example of that, and then talk about some key biases at play with a lot of these technologies, I think that could be useful. One example I often give to my students: when I was in grad school, first thinking about these ideas, I was in a pub having lunch, and through a big window I watched a car drive up. Two young women got out, and they pulled out their phones. When they came in and sat down, they smiled at each other, and then for the next 45 minutes they proceeded to play on their phones. I honestly am not exaggerating when I say that they did not exchange one word with each other for 45 to 50 minutes. At the end of that time they put their phones away, looked at each other, nodded, walked out, and drove away. I presume they knew each other and came there together.
So I ask my students to think about what happened there, just to get at the relationship between technology and human behavior. We could ask: did the phones make them behave that way? And if so, what are the implications of that? Or was it that they held certain beliefs or behaviors, that they were just rude or indifferent, and were therefore simply using the technology in a particular way? Notice how those explanations demonstrate the discourses of determinism and neutrality, respectively. I think the answer is somewhere in between. When we say there are biases inherent in the media themselves, or that media are environments, what we're saying is that we are living in the environment of the smartphone, and within that environment certain behaviors make more sense than others. Prior to that technology, it would have been very awkward and strange to see two people come in together, not exchange words, and then just leave. But with the smartphone, we understand that behavior. Trying to make sense of what that means for human relationships, interpersonal communication, and society at large raises important questions for us to think about.

Jolie Sheffer: We're going to take a quick break. Thank you for listening to the Big Ideas podcast.

[Music]

Announcer: Consider sponsoring this program. To have your name or organization mentioned here, please contact us at ics@bgsu.edu.

Jolie Sheffer: Hello, and welcome back to the Big Ideas podcast. Today I'm talking to Dr. John Dowd about the role of AI in our lives and the need for accessible language about technology. We've talked about understanding complicated concepts in AI. Could you talk a bit about the ethics of AI in our lives? Where are we now, and what are some ways we should be rethinking the role of ethics in technology?

John Dowd: Ethics need to be a central concept in the design, use, and distribution of technology, particularly technology driven by AI. But we should also ask, because ethics are contested, as they should be: whose ethics are we talking about? There are a lot of public-facing ethical principles being put forth by companies, and I imagine it's a lot like what I read about DEI discourse: there's a lot of talk about the importance of and the need for DEI work.

Jolie Sheffer: By DEI, referring to diversity, equity, and inclusion.

John Dowd: Absolutely, thank you. And yet that doesn't necessarily translate to action. We know that question of ethics without action. In fact, the way I understand it, ethics implies action. A working way of thinking about ethics is: the ways in which we are or are not enacting the moral values we claim to uphold. So it's not enough to just say, "We care about your privacy," or, "Here are some of the things we plan to do to keep you safe." We need to center the concerns and the well-being of the people who are most disproportionately and harmfully impacted by these technologies, and often those are individuals and communities who are already marginalized within society.

Jolie Sheffer: Do you have some examples of that?
Because I think part of what you're talking about is data privacy: we are giving away so much data about ourselves all the time, and that can connect with police surveillance and other kinds of monitoring that are not equally spread out.

John Dowd: Let me list a few, and then let's unpack some of them. We have examples of predictive policing. If people are familiar with Minority Report, that story of the precogs who predict crimes before they happen: that is actually occurring in many communities, only we're using AI-driven systems. There's credit score monitoring, and access to public resources. And when I came across the ways in which employers not only could but were actively monitoring and surveilling their employees through the office tools we use on a day-to-day basis, I shouldn't have been surprised or caught off guard, but I couldn't help but be. There are also examples on an entertainment level, and this will hopefully get back to the set of biases that I think are guiding a lot of these practices. Take the Ring doorbell, which I believe is owned by Amazon. There is talk that Amazon is in the process of creating a reality television show that uses camera footage gathered by Ring doorbells. Delivery drivers and other people are already surveilled and monitored to an extensive degree, but this takes it to a level of mockery and dehumanization. It's not far of a stretch to imagine what's going to happen: it's going to show people treating other people's property poorly, people falling over into bushes, people in humiliating or compromising situations, all the sensationalism that comes with that. And that content is being created directly from data taken from Ring doorbells, a product many people utilize in their daily lives.

Jolie Sheffer: And it's products like that which people use for one purpose, not realizing that in accepting the product they are relinquishing their rights to the data it scoops up. This is true of Nest thermostats, Roomba vacuum cleaners, certain smart mattresses. You lose control of all of that data as soon as you accept the terms of service.

John Dowd: Yeah. Something I'll do occasionally is ask people to pause and think about all of the different smart devices they are either wearing or that are within intimate distance of them on a regular basis. It takes a little while, and oftentimes they are quite surprised to recount all the different technologies they themselves have purchased and use. And when you see them make the connection between what these devices actually do and what they can do, there's a bit of a wake-up there for many of us. I know there was for me. A lot of these technologies, if we can unpack some of them a little more, predictive policing in particular, are really problematic. We talk about black-box AI in terms of the technical aspect, but there's also an institutional element with the municipalities and institutions utilizing this technology.
Even if the designers and technicians understand how these systems work, most of the time the people purchasing and implementing them don't. We don't really have clear numbers on how many police departments are utilizing this technology, but we know that it's increasing, and the way these systems work is really disturbing and problematic. There was a lot of research and reporting on a system utilized in Chicago, where AI-driven predictive software was used to generate lists of potential criminals, with names of individuals, and the police would knock on these individuals' doors before they had even committed a crime and tell them they were being watched: "We're paying attention to this." There was also the incident several years ago, I forget the gentleman's name, with facial recognition software up in Detroit, where a man was wrongfully accused of a crime. He was brought in and interviewed by the detective, and he said, "This clearly is not me," and the detective wouldn't believe him. He said, "Well, the computer says it's you." That's the degree to which people are being gaslit by these technologies, when we let them override our own common sense about what we know to be the case.

Jolie Sheffer: And these practices existed before the technology as well, right? When you talk about policing, it's basically the practice of over-policing certain communities: if you bring police there to focus, they will find more crime. It doesn't mean there's actually more crime, relative to the population, than in the white suburb next door.

John Dowd: Absolutely. If you draw attention to something, of course you're going to find the thing you're looking for. The more heavily policed an area is, the more stops are likely to be made; an expectation gets created that some crime is going on there, not to mention our own implicit biases about who commits what kinds of crimes, and where. Again, these are extensions of biases that have existed for a long time. This idea of the criminalization of Blackness that Khalil Muhammad talks about in his amazing book starts with the emergence of statistical analysis in the social sciences to begin with. So it makes diabolical sense that technologies heavily dependent on mathematics and statistical prediction would take this to a whole new level. We should constantly be shocked at this stuff; we don't want to lose our ability to be shocked. But more of the amazement is not that it's happening, but at the scale and pace at which it's capable of happening now, given these new technologies and the sheer amount of data being collected on a daily basis, both voluntarily and involuntarily. I think a lot of folks just assume, or hope, that that data is going to make them safer, when in fact that is not necessarily the case.
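The over-policing feedback loop described above can be made concrete with a toy simulation. Everything here is invented for illustration, not a model of any real department: two neighborhoods with identical underlying offense rates, where each period most patrols go wherever past recorded incidents are highest, and police record only what they are present to observe.

    # Two neighborhoods with identical true offense rates. Patrols follow
    # past *recorded* incidents, and recording requires presence.
    # All numbers are invented for illustration.

    true_rate = {"A": 100, "B": 100}   # actual offenses per period: equal
    history = {"A": 55, "B": 45}       # a small historical skew to start

    for period in range(1, 6):
        hot = max(history, key=history.get)             # "predicted hot spot"
        share = {n: (0.8 if n == hot else 0.2) for n in history}
        recorded = {n: true_rate[n] * share[n] for n in history}
        for n in history:
            history[n] += recorded[n]                   # records feed the model
        print(period, history)

    # The recorded gap between A and B grows every period, even though
    # the underlying offense rates never differed.

A small initial skew in the records is enough: the data increasingly "confirms" that one neighborhood is high-crime, which is exactly the dynamic of finding what you are looking for.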
John Dowd: There is some interesting psychology going on here. I was reading about the phenomenon, the argument essentially being that people who live in more homogeneous societies tend to be more supportive of social benefits, but that when those societies become more diverse, all of a sudden social benefits are a crutch and we don't want to support them. I wonder if there's something similar going on with surveillance and data collection: when we feel safe and comfortable, we're less worried about surveillance and technology, but when we are in a situation where we are frightened by the other, we think these technologies are an easy fix for inherent fears that have been stoked for a long time.

Jolie Sheffer: One of the things that has come up a lot recently is the negative mental health effects of social media, but really it is about the algorithms. I'm thinking about the studies on Instagram's effects on the mental health and self-image of teenage girls. Could you talk a little about where you see mental health, AI, and algorithms intersecting?

John Dowd: More recently I've been writing a bit about the relationship between social and digital media and loneliness. There are a lot of studies, and there's the whole cliche that correlation is not causation, but there are some pretty compelling correlations between the rise of digital platforms like Facebook, Twitter, and now Instagram and these other platforms, and feelings of depression, loneliness, and isolation. As these technologies become more widely used, we see a concomitant rise in those feelings. Again, we don't want to assume a direct causal link between the two, but it's hard to ignore that there's a relationship.

Jolie Sheffer: Could you tell us a little about your community engagement work with Bowling Green High School students this semester?

John Dowd: I'm working with Dr. JoBeth Gonzalez, director of the theater program at Bowling Green High School. Our project is focusing on questions of technology and the experiences of teenagers, who are on the front lines using this technology themselves. I've been working with them, thinking through some of these issues and talking about the concerns they have, and they will be utilizing some of that knowledge and information to put together a short performance highlighting some of the tensions involved in these discussions. They, not I, will be performing it at Way Public Library, I think on December 3rd or 7th; we'll clarify.

Jolie Sheffer: What are you hoping they take away from this collaboration?

John Dowd: What I hope they take away from it is the same thing I take away from it, which is perspective. The older I get, the further away I get from the experience they're having. I have my own personal experiences and relationships with technology, and they're not entirely different, but certainly the lives of teenagers growing up with this technology are in many cases radically different from my own experiences. I hope they garner a sense of understanding about the technology and respect for it, in terms of this false idea of it being a neutral tool. I hope they can learn to develop healthy boundaries between themselves and the technologies they use, and not feel so tethered to them, that sense of FOMO that often gets experienced by people who aren't fully participating in these technologies.

Jolie Sheffer: I know that group does a lot with peer-to-peer conversations.
Part of the theater work is often developing short performances designed for their peers that are educational as well as entertaining. What are you seeing from that conversation about the ways peers are talking to each other about how to create different kinds of boundaries, or to think differently about their use of social media?

John Dowd: I think one of the biases inherent in our digital environment is this idea of rapidization. Our experiences of time have, I think, radically changed in some cases, so it can be difficult to pause and take a step back; we often get caught up in the moment. I don't think it's surprising that many of the technologies being developed now are designed around the short creative burst, and I can only speculate that future technologies are going to be similar, predicated on a shorter attention span.

Jolie Sheffer: And thinking about social media, the way those algorithms work is that they prioritize the most recent posts, so you truly are missing things if you're not checking all the time.

John Dowd: Absolutely, and they are personalized. This notion of personalization is a very powerful one, and it's a marketing ploy built on the idea of engagement: the more we can engage you, relate to you, and personalize things for you, the more likely, ideally, you will be to purchase our product, service, or idea. And we see some of this moving over into the political realm as well. I think we're all going to have to deal with the impacts and consequences of this, but certainly younger kids, adolescents, children whose brains are still developing, are more susceptible in different ways. Not because they're incapable of enacting some form of agency, but, neurologically, there is some developmental stuff that's different from adults, some of whom have had the opportunity to cultivate more experience in life and have a greater context for the experiences we're having online.

Jolie Sheffer: I think, too, that this is really unprecedented: this is a generation that is the experiment. They have been exposed to these technologies from birth, whereas older folks were adults, or at least older, before these tools were introduced, so they had a before and after. We've treated a generation as guinea pigs by introducing them to these tools at really young ages.

John Dowd: Absolutely. I think we're all guinea pigs, and I mean literally: we are being experimented on. Around the time of the Cambridge Analytica scandal at Facebook, we discovered that they were actually manipulating people's news feeds to see if they could generate different affective responses. As diabolical as that sounds, and it is, it was also about marketing: can I create an ad for the same product that is different for an introvert than for a perceived extrovert, in a way that makes the introvert more likely to buy? It was really circulating around a lot of those marketing tactics and ideas.
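A toy scoring function can make the recency and personalization points concrete. The fields, weights, and half-life below are invented for illustration; real platforms combine far more signals, but the shape is similar: scores decay with age and rise with predicted engagement, so older posts sink no matter how good they were.

    import math
    import time

    # A toy feed ranker. All fields and weights are hypothetical.
    def score(post: dict, now: float, half_life_hours: float = 6.0) -> float:
        age_hours = (now - post["posted_at"]) / 3600
        recency = 0.5 ** (age_hours / half_life_hours)  # exponential decay with age
        engagement = post["likes"] + 3 * post["comments"] + 5 * post["shares"]
        personalization = 2.0 if post["author"] in viewer_follows else 1.0
        return recency * math.log1p(engagement) * personalization

    now = time.time()
    viewer_follows = {"friend_a", "friend_b"}
    posts = [
        {"author": "friend_a", "posted_at": now - 2 * 3600, "likes": 10, "comments": 2, "shares": 0},
        {"author": "stranger", "posted_at": now - 1 * 3600, "likes": 900, "comments": 150, "shares": 40},
        {"author": "friend_b", "posted_at": now - 48 * 3600, "likes": 500, "comments": 80, "shares": 30},
    ]

    for p in sorted(posts, key=lambda p: score(p, now), reverse=True):
        print(p["author"])
    # friend_b's two-day-old post sinks despite high engagement:
    # if you weren't checking, you missed it.

The recency decay is what keeps people checking constantly, and the engagement and personalization terms are the "marketing ploy" in numeric form.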
Jolie Sheffer: What would you like our listeners to take away from your research this semester? Any advice or recommendations for folks about what they can do to be more informed, more engaged, more aware of the role of AI and algorithms in their lives?

John Dowd: Be open to a lot of the work that is currently being done, and I'll highlight some of it. One project in particular that I'm happy to report on deals with the idea of consentful tech practices. This is a project spearheaded by Una Lee and Tawana Petty for the Consentful Tech Project, and I think it started in Detroit. They created a great curriculum designed around this idea of consentful tech, which advocates for consent, data privacy, and safety to be built into the front end of technological platforms. An example would be an opt-in ethos rather than an opt-out one.

Jolie Sheffer: Can you briefly explain, for folks who maybe aren't quite following, what the difference is? The current default is opt-out, which means what?

John Dowd: Which means I download an app and I am agreeing to the collection of any number of data points stipulated within the company's legal agreement, and I am automatically signed up for that. Sometimes the process for opting out is laborious, complicated, and confusing; they make it as difficult as possible to opt out, and a lot of times people just resign themselves to it for ease and convenience. This is the challenge: the very things technology purports to assist us with, frictionless access, technologies of ease and convenience, also make us a bit more susceptible to not taking a step back and thinking about what we're actually exposing ourselves to in the process. So with the opt-out default, we are automatically enrolled in those agreements, and we have to actively go in and change those settings ourselves.

Jolie Sheffer: And that's an example of putting the onus on the individual, as opposed to the system level.

John Dowd: Absolutely: "We've given you the opportunity to opt out; we've done our legal part. If you didn't do that, that's on you; you should have read the 534-page user agreement." Whereas the opt-in ethos would be: "Here are all of the pieces of data we would like to collect. You can choose to allow us to do that or not, before you even sign up for the product or service," with that choice made as seamless and as frictionless as possible.
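The difference between the two defaults is small enough to fit in a few lines of code. A minimal sketch, with hypothetical setting names: the only change between the two classes is the default value, but that default decides who has to do the work.

    from dataclasses import dataclass

    # Hypothetical data-collection settings; the names are invented.
    # Opt-out ethos: everything defaults to on, and the burden is on
    # the user to find and disable each item.
    @dataclass
    class OptOutSettings:
        share_location: bool = True
        share_contacts: bool = True
        personalized_ads: bool = True

    # Opt-in / consentful ethos: nothing is collected until the user
    # explicitly says yes, up front, item by item.
    @dataclass
    class OptInSettings:
        share_location: bool = False
        share_contacts: bool = False
        personalized_ads: bool = False

    user_a = OptOutSettings()   # did nothing; all collection is on
    user_b = OptInSettings()    # did nothing; no collection at all
    print(user_a)
    print(user_b)

The code is trivial by design: consentful tech is less a technical problem than a question of which default the designer chooses.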
John Dowd: There's also Joy Buolamwini, who spearheaded the work on facial recognition and the biases involved in it. It was actually her dissertation project that recognized the problem these systems have recognizing the faces of darker-skinned people, particularly darker-skinned women, and she went on to spearhead, I think it's called, the Algorithmic Justice League. She talks about inclusive coding, and she has three very important and related questions, which I'll put in statement form: who codes matters, why we code matters, and what we code matters. This again involves the design end, rather than trying to mitigate all of the harm after the fact: having more inclusive coding teams on the design end, and also being more responsible about curating the data we train these systems on. Rather than dumping the entirety of the nine rings of hell of the internet into these models, having them spit something out, and then being surprised when they start making racist or sexist claims, we should slow down and curate that data much more responsibly and inclusively.

Jolie Sheffer: Thanks so much for joining me today, John. It's been a pleasure talking with you. Listeners can keep up with ICS by following us on Twitter and Instagram at @icsbgsu and on our Facebook page. You can listen to Big Ideas wherever you find your favorite podcasts; please subscribe and rate us on your preferred platform. Our sound engineers for this episode were Randy Kyle and Marco Mendoza, with audio editing by Deanna McKeighan and Marco Mendoza. Research assistance was provided by Ellie Dapcus, with editing by Joe Elia.

[Music]

Thank you.
