UX in ATX: Design Thinking and Ethical Considerations for AI Technologies
All right, I am really excited about this topic, design thinking and ethical considerations for AI technologies, because I've been learning about AI and using AI so much. It's exciting because I feel like we're in the wild, wild west. For those of you who remember the 90s, when the web was really getting started and everything was brand new, we were all figuring it out and creating all the rules around it at the same time. I feel like we're in that moment again, and I feel like we have a really good expert here in Sarah to talk us through it. So without further ado, Sarah, you have the floor.

Thank you so much, Cindy. I'm super passionate about this topic, and I can get long-winded and speak quickly about it, so if I ever lose you in any of this, I'd really love for you to raise your hand, ask a question, and interrupt me. I don't see it as interrupting; we're collaborating to make sure we all understand this. So we'll dig right in.

Maybe some of you feel like Josephine here. She's a creative working in the UX field who is feeling bombarded with all of this talk about UX in AI and isn't quite sure how to impact AI tech through her contributions. She's worried that if she doesn't get on board with understanding AI technologies, she will miss job opportunities. Can any of you relate? Please know that you are not alone.

Why am I in a position to talk to you about this? Well, I've been working at the intersection of algorithms and user experience for over 20 years. I started my career working on algorithms in bedside monitors, moved over to working on algorithms that sustained lives, and then spent five and a half years building a user experience research team that shipped Face ID, shipped Vision Pro, and shipped a host of health-sensing algorithms on the watch. For the last three years I've continued this effort as a consultant, and as I've hired user experience professionals to support my clients, it's dawned on me that there's an opportunity for us to evolve design thinking practices for AI. That's why I'm here.

So we've heard all the hype: AI is here to help us be more efficient, to optimize our decision making, to offset risk, and to increase accuracy. In some cases it does that, and in a host of other cases there is room for improvement. The whole point of AI is to deliver new levels of delight to users, and we're not quite there yet.

I want to take a minute, for those of you who are able to come on camera or give some indicator: would it be useful to spend a few minutes talking about what AI is and how it works? It's okay if the answer is no; I'll move on to a different set of slides. What do you think, Cindy? Craig, what were you thinking, as you come off mute?

Yeah, I would say, just because it's so commonly misused, I think it's always good to square up on terms. So I'd be interested to hear how you define AI.

Great, let's go there. Okay. With a traditional algorithm, which is where I started my career, I worked alongside engineers where we created the rules and the parameters for how we wanted the machine to function. In this case: is this a normal ECG wave pattern? We gave it tons and tons of ECG data, what felt like tons to us at the time, and we created a feature. It's very simple logic, one plus one equals two; in this case, the typical shape of an ECG waveform. We can defend the logic.
But as we've moved into the world of machine learning, the way a machine learning algorithm works is that you give it the data and you tell it the features you're looking for; the machine learning algorithm makes sense of that, and in a very narrow application, like Face ID in this case, it makes a decision: is this the person who is authenticated with the phone, yes or no?

As we move into deep learning algorithms, they are composed of multiple layers of neural networks. Well, what's a neural network? It takes vast amounts of data, we're talking about millions of data points, and the engineers building it are making connections from one data point to the next, from this data point to a host of other data points, and in there the algorithm extracts what's meaningful. An engineer can't always explain how the algorithm is making those decisions, but what we can see is that the algorithm can then predict, say, what we think is the fastest way to get from point A to point B, or, layering on those predictions, drive autonomously. The thing to keep in mind about a neural network is that it's tons of data, the engineers have applied the logic of how the data points relate to one another, and there are layers and layers of algorithms.

But now we're in the brave new world of generative AI. We're talking data on the magnitude of billions of what we call nodes, which are lots and lots of data points, and here the logic between the nodes is fuzzy. The engineers don't necessarily have the oversight to predict what the outcome will be, or even to repeat it. But the beauty of it is that we can create new content.

Another view of this is that artificial intelligence is the big umbrella of making machines think like humans. Machine learning is the piece where we're teaching the machine how to learn and how to make sense of the data, and the patterns in that data drive which model gets chosen: a reinforcement learning model, a supervised learning model, or a deep learning model, which is layers and layers of these different kinds of models. And in the new world we're in, generative AI is a very small part of this overall landscape of all the different kinds of models we can use. You with me so far? Okay.

So where does this happen in real life? One way to think about it is supervised learning: the data gets labeled, and this is narrow AI, a very particular application. Once all that data is very specifically labeled, the algorithm can predict and classify, for example, the weather; you'll see this in forecasting, or in predicting purchasing for a particular user. In unsupervised learning, the data is not labeled, so the algorithm gets to uncover connections and learn from them; the logic is fuzzy to the engineers building it. And another model type in machine learning is a reinforcement model. This is the case of all the cars you might see around the streets of Austin with sensors all over them: in real time they are taking in data and learning from it. That's why you might sometimes see an engineer sitting in the back with a laptop open, telling the system to listen to this data, ignore that data, learn from that data; in real time the machine is making sense of the data and learning from it.
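To make the supervised versus unsupervised distinction concrete, here is a minimal sketch in Python, assuming scikit-learn and an invented toy weather dataset; the features, labels, and numbers are all hypothetical illustrations, not from the talk.

```python
# A minimal sketch of supervised vs. unsupervised learning (hypothetical toy data).
from sklearn.linear_model import LogisticRegression
from sklearn.cluster import KMeans

# Supervised: every sample carries a human-assigned label, so the model
# learns to predict a category the engineers defined up front.
# Features: [temperature_c, humidity_pct]; labels: 1 = rain, 0 = no rain.
X_labeled = [[30, 20], [22, 85], [28, 30], [18, 90], [25, 40], [16, 95]]
y_labels = [0, 1, 0, 1, 0, 1]
classifier = LogisticRegression().fit(X_labeled, y_labels)
print(classifier.predict([[20, 88]]))  # outputs a label humans defined

# Unsupervised: no labels at all, so the model groups samples by similarity;
# what each cluster "means" is left for humans to interpret after the fact.
X_unlabeled = X_labeled
clusters = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X_unlabeled)
print(clusters)  # cluster ids, not human-defined categories
```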
Why does this matter to the UX community? Because of data. Because we have a role to play in this data.

Another thing to think about with all of this data is where it comes from. Simplified: data goes in, the machine makes sense of it, and decisions, predictions, or new content come out. But this data has a lifecycle, and this is a very simplified version of it. The data is gathered; it is prepared; it is wrangled, meaning we make sense of it and decide how we're going to make some of these connections; it is analyzed, do we want to keep it, do we want to throw some of it out; and then it's used to train the model. A lot of effort goes into getting the data into a place where we can feed it to the algorithm.

But to err is human. That's user experience principle number one, and humans are involved in every step of getting the data ready for the algorithm, as the sketch below illustrates. We are full of bias, we are full of judgment, and we make decisions that lead to undesirable outcomes, shaped by our childhood, by what we were educated on, by what we haven't been educated on, and by our own life experiences. That bias shows up in the data, and it leads to unintended outcomes. You can take a screenshot of this if you want to follow up on any of these articles; it has been coming to the forefront of the news for the past few years how these AI technologies are not performing to the expectation of delighting users.

And there's an ugly underbelly of who is part of the system of sanitizing this data. Companies have good intentions here, and these people have good intentions to do the right thing, and there are consequences, possibly unintended consequences. Maybe we didn't think with foresight about what the consequence would be to these people of cleaning up ChatGPT all day long.

Why am I bringing this to your attention? Because of the big picture across all those steps in the data. It's one thing for ChatGPT to not produce the results I want; it's another thing when entire communities are experiencing unintended consequences in the realm of policing, in the realm of health care, and in the realm of education. So my argument, my call to action, my intention, is that the user experience community can dig in and understand the breadth of complexity here as it relates to managing and corralling the data, and start to think about these people as players that we are designing for.

As I mentioned, the results are not always accurate, and the consequences to society are long-term. Right now we've got Google producing accurate data for us, with the exception of Gemini, and we still have that little trigger in us that says: man, this policing system is inappropriately detecting people, and it's not true; this health care system is biased toward certain ethnicities and not attentive to others. We've got a conscience right now that recognizes the erroneous. But as we become more dependent on these technologies, long term, maybe society won't detect these errors, and that's what we need to get ahead of.

And there's an economy around all this data. This is from an article on LinkedIn, from 2022 if I recall, and these were all the companies popping up at the time that are just part of managing the data that feeds into the algorithms. And finally, just within the last few weeks, we have a top-down solution to try to drive safety and accuracy, and maybe some ethics, into our algorithms. But we all know that a top-down approach takes time, and I'd like to propose another way: a bottom-up approach.

I'm just going to pause for a minute; I moved quickly through those slides. Does anyone have any questions before we move on? No question is a silly question, just know that. Okay.
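Picking the data lifecycle back up for a moment: here is a minimal sketch, with entirely hypothetical function names and records, of a gather, prepare, analyze, train pipeline where each step is a human decision point, which is exactly where the bias described above can enter.

```python
# A minimal sketch of the data lifecycle (all names and records hypothetical).
# The point is not the ML; it is that every step embeds a human judgment call.

def gather(sources):
    # Human decision: which sources get included at all.
    return [record for src in sources for record in src]

def prepare(records):
    # Human decision: what counts as "dirty" and gets dropped.
    return [r for r in records if r.get("value") is not None]

def analyze(records):
    # Human decision: which records are "relevant" enough to keep.
    return [r for r in records if r.get("relevant", True)]

def train(records):
    # Stand-in for model training; the model inherits every upstream choice.
    return {"model": "trained", "n_samples": len(records)}

decision_log = []  # make the human choices auditable, not invisible

sources = [[{"value": 1}, {"value": None}], [{"value": 2, "relevant": False}]]
data = gather(sources)
decision_log.append("gather: 2 sources chosen, others excluded")
data = prepare(data)
decision_log.append("prepare: dropped records with null values")
data = analyze(data)
decision_log.append("analyze: dropped records judged 'irrelevant'")
model = train(data)
print(model, decision_log)
```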
Well, as user experience professionals, we know that it all starts with the users. I'd like to encourage you, as you prepare yourselves in your careers to work on AI technologies, to start thinking about the users differently. Yes, we know to think about users in terms of their culture, values, and education. But the thing I want you to consider more deeply is: who are the people on the fringe, and what are the edge cases? That's a term I want you to remember: edge cases. The beauty of AI is that you are designing for personalization, where everyone gets a personalized experience. But if the data looks like a bell curve and represents the most common people, the average person, which is likely what you'll find in a data set these days, then the algorithm is not fine-tuned for the people on the edges. It's our job to advocate for them and find ways to design the AI so that they too can have a truly personalized experience. That comes through representation in the data, and we as UXers have a role to play in it. You with me so far? Okay.

So we have an opportunity: an opportunity to bring AI to the level the scientists have claimed for it. For it to be truly inclusive and to mitigate the biases currently present in our systems. For it to be truly personalized and accessible. What does it mean to be individualized? It means that the person who is not the common user can experience the product with the same delight as you and I. It means we all experience accurate AI solutions, that our data is held securely, that our data is not shared with others, and that over time we experience this consistently, such that we can trust the AI and we can trust the brand that is using AI to deliver solutions.

Well, how do we go about doing that? I think you wrote about this on LinkedIn today or yesterday, Cindy: what makes us different from an AI agent? Our superpowers are empathy and understanding, creativity and innovation, our intuition, which is a form of intelligence, and awareness: awareness of ourselves and of the environment, using all five of our senses as sentient beings to make decisions. So let's use our superpowers to create not just user experiences but human experiences, for all of the people involved in developing these AI technologies. And it starts with design thinking.

I've got the Nielsen Norman design thinking model up here. For the purposes of today I'm consolidating it into the four steps noted here, and for the rest of our time together I'll speak to these four pieces of content in depth: how to think about the human context; a tool for collaborative brainstorming; how to analyze the potential consequences of AI, get ahead of them, and design mitigations into the product; and then how to monitor for unintended consequences long term.

So let's dig into Discover. The first thing we do: what is the problem, what are we trying to solve? No matter how fancy our AI tech gets, how much data we can gather from the internet, or how far some of these generative AI tools can lead us down a potential solution path, I want us to keep in mind how much data we can get by going to observe end users. We might be under more pressure to deliver sooner, but our superpower is to use all five of our senses with those end users. And another thing I'd like us to start thinking about in this first phase of the design thinking model is: what data do we have already, and how might that data be used to train the algorithms?
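One way to act on the question of whose world the data reflects is to audit subgroup representation in the training set against the population you intend to serve. This is a minimal sketch, assuming pandas, with invented column names and reference proportions; the half-of-expected threshold is an arbitrary placeholder a real team would negotiate.

```python
# A minimal sketch of a representation audit (hypothetical columns and targets).
import pandas as pd

# Training data with a hypothetical demographic column.
train = pd.DataFrame({"age_band": ["25-40"] * 70 + ["40-65"] * 25 + ["65+"] * 5})

# The population you intend to serve (assumed reference proportions).
reference = {"25-40": 0.45, "40-65": 0.35, "65+": 0.20}

observed = train["age_band"].value_counts(normalize=True)
for group, expected in reference.items():
    actual = observed.get(group, 0.0)
    # Flag groups with less than half their expected share: the edge cases.
    if actual < 0.5 * expected:
        print(f"Underrepresented: {group} ({actual:.0%} vs {expected:.0%} expected)")
```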
Because in my experience, the engineers, management, product management, leadership, they're already running. They've got an idea in mind: oh, we can use this data, we can put this solution in place, and we can have a product here. Yes, you can, but whose world does that data reflect? The common middle of the bell curve, or the edge cases? I'd like you to start thinking about this in the Discover phase, because I don't want you to get behind, with the company reacting at the end because we didn't have the right data.

We'll move on to Define. In my experience with generative AI tools and the speed at which AI technologies are coming to market, we are being pressured to short-circuit our journey mapping and to short-circuit our detailed task analyses. Instead there's a focus on intent and output: what is the user trying to do, and can the algorithm perform that way? So I encourage you to think about this intent in terms of a human-centric experience, to broaden our perspective: it's not just the end user who could be impacted, it could be the other family members in the home. And we're good at this. We're good at thinking about who all the impacted people are, and I want to encourage you to leverage that superpower to teach the people you're working with how to think about it.

Then there's establishing clear goals and metrics for success. It may not be about task completion anymore; it may be about productivity, or about measuring efficiency in new ways. And the thing that's different, to start thinking about early in the Define phase, is: what human tasks can AI automate, and what new tasks could we do with AI?

Okay, well, how do we go about doing that? I propose that we start with a collaborative brainstorm. You, as the expert who has spent time interviewing users and going through customer complaint logs, have a sense of what the user needs are and what their values are. You need to know this coming into the brainstorm. The data scientists and algorithm engineers you're working with will bring to the table all of their ideas on AI capabilities: narrow AI algorithms, online learning, all of these approaches we are not experts in, but we get to learn in real time. And when we pair a user need with an AI capability, collectively we can come up with a feature or a solution. Also bring product management in; know who your key stakeholders are, and have them at this collaborative session. In my experience this was something that used to happen after workflows were defined, after we had a very detailed end-to-end experience. No: we're doing this early on, because it's not about the entire workflow anymore. The workflow is still important for us to know when we're speaking with our peers, but here it's intent and output, and we capture both in the brainstorm.
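As a toy illustration of that pairing exercise, here is a minimal sketch that crosses user needs with AI capabilities to seed candidate features; the needs and capabilities listed are invented examples, and in practice the grid is a conversation starter between UX and engineering rather than a generator of finished ideas.

```python
# A minimal sketch of a collaborative brainstorm matrix (hypothetical entries).
from itertools import product

user_needs = [
    "compose a message without typing",
    "understand a medical report in plain language",
]
ai_capabilities = [
    "speech-to-text",
    "text summarization",
]

# Cross every need with every capability; most pairings will be discarded,
# but the grid forces the UX/engineering conversation to happen early.
candidates = [
    {"need": need, "capability": cap, "feature_idea": f"{cap} for: {need}"}
    for need, cap in product(user_needs, ai_capabilities)
]
for c in candidates:
    print(c["feature_idea"])
```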
The next piece, once you've got this brainstorm, is to go through and evaluate these ideas and solutions against user value, technical feasibility, and the data needed to bring the algorithm to life. What do I mean by user value? Well, if you haven't heard of it yet, I encourage you to take a screenshot of this and go to the AI Now Institute (ainowinstitute.org) to dig up the KPMG report on perspectives on FATE: fairness, accountability, transparency, and ethics. FATE is a term that came out of academia around 2010, and I think the AI Now Institute socialized it around 2015 or 2016, but it's a way to represent user values when thinking about AI technologies. Fairness: could this algorithm evenly represent all users and treat them all the same? Accountability: would we be willing to be accountable and responsible for the outcomes? Transparency: can we build this algorithm in a way that it's readily understandable? Transparency is a tough topic in the AI world because of the whole concept of the inferences these algorithms make, but the whole point of governance now is to say: you have a responsibility for the behavior and the performance, and for us to understand, at least loosely, how the algorithm works. Could we do that? And lastly, ethics: is what we're doing here just, equitable, and good?

Any team collaborating together, any company, gets to figure out its own thresholds for these. What does it mean to do the right thing for the user, for your company, in your context? What does it mean to be just for our users and equitable across all users? These are fuzzy things we get to wrestle with; there's not a right or wrong answer. But with experience, with thinking about it, with pushing products to market and seeing what works and what doesn't, like a rising tide raising all ships, we can build our sensitivity to these things together. As a user experience community we can share knowledge on how we thought about fairness and how it worked. We're learning together what it means to apply FATE as a judgment of user values when we're building these technologies, and that's why post-release monitoring is so important: so we can see how these things behave in the real world and re-evaluate our sensitivity to them. Are you with me? Okay.
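To show how a team might make those fuzzy thresholds explicit, here is a minimal sketch that scores each brainstormed candidate against the FATE values plus technical feasibility and data readiness. The candidates, scales, and the floor of 3 are all made up; the point is that the rubric forces the negotiation, not that the numbers are authoritative.

```python
# A minimal sketch of scoring candidate solutions (all scores/cutoffs hypothetical).
from dataclasses import dataclass

@dataclass
class Candidate:
    name: str
    fairness: int        # 1-5: evenly represents and treats all users?
    accountability: int  # 1-5: willing to own the outcomes?
    transparency: int    # 1-5: can we explain, at least loosely, how it works?
    ethics: int          # 1-5: just, equitable, and good?
    feasibility: int     # 1-5: can engineering actually build it?
    data_ready: int      # 1-5: do we have clean, representative data?

    def passes(self, floor: int = 3) -> bool:
        # A candidate advances only if no dimension falls below the team's floor.
        scores = [self.fairness, self.accountability, self.transparency,
                  self.ethics, self.feasibility, self.data_ready]
        return min(scores) >= floor

candidates = [
    Candidate("summarize medical reports", 4, 4, 3, 4, 4, 2),
    Candidate("voice-composed messages", 4, 5, 4, 4, 4, 4),
]
for c in candidates:
    print(c.name, "-> advance" if c.passes() else "-> needs work")
```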
So we talked about evaluating user values, evaluating technical feasibility, and evaluating the data need, and we're going to come back and revisit that model of what the data goes through. At this point, if you're coming up with a few solutions that are looking pretty good, that could add really meaningful value to the user experience, it's time to come up with a data strategy and build out a pipeline, or work with companies that will deliver that pipeline, to push the data through those steps. And we as user experience people get to have a seat at the table; I encourage you to find your way to the table where these data needs are being discussed. Again: whose world does that data reflect? We know through the literature, through academia, and through our own experience with AI technologies in our hands that the data has to be clean to produce better outcomes, that it needs to be diverse and representative of the users, and that it needs to be accurate to the context, and purposeful. You'll get better results building Face ID when you have faces, and real-life content of what happens in our everyday lives, to train the algorithm. So if you're going to generate or collect more data, think about all the ways to get cleaner, purposeful data.

Another thing I think UX can advocate for is the employee experience. Who are the people working at each of these data steps? What biases do they have? What is the consequence to them of dealing with this data day in and day out, and what bias could that alone generate in the data? I think we have a responsibility to think about this, and when that moment comes where our intuition, our emotion, very meaningful forms of intelligence, say to us, "I'm not sure about that," to listen and find a way to act on it.

The other piece of this is that generating the data, housing the data, and sanitizing the data come with a huge cost, and that can impact the customer experience and the overall cost of the product. We need a lot of data to do this well, we need the right data to deliver accurate solutions, there's effort involved, and it's a trade-off. Billions of dollars went into what it takes to deliver ChatGPT, and not every startup has that kind of money. In 2023, I think the cost was around four million dollars to build a machine learning or neural network algorithm that could go to market: four million dollars for the data alone. So we get to be kind of scrappy with this, and it takes creativity from all of us to get there. As I said: weigh in on the technical feasibility of these things, and then evaluate the data needed to build them. Yes, Bob?

When you have those data arguments, so to speak, where you're advocating for a more diverse data set, who is the audience? Is it primarily engineers? Is it the business or marketing side saying, no, we have enough with the middle of the bell curve? Or who? I'm just curious.

That's a great question. It's upper management, the people responsible for maintaining the cost. I've been building AI technologies for nine years, and on very, very rare occasions has an algorithm engineer ever said to me that they've got enough data. But data is costly and takes a lot of time, so the pressure comes from upper management controlling the overall product development cost. Does that answer your question? He's on mute.

Yes, sorry, I was trying to unmute. Thank you.

No problem. Any other questions about this before we move on? Okay.

So in the Define phase you've created this brainstorming map of a bunch of AI opportunities that map to user value and user needs, and then we get to move on to prototyping, building, and iterating on that prototype. It's very common in this phase to build a very small-scale model of the feature or function you're thinking of and test its core functionality, and I very strongly recommend that as UXers you're part of that testing. Remember those edge users and edge cases: they need to be represented in the test cases so that we have visibility, early, into how the product is performing for them.

For those of you who have worked in regulated industries, whether health care, finance, or transportation, there's a tool called a use error analysis. I've heard a different term for it lately that I can't think of right now, but in my days of doing all this it was a use error analysis. In the work I've done with clients in the past few years, we've iterated on that idea and brainstormed a consequence analysis; another way to think of it is a potential outcomes analysis. The importance of it, because we're designing for the human experience, not just the this-moment user experience, is to think about the short-, mid-, and long-term effects of an idea on society.
Did Mark Zuckerberg know that if we spent tons and tons of time in front of Facebook, with an algorithm pushing us content we liked, we would create an inherent bias of thinking that our own view is right? That's likely an unintended long-term consequence. These are the kinds of things we get to think about when building AI technologies, and I'd like us to feel a sense of responsibility, not from a place of heaviness but from a place of opportunity. We get to do this. We as UXers get to advocate for the end users, for the human experience, with a long-term perspective. And by doing this consequence analysis alongside the engineers, because we'll need their help coming up with these things, you build your awareness of AI weaknesses and get really radical in your ideas, so you keep uncovering the unknowns of AI and the unknowns of your algorithm. Then we get back to the drawing board and put designs in place to mitigate those risks.

So how does this work? When doing a consequence analysis, you take the features and solutions you came up with and you brainstorm consequences like crazy: short, mid, and long term. The algorithm is erroneous. The algorithm hallucinates, which is happening right now; it's one of the slides I showed you, Google's CEO apologizing for false information out there. Malicious intent: when building Face ID, I did not think about all the ways people would try to break into someone else's phone. It blew my mind that that would happen, but it does, and we have to design against it. And dependency: what happens to human decision making when someone uses this technology over time? Today, as a researcher in the weeds of the data, you notice a little pulse and your gut says, oh, I've got to dig into that. Could we lose that sensitivity to the granular things if we're relying on AI to do the analysis for us? Maybe that's not a problem, but maybe it is, and that's the point of the exercise.

So: brainstorm the consequences, brainstorm the detriment to the user experience of each consequence, estimate the likelihood of occurrence, decide what mitigation to put in place, and then, once that mitigation is in place, ask what the overall remaining risk to the user is. This is a common risk management tool; you can look it up. A use error analysis is a bottom-up risk management tool, and that's what this is, but through the lens of AI. I've worked with clients on this and we've built it out: rows and rows and rows of ideas and consequences. It has shaped what we did differently with the algorithm we were building, and which things we decided were too high-risk to ship right now. Those go on the product roadmap, but we had to go do research and prototyping to figure out how to drive the risk down to an acceptable level before we shipped. Any questions on this before I move on?
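Here is a minimal sketch of what rows of such a consequence analysis might look like, with hypothetical entries and a 1 to 5 scale; the risk arithmetic (severity times likelihood, before and after mitigation) mirrors common bottom-up risk management tools like the use error analysis mentioned above.

```python
# A minimal sketch of a consequence analysis register (hypothetical scales: 1-5).
from dataclasses import dataclass

@dataclass
class ConsequenceRow:
    feature: str
    consequence: str
    horizon: str              # "short", "mid", or "long" term
    severity: int             # 1-5: detriment to the user experience
    likelihood: int           # 1-5: chance of occurrence
    mitigation: str
    residual_likelihood: int  # likelihood after the mitigation is in place

    @property
    def initial_risk(self) -> int:
        return self.severity * self.likelihood

    @property
    def residual_risk(self) -> int:
        return self.severity * self.residual_likelihood

rows = [
    ConsequenceRow("report summarizer", "hallucinated medical detail", "short",
                   5, 3, "cite the source passage for every claim", 1),
    ConsequenceRow("report summarizer", "clinicians stop reading originals", "long",
                   4, 4, "show the summary beside the source text", 2),
]
# Review highest residual risk first; anything above the team's line gets held.
for r in sorted(rows, key=lambda r: r.residual_risk, reverse=True):
    flag = "HOLD for more research" if r.residual_risk > 6 else "acceptable"
    print(f"{r.consequence}: initial {r.initial_risk}, residual {r.residual_risk} -> {flag}")
```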
Okay. So the point is that the final design, as in the iteration that you ship, is a refined interaction model for the entire human experience; you've considered the short-, medium-, and long-term consequences of the technology. And as you update that design, the thing I'm still wrapping my head around with AI is that failures will happen, so have we done our due diligence to ensure that the benefits of the technology outweigh the risks?

Amanda Askell at Anthropic, which I think is a beautiful company by the way, spend some time digging into it, produced the 3H framing: to ship the product, it has to meet a bare minimum performance of being helpful, honest, and harmless. In the world I come from, we would never have shipped an ECG algorithm that didn't accurately detect heart rate. We would never have shipped Face ID if it didn't have a superior level of quality. But as we move into neural networks and into generative AI tools and solutions, there's a fuzziness around performance and around what is good enough to ship. While I personally want to keep the bar up here, at an excellent, delightful, knock-it-out-of-the-park user experience, AI tech in the neural network and generative AI spaces is down here. So we collectively get to build our barometer of the minimum performance needed to be helpful, honest, and harmless, and we get to do it together on our product development teams.

And then test that product like crazy when you think you've got the final design. In the world I come from, medical devices and human factors, we call it summative testing. We're not just doing the common scenarios; we're doing the edge cases, and it's important to observe closely for potential bias and unintended consequences. It's not just whether they passed or failed the task; it's what their decision making was, as a function of the information we displayed to them, and what new action they took. Again: intent and output. It's important to examine user behavior and their subsequent activities as they work with the AI, because you will find things different from what you intended. And because we're looking at a bar down here, this minimum level of acceptance, not up here, the common business approach to release is: ship, then go get more data and improve over time. So in this last step of testing, iterating, and releasing, part of the work is deciding where that threshold of minimum acceptance sits, and where we already know we need to go get more data for the next iteration of the product.
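As a toy version of that barometer, here is a minimal sketch of a 3H release gate that evaluates common scenarios and edge cases separately and holds the release if any helpful, honest, or harmless score falls below the team's agreed floor; every metric name, score, and threshold here is invented for illustration.

```python
# A minimal sketch of a 3H release gate (all metrics and thresholds hypothetical).

# Scores in [0, 1] from some evaluation suite, split by scenario type so the
# edge cases are judged on their own, not averaged away by the common cases.
results = {
    "common": {"helpful": 0.92, "honest": 0.95, "harmless": 0.99},
    "edge":   {"helpful": 0.71, "honest": 0.88, "harmless": 0.97},
}

# The floor is the team's negotiated minimum, applied to edge cases too.
floors = {"helpful": 0.80, "honest": 0.90, "harmless": 0.95}

def release_gate(results, floors):
    failures = []
    for scenario, scores in results.items():
        for metric, floor in floors.items():
            if scores[metric] < floor:
                failures.append(f"{scenario}/{metric}: {scores[metric]:.2f} < {floor:.2f}")
    return failures

failures = release_gate(results, floors)
print("SHIP" if not failures else f"HOLD: {failures}")
```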
Now, something I would be remiss not to talk about: governance is coming. In 2026, I believe, the United States has said it will enforce laws like the ones the EU put in place to ensure that AI technologies are transparent. So companies building AI tech get to develop a culture around this: a culture around data privacy and security values, around how to monitor the risk of people with malicious intent, and around how to demonstrate compliance to the United States government that they are doing their due diligence to build safe technologies. It starts now, and it starts with companies creating committees for governance, risk, and compliance, and I think user experience needs to be at that table. As we think about the different people who are part of the system corralling the data: if the company you're working with is building an AI technology where any of those data steps happen under your roof, you have a responsibility to design solutions for those people, and to advocate for what their experience will be like when they're spending hours upon hours sanitizing the data.

This is a really hot topic in the AI community. I encourage you to go spend time at the Ethical AI Database site (ethicalaidatabase.org); these are all companies that have come forward to say, "I too want to make AI better," along with the different sectors they're playing in to do so. One, build up your own awareness of this, and two, they need UX people.

So: we talked about designing for the human experience through collaborative brainstorming and through consequence analysis, and about how to monitor long term for unintended consequences by listening to what's happening in the real world, to how people are actually using the product. In the end, if we do this well, we'll have a refined interaction model for the entire human experience, where the interaction model is that big data wheel again: everyone who plays a role in contributing to the AI tech, and the community that is impacted by it.

The AI industry is just exploding right now, and I put this here to remind you all that for us to be meaningful UXers, we get to learn tools like consequence analysis so we can bring new value to these companies. Our jobs might be shifting; in my world they are shifting. Yes, I need to bring all my user experience superpowers, the things that got me excited about UX in the first place, like applying empathy every day; I'm a very emotional person. But we also get to evolve our skills and fine-tune what we bring to the table to help these companies ship products equivalent to what they intended.

I would also be remiss if I left this time together without talking about how to use generative AI in our own work. I have a slide on it: there are lots of tools out there, they do help us move more efficiently, they can spark creativity, they're awesome, use them, and then get back to designing for AI. Because, as I mentioned, we're designing for AI in the data, in how the data is generated and sanitized. We're designing for AI about the data: the ethical data handling and solutions, the governance. We're designing for AI about the process: let's evolve the design thinking process, come up with new tools beyond what I've proposed to you today, publish them, and help the whole user experience community rise to meet the moment of what AI technology needs. And lastly, we're all here to design for the humans; we're here to develop more transparent and interpretable AI systems, and we've got a long way to go, and we need you.

In design thinking for the human experience, I think our titles and our roles will shift. If you go out and look today, there are a lot of research engineers, and you know what they're doing: they're working here, in the data and about the data. But we need you UXers there too, thinking about the users in the broader experience. And we do have superpowers. We are highly attuned to using our five senses in novel ways that, trust me, our fellow engineers are not attuned to, and our breadth of users need us at the table, bringing our superpowers to the discussions of how to design these technologies. Day in and day out, we need you at the table with these people, advocating for delight, so that we can bring the bar from down here, a minimum shippable experience, up to something that truly advances the user experience. And as I mentioned, we get to develop new methods to drive ethical AI innovation into the human experience.

So, we talked about Josephine at the beginning, feeling like the train was about to pass her by. With these tools, and by thinking about how we can provide our skills to the industry in a different way, we can be on the train. We can be on the AI train. Questions?
Sarah, this is awesome. I actually have a question, and it might be tangential. I'm on LinkedIn all the time, as I'm sure you are too, and there are some UX people who are complete naysayers, who say don't use these technologies because the privacy and security just isn't there. What would be your response? Do you feel like this model you've presented would address that, or is there some other way to address the concerns folks are bringing up?

Well, the concerns are real. When we're using generative AI for our work, we do need to think about what data we're sharing with a generative AI tool, what tool we're using, what has been in the press about how they use our data, and whether or not we want to opt into that. And if it's not safe or a good idea to share private company information with some of these generative AI tools, I encourage you to explore how to use them for your own purposes: go create images for fun, write a poem to your mom for Mother's Day using ChatGPT. Because the thing that drives me crazy about generative AI tools right now is that prompt engineering is a thing: the user experience of these things bites, and I have to think like the machine to get the results I want. That's poor UX. We need user experience people to build their own perception of this, their own opinion, so it's not just my opinion that it's an awful user experience; you have real-time experience trying to use it, so you can speak to it. The AI train is much like the internet train, much like the car industry back in the day: engines exploded, consequences, unintended consequences, happened, and it just took time for the technology to evolve and become safe. And much like the artists who kept painting and didn't pick up the camera when it came along went out of business, it's in our best interest to use these tools to keep ourselves proficient and efficient at doing our jobs. We just have to figure out how to do it safely, without revealing confidential company information along the way. Thank you.

What else? There's a question in the chat: can you give examples of edge users? Edge users. I'm going to take ChatGPT, since we're picking on it. Let's say someone doesn't know how to spell accurately: how can ChatGPT help them along, understand their intent, and give them an educated output? How can my grandmother, who does not know how to think like a machine, use it? We're all lucky to be here; we know enough to try again and try again until we get the result we're working toward out of ChatGPT. How could someone like my grandmother use ChatGPT to write me a card, when she doesn't have the dexterity in her hands anymore? So these are people who might have accessibility concerns, or who might not be highly educated. If we think about health care, we have medical deserts in urban areas and medical deserts in rural areas; those people's health care needs are not represented in how they access health care. So if I'm building a fancy algorithm that will help them detect an ailment in their body, but they don't have internet, how will we serve them? These things are tangential to the product; it may not be one-for-one within the product itself, but it impacts the user experience.
And in my experience, it's our job to advocate for that. What else? I know we have time for at least one more question. Go ahead, Craig.

Okay, just wanted to make sure we didn't get that double-talk going. I loved seeing your experience, because it's been quite a while since I took a survey course in machine learning, and it's kind of funny that so many years later it's finally getting momentum. Honestly, in my opinion it seems like it's too early, except for use cases like the ones you've been involved with, and I think that's the real strength of machine learning: lots of data applied to technical problems. What's interesting about yours is that it was on widely released consumer products, so you're in that sweet spot. I don't love the term AI, because I think that's an overreach; it's usually just machine learning. But if you stick with the machine learning approach, the sweet spot is lots of data applied to problems that don't have a lot of human interaction. I liked your talk, and I appreciate you including those slides with the frameworks, because eventually I think it will get better and better, and it needs to be led by humans if it's going to serve humans. But my question is really about those specific use cases and specific ML models: where do you see them being most relevant right now? I think it was a great callout that UX could be brought to the data pipeline; that alone would be valuable and would speed up the entire train. But it still seems so heavily technical, so backend-engineer, that the use case is really at these machine-human interface points rather than the typical user product. Do you see more strength coming in, say, medtech products? One of your charts showed the different industry uses. A much more concise way of asking my question: how would you rank the application by industry as a place to focus? Did that make any sense at all?

Yes. So, the goal of AI is to offset human decision making and make our lives easier, and AI is in every industry right now. The problem, or the challenge at hand, is whether that AI algorithm is truly doing what we intended it to do, because it's fraught with error, because the company doesn't have the money to house all of those data steps in-house, so they outsource to other companies where there are hundreds of hands touching the data, and it doesn't get to a clean place before it comes in-house to train the algorithm. So it's everywhere right now, and I know this because I have clients from every industry, though I don't consider myself an expert in most of them. But yes, you are right that the value shows up where we're making something work better: it's in manufacturing, it's in consumer electronics. You've been using it on Google Maps since you started using Google Maps; we just didn't talk about it that way, it didn't get air time. Siri, if you use Siri, and Alexa, if you use Alexa: these are all machine learning algorithms. So it's in our everyday lives, but the bar is down here: Alexa makes a mistake and we laugh about it. For regulated industries like medical, transportation, and health care, those will be the last to fully adopt AI, because their required level of accuracy is a totally different ball game than Alexa or ChatGPT.
The risk comes in gray spaces like education, where it's not regulated. What is the bar for it to be harmless? Companies are popping up and pushing, because we live in this democracy with a whole capital-gains race going on, and they're going to push these AI technologies into a space like education, and they're going to have huge long-term detrimental impacts on us if we're not concerned with the ethics of the biases in those algorithms. So it's in every industry right now; it's just that health care hasn't gotten it right yet, and Tesla and automated driving haven't gotten it right yet, and those were some of the headlines I showed today. But it is like an arms race, if you will, with everyone pushing to have this as their competitive advantage.

Oh, sorry, go ahead, Craig.

No, I just wanted to say that's helpful; I think two or three categories is probably useful. It's a bit of a bummer that the more interesting use cases are the ones that will come last, but I totally appreciate your point. God knows we need more UX people involved in this to make it less sloppy. So thank you for your presentation.

Thanks. Last call for questions, because I know we're a little past 7:30, and I promised you a 7:30 stop time. Sarah, this was an amazing talk. We've had really great speakers, but on this one I was totally zeroed in. Awesome talk tonight; thank you so much for joining us. Can we all give her a round of applause? Thank you so much. I will post this video and share it widely, because I think your ideas need to go out to the wider UX community. So thank you very much, and thank you all for coming to UX in ATX tonight when you could be doing a lot of other things. I appreciate you, and hopefully I'll see you at the next social. You all have a wonderful evening. Thanks, Cindy.