Panel: Maximizing benefits and minimizing harms with language technologies


Hal Daumé: Hi, and welcome to the panel on maximizing benefits and minimizing harms with language technologies. I'm Hal Daumé, a senior principal researcher at Microsoft Research in New York City. Although she's not here with us today, this panel and its topic were co-designed with Alexandra Olteanu, who's at Microsoft Research in Montreal.

To give a little context around what we hope to cover: the observation we're starting from is that language plays a significant role in how people understand and construct the social world. This means that technologies that involve human language — which we'll refer to as human language technologies, or HLT, meant to be inclusive of text, speech, dialogue, sign language, and anything else that is language related — can contribute both positively and negatively to people in the social world. We're going to talk about this from two sides. One is the maximizing-benefit side: how can human language technologies challenge power, and support and serve individuals and communities? The other is the minimizing-harm perspective: who is being excluded by current language technologies, who benefits, and who loses?

We have a panel of four esteemed researchers who have both depth and breadth of knowledge across these areas, which I really believe will lead to interesting discussion. I'm going to let each of them introduce themselves briefly, going alphabetically. First is Steven Bird.

Steven Bird: Thanks, Hal. Hi from Larrakia country. I'm a professor at Charles Darwin University. I've been living and working in a remote Aboriginal community in the far north of Australia over the last five years, where I'm learning to speak the local language, Kunwinjku, which has about 2,000 speakers, and I'm working on new methods for transcription, language learning, and intergenerational transmission of culture.

Hal Daumé: Wonderful. Next is Su Lin.

Su Lin Blodgett: Hi, my name is Su Lin Blodgett. I'm a postdoctoral researcher at Microsoft Research in Montreal, working very broadly on the social and ethical implications of language technologies.

Hal Daumé: And Margaret.

Margaret Mitchell: Hi, I'm Margaret Mitchell. I'm a researcher across a lot of different domains and companies, although most recently I'm joining Hugging Face as some sort of Chief AI Ethics something — we're still working on the title; it's a startup, so you get to choose titles. My background is largely in natural language processing, with some computer vision thrown in, assistive technology thrown in a bit, and then ethics-and-AI-type work.

Hal Daumé: And last we have Hannah Wallach.

Hannah Wallach: Hey, everyone. As Hal said, I'm Hannah Wallach, a partner research manager at Microsoft Research New York. My background is in machine learning, specifically machine learning for text, and sometimes social networks and these kinds of things. But for the past six years or so I've been focusing mainly on issues of fairness and transparency as they relate to AI and machine learning, and over the past year or two in particular I've returned to my roots in models for text, thinking a lot about fairness-related issues in the context of language technologies.

Hal Daumé: Great. Thank you, everyone, for being here. We're spanning wide time zones, so I appreciate both the early-morning people and the evening people for making the time.
We're going to start with a discussion around benefits and how human language technologies can serve communities. The first question I want to start with: what are some examples of projects you've worked on, or know of, where human language technologies are used in the service of users or communities to make their lives better? Maybe we can start with Steven.

Steven Bird: Thanks, Hal. In a lot of Indigenous communities, people have managed to make extended recordings of stories and of information about families and places. The initial focus of a lot of technology work in this space has been to make transcription easier, to accelerate that process. But that puts things into the written mode, which local people don't command. Another language technology that may be more applicable is spoken document retrieval, where people could access the content of large audio archives exclusively in the oral modality.

Hal Daumé: Does anyone want to jump in on this?

Margaret Mitchell: I want to speak to the framing of the question, which somewhat hilariously referred to people as "users." Part of the framing of ethical AI work, and of understanding both benefits and harms, is recognizing that there are users and consumers, and there are also those affected. We tend to say "users" when we mean people, so the question itself and its framing are, I think, indicative of the mindsets we have in technology. But once we can understand humans as people, there's a lot that can be done that is useful. I haven't done anything like what Steven — Dr. Bird — has done, but I have worked on things like trying to predict mild cognitive impairment from the kind of language people produce in narrative retellings, and on describing images and naming objects for people who are blind. In the assistive and augmentative space there's a lot of good that can be done, as well as in more corporate contexts, but I think the assistive and augmentative aspect of machine learning and NLP is one of the most beneficial.

Hannah Wallach: Maybe I can jump in there and talk a little bit about a project I worked on — gosh, in the mid-2000s at this point, 2004 to 2007 or so. This was back when I was a PhD student at the University of Cambridge working with David MacKay, who had created a program called Dasher. The whole idea behind Dasher was a user interface to help people input text. It was intended not so much for those of us who are perfectly capable of sitting around typing on a keyboard, but for people who needed other ways of interacting with a computer or a system. What was so exciting about Dasher was that it wasn't just a user interface — say, a bunch of words or letters that somebody would have to somehow navigate to and select with a mouse, a head mouse, or some other device. It had a language model powering the entire thing, and this language model would adjust what was visible on the screen based on what the person had previously input. It's a little hard to describe if you haven't seen it — I do recommend checking it out; there are certainly screenshots and videos around — but the language model would basically control the user interface so that letters or words that were more probable would be placed in bigger areas of the screen. In this way you could zoom toward the bigger areas of the screen, which were more likely to be the things you wanted to say. It was a much more efficient way of inputting text than pecking out individual letters on a keyboard.
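To make the mechanism concrete, here is a minimal sketch, not Dasher's actual implementation, of the core idea Hannah describes: a language model assigns each possible next character a probability, and the interface allocates screen area in proportion to those probabilities. The toy bigram model and the `screen_height` parameter are illustrative assumptions.

```python
from collections import Counter, defaultdict

# Toy character bigram model trained on a tiny corpus (illustrative only;
# Dasher itself used a more sophisticated adaptive language model).
corpus = "the cat sat on the mat the dog sat on the log"
bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def allocate_areas(context_char, screen_height=600):
    """Divide the screen's vertical extent among possible next characters,
    giving each a slice proportional to its conditional probability."""
    counts = bigrams[context_char]
    total = sum(counts.values())
    top = 0.0
    regions = []
    for char, count in counts.most_common():
        height = screen_height * count / total
        regions.append((char, top, top + height))  # (char, y_start, y_end)
        top += height
    return regions

# After typing "t", likely continuations such as "h" get the largest slice,
# so the user can select them with a coarser, faster gesture.
for char, y0, y1 in allocate_areas("t"):
    print(f"{char!r}: rows {y0:6.1f} to {y1:6.1f}")
```

In the real system the regions nest recursively, so steering into a region simultaneously commits a character and reveals the distribution over the next one.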
Hal Daumé: That system is awesome. I think I remember that it would do eye tracking, so if you looked at one corner, that region would start getting bigger.

Hannah Wallach: That's right, exactly. There were a number of different devices that David and others tried connecting it to in order to control the system. One was a head mouse; there was also a breath mouse, so you could breathe to control the system; eye tracking was certainly something folks experimented with — a whole bunch of different things. And of course you could just use a regular mouse like we're all used to, or even a stylus. I did a little work trying to put Dasher on the Compaq iPAQ, a handheld computing device that nobody remembers at this point, but we thought it might be fun to have it there as a way of inputting text rather than using an on-screen keyboard.

Hal Daumé: I think there's an interesting thread connecting all three of these, which is tailoring the language system to cases where the users either prefer, or don't really have access to, certain modalities — and thinking about how to use other modalities to bridge that gap.

Hannah Wallach: I think that's right, although I want to emphasize that while we're highlighting these kinds of things, language technologies are used all over the place in some extremely mundane ways. Recently, of course, we've all been working from home and taking meetings on things like Teams and Zoom, and many of these pieces of software have built-in transcription services with a language model as the back end. Actually, maybe even there it's about accessing things in different modalities: the meeting itself is often carried out by people speaking, and then you're trying to turn that into text. So maybe even there it fits your pattern.

Margaret Mitchell: There are uses that are not just augmentative and assistive, and I feel less inclined to focus on those, because I think the tech sector generally is very good at selling ideas as just generally good for people — or "users," as they call them. But as someone who's worked in NLP: you can use the technology to do information extraction, where, for example, if you want to learn about longitudinal behavior around some organization, you can extract mentions and pull out the provenance of different strings, and you can get sentiment and toxicity and stance and opinion. For all these cases where you're trying to learn or flesh out information in some way — there's a task called knowledge base population — these tools are very, very helpful. It's very easy to do basic analysis on the Bible, I've learned: with some basic language technology I blew away one of my religion PhD friends by pulling out all examples of "the right hand of God" and everything that was going on around them. So it can be really useful in any sort of large research domain, and I think that's a large reason why people work on NLP.
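As a concrete illustration of the kind of basic analysis Margaret mentions, here is a minimal sketch of a keyword-in-context (concordance) search over a plain-text corpus. The file path and phrase are placeholders; a fuller pipeline might use an NLP toolkit, but plain regular expressions already go a long way.

```python
import re

def concordance(text, phrase, window=40):
    """Find every occurrence of `phrase` and return it with surrounding context."""
    hits = []
    for match in re.finditer(re.escape(phrase), text, flags=re.IGNORECASE):
        start = max(0, match.start() - window)
        end = min(len(text), match.end() + window)
        hits.append(text[start:end].replace("\n", " "))
    return hits

# Placeholder path: any public-domain plain-text edition would do.
with open("bible.txt", encoding="utf-8") as f:
    text = f.read()

for line in concordance(text, "right hand of God"):
    print(line)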
Hannah Wallach: That was a big focus of my research before I moved into fairness and transparency — I did a lot of work on topic models as a tool for social scientists to use to understand large document collections. My favorite example there, drawing on your comment about the Bible, was analyzing a dataset of UFO sightings. You get some very interesting topics emerging there.
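For readers who haven't used topic models, here is a minimal sketch of fitting one to a small document collection with scikit-learn — one common library choice; Hannah's own research used other tools and models. The toy documents and the number of topics are illustrative assumptions.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

docs = [
    "bright light hovered over the highway for several minutes",
    "saucer shaped craft moved silently across the night sky",
    "the committee reviewed the budget and approved new funding",
    "council members debated the proposed funding for schools",
]

# Bag-of-words counts, then LDA with two topics (tiny numbers purely for demo).
vectorizer = CountVectorizer(stop_words="english")
counts = vectorizer.fit_transform(docs)
lda = LatentDirichletAllocation(n_components=2, random_state=0)
lda.fit(counts)

# Show the highest-weight words per topic.
vocab = vectorizer.get_feature_names_out()
for k, weights in enumerate(lda.components_):
    top = weights.argsort()[::-1][:5]
    print(f"topic {k}:", ", ".join(vocab[i] for i in top))
```

On a real collection, a social scientist would skim these top-word lists to get a first map of what the documents are about, then read the documents most associated with each topic.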
Steven Bird: There's another aspect of the framing of the question about users, and that is, I think, the assumption that technology is for individuals. In places where the only device people have is a mobile phone, and the local practice of ownership is that things are shared, you can't assume that a device is for one individual. It will be used by multiple people — for instance, to look at or consume media, which is a popular use of a mobile device — but a dyadic, one-on-one connection between a person and a machine is not really that common, or not continuous.

Hal Daumé: How do you find that dynamic affects how you think about the technologies and systems you want to develop?

Steven Bird: I've been thinking about a triangular relationship, with two people using a technology — for example, a linguist sitting with a speaker using transcription software, or a linguist sitting with a monolingual older speaker and having a younger person interpret. Most of the engagements seem to involve three points, and I'm looking at ways to substitute one of them with technology, or to assist one of them with technology, rather than technology being a surrogate for a person.

Hal Daumé: I wanted to circle back to Margaret's previous point, which I was going to use as a transition to the next question before that good sidebar. The example you gave about using language technology to investigate document collections, or the history of an organization, reminded me of a ruling that came down in France a couple of years ago: apparently you're not allowed to use statistical analysis on judges' decisions. That's not specifically about NLP — it's about machine learning and statistical systems generally — but obviously the court system, among other institutions, produces lots and lots of text. I think this naturally leads to the question: how can language technologies be used to challenge power structures, like the courts, or to support minoritized or disenfranchised people and communities?

Su Lin Blodgett: I can start, though it will be maybe a little bit of a cantankerous answer. Your example, Margaret, is actually very related to some work that one of my labmates, Katie Keith, did on automatically identifying, from newspapers, the names of people killed by police. The goal of that project was to aid the work of the people who do this manually, which is exhausting labor. The compilation of these databases of the names of people killed by police is important, because there are no existing records kept by people in power, so this was seen as one way to hold power to account. But while a lot of the ways we could imagine using language technologies like this — traveling through large numbers of documents to find evidence of certain patterns — can be really valuable, they can also be powered by logics of quantification: the idea that you require this kind of evidence in order to hold power to account. It's not as though you need to add more names to your database to know about police abuses; we have plenty of evidence about that already. So for a lot of these kinds of cases — and a lot of "NLP for social good" framings, like "let's narrow the digital divide" or "let's provide better education" — there are logics of quantification, or logics of scale, that I find risky.

Hal Daumé: Margaret, I think you wanted to say something — go ahead.

Margaret Mitchell: Su Lin makes great points, as always. I was pondering your original framing of the question, Hal, and the relationship to judicial decisions, so I do want to flag something that everyone here knows, but whoever is watching may not: there's a lot of work on fairness and bias in the justice system, and much of it has shown that really problematic human biases — racism and sexism — are reflected in the data we learn from. So the French ruling makes a lot of sense, given that the kinds of predictions that would be made would most likely not challenge existing power structures but actually reinforce them. I was having difficulty tying your question to the segue, because I would say that example is not an example of a benefit; it goes into harm.

Hal Daumé: Oh, I agree. Yeah, that's a trickier one.

Margaret Mitchell: The one thing I'd add is that when you're thinking of supporting marginalized or disenfranchised individuals, there's a lot of utility in language generation technologies. A prime example for me: I developed so much anxiety over writing emails, because my emails were used to judge my personality and character. There's a whole understanding of how women are supposed to talk in email — instead of just saying "yeah" you have to say "yeah!" or "yeah :)" and all this kind of stuff. So when auto-suggest and auto-compose — Smart Compose and all these things — were coming out, a lot of people in the bias and fairness world said this was a horrible idea, and I said: what are you talking about? This is a great idea. Now I don't have to guess how to sound like the person people want me to sound like; I can just have Smart Compose write it for me, and oftentimes it comes up with good ideas. So I think there is a lot that's useful for supporting disenfranchised populations, especially in natural language generation.
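As a small illustration of the completion technology Margaret describes, here is a minimal sketch using the Hugging Face transformers library with an off-the-shelf GPT-2 model to suggest continuations for a partially written email. The model choice and decoding settings are illustrative assumptions, not how any production Smart Compose system works.

```python
from transformers import pipeline

# Small general-purpose model, purely for illustration; production systems
# use models trained and filtered specifically for email completion.
generator = pipeline("text-generation", model="gpt2")

draft = "Thanks so much for the update! I think the plan sounds"
suggestions = generator(
    draft,
    max_new_tokens=10,       # suggest only a short continuation
    num_return_sequences=3,  # offer the writer a few alternatives
    do_sample=True,
    top_p=0.9,
)
for s in suggestions:
    print(s["generated_text"])
```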
Steven Bird: I wanted to jump in here too and point out a situation where language technologies may perpetuate existing power structures, because they may involve a minority of people constructing technological solutions for everyone. That's built on the idea that we can make people's lives better, often without involving them in the design process, and it deprives people of their agency. In a colonized space, one of the causes of language endangerment is exactly that denial of local agency and self-determination. So the very idea of solving social problems with technology is kind of flawed, actually.

Hal Daumé: That goes to a side question I had. I don't know about the NLP or human language technology space specifically, but more broadly there are plenty of examples of technology developed with one thing in mind that then gets reappropriated. Steven brought up involving the communities that will presumably use or be impacted by a technology in its design and development — or possibly just not designing or developing at all, which might be the better answer in some cases. What are people's thoughts on how we can try to develop things that are actually useful and help empower people?

Hannah Wallach: This is certainly something we've been thinking about a bunch lately within Microsoft, and I actually want to kick this over to Su Lin, who in particular has been doing some really interesting thinking about participatory approaches and related things.

Su Lin Blodgett: I was actually going to say something slightly different, but I can talk about that too. Participatory approaches are certainly appealing, but they're not a panacea. Especially as they've been adopted — not so much in NLP as in machine learning — they've been used as a way to avert fairness-related harms rather than to dismantle the relationships of power that undergird the entire process of technological development. And they usually don't come with the option of "maybe we shouldn't build this in the first place." People have written about this — there's a great paper, "Participation is not a Design Fix for Machine Learning." I had another thought originally, but it's escaped me.

Hal Daumé: Can I restate the question? It was about reappropriation of technology: how things designed with doing good in mind can get taken by whoever has power and used to maintain that power rather than flip it. It doesn't look like that helped.

Su Lin Blodgett: No, I haven't recovered the thought. I'm sure it was a great one.

Steven Bird: Indigenous communities are really masters of reappropriation — of fixing things and making things with whatever is to hand — and that's an example of people's resilience in the face of many threats to their agency. I feel that the HCI community, and people working on ubiquitous computing, are leading the way here, and we in the language technology community could take a look at what they're doing and seek ways of collaborating with them, because they have a fifty-year history of participatory design and co-design, of democratizing the design process. We in NLP are following in their footsteps, maybe without realizing it.
Su Lin Blodgett: I did recover my thought, and it was in response to a different part of your question: how do we design things to actually make them useful? I've been doing a lot of thinking — and I'm interested in other people's thoughts too — on how NLP tasks, within NLP as a research field, are defined; what the boundaries of these tasks are; and how many of them, in pursuit of something like natural language understanding, actually map really poorly onto things that people would actually want solved, or would want as solutions. This all came up because we've been doing some thinking on participatory design and what that might look like for NLP, and there are settings where it might just be incoherent. What would PD look like for something like natural language inference, which is a classical NLP task for natural language understanding? Maybe aspects of the technologies — the architectures or the datasets developed to solve it — eventually get used in applications out in the world, but it's not clear how that task framing supports developing technologies that help people. So I've been thinking a lot about the boundaries of NLP tasks, how they map onto problems people might actually want solved, and whether it's incoherent to apply approaches like PD to how we understand and bound NLP tasks.

Margaret Mitchell: Sorry — I was just going to note that I think a lot of NLP tasks were chosen because they were feasible several years or decades ago, not necessarily because of what people might want or need, but more because "hey, we can do this with the currently available technology."

Hal Daumé: This reminds me a little of a paper by Amershi et al. at CHI 2019, where they introduced the term "AI-infused systems" to try to get away from the notion of autonomous systems that solve a whole task, and instead put the focus on using AI technologies — for us, human language technologies — in ways that are directly exposed to the end user and actually help that user, rather than completely solving some task we defined many years ago. But I think this boundaries point that Su Lin brings up is really interesting, because there are task boundaries and there are also field boundaries. When we talk about projects like Dasher, spoken document retrieval, or object detection for blind or low-vision users, these are not topics that appear very commonly in the main NLP — or perhaps even speech, and certainly machine learning — venues. And yet in some ways they seem to be the ones that are plausibly useful to people. Why is that? Is it a historical accident, or is something else going on? It's also occurred to me — and I'd love to hear people's thoughts on this — that search lives in a different academic place than NLP, which has always struck me as absurd, given that if you ask people what language technologies they encounter on a regular basis, search is probably one of the first three things they would say.
Margaret Mitchell: I also have a maybe cantankerous, or cranky, answer — I was pausing to see whether I wanted to say anything, but: my sense is that a lot of people don't care. If you're really, really interested in optimization, for example, it's useful to be able to say in a paper that the work has some utility for some application. But working on applications — which I think is a little bit what you're getting at, Hal — is generally seen as "less than" working on things that are more theoretical, or on things that are bounded by some specific task framing, as Su Lin brings up. So it's a superficial interest that people mention in passing, because what they're actually interested in is making the number go higher, whatever the competition is, or making the thing go faster. You're exactly right that when people talk about AI for social good, a lot of these assistive ideas come up, because they are applied uses of this technology. But when you look at the conferences and what's being published, a lot of it is about novelty, and if you're trying to build a system that brings together a lot of pieces for some applied context, that's not going to get accepted, because it's seen as "less than." There's a lack of care for, and appreciation of, what it takes to actually realize these "for good" uses, and a real focus instead on increasing things — increasing the speed, increasing the size, increasing the number. That's what people mostly care about, unfortunately.

Hannah Wallach: And building the kinds of systems we've been talking about often involves interdisciplinary work — work that's challenging and requires people to come together from outside a single discipline. That kind of work can be incredibly slow, because you have to figure out how to operate in that kind of context. It's also a style of work that really touches the messy realities of the world — you touched on this, Meg. It's much less abstract and theoretical and neatly contained and bounded and manageable; it's messy, it's uncomfortable, it's difficult, and it takes a really long time. I think that means people are less inclined to pursue that style of work, particularly against the backdrop of people increasingly caring about numbers of publications, citation counts, and all of these things. I've long said that interdisciplinary work is not a good way to boost one's citation count or number of papers — if anything, it's a great way to slow those things down, because it's so hard to do and to get right.

Steven Bird: There's a tokenistic way that people refer to the social benefits of their research — it's part of the narrative that gets developed, but not incorporated into the evaluation. Since our field is proud of its evaluation rigor, I feel like these ambit claims made about the social good of the technology ought to be subject to some sort of evaluation as well.
Su Lin Blodgett: And to add to that: because we really value work that is nominally in pursuit of NLU — work abstracted away from deployment settings — we end up knowing so little about the harms these things actually give rise to. You can look at what the task or the system is and say, almost with certainty, that something not great will happen when you deploy it in a real setting. But because these systems never get deployed in real settings by academics, we don't know, and we're left to speculate: "probably this is going to be terrible for someone, or for lots of someones," but we just don't know. I've been finding it shocking, and kind of disappointing, that I learn about many of the cases of these systems going wrong from journalistic endeavors — not because we're writing and publishing and talking about it in our communities, but because some journalist says, "Hey, did you know this machine translation system is being used in an immigration court?" and you go, "Oh my god, no."

Hal Daumé: So we've slowly segued over to harms anyway — let's officially segue over to harms. We've touched on this a little already, but I wanted to talk about inclusion. The technologies we're discussing are, generally speaking, being built by a pretty narrow segment of the world population, but are often expected, or imagined, to serve a much broader population. Specific examples could be useful here: cases where certain people or communities are being excluded by current language technology practices. And we've talked about participatory design and related things, but are there other forms of inclusion that are meaningful and not just ethics-washing?

Su Lin Blodgett: There are a lot of very obvious ways in which our practices exclude lots of people across the pipeline, and other people here can speak to that, so maybe I won't start there. Pinging off what we were just talking about — the kind of work we routinely exclude from NLP research — I think we exclude a lot of slow work. Emily Bender and others have talked a lot about this. NLP research and practice, even when they are interested in measuring harm or other social impacts, really only tend to inspect system behavior, abstracted away from deployment settings. If we want to uncover the impacts of these technologies, or how they're actually produced, we need methods beyond what we're employing — often interdisciplinary approaches from different disciplines, as Hannah's been describing: qualitative methods, interviews, participatory approaches. What we're doing right now — poking at system behavior and saying, "here's how your system is biased" — can't possibly uncover people's subjective or lived experiences with the technology, or the beliefs and practices and organizational dynamics that actually create these technologies.
All of this takes time, and these are methods that aren't used in NLP. The work they produce is probably less flashy, or less "generalizable," because it's context dependent, so we don't reward it. I worry a lot about this, and I think it's a shame that we're not taking advantage of the many methods and approaches available to us to really uncover what language technologies look like in the world and how they're produced.

Steven Bird: Most of the world's languages are oral, and these are oral cultures. People command a repertoire of languages and switch between them for different functions depending on where they are. Language technologies that presume text — and then presume normalized, standardized text — have the unfortunate consequence of reinforcing the frame that high-prestige cultures are the ones with written records. When you tell people who are writing, "there's an orthography standard now and you should write in the standard way," the unintended consequence is that people write less, because they know there's a right way and they know they don't know it. One remedy, I think, is to continue this acceleration of interest in spoken language technologies, and to focus not so much on "language technologies for all" as on the regional varieties of the major world languages. These are spoken by remote people everywhere who are multilingual but still need to work in the wider world. When they phone that banking system or hotel booking system — if they've got charge in their phone, and so on — they're not going to speak their local Indigenous language; they'll speak their regional variety, and that is really not well supported by speech technologies yet. Supporting it would be such an empowering thing to do.

Margaret Mitchell: Inclusion is not just limited to who's using the technology and who's affected by it, but also, obviously, extends to the people at the table from the start who are defining what the technology is — I think Su Lin was hitting on that first. I want to focus on this point because I see inclusion within tech creation circles as one of the most fundamental problems in AI right now: the fact that you can achieve some sort of diversity by hiring people whom you see as below you, but then you can't retain them because they're alienated. You end up with the same subpopulation working on products for the whole world. I really want to highlight that one of the big barriers to moving responsible AI down well-informed paths is that inclusion is largely not actually done — people don't understand what inclusion is. It means understanding that if you feel threatened, that doesn't mean you're being threatened; if you feel attacked, that doesn't mean you're being attacked. Communication styles for different people at the table will be very different depending on their cultures and how they've been raised, and if you implicitly or explicitly put forward one norm of behavior and communication style, you're going to alienate most of the people who could come in and diversify the kinds of problems being looked at.
I think this is one of the things that's hard for people in machine learning or NLP to really dig into, because it's not seen as a technical problem — but it's still fundamental to what kind of work gets done, minimally because if you have a good idea and you're from a marginalized population, you tend to have to fight for it, as opposed to just being able to say it. Fighting is exhausting, and it also makes people like you less. If you can't share an idea without having to couch it in a very particular way that conforms to norms other people wrote, you're not going to be able to make cool decisions with other people, and that means the technology will reflect a very myopic view of what should be done and how the world works.

Hal Daumé: I want to be mindful of time — we've got about five minutes left — so let's switch to the last question. I want to give everyone one minute to tell everyone who's watching: what is the one piece of advice you would give to someone working in the area of human language technologies about how to do better research, or how to build better tools or systems? Maybe we'll go reverse alphabetical, so we'll start with Hannah.

Hannah Wallach: Okay — this is tough, because I have so many pieces of advice I want to give and very limited time, but the biggest one for me comes down to this: you're not going to develop systems that are responsible, that treat people fairly, that behave in all the ways we want, if you build your system without thinking about these things and then turn around sometime before — or maybe even after — deployment and ask, "What can we do now? How can we somehow make this system fair?" For me it really comes down to changing our development practices to prioritize things like fairness and inclusion from day one. And I strongly plus-one Meg's point: if you don't have an inclusive culture on the development team, you're not going to build products that are inclusive. So my number one piece of advice is to start thinking about these things from the very first second; don't wait until you're about to ship.

Hal Daumé: Great. And Margaret?

Margaret Mitchell: One piece of advice that might be quick is quality over quantity. People try to publish a lot and look at the number of papers, but if you can have the courage to nurture your work, that might mean fewer papers — but the papers will be more impactful. Go on a road show and show people the cool thing you're doing instead of just focusing on publishing more papers. Really digging into what you're doing, and nurturing it further, is a great way of standing out in the research world and having people know what you're up to.

Hal Daumé: Su Lin?

Su Lin Blodgett: I'd say: if you're building a system, or thinking about building a system, think about all the assumptions you've made at every point in the pipeline. Whatever you're doing, you have implicitly or explicitly made decisions about what the problem is — whether you should build the thing, who it's for and who it serves, whose language is relevant, which stakeholders are relevant for you to consider, whose perspectives you're foregrounding when you annotate, what kinds of system behavior you're rewarding via evaluation, and so on. You've made all these decisions.
So I would say: always be thinking about what assumptions about language, about speakers, about stakeholders are being operationalized, or leaned on, when you make those decisions.

Hal Daumé: And Steven?

Steven Bird: Just to embrace the oral culture that surrounds us, and to decenter our design practices.

Hal Daumé: This has been great. Thank you all so much for taking this time to share your expertise with me and with everyone else. I said this at the beginning, but I want to thank again Alexandra Olteanu, at Microsoft Research in Montreal, for helping come up with the idea for this panel and the workshop questions — this definitely wouldn't have happened without her help. And with that, thank you again to all four of you; this has been wonderful.

Steven Bird: Thank you, Hal and Alexandra.

Panelists: Thank you.

2022-02-09
