Jess, if I switch to another screen, can you see that on the screen now, or can you still see the presentation? Nope, I'm seeing your presentation. Excellent, perfect; I don't want to reveal my speaking notes.

Okay, thank you everybody for joining this talk, which Jess Korte has very kindly agreed to give on "Auslan Alexa", the Auslan Communication Technologies Pipeline project she is working on. I think many of you already know Jess, because I can see a lot of people from the School of ITEE on the Zoom call, which is really great. So you probably already know how passionate Jess is about the ways that good technology can improve lives. To ensure that technology is good, as it says in her bio, she advocates involving end users in the design process, especially when these people belong to difficult user groups, which usually translates to minority user groups. Her philosophy for technology design, and life in general, is that the needs of people who are disempowered or disabled by society should be considered first; everyone else will then benefit from technology that maximises usability. Jess's research areas include human-computer interaction, machine learning, and participatory and collaborative design.

You might have noticed that we are recording this session and have also enabled Zoom captions. To see the captions at your end you might need to change your Zoom settings, if that's of interest to you. The recording will include the Zoom captions, so that it will be accessible to people who might not otherwise be able to hear the presentation. I thought you'd be interested to know that, given that this presentation is in recognition of Global Accessibility Awareness Day, which was last Friday.

Before we continue, I would like to acknowledge the Traditional Owners and their custodianship of the lands on which we meet. Recognising that we're on Zoom, we're probably meeting from a range of different areas, but probably mostly the Jagera and Turrbal lands. We pay our respects to their ancestors and their descendants, who continue cultural and spiritual connections to Country, and we recognise their valuable contributions to Australian and global society. I'd also like to express my commitment to diversity and inclusion, given that this is an event initiated by the Equity, Diversity and Inclusion Committee, and, as usual, a shout-out to Sandra Hall for bringing this particular statement to my attention as a way of signalling the importance of diversity and inclusion, and the fact that diversity of thought can lead to more creativity, innovation and problem solving, which is obviously very strong in our faculty given the disciplines that we teach and research. We want to celebrate and be enriched by our differences. On that note, I'm going to stop sharing and hand over to Jess to start her presentation. Thank you very much.

Okay, are you now seeing the presentation view of my PowerPoint? Yep. Excellent, cool. So my talk today we've called "Auslan Alexa", because that's the metaphor we keep using for my current project, which is formally titled the Auslan Communication Technologies Pipeline project. As Karen said, we do have subtitles and live transcription available for this talk today, just in case you're not sure where to find those. If you're on desktop, you should have Live Transcript at the bottom of your screen.
If you're on a phone, you'll need to hit More and then find it in the full transcript list. I apologise in advance if the transcription is a little bit funny; I have personally experienced that it's not great at technical terms like "Auslan". I don't actually have it turned on on my side, so I don't know what came up just then, but if the transcript doesn't look right, I'm sorry. They're pretty good, but they're not perfect.

Okay, so I want to start off by asking: how do you control technologies? Have a bit of a think. The traditional approach is probably keyboard and mouse. More recently we've been seeing touchscreens, and we're beginning to get a revolution in voice-controlled technologies. These voice-control technologies are becoming more and more commercially available, and they've reached a level of "good enough" for mainstream languages, English in particular. You have a lot of options: the Amazon Alexa, the Google Home, Apple's Siri, Microsoft's Cortana, some of which are available as individual devices, some of which are embedded in your phone. They're a great technology, enabled by the fact that AI has got to a point where we can throw vast amounts of spoken and written language data at a black box and it comes up with a model that says: this is the signal I'm hearing, that's what it means, and I can do something with that. This gives us the promise of ubiquitous computing: theoretically, at some point in the future, we can have computers literally everywhere, and every interaction will be a natural one, because you'll just be able to say "Computer, report the location of Commander Riker", or possibly more useful everyday commands.

The problem we have is that these are actually inaccessible to deaf people. Research has shown that for deaf people who like to speak, even if their "deaf accent", as it's called, sounds perfectly clear to a hearing person, Alexas and other conversational user interfaces can't actually understand them. I've not experienced it personally, but one of my research assistants is a CODA, a child of deaf adults. Her English is absolutely flawless; they have an Alexa, and it does not understand her voice. She has to ask her partner to repeat her commands to make the Alexa behave. Now, some of the voice-based personal assistants have text-based input options, which they claim are appropriately accessible. They're not really; it's not equivalent. Being able to stand in the kitchen with your hands dripping, because you've just been kneading dough, and say "Alexa, what's the next step?" or "Alexa, preheat the oven" is the point. If you're having to put everything down, wash your hands and type something, you may as well just turn on the oven yourself; you're not getting the benefit that conversational voice-based technologies claim to offer.

So deaf people around the world have said that, yes, they would be interested in using a sign language assistive device. Not everyone, obviously, as you can see, but enough people. The particular study this graph is from was of deaf people in the USA, a combination of deaf and hard-of-hearing people. In the research I'm doing at the moment we're focusing on deaf sign users, not so much people who speak as well as sign, and I don't have the statistics yet, because we're still working through and translating all of the interviews we've been doing, but the overall impression has been that a lot of people are interested.
They don't necessarily know if they'd want one in their house, or whether they'd want to buy one, but they desperately want to see what we can do in this space.

So my project is the Auslan Communication Technologies Pipeline. The idea behind this system is that it will be an Auslan-in, Auslan-out system: a fluent signer will be able to sign a command, it will be recognised by the recognition system, it will be processed and acted on by the device, and the device will then produce a response, also in Auslan. I'm working in Auslan because it is the sign language of the Australian Deaf community.

Now, there are some very important considerations when it comes to creating good sign language technologies. It's Zoom, so I can't really ask people to put their hands up, but some of you may have heard the term "disability dongle" before, and some of you won't have, and that's fine. I particularly like this term. It was coined by Liz Jackson, who is a disability advocate, and it is, as you can see, "a well-intended, elegant, yet useless solution to a problem we never knew we had. Disability dongles are most often conceived of and created in design schools and at IDEO." For example, every couple of years there'll be a new report of a wheelchair that can roll up stairs. This is not a thing that people in wheelchairs have ever said they wanted; they really just want people to put in ramps. There are all sorts of technical problems with a wheelchair that rolls up stairs, including a vastly increased likelihood of the wheelchair tipping over halfway up the stairs and you falling out, and the fact that it would be much more expensive, when wheelchairs and other assistive devices are already incredibly expensive. In Australia, if you're trying to get one through the NDIS, there are about a million hoops to jump through; that's a different conversation, but getting even baseline assistive technology is very difficult and expensive, and disabled people are rightly saying they don't want more expensive, inaccessible technologies when there are easier solutions. Sometimes expensive technologies can solve a problem, but you have to actually ask disabled people whether it's a problem they're experiencing.

So my philosophy as a technology designer is that the way to make technology good is to work with your end users: ask them what problems they're actually experiencing, and ask them what they want people to be working on. I have to be very aware that a lot of the sign language technologies that do get produced are often disability dongles, because they're something a hearing person has come up with for a problem they think exists, without actually checking whether it's a problem that really does exist. The approach I take is called participatory design, where we work with potential end users, and in this particular project we want to celebrate and centre deaf people's needs, abilities and insights. This is important. It improves the development process, because it means every decision in our process is about addressing deaf people's needs and wants in the technology; the data we collect throughout the project is focused and appropriate; the behaviour of the system is what deaf people would expect it to do; and we have human verification of the accuracy of that behaviour. From an AI and data-gathering perspective, it also promotes community control: deaf users are able to stay in control of the data they provide.
At every stage of the process we explain what data we're collecting and why we want to collect it in a particular way, and we ask what data they would like to contribute, so they get to choose what they're including and who has access to it. In the informed consent packages I send out to participants, I have a quite detailed informed consent page where I ask things like: may we use this inside our project; may we use this for related projects outside the immediate one (which has been very useful for some of my students' projects); may we publish this data widely and make it available; and, if it is widely available, do you want to be anonymised, if we're able to do that? Questions like this support deaf people's control and their ability to choose at every stage of the process.

Obviously this is an ongoing project, but some of the things deaf people have told us they would want from this type of device are: finding information (sports scores, news and recipes are some of the most commonly requested); showing and editing your calendar; controlling smart devices around the house (smart doorbells, for example, are very popular amongst the Deaf community, because you can't necessarily hear a knock, but if your phone vibrates you can pull it out and have a look without having to go and open the door); setting alarms and timers; and, for some, programming specific behaviours, like "show me the news when I wake up" or "turn on the lights and the air conditioning when I'm on my way home". They also want notifications, information and warnings. The thing that stands out to me as a bit different from what a hearing person uses an Alexa for on a day-to-day basis is that they are really interested in something like smart alerts (possibly not what this will end up being called): identification of things like the train lines being down today, the sort of thing a hearing person might casually hear on the radio on their way out the door that a deaf person won't, or an update to the COVID restrictions. Being able to get that information unprompted would be useful.

So that's the background: why we're doing this and what the approach is. The idea behind this project when we started was that we're reaching a point in AI technologies where a lot of the problems we're trying to solve have open-source solutions that we can take, adapt and build on. That has turned out not to be as true as I had hoped, but I'll get onto that in a couple of slides. Obviously we're not working in a void here: having the ability to recognise, process and produce sign language output is something that deaf people around the world are interested in, so academics and companies from around the world have also been working on it. Fairly recently, Google and the University of Tokyo launched a sign language recognition website; let's see if the video will work for me, and just mute that. The idea is that this website will teach you a couple of signs in Japanese Sign Language, and it works just from your webcam: it tracks your movements, you sign, and it tells you whether it recognises what you have signed. I did try it, and I personally could not get it to recognise my signing. Could be that I'm not very good at Japanese Sign Language, could be the quality of the webcam I was using, could be the quality of the AI, because this is one of those known problems.
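The talk doesn't say which tracking stack that website uses, so purely as a hypothetical illustration of what webcam-based sign tracking usually involves, here is a minimal sketch that extracts per-frame hand, body and face landmarks which a recogniser could then classify. It assumes the opencv-python and mediapipe packages; none of it is the project's or the website's actual code.

```python
# Hypothetical sketch: stream per-frame hand/body/face landmarks from a webcam,
# the kind of feature sequence a sign-recognition model could consume.
# Assumes opencv-python and mediapipe are installed; illustrative only.
import cv2
import mediapipe as mp

mp_holistic = mp.solutions.holistic

def landmark_stream(camera_index: int = 0):
    """Yield (pose, left_hand, right_hand, face) landmark sets per frame."""
    cap = cv2.VideoCapture(camera_index)
    with mp_holistic.Holistic(min_detection_confidence=0.5,
                              min_tracking_confidence=0.5) as holistic:
        while cap.isOpened():
            ok, frame = cap.read()
            if not ok:
                break
            # MediaPipe expects RGB; OpenCV captures BGR.
            results = holistic.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
            yield (results.pose_landmarks,
                   results.left_hand_landmarks,
                   results.right_hand_landmarks,
                   results.face_landmarks)
    cap.release()
```

A recogniser built on top of something like this would classify sequences of landmark frames rather than raw pixels, which is also the kind of skeleton representation that comes up again later in the talk.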
There's also been some work on producing signing. A lot of approaches have the sort of two-stage process we can see here: from English words, the system generates skeletons, which control where the hands are and possibly what the face is doing, and those can then be used with a generative adversarial network to produce video, again with varying levels of quality; in the state-of-the-art research this does get reported as producing relatively readable signing.

Now, this is an AI project, and the work that's already out there is also based on AI, usually some version of machine learning or deep learning, which means we need data. This is a problem when it comes to sign language technologies: compared to English, there's no data. There is definitely some data for sign language AI, but the English-language corpus of recorded spoken and written language is immense, to the point where we've now got two competing, publicly available English language models: the big one you might have heard of being GPT-3, and Facebook (Meta, or whatever they're calling themselves nowadays) have just released a massive language model for research use in English. We're trying to work in Auslan, which, as a sign language, does not have a written form, which means all of the data we can work with has to be video. There are some Auslan videos floating around. We have access to a very small, well-annotated dataset of people telling fairy tales. We're also going to be collecting and annotating data through this project, but that will be quite focused on the specific commands deaf people have identified as the most popular, so that our prototype can hopefully learn to recognise those specific commands; there will be an awful lot of "turn the light on", "turn the light off", "find me a brownie recipe", "find me a chicken recipe". We're able to use some New Zealand Sign Language data; NZSL is a related but different sign language from Auslan, and I've heard different estimates of how much overlap there is, but their online sign language dictionary has some really spectacular, well-annotated videos of both individual signs and signs being used in sentences. We also have video from ABC and Seven News press conferences, because certainly since 2020 just about every press conference you'll see has an Auslan interpreter (not every video shows the interpreter, but that's again a different conversation). These are useful because we get what is being said in English and we can see what the interpreter is signing, but they're missing the individual sign transcription, and sometimes the time alignment, because it's an interpretation and there's a slight lag in the interpreting into Auslan.

This means that, for best results, we're hoping not to do all of our AI work on Auslan first. We're hoping to work on some of the other datasets for other sign languages around the world, which are a little bit bigger, and then transfer-learn to Auslan, so the models are hopefully not punished as much for the smaller amount of Auslan data. There are datasets out there; currently we're working with the How2Sign American Sign Language dataset, but it has been a bit of a process to try out different datasets and see what has enough data to be useful while also having enough similarity to Auslan that we're not losing anything with the retraining.
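None of the project's model code appears in this talk, so as a rough illustration of the transfer-learning recipe just described (pretrain on a larger sign language dataset, then fine-tune on a small Auslan command set), here is a minimal PyTorch-style sketch. SignRecogniser, the data loaders and the class counts are placeholders, not the project's actual architecture.

```python
# Hypothetical sketch of the transfer-learning recipe described above:
# pretrain a sign recogniser on a larger dataset (e.g. ASL), then fine-tune
# the same network on a much smaller Auslan command vocabulary.
# SignRecogniser, asl_loader, auslan_loader and the class counts are
# illustrative placeholders.
import torch
import torch.nn as nn

def train(model, loader, epochs, lr, device="cpu"):
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    model.to(device).train()
    for _ in range(epochs):
        for clips, labels in loader:   # clips: batches of pose/video features
            opt.zero_grad()
            loss = loss_fn(model(clips.to(device)), labels.to(device))
            loss.backward()
            opt.step()

# 1. Pretrain on the larger source-language dataset.
model = SignRecogniser(num_classes=NUM_ASL_CLASSES)
train(model, asl_loader, epochs=30, lr=1e-4)

# 2. Swap the classification head for the smaller Auslan command vocabulary
#    and fine-tune at a lower learning rate so the pretrained features survive.
model.classifier = nn.Linear(model.classifier.in_features, NUM_AUSLAN_COMMANDS)
train(model, auslan_loader, epochs=10, lr=1e-5)
```

The hope expressed in the talk is exactly this: that features learned from the bigger dataset carry over, so the small Auslan set only has to teach the final mapping to specific commands.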
So sign language recognition is a really complex problem, and, like I said, at the start of this project I felt really confident. This was a mistake; I'm aware I should know better. But the thought process was: there's been a lot of work done in video and image processing, including work on action recognition and video description; there's been a lot of work on how we process time sequences, particularly in a language space; and obviously a lot of work on natural language processing, including translation. So the thought was that we should be able to pull together some of the key learnings from each of these fields and build on them. What we've been finding is that we might actually need to treat sign language as its own domain in AI: even though it looks like a video problem, it is both a video and a language problem, and so it has the complexities of both. Some of the problems we're trying to tackle at the moment: when we've got our video stream, how do we figure out what is a sign, what is the movement between signs, and what is a non-sign movement that we can safely ignore? How do we then go from individual signs to understanding what they mean? And how do we go from the string of signs that makes up a sentence or phrase, with a grammar very different from English, to something we can output in English, or at the very least put into the right order to trigger something within the processing system?

That's the input side, and that's only half the system; obviously we also need it to be able to sign back. The approach we're choosing to take is an avatar rather than something video-like. As a quick reminder, the state of the art for sign language production at the moment is something video-like, as you can see on the right-hand side here, so we're taking a different approach. If this is going to cooperate, let me show you; sorry, just let me check whether I'm sharing audio, I cannot tell, so there may or may not be music in this next bit.

[Introduction video plays.]

So that's Alyssa, the avatar we have been working on. She's driven entirely by motion capture, and, as you can see in the pictures here, we have my research assistant Julie and our team animator Maria. So far everything, like I said, is motion capture: as you can see from the suit Julie is wearing, we have full-body capture, gloves for hand capture, and, of all things, an iPad for face capture. It turns out the best-quality, relatively low-cost way to capture facial expressions is with an iPad; I was shocked, but hey. The idea is that we record Julie's signing, it gets processed (some of this is automatic, some is manual, and Maria goes through and does edits as required), and then the animation gets applied to the 3D character. The approach we've taken so far is that some of the recordings were longer-form, like the introduction video you've just seen, but most of the videos were actually really short, a couple of signs, because that allows Maria to set up essentially a state machine. Most of the time the avatar is in an idle mode, which could be just standing there or looking around; when it gets attention, that triggers the "pay attention to the person signing at you" state; and then, based on what they sign, it triggers different responses. At the moment we don't yet have the automated triggering of responses working, but we can type in which phase we want to go to and get the trigger, and also, when necessary, get the movement between the different states.
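The avatar's state machine itself isn't shown in the talk, so here is a minimal sketch of the idle / attending / responding structure just described. The state names, transitions and clip names are illustrative assumptions, not the project's implementation.

```python
# Minimal sketch of the idle -> attending -> responding structure described
# above. States, transitions and clip names are illustrative assumptions.
from enum import Enum, auto

class AvatarState(Enum):
    IDLE = auto()        # standing, occasionally looking around
    ATTENDING = auto()   # watching the person signing
    RESPONDING = auto()  # playing back a recorded or generated response

class AvatarController:
    def __init__(self, play_clip):
        self.state = AvatarState.IDLE
        self.play_clip = play_clip   # callback that plays a mocap/animation clip

    def on_attention(self):
        """Triggered when someone starts signing at the avatar."""
        if self.state is AvatarState.IDLE:
            self.state = AvatarState.ATTENDING
            self.play_clip("transition_idle_to_attending")

    def on_command(self, command: str):
        """Triggered manually (typed) today; by the recogniser eventually."""
        if self.state is AvatarState.ATTENDING:
            self.state = AvatarState.RESPONDING
            self.play_clip(f"response_{command}")

    def on_response_finished(self):
        self.state = AvatarState.IDLE
        self.play_clip("transition_to_idle")
```

The "type in which phase we want to go to" step in the talk corresponds to calling on_command by hand; the long-term goal is for the recognition side of the pipeline to call it instead.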
Okay, so that's great as long as we've got something recorded, but obviously we can't record every possible Auslan sign or phrase, only the most likely ones. So our plan is to have mixed activation of the avatar: the bog-standard things we expect to be signed a lot will mostly be pre-recorded phrases, but we'll also have some AI-generated responses. This is again another kind of AI, where we're going to need video of people signing to train the avatar system on, which will allow us to generate skeletons like the ones we saw back here (sorry, further away than I remembered); either the middle or the right-hand image gives an example of what we mean by skeletons: tracking the relative locations of the fingers, the hands, and the body as a whole.

I did say that the state of the art is currently video generation. We decided not to go with video generation for this project because we think that only having to generate the skeletons, which are then mapped onto the avatar, should be a simpler task than generating a whole video, which will hopefully reduce the computational costs and all of the problems that go along with them (like not needing to burn down a rainforest every time we train a new model). The physicality of the avatar should also provide a lower bound on visual clarity and an upper bound on uncanny valley: the avatar is always going to look like a human being, and we won't get the weird artefacts we've seen in some of the earlier state-of-the-art video generation. Another problem with video generation is that currently the video has to be modelled on a real person who is fluently signing, so that there is something to map the video from, which could obviously cause a problem if someone, say, convinces their Alexa-type device to sign something really rude and thereby effectively records some member of the community, or an interpreter, signing something rude; there are potential reputational effects. The avatars, on the flip side, have the advantage that they can be customised to suit the individuals using them, and they can be integrated into a modular pipeline where content like our pre-recorded phrases can be edited without having to regenerate video or regenerate the avatar; it just slots in very nicely.
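The talk describes this mixed activation only at a conceptual level; as a hypothetical sketch of how a response resolver, and the skeleton data it falls back to, might be represented between the processing system and the avatar, something like the following could work. The data shapes and function names are assumptions for illustration, not the project's design.

```python
# Hypothetical sketch of the "mixed activation" idea: prefer a pre-recorded
# motion-capture clip when one exists for the response, and fall back to a
# generated skeleton (pose keypoint) sequence otherwise. Illustrative only.
from dataclasses import dataclass
from typing import List, Optional, Tuple

@dataclass
class SkeletonFrame:
    # (x, y, z) keypoint positions: body joints plus per-hand finger joints.
    body: List[Tuple[float, float, float]]
    left_hand: List[Tuple[float, float, float]]
    right_hand: List[Tuple[float, float, float]]

@dataclass
class AvatarResponse:
    clip_name: Optional[str] = None                  # pre-recorded mocap clip
    skeleton: Optional[List[SkeletonFrame]] = None   # generated fallback

def resolve_response(intent: str,
                     recorded_clips: dict,
                     generate_skeleton) -> AvatarResponse:
    """Pick a pre-recorded clip if available, otherwise generate skeletons."""
    if intent in recorded_clips:
        return AvatarResponse(clip_name=recorded_clips[intent])
    return AvatarResponse(skeleton=generate_skeleton(intent))
```

Either way the avatar, not a generated video, is what the user sees, which is what gives the lower bound on visual clarity described above.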
The part that most excites me about this talk is some of the results I can report back to you from a workshop we ran only two weeks ago. One of the findings from our avatar co-design workshop was that this device should be culturally Deaf. What does that actually mean? When we talk about cultures we often think about different countries having different cultures, but we're also aware that countries can have subcultures within them: think about the Australian Italian or Australian Chinese sub-communities, which have their own cultures that are a blend of their historical home countries' cultures and Australian culture. Deaf people are usually Australian as far back as they can remember, but there is still a language community with particular communication and cultural norms. Deaf culture is actually really interesting in that something like 95% of deaf kids are born to hearing parents, which means that, unlike the Australian Italian or Australian Chinese communities, you're not getting a parent-to-child passing down of language and culture but a sort of horizontal, community-based transferral. If anyone wants to know more about that, please ask me a question towards the end of the talk; I think it's fascinating, but I should actually tell you about the project.

Okay, so the newest results we've got. It needs to be Deaf. We were trying to ask whether it should have a particular personality. If you think about an Alexa or a Siri, most of the communication is neutral and moderately formal, unless you're asking something specific like "tell me a joke". What we were getting back was that it's not so much about a particular personality, a particular tone or register, but about being able to understand deaf communication norms. We had a role-play activity where someone was pretending to be the avatar, and the person using the device said, or rather signed, "hi, how are you?", because that's the deaf communication norm; that's how you greet someone. We also know that, because the whole body is involved in signing, there's no such thing as a neutral inflection. The emotion you convey when signing influences the meaning of a particular sign and in some cases can change it completely. My favourite example is the word "funny": much like in English, this can mean funny ha-ha or funny weird. The sign is the same either way, but the facial expression ("you're so funny" versus "you're so funny") is very different. So, because it can't be neutral, it has to have tone, and we have to figure out whether we're matching the tone of the user or adapting to the tone of the request. The overall decision was that most of the time it should be relatively casual or friendly; for something official it should perhaps be a bit more formal; and if there's an emergency it should be bombastic or dramatic: "hey, hey, hey, did you know the trains have stopped?", for example. The question of deaf jokes also came up. Jokes in Auslan are very different from jokes in English, and probably not something we could generate, so if we're going to include them we'd have to do a lot of motion-capture work. That's not impossible, but it's definitely something to think about if we want to provide the same level of functionality as a spoken Alexa.

Another big problem we have to address is the flexibility of language. Much like the early Alexas and Google Homes famously struggled with accents, in Australia we may have difficulties with the Auslan dialects, because Australia officially has two major dialects, but each community in each city and region has some of its own variations. Being able to recognise multiple dialects is important, and being able to sign back in the dialect that is appropriate for the person using it could be really important too. Auslan also has a lot of grammatical flexibility. Those of you who have an Alexa, a Siri or a Google Home have probably noticed that you have to speak in a very particular, usually very formal, way to get it to understand you and do the thing you want. We may not be able to rely on that in Auslan, because the way an Auslan sentence is structured is quite different from the way an English sentence is structured. We often hear about English being SVO, subject-verb-object. Auslan doesn't have that; Auslan has the topic, the thing I want to tell you about, and then the comment I want to make about it. So "expensive red car, saw today" would be roughly the structure, and if there are multiple comments, that gives you a lot more flexibility.
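Going back to the tone question for a moment: the workshop preference described above (casual by default, more formal for official content, dramatic for emergencies) could be carried through the processing side as a simple register choice. This is only an illustrative sketch; the categories and the mapping are placeholders, not decisions from the workshop.

```python
# Rough illustration of the register behaviour described above: casual by
# default, more formal for official content, dramatic for urgent alerts.
# Categories and mapping are placeholders, not a finalised design.
from enum import Enum

class Register(Enum):
    CASUAL = "casual"
    FORMAL = "formal"
    DRAMATIC = "dramatic"

def choose_register(response_category: str) -> Register:
    if response_category == "emergency_alert":       # e.g. "the trains have stopped"
        return Register.DRAMATIC
    if response_category in {"official_notice", "government_info"}:
        return Register.FORMAL
    return Register.CASUAL                            # recipes, scores, everyday chat
```

Whatever register is chosen would then need to be reflected in the avatar's face and body, since, as noted above, signing has no neutral inflection.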
We've also got some interesting problems in that context conveys a lot of meaning in sign, and, because it's a gestural language, it happens in multiple dimensions. If I'm telling you a story about my sister and my friend, I will locate them in space so that I don't have to keep saying "my sister, my sister, my sister": I'll place my sister here and my friend here, so "my sister gave my friend" something, a present, a scarf, whatever it happens to be in the story. This is not something that has so far been addressed in the sign language literature, but it's obviously going to be really important if the system is going to understand the way real people sign.

We've also got the problem of what's sometimes called lazy, or more properly informal, signing. Signing happens in the whole body space (I'm not sure I can get far enough back to show you): if I want to sign "dog", for example, it happens literally on the top of my leg; if I want to sign "angel", I'm signing up here, above the head; the whole body is involved. But, much like speakers of any language, the average person is a little bit lazy, and Australians in particular are very good at this: no one is called Jonathan, everyone's Jono. So if I'm signing "angel" I might not sign it up here, I might sign it down here; having established it once in the conversation, you know what I'm talking about. This sort of informal signing is very common, but it makes it more difficult to get a sign language recogniser working, because we're changing one of the parameters we're potentially training on. But it's also an expected norm: the people who were testing the avatar for us a couple of weekends ago complimented us on it having some informal signing. They also said that, on the one hand, it makes it feel very real, but, on the other hand, it might make it difficult to read for someone who is perhaps not fully fluent, for example in a mixed household with a deaf person and a hearing person who is still learning to sign.

We talked a lot about the customisation of appearance and function. The fact that it's an avatar means it can be customisable, so we had questions like: should it look or behave differently for different members of the family (possibly, again, if you've got deaf and hearing members of the same family)? Should it be able to respond to both Auslan and English? Should it sign faster or slower depending on who's signing to it? Should different users have different permissions, so that parents can do more than kids, and kids can do more than visiting guests? Should it be Auslan only? Should it have captions? Should it show English text only if they're signing? Should there be mouth movements? Most people said they would want mouth movements, because, as we said, face movements can really influence what a sign actually means, but for some people the mouth movements are distracting and they would prefer a more stoic experience. The overall feedback was that it's good to have some control, but too much choice can be overwhelming. The flip side of that is that deaf people come from all races, and having an avatar that resembles your culture can be really important.
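These customisation questions are still open. Purely to illustrate how per-user preferences and permissions might be represented if the team went down that path, here is a hypothetical settings structure; every field name is an assumption for illustration, not a decision from the workshop.

```python
# Hypothetical per-user settings for the customisation questions raised above
# (appearance, dialect, signing speed, captions, mouth movements, permissions).
# Field names and defaults are illustrative assumptions.
from dataclasses import dataclass, field
from typing import Set

@dataclass
class UserProfile:
    name: str
    dialect: str = "auslan-southern"        # e.g. one of the two major dialects
    signing_speed: float = 1.0              # 1.0 = avatar's default pace
    show_captions: bool = False
    english_text_only: bool = False
    mouth_movements: bool = True            # most participants preferred these on
    avatar_appearance: str = "default"      # culturally appropriate presentation
    permissions: Set[str] = field(default_factory=lambda: {"query_info"})

# Example: a parent with smart-home control, and a guest with captions on.
parent = UserProfile("Sam", permissions={"query_info", "smart_home", "admin"})
guest = UserProfile("Visitor", show_captions=True)
```

The workshop caution applies here too: a structure like this makes lots of knobs possible, but too much choice can be overwhelming, so sensible defaults would matter more than the options themselves.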
And then, of course, the big question is what we name it. So far we've been calling our avatar Alyssa, because it's very similar to Alexa. That's a problem because, if you're deaf and someone mouths the word, there's no difference: you cannot tell. So it might need a unique name. It might also need a unique sign name, which is a feature of deaf cultures around the world: a particular sign that means the person, or in this case the device. That may or may not be what we need to activate it, in the same way you have to say the key phrase "Alexa", "Siri" or "OK Google" to activate an existing device. And if it has a spoken name, it needs to be something that's easy for oral deaf people to pronounce.

So overall I think this is a really important project we're working on, and I know it might sound slightly up myself to say that, but it's something deaf people have actually said: this is a technology we want; we feel like we're being left behind, and we don't want to be left behind. It's wrong for a particular community to get left behind as technology changes, and as we move into what could be a world with more and more voice-activated technologies, we don't want anybody getting locked out from being able to use them. Obviously there are a lot of problems to overcome from a technical perspective, but I feel confident we'll get to a point where we have at least a prototype by the end of this project that shows that, yes, this is possible with more time, more data, more investment and, importantly, buy-in from the community. Okay, thank you very much everyone for your time; I'd like to open the floor to questions if anyone has any.

Thanks, Jess, that was fantastic, really fascinating. I really like how the concept of inclusive design means you're reaching more people than just the target audience; it goes beyond that to a broader audience. So I'm very keen to hear from you all, any questions or comments you'd like to make. Jess, do you want to stop sharing your screen? Yes, I can do that, so we can see people more easily.

Very interesting presentation, Jess, I really enjoyed it. I must say we should definitely have a chat, because I have some research projects around avatars, animation and language, not targeting deaf people, so the focus is different, but I can see a lot of similarities, so we should definitely have a conversation. Yeah, that would be good. Really enjoyed it, thank you. Thank you.

Yes, I was intrigued by your commentary about the "lazy" signing of the avatar and the reaction to that, because I think that's a really interesting phenomenon: people saying yes, that's positive, it's more natural, but it also makes it more difficult for people who are not trained to actually get it. Would there be a way for the system to essentially try to pick up the level of "laziness" from the person talking to the system, as an indicator of how the response could be done?

Possibly. This is actually something I've got a couple of students working on at the moment. One of the approaches to AI that's been developed within ITEE is something called SLATE AI, which is Synthetic Language... I think it's just "AI Technology" something; I'm no good with acronyms. The idea is that you can treat any sort of input as a language, not in the linguistic sense but in the mathematical sense: we have a set of characters, or an ordered set of characters, which allows us to say how frequently we're seeing particular patterns. Now, if we had a system that was probably more traditional AI for recognising the actual signs, but with a SLATE-type program running alongside it that's also going "over the last five minutes, most of the communication has been very formal, very crisp signing" (or mostly crisp signing with a couple of errors in it), then this is probably someone who's not so experienced with signing, so be more formal; versus "the person signing to me is very casual and very fast, they're probably a fluent signer, we can be more casual in response". That's the theory, anyway; it is still untested.
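SLATE itself isn't shown in the talk, so the following is only a loose, hypothetical sketch of the idea just described: keep simple statistics over recent recognised signing and use them to pick a response register. The features, thresholds and class names are invented for illustration and are not the SLATE implementation.

```python
# Loose illustration of the idea described above: track how formal/crisp the
# recent signing looks and suggest a register for the response. The features,
# weights and thresholds are invented for illustration only.
from collections import deque
from dataclasses import dataclass

@dataclass
class SignObservation:
    duration_s: float        # how long the sign took to produce
    in_citation_form: bool   # signed in its full "dictionary" location/form

class RegisterEstimator:
    def __init__(self, window: int = 50):
        self.history = deque(maxlen=window)

    def observe(self, obs: SignObservation):
        self.history.append(obs)

    def suggest_register(self) -> str:
        if not self.history:
            return "formal"          # safe default with no evidence yet
        n = len(self.history)
        citation_rate = sum(o.in_citation_form for o in self.history) / n
        avg_duration = sum(o.duration_s for o in self.history) / n
        # Slow, careful, citation-form signing suggests a less experienced
        # signer: respond formally. Fast, informal signing suggests fluency.
        if citation_rate > 0.8 and avg_duration > 0.9:
            return "formal"
        return "casual"
```

As the speaker says, this is untested; the point of the sketch is only that the formality signal can come from cheap statistics over the recognised stream rather than from another large model.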
Jess, can you see a possibility where your avatar could actually teach sign language to people who are interested in learning it as a new language themselves?

A solid maybe. So, maybe. I love it. Okay, it's the sort of thing where, if the avatar got good enough and had a big enough face, then yes, theoretically it should be able to demonstrate signs. The problem is that the Australian Deaf community, and Deaf communities all around the world, are pushing for greater control of their language at the moment, so we're getting lots of controversial situations (hell, I got linked another one yesterday) where people who are not interpreters, and are not in fact fluent in any sign language, pretend to be interpreters, get hired, and then very important press conferences are completely inaccessible to deaf people. Similar, but not quite so bad, is people who are not fluent signers teaching other people, with very poor signing, and often missing the cultural norms that are really important when it comes to signing. So if it were able to do that, it would have to be fully with the blessing of the Australian Deaf community before I'd be willing to do it. The flip side is that, if it did get that approval, or if it were solely a support rather than the main thing, it could be a really useful tool for someone who is learning from a deaf person but wants an opportunity to practise. As a hearing person learning to sign myself, my biggest problem is that I don't have many other people to practise with. If I had a device where either I could have a conversation, or, perhaps more likely, I could sign and it said "did you mean this?", and I could go "no, I've got that sign wrong, please show me that sign" and then watch and practise, that would be useful. Having it as an avatar would also mean I could mirror it, because in Auslan how you sign depends on which hand is your dominant hand. My best experience was learning from a left-handed teacher as a right-handed person, because I could mirror-copy the way she was signing; with someone who's right-handed there's that extra step of "hang on, what does that look like from your side?". So yes, a solid maybe. Any other questions?

I think it's great that your ITEE colleagues have come out in force to support you, Jess. Absolutely. So we will be putting the recording of this talk on our faculty intranet, and obviously we can make it available to whoever wants it as well, so it will be more broadly accessible and hopefully more broadly understood too. The other point I wanted to make was the link with design: you mentioned design quite a few times, and obviously we now have a Bachelor of Design in the faculty, with the School of Architecture as its home. I think it would be really useful to connect there and potentially present to their students around designing things that would be useful to different communities.
Yeah, for sure. I think that would be really helpful for our students, to learn about how we can be inclusive. Okay, thank you everybody for coming along, and thank you very much, Jess. I think I learned that clapping in sign language is this, is that right? Yeah, I'm doing it wrong, but anyway. Thank you everybody, and thank you very much, Jess. Thank you. Thanks all for coming along. Good job.
2022-08-02