Hi everyone, I'm Alex Fefegha, and I'm here to deliver this talk: The Great Black Hack — how can we protect activists from technological policing?

So, I'm Alex Fefegha. I'm from Peckham, South London, and I love to make radical and playful things with tech. I studied a master's at the University of the Arts London, at Central Saint Martins, and my thesis looked at using design as a tool to explore ethical futures with artificial intelligence. I co-founded a studio many years back with two of my best friends — it's called Comuzi — and we do some really interesting work, and it pays my bills. I also teach one day a week at the University of the Arts London Creative Computing Institute, where I'm currently teaching a module introducing computational futures and artificial intelligence: the class is exposed to different machine learning frameworks and uses them as tools to explore the ways we can be creative with machine learning technologies.

Activism is important. I've been very much inspired by activism, especially as a Black person — inspired by some of the early actions of the Black Panthers in America, and also by the Black Panther movement that was actually here in the UK. It's really interesting: with the events going on in the world, it sometimes feels like these things don't happen in the UK — like there's a gloss over some of Britain's history in its treatment of Black people. The individuals here who fought against police brutality, and fought against a lot of other things, provided loads of opportunities that I myself, down the line, can now benefit from. Back in 2016 I took part in the Black Lives Matter protests. I didn't take part in the recent ones, obviously due to what's going on in the world right now, but these things are super, super important — protest is the vehicle through which communities come together and show that resilience and that resistance.

But there's an interesting thing changing right now which is quite uncomfortable. I work with technology — I work with face recognition models to create some wonderful experiences — and one of the things I always say to my students is that the same ML models we use for creative purposes are the same ML models that are going to do harm. As you can see on the screen, I've got a number of slides of things happening in the world: "Did you protest recently? Your face might be in a database". "Cops in Miami and New York City arrest protesters from facial recognition matches". "Defund facial recognition: I am a second-generation Black activist, and I'm tired of being spied on by the police". "As global protests happen, facial recognition technologies must be banned". And an article I saw — "Activists turn facial recognition tools against the police" — which quotes a researcher: "We're now approaching the technological threshold where the little guys can do it to the big guys." That quote is the angle I'm coming at with this talk, along with that title, The Great Black Hack.
I always say this about computers and working with technology, in this world of talking about artificial intelligence: a lot of the time, the framing is that computers are these omnipotent, intelligent entities that know it all. I've always held the notion that computers are stupid — they just make our human mistakes faster.

This is something from a while back, when I was teaching — I think in 2019. This is a recognition model — I believe it's MobileNet; forgive me if I'm misremembering. One of my students, of South Asian descent, placed her hand in the shape of a gun — gun fingers. Her skin is quite brown, and she had metallic rings on her fingers, and it was really interesting to me at the time, because I thought: hold on, how did it detect this as "revolver, six-gun, six-shooter"? Where did the model learn that? How was it able to make that prediction? It was probably one of the few things the model predicted really confidently at the time. So I had to go back and try to understand how the machine learning model was put together.

I didn't actually bring slides today on the machine learning part — I took them out to make this talk a bit shorter — but think of it this way: for a machine learning model to make a prediction (and that's what machine learning models do, they make predictions), it needs to be trained on data. It needs to be fed data. You have to give a model data before you can ask it to predict anything. In the context of the image on the screen, the model predicted that this person making gun fingers is a "revolver, six-gun, six-shooter".

So the first question I asked myself was: what data was used to train the machine learning model that's recognising this gun finger? How did it come to that conclusion? What I found was that the model is trained on images found on the internet, and those images may have been tagged in particular ways. You train the model on this, and every time it cycles through the data, it should hopefully get better at prediction. In this case, I don't know if it ever saw many fingers like this — I have no idea. Is it because of the silver rings? Is it because of the colour of the skin? These are the questions that open up the door to research.
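To make that pipeline concrete in code, here's a minimal sketch — my own illustration, not the exact classroom setup — of querying a pretrained ImageNet classifier like MobileNet with TensorFlow; the image path is a placeholder:

```python
# Minimal sketch: classify an image with a pretrained MobileNetV2.
# The model was trained on ImageNet -- exactly the kind of
# internet-scraped, human-tagged data discussed above.
import numpy as np
import tensorflow as tf

model = tf.keras.applications.MobileNetV2(weights="imagenet")

# "hand.jpg" is a placeholder path, not a real asset from the talk.
img = tf.keras.utils.load_img("hand.jpg", target_size=(224, 224))
x = tf.keras.applications.mobilenet_v2.preprocess_input(
    tf.keras.utils.img_to_array(img)[np.newaxis, ...])

# decode_predictions maps output indices back to ImageNet labels,
# which is where strings like "revolver" come from.
preds = model.predict(x)
for _, label, score in tf.keras.applications.mobilenet_v2.decode_predictions(preds, top=3)[0]:
    print(f"{label}: {score:.2f}")
```

The point of the sketch: the labels the model can ever output, and the examples it learned them from, were fixed by whoever assembled and tagged the training set.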
But it's also what intrigued me about the next thing I want to talk about: bias in AI systems. As human beings we constantly analyse the world around us — we see, label, make predictions and recognise patterns — and that's where bias can creep in. We talk about this in the context of facial recognition technologies, and there has already been a lot of work by researchers making arguments for why machine learning technology should not be used in law enforcement.

One of the stories I have — the story I really based my master's thesis on — is of a young woman in Florida who had been in trouble with the police a number of times. She got arrested, and this time around the criminal justice system in the state of Florida had new software called COMPAS. COMPAS framed itself as a tool that could assess how likely somebody was to reoffend. If you're interested in this, there was an article called "Machine Bias" — if you search that in Google it should come up. I remember one of the authors was Julia Angwin; her work is amazing. In that study, they did a comparison between Black offenders and white offenders, and what they were trying to explore was: hold on, wait a minute — how does somebody for whom this is maybe their first offence compare with somebody I might call a seasoned criminal? The seasoned criminal was scored as unlikely to reoffend, while the young Black individual, probably on their first offence, was scored as most likely to reoffend.

A lot of the time in those conversations people blame the algorithm: "oh no, it's the algorithm that did that". Having worked with algorithms for a long time: algorithms are designed to achieve the task at hand. The algorithm did its task — it made a prediction. And it made that prediction because it was fed some data. Where does that data come from? In the case of this particular system, the data the algorithm used to make those predictions came from a questionnaire. On this questionnaire you were asked certain questions — I don't remember all of them anymore, but a popular one was "how many of your friends have been to prison?", or "have you ever been to prison yourself?", or "are your mum and dad married?" — questions which are not explicitly linked to the characteristic of whether you are Black or white. But if you look at the context of that questionnaire through a social-studies lens, it's trying to build a profile of an individual through those questions, and the model, trained on previous survey answers, generates its score from that.
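To sketch that mechanism — COMPAS itself is proprietary, so this is a toy stand-in with synthetic numbers, not its real data, features or model — a classifier trained on historical outcomes will faithfully learn whatever correlations policing history baked into the proxy questions:

```python
# Toy stand-in for the questionnaire-to-risk-score pipeline.
# Synthetic data for illustration only; nothing here is COMPAS.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1000

# Hypothetical questionnaire answers, echoing the items above.
friends_in_prison = rng.integers(0, 5, n)  # "how many friends have been to prison?"
parents_married = rng.integers(0, 2, n)    # "are your mum and dad married?"
X = np.column_stack([friends_in_prison, parents_married])

# If the historical "reoffended" labels correlate with a proxy
# (because of who got policed and charged in the first place),
# the model learns that correlation -- it did its task.
y = (friends_in_prison + rng.normal(0, 1.0, n) > 2).astype(int)

model = LogisticRegression().fit(X, y)
print(model.coef_)  # learned weight on each proxy question
```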
That leads me to my definition of bias in AI systems: bias occurs in AI systems when they reflect the human biases held by the people involved in coding, collecting, selecting or using data to train the algorithms that power the AI. That's how I define bias.

When we bring it back to protest environments: as Black people, we were policed before technology — we've been policed for a long time now. Technology just enables that to be done faster, and now at mass scale. And it's even worse, because it's no longer "I see three Black guys, tall, with dreadlocks" — it's now "that's Alex Fefegha, we've got him". That, for me, is really striking, because it removes the autonomy, it removes the agency. Being able to protest is a right — a human right — and especially in a country like the UK, and hopefully in countries like the States, people should be entitled to a fair protest. You can see it with some of the recent events in the world that I won't dwell on: in the States, people were able to enter the Capitol — I can never pronounce it correctly, my bad — and just walk in when there was really not that much police enforcement there. Compare that with some of the Black Lives Matter protests, where you saw the brute force that police officers carried.

So I've been inspired, together with my friends Akil and Richard, who happen to be my co-founders at Comuzi, and we had the opportunity to explore: can we create a speculative publication that allows us to engage with this topic of facial recognition?

The first tool I'll introduce really quickly is design fiction. Design fiction is very important here. In my master's I came across design fiction — also known as speculative design, design futures, critical design; there are so many definitions for it — but the definition I use here is by Matt Malpass, who is very familiar at the University of the Arts London, especially on the CSM product design side. He describes design fiction as a design-led approach, rather than a practice, that enables the exploration of novel possibilities through the creation of functional fictional prototypes: through fictional prototypes, designers are able to establish compelling visions of alternative realities and speculative futures. I think that's citing Auger, 2013. I actually write about this in a blog post I released in 2019 — a snippet of my master's thesis — called Future Orientated Design; definitely do have a read.

One of the key things I talk about in Future Orientated Design is taken from what tends to be the most famous book cited around speculative design and design fiction: Speculative Everything, written by Anthony Dunne and Fiona Raby, who were initially at the Royal College of Art and have, I think, since moved to Parsons School of Design — I'm not sure if they're still there. In that piece I give a sort of literature review on what design fiction is, and the different thoughts and perspectives, but I also talk about the limitations and criticisms of speculative design. One of the more common examples people conflate with speculative design is Black Mirror, with its really dramatised environments. Film is really good at creating these alternative, negative futures of the world, which at times are not as grounded in reality as we know it.

Another critique I have of design fiction as a tool is that it focuses on the one percent — on people with privilege. And privileged could mean even myself: I consider myself a very privileged individual — working in technology, working with technology, with access to technology — and when you have those three things you have a quality of life, disposable income and access to knowledge which make you privileged. So all of these cool future technologies
will cater to me and others who exist in that one percent, but a lot of the time they miss out the 99 percent. What happens to the communities in those futures who would necessarily be exploited for those futures to happen? So I've been critical of design fiction in the context of a book like this. I won't go into it too much — I think it's still a very good book if you're trying to wrap your head around the field — but I want you to read it and engage with it with a level of criticality.

For those who are very interested, there's another book I like — I think I went to the launch while I was studying the master's at CSM — by Matt Malpass, who I mentioned: Critical Design in Context: History, Theory and Practice. These two books are different in their writing style and perspective: Speculative Everything focuses on speculative design, while Matt's book focuses on critical design. They may have some similarities, but they don't sing from the same hymn sheet, and I think they're two very good books. There are probably more, but these are the ones off the top of my head that definitely helped me when I was exploring this space.

The other technique I want to introduce, which is important to this project, is called adversarial machine learning. It's simply a technique focused on fooling machine learning models — we could call it optical illusions for machines.
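As a minimal sketch of what "optical illusions for machines" looks like in code, here's the classic fast gradient sign method (FGSM) — a generic textbook attack, not the infrared technique this project uses: nudge every pixel slightly in the direction that increases the model's loss, so the image looks unchanged to a human but the prediction can flip.

```python
# Minimal FGSM sketch against the same pretrained MobileNetV2 as
# before; assumes the input image is already preprocessed to [-1, 1].
import tensorflow as tf

model = tf.keras.applications.MobileNetV2(weights="imagenet")
loss_fn = tf.keras.losses.CategoricalCrossentropy()

def fgsm(image, label_onehot, epsilon=0.01):
    image = tf.convert_to_tensor(image)
    with tf.GradientTape() as tape:
        tape.watch(image)
        loss = loss_fn(label_onehot, model(image))
    # The sign of the gradient says which way to push each pixel
    # to make the model more wrong about the true label.
    perturbation = tf.sign(tape.gradient(loss, image))
    return tf.clip_by_value(image + epsilon * perturbation, -1.0, 1.0)
```

An epsilon of 0.01 on a [-1, 1] image is below what most people would notice, which is exactly what makes attacks like this unsettling.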
So the project we have here is called the Invisible Mask — what I call a speculative publication exploring the attack on human agency and autonomy by facial recognition technologies. At this period in time, the Invisible Mask is a baseball cap. It's actually built from this very baseball cap — we ordered a pack of caps from Amazon and decided to create invisible masks. The concept is based on an academic paper I came across when Mozilla commissioned us to explore a provocation for the 2019 Mozilla Festival. Akil, at the time, had essentially said: "Hey bro, facial recognition technology is really interesting — it's like the password to our new world: scanning your face to unlock your phone. In technology we can design products that enhance the beautiful capabilities of facial recognition. But we also come from the experience of being policed by police officers most of our lives, even just for walking down the street — even in the way we police ourselves in how we dress and how we look, so we probably don't feel comfortable outside." I thought it was important for us to address a situation like that. So I was searching the internet, trying to find things, and I came across this paper: "Invisible Mask: Practical Attacks on Face Recognition with Infrared".

The whole point of infrared as a technology is that it's invisible to the human eye. So you could wear a cap, and the theory is that the infrared light lights up particular parts of your face; because most facial recognition cameras are beginning to use infrared themselves, when a camera tries to scan your face, the emitted infrared disrupts it. The concept is: you fight infrared with infrared. That's the idea behind the Invisible Mask. When I first read the paper I genuinely thought: wow, these are interesting researchers researching interesting things. What I didn't realise — until after making the Invisible Mask and reading it again — is that the paper was actually creating the mask to work out how facial recognition systems could be hardened against it, if that makes sense: they were fooling the model with the mask so they could improve the model, so the model couldn't be fooled again. Hopefully that makes sense.

That takes me into a whole world of projects — I definitely wasn't the first person to explore how to evade facial recognition systems. When I started this project and shared it with people, one that came up a lot was CV Dazzle by the artist Adam Harvey. I think there's also a group called the Dazzle Club, which I've never been part of, but it sounded really cool — I know their focus was to walk around places in London and identify where facial recognition cameras are. It was a kind of activism, and also performance art in a way, in how they conducted themselves. I'm not sure if it's still going, due to the effects of the pandemic, but they're really interesting — do check them out.

There was HyperFace as well, by Adam Harvey with Hyphen-Labs, who I'm very grateful to share a creative studio with. HyperFace is a prototype scarf covered in many faces, meaning that if you wore it, models would — at the time — struggle to detect your actual face. However, in recent times machine learning models seem to be getting better and better: some models are now even doing gait analysis, so they can analyse how you walk. The level of sophistication is getting higher.

There was a project I came across called Incognito — I think if you wore it on the face it was meant to emit some energy. I never really dug deeper into it, even though I really liked how it looked stylistically, because I wasn't sure whether there was any technological effect or whether it was a stylistic effect — you can see it focuses on particular parts of the human face, hopefully trying to evade the system.

And this was another project I really loved: "Fooling Automated Surveillance Cameras: Adversarial Patches to Attack Person Detection". You can see in the image — and there's a really good video on YouTube if you want to watch it; it's quite funny — that the individual holding the patch cannot be detected by the person detector, while the person without the patch is detected.
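The patch in that paper is optimised against the YOLOv2 person detector. Just to show what "person detection" means in code, here's a minimal sketch with OpenCV's much older HOG-plus-SVM pedestrian detector — my own stand-in illustration, nothing to do with the paper's code; the image path is a placeholder:

```python
# Minimal person-detection sketch with OpenCV's classic HOG+SVM
# pedestrian detector (a far weaker detector than the YOLOv2
# model the adversarial patch actually attacks).
import cv2

hog = cv2.HOGDescriptor()
hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())

img = cv2.imread("street.jpg")  # placeholder path
boxes, weights = hog.detectMultiScale(img, winStride=(8, 8))
for (x, y, w, h) in boxes:
    cv2.rectangle(img, (x, y), (x + w, y + h), (0, 255, 0), 2)
cv2.imwrite("detected.jpg", img)
```

An adversarial patch works by optimising the printed pattern so that, from the detector's point of view, the "person" evidence in that region of the image collapses.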
So it's a really interesting project, and there have been people exploring this area in different ways. The reason I was so drawn to the Invisible Mask as a project is purely because it's invisible, whereas you can see the other projects. If you were going to a protest, would you want to wear colourful clothes? No. Would you want to style your hair a certain way for a protest? Possibly. But I wear caps a lot; I might wear a hoodie. A lot of these things are not about creating a state of anarchy. Akil describes it this way: with GDPR you were able to opt out of certain things online, but in physical space, how do we opt out? How do we say: no, I don't want to be captured on camera, I don't want to be seen? There are obviously lots of arguments for why these things should exist — arguments about safety, security, privacy — so it's an area that needs more discussion, good debate, and perspectives from all sorts of individuals. I don't think I'm the person with the best expertise to say why facial recognition cameras should or shouldn't be there, but I am somebody who is interested in how these technologies strip away autonomy and agency — especially when the communities where these technologies are being put in place never really have a say. They never get the opportunity to speak about how they want things done for them, or how they can simply walk around in safety. That was a key thing for us in this project.

There's also a really interesting paper — whose title I've actually forgotten — about one of the dangers of a project like this, which is something we considered too. At the time we were working on this, facial recognition was a big conversation — in 2019; perhaps in 2020 it reduced, given what's going on in the world. The paper talked about the dangers of artistic experiences — masks, camouflage and artistic imaginaries around facial recognition algorithms — and how these kinds of projects can make people anxious about algorithms. I think that's a very fair point, and it's why one of the key things I've tried to do, even early in this talk, was to say: computers are stupid. That's something I've tried to maintain for a long time. I think the term "artificial intelligence" does not do this industry, this body of work, or this technology any justice, in any way, shape or form.

So, what we decided to do for Mozilla Festival: we had a couple of options, and we had to be very careful about how real we made this work, especially because at the time there were a number of protests going on in Hong Kong as well. For us the question was: how far do we go? You read all these books, you create these prototypes — what does a prototype look like?
How does it feel? The way we imagined it — I don't know if you can see it clearly — was that we made everything as patchy as possible. We left the wire visible at the back, we left the infrared camera exposed, because the whole concept of the experience was that somebody in the community had hacked this together in this future. And really and truly, how far away was that future? I didn't mention that design fiction carries this concept of futures — how far out a future is, what's possible, what's probable, what's likely, what's not. It's interesting with this project, because the future we pictured — it was 2019 when we worked on it — was maybe three or four years from then, just because of how fast the conversation around facial recognition technology was advancing.

I saw this project as a tool, again, for community resilience and community resistance: somebody can go onto GitHub, download the code and, if they have the DIY abilities, make that cap themselves. That was the whole idea — make a hacker tool, a hacker kind of device, that people can wear to protect themselves. Where I was coming from — I can never remember the name of the clothing brand, but there was a particular brand in the '90s that also made protest clothing; I think they had a store in Soho. I really should remember it. Those things inspired me, and the cap is very much in that same spirit.

You can see there — I'm going to use the pointer for this one — the three LED lights, the patchiness; you've still got the wires at the back, the stitching holding things in, and the camera on top. I think that's a lemon and ginger tea in the shot. We tried to create this staged experiment, and we got people to come and try it out. What we did in the end was simply play a game where people could learn to evade the system. Like I mentioned — I'm going to make this full screen as I come to the end — there was that aspect of being careful about how real we made it. That was one of the key things: if it's too real, people ask you "where's the code?"; if it's a bit too fake, people disengage. So it was about finding that balance. We put the cap on — I don't even know what you call it — a model head, for safety's sake. One of the challenging things I read about infrared was that we don't know how much of an effect infrared radiation has on human skin. What is the radiation? What are the dangers of these technologies? So instead we simply got people to play: we created this thing called the Evader Machine, and we got people to do some activities and play a game. And that was the space.
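We never published the Evader Machine internals, so purely as a hypothetical sketch of what a demo like that could look like — a webcam loop with OpenCV's stock Haar-cascade face detector, where "winning" the game means keeping the box off your face:

```python
# Hypothetical sketch only -- not the actual Evader Machine code.
# A webcam loop that draws a box on every face the stock
# Haar-cascade detector finds; evading means no box finds you.
import cv2

detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

cap = cv2.VideoCapture(0)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    for (x, y, w, h) in detector.detectMultiScale(gray, 1.1, 5):
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 0, 255), 2)
    cv2.imshow("evader machine (sketch)", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):  # press q to quit
        break
cap.release()
cv2.destroyAllWindows()
```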
But one of the key things I really loved about it was the conversation — the discussions we were able to have about the role of facial recognition technology in our lives and in society, and the ways we can engage in taking back our autonomy and our agency.

One of the unfortunate things I wasn't able to achieve with this project — we worked on it around October, November 2019, and then obviously 2020 came and swallowed everything — was the goal of spending time in 2020 running facial recognition workshops: going into communities, especially Black communities, and getting their perspectives. This was presented at Mozilla Festival — and thank you, Mozilla, for giving us the opportunity — but there weren't many Black people there, and the whole point was that we had created a project grounded in a community. My critique of myself, and of the project, is that we didn't get to expose it to the community as much as we wanted to. The world kind of went to a standstill last year. So there's definitely a vision for us to do more work in this area. The Invisible Mask has essentially been gathering dust in the studio, especially when none of us have been there, so there's definitely a goal to further enhance and improve it: to look at what more clothing we could add, what more we could use as a speculative provocation to have conversations with people about the role of facial recognition — and machine learning technologies in general — being used in law enforcement, being used by governments to make decisions about us; to engage in critical conversation; and to have the individuals who work with or create these technologies in the same space too. Let there be a discussion point. Give us the power; let us have a say.

That's been my goal ever since I came across this machine learning space: how do you demystify these technologies, and how do you educate people about them? One thing my friend Sarah Gold always mentioned to me, though, is to be careful of not just educating people but also of not leaving them feeling burdened by the information — which is similar to that paper I shared earlier, about causing people anxiety. So this is a very interesting space, and one I don't have the answers for. But thank you so much for listening to this talk. This is Alex Fefegha; this was The Great Black Hat — The Great Black Hack, I should say. Thank you so much, and enjoy the rest of your day.