Machine Vision: Algorithmic Ways of Seeing


I'm Jill Walker Rettberg, a professor of digital culture at the University of Bergen, and I lead an ERC-funded project on machine vision. So that's things like this: this is actually an ad made by Tesla to show how their self-driving cars see the world. On the right-hand side of the screen you can see what the computer is seeing, and in the middle we see the human driver's viewpoint. What we're seeing here is how the camera vision in a self-driving car picks out elements of the world: it identifies a car that's passing, markings on the road, and so forth. This is a different way of seeing than human vision, and what my project is trying to do is figure out how this is impacting us. Is it changing the way we humans see the world when we see with computers?

Here's the project's full title, and these are some of the kinds of machine vision we're looking at. Facial recognition, for instance, and emotion recognition: this is how computers are designed either to recognise a person's face and identify them ("oh, this is Jill Walker Rettberg's face") or to recognise our emotions (is someone smiling, and does that mean they're happy?). Also body scans, medical scans, image generation, deepfakes, and cameras that are in other places or are self-automated.

Google Lens is a free app you can download to see some of how machine vision works. For instance, I can ask Google Lens to search by image, and if you point it at a picture of a cat it will identify the image as a cat and suggest a few different kinds of cat it could be. So that works pretty well, right? Great image recognition; you can search by images. However, the artificial intelligence and machine learning algorithms that run these image recognitions are biased, as you can see from this. This is Google Photos.
The person in the middle of the bottom frame there uploaded this selfie of herself and a friend to Google Photos. Now Google Photos, like your phone probably does, automatically sorts images into different categories, and you can see it has correctly sorted many of her images: cars, a graduation ceremony, a bike. But it has classified the image of this young woman as a gorilla. That's not just an incorrect classification; it's a racist trope that has existed in our culture for centuries, and it's a very damaging stereotype. Now, Google's response to this was actually just to delete gorillas from their image recognition. If you point your Google Lens app at a picture of a gorilla today, it finds no results; you should go ahead and try that after this lecture, by the way. They were not able to find a way to correctly identify dark-skinned people of African heritage as humans. So what does that say about how computers read faces? Well, it's deeply concerning.

To understand how this is working, let's look at how neural networks and machine learning work. This is a very simple, basic understanding of it. First you have to feed the system a tagged data set (there are other kinds of learning, but this is the most common). You take a set of pictures, and they're tagged: for instance, this is a cat, this is a cat, this is a dog, this is guacamole, this is an apple, et cetera. Or your data set might have different tags; it might say, for instance, this is a man, this is a woman, this is somebody smiling. So you take this big data set of many, many images and you feed it into a neural network. Now, a neural network is a machine learning system

that is based on the idea that you have these different layers of nodes, where each node identifies a different aspect of the image. Maybe one little node is saying "okay, I see a curve" and another one is saying "I see these colours". So it divides all the images into these individual aspects, and then it sees: okay, we see more of these sorts of things if it's a cat, and more of those sorts of things if it's a dog. After that it makes predictions, so you can feed new images into the neural network, and because it has learned from the data set, it can predict what a new image is of. On the right-hand side you see an image of a cat, and the network has correctly predicted that this is a tabby cat. It has a few other ideas too, with levels of certainty; it's a bit more than 80% certain that this is actually a cat.

This gets more problematic when you're trying to tag humans. One of the most common problems with biased data sets is that you have a data set which is not representative. For instance, you have a lot of pictures of white men. That's a problem with Google Photos and with most facial recognition systems, because they're often developed in California, and most of the people working in the companies that develop them are white men, so you end up with this very undiverse data set. You run it through the same kind of neural network, and the predictions it comes up with are that oh, this African-American woman must be a gorilla, because she doesn't look like those white men, or that Serena Williams is male.

Joy Buolamwini, a researcher at the MIT Media Lab, really drew attention to this. She discovered the problem when she was not recognised by her Xbox, which also uses image recognition and facial recognition to say "oh, there's a human here".
The Xbox lets you interact with it through the camera, so you use gestures. But in order to interact with your Xbox using that system, the Kinect, sorry, it needs to see a human, and it does that by identifying a human face. That's a problem if you have dark skin and the algorithm doesn't see you as human. So Joy Buolamwini and her team, in the Gender Shades project, audited many different facial recognition systems, and they found, as you can see here, that most of them are really pretty good at recognising light-skinned men. They're not quite as good at recognising light-skinned women, but still pretty good. At recognising dark-skinned women, they are terrible. Since then, the companies have improved their data sets, because a lot of this can be solved simply by having a more diverse data set in the first place. However, it's still a problem. This becomes a huge problem if you think about what facial recognition is used for. It's not just for playing games; it's also for entering buildings, and for identifying criminals. Or, worst case scenario, with lethal autonomous weapon systems, facial recognition can be used to identify a target for an assassination in a war. That's a pretty high risk if there is, you know, a 30% risk of killing the wrong person. So I'd like you to watch this video by Joy Buolamwini, because it's a poetic response to what this means.
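The core of the Gender Shades finding, that one overall accuracy number can hide very different error rates for different groups, can be sketched in a few lines of Python. This is a toy, not a reproduction of the real benchmark: the group names follow the audit's categories, but every count below is invented for illustration.

```python
from collections import defaultdict

# Invented audit records: (demographic group, was the prediction correct?)
records = (
      [("lighter-skinned men", True)] * 99 + [("lighter-skinned men", False)] * 1
    + [("lighter-skinned women", True)] * 93 + [("lighter-skinned women", False)] * 7
    + [("darker-skinned men", True)] * 88 + [("darker-skinned men", False)] * 12
    + [("darker-skinned women", True)] * 65 + [("darker-skinned women", False)] * 35
)

totals, correct = defaultdict(int), defaultdict(int)
for group, ok in records:
    totals[group] += 1
    correct[group] += ok

overall = sum(correct.values()) / len(records)
per_group = {g: correct[g] / totals[g] for g in totals}

print(f"overall accuracy: {overall:.0%}")  # one flattering number...
for g, acc in per_group.items():
    print(f"{g}: {acc:.0%}")               # ...hiding a large disparity
```

The point of disaggregating is exactly what the audit showed: with these made-up numbers the system reports 86% accuracy overall while failing more than a third of the time on darker-skinned women.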

I'm going to let you watch this video in your own time.

Now you're back again; I hope you actually watched that video before continuing this lecture. I'd like to explain what actually happened in that video by Joy Buolamwini. They had a data set that was tagged much in the same way as we saw earlier, but in this case the tagging is more specific: it's about emotions. That runs through the machine learning, and then it makes predictions. (I hope you actually watched that YouTube video; I'd love you to go back and watch it if you haven't.) So Joy Buolamwini shows how emotion recognition and facial recognition can have a really deep emotional impact, and a cultural impact as well, on people.

Here's another example of emotion recognition. What this does is it plots what the expression on your face is, and from that it imagines it can predict what your emotion is. These systems are used, for instance, in schools in China, and in Australia I believe, where the system reads the faces of all the students in the room, and the ones that are not paying attention, or are smiling, or looking tired, are marked as having spent, say, seven percent of the time not paying attention or looking unhappy. Messages are actually sent to their parents based on this. You can imagine many other uses for this. One of the most common commercial uses is having people watch videos, advertisements for instance, while the system tracks whether they look excited or happy about the advertisement; the advertiser uses that to figure out how to make a better advertisement. Now, the basic problem here is not so much the data set itself.
The problem is that the face is a proxy for emotion. That means that when you measure expressions on the face, you're not actually measuring how somebody feels; you're just measuring what their face looks like. In this article by Barrett et al., a group of prominent psychologists go through hundreds of different psychological studies looking at whether there is a connection between emotions and facial expressions, and they find that there is not. Emotion is so connected to context, to other things beyond just facial expression, that you cannot actually measure how somebody feels based on what their face looks like. And you can imagine some problems with using this kind of system. You could imagine a dictator, for instance, who installs emotion recognition everywhere, and while the dictator is giving a speech, everyone needs to smile and look respectful; if they don't, they might be thrown into jail for thoughtcrime. Yes, I have read 1984. The thing is, if these systems did evolve, we would probably end up with poker faces. Can you imagine a society where no one dares to show their true emotions, because the computers might be watching?

AI also controls other kinds of recommendation, of course; it's not just about image recognition and faces. This is what YouTube on my iPad looks like, because my nine-year-old uses it. He obviously likes, you know, Minecraft and so forth, and YouTube is offering more of that to him. This can go badly. There is some research showing that YouTube's recommendation algorithms tend to recommend more extreme content. For instance, if you do a search for, say, immigration, you might quite fast end up in a very right-wing, anti-immigrant or fascist recommendation ecosystem. Or if you're interested in vegetarianism,
maybe you end up being recommended veganism. This is also how conspiracy theories spread. Now, the problem with the algorithm isn't that it's not learning what you like to watch; it's that it's programmed to generate profit for YouTube, and profit for YouTube comes from you watching more ads, from you spending more time watching videos. So they look at things like: did she stop watching the video? Did she not watch the whole of it? They're really just trying to give you more content that will keep you on YouTube for longer, so you see more ads. It turns out we spend longer on YouTube if we're outraged or angry or upset, and so the algorithm learns to give us more videos that will make us upset or angry or outraged.
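YouTube's actual ranking system is not public, so this is only a sketch of the incentive structure just described: if the one objective the recommender maximises is predicted watch time, then whatever keeps people watching, including outrage-inducing content, rises to the top. The videos and numbers below are invented.

```python
# Hypothetical candidate videos with predicted watch time in minutes.
videos = [
    {"title": "calm gardening tutorial",       "predicted_watch_minutes": 4.0},
    {"title": "balanced news report",          "predicted_watch_minutes": 5.5},
    {"title": "outrage-bait conspiracy video", "predicted_watch_minutes": 11.0},
]

def recommend(candidates):
    # The objective encodes profit (time on site means more ads seen);
    # nothing here rewards accuracy, wellbeing, or truthfulness.
    return max(candidates, key=lambda v: v["predicted_watch_minutes"])

print(recommend(videos)["title"])
```

Nothing in this objective function "wants" to radicalise anyone; the extreme video wins simply because it scores highest on the single metric being optimised.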

And that's not really what we want from our media, is it? You've also got lethal autonomous weapon systems, which I already mentioned. And then there's the problem that we're creating a society where we're just gathering more and more data, so we're bringing up our children to expect continuous surveillance, which is translated into machine-readable data. This is several years old, from 2009, but it's an excerpt from one of those high school systems for tracking bad behaviour. When I went to high school, I guess teachers wrote down notes, but it certainly wasn't in a searchable database. Once you put something in a database and turn it into data, it can be used in ways it would not be used otherwise.

So a basic question in digital culture is whether society controls technology, or technology controls society. Which way round is it? If we think that technology is going to cause change in society, so you come up with a new technology and society will change, that's what we call technological determinism. One of the classic examples of this is the bridges between New York and Long Island. As you know, Long Island is this beautiful beach island where rich people live, and in the 1930s the city's planner designed these bridges, and as you can see they're very low. A bus can't get through under these bridges. And the argument is that this was a deliberate way of making technology that causes certain societal effects: if you can't get a bus through there, you need a private car or a taxi; poor people can't afford private cars or taxis; and so you keep poor people out of Long Island.

Here's another example of technological determinism. Do you remember the Paris revolutions? You've read about this in history. There were many, many revolutions
in Paris, and one of the reasons the revolutions were successful was that the streets were narrow, so people were able to barricade them. Another interesting fact: this picture is the first

news photograph ever printed in a newspaper, from 1848, I believe. Anyway, one of the Napoleons decided he did not like these barricades that were being built, and so he told the city planner, Baron Haussmann, to simply get rid of the small streets and design boulevards. Now, the great thing about boulevards in Paris, I mean, they're beautiful, right? You can also drive tanks through them. And if you can drive tanks through a boulevard, well, there's not much the people can do; how are you going to barricade that? And of course this is also now the way we know Paris, beautiful Paris. So technology certainly can control society and the way we're able to behave, but the effects are often not intended. For instance, I cannot believe that anyone designed staircases specifically to keep people in wheelchairs out, but that is an unfortunate consequence of them.

Another example is analogue photography, which was calibrated for white skin. This is a Shirley card, which is what the people developing photographs used to calibrate the images, and as you can see, there's no one with dark skin there. In fact, it was almost impossible to take good photos of people with dark skin until the 1970s, when Kodak came up with Kodak Gold, which was known for actually being good for dark images. And they didn't do this because they wanted to exclude people with dark skin from photographs, or because they wanted the three dark-skinned children in the classroom not to show up on the class photo. No, they did it like this because they didn't think of anything else, because they did not have a diverse group of people developing the film. And Kodak Gold was not introduced to solve the bias problem either, but in response to advertisers of dark chocolate
and mahogany furniture, who complained because their advertising images did not show up the details of their products. So it wasn't even about helping people, and in that sense it's not that dissimilar from the bias we see in today's algorithms.

Technology can also cause different forms of societal organisation. For instance, railways require centralised organisation: you need central control of where the trains go, and you need to invent things like standardised time that can be measured the same way across a nation, so you know when the trains are coming and leaving. Nuclear power also requires a very centralised organisation to keep it safe, whereas solar power would work in a decentralised society. So there are clearly ways that technology affects society. But obviously it's society that creates technology too, so we can talk about a co-construction. Now, in the 80s and 90s there was a lot of emphasis on society having the main impact here, so people laughed at the idea of technological determinism and said no, technology is created by culture; there was a strong idea that everything is relative and that we shape technology. But in the last few years, especially after Facebook and fake news, and just seeing how much technology affects us when it's changing as fast as it is now, there's more of an acceptance of the idea that technology does determine society to some extent.

But perhaps it's not that simple. Donna Haraway, the great feminist scholar, wrote the Cyborg Manifesto, where she says: forget about these binaries. It's not nature versus culture, man versus woman, technology versus society; it's all mixed up, we're cyborgs. And she means cyborgs as a metaphor for understanding that we humans are not just nature and bodies; we are also technology. We integrate it, in our cells, we use it. I have contact lenses.
You may have other technologies you're using, and just the way we work with our computers and phones is pretty cyborgian. And there are newer theories too, like vital materialism, or new materialism. Jane Bennett wrote a book, Vibrant Matter, where she talks about the blackout that happened in America in 2003, I believe; there have been many of those. You can't pinpoint an individual cause: there is not one reason why that blackout happened. She uses this to introduce the idea of the assemblage. Many scholars have talked about assemblages: Bruno Latour, the posthumanists, Jane Bennett and the new materialists, the feminist materialists. The idea of the assemblage

is that it's not technology versus society; it's a mixture, an assemblage, a putting-together of many different things. And this list of the things that caused the blackout in the United States is a beautiful example of an assemblage: "The electrical grid is better understood as a volatile mix of coal, sweat, electromagnetic fields, computer programs, electron streams, profit motives, heat, lifestyles, nuclear fuel, plastic, fantasies of mastery, static, legislation, water, economic theory, wire and wood, to name just some of the actants." Now, the idea of an actant comes from Bruno Latour's actor-network theory, also known as ANT, where he shows these networks between different actors. An actor could be me, a human, but it could also be the law, or a computer, or an algorithm. So these are useful new ways of understanding the relationship between technology and society. Assemblages, in Bennett's definition, are ad hoc groupings of diverse elements, of vibrant materials of all sorts. By vibrant materials she means that a material thing, like a cup or a computer, is a thing, it's not alive, but it can still have this vitalism. It can be vibrant in that it has some kind of agency: it does something with other things. In essence, this connection of me and the technologies I use and everything around me forms living, throbbing confederations that are able to function despite the persistent presence of energies that confound them from within.

Katherine Hayles also thinks in this way. She defines cognition as something that's not just a human action. She says humans think: that's the stuff we're conscious of, we know we think, we reflect. But there's also nonconscious cognition. This isn't a Freudian unconscious or subconscious thing; it's a different paradigm. You think, but you also have cognition: you cognize.
And your cognition is nonconscious: you're not aware of it, it comes before consciousness. Hayles defines it as a process that interprets information within contexts that connect it with meaning. So for instance, right now my body is a little bit hot and is producing a little bit of sweat; I'm breathing without being aware of it. Sometimes the responses are more sophisticated: I start to cry without knowing why. Part of that happens below the conscious level. This definition of cognition is also interesting because it allows you to understand what computers do as cognition, because computers and algorithms are processes that interpret information: they take in data, they interpret it within a context, and they create a form of meaning. Animals also have cognition, clearly. And there's a lot more to be said about all this, maybe later.

So what is data, anyway? Because if we're inputting data, what is this data? That's sort of at the crux of trying to figure out how machines see versus how humans see. I think this image is a beautiful example of data, because these footprints are clearly data of a kind, and we can interpret them: especially when you see the prints of a person and a dog, we can conclude that, sure, people have walked here, dogs have walked here. But there could be other explanations. The footprints are proxies for what really happened, and we tend to interpret them as the thing we really want to get at, without really thinking about how they are proxies for something else. There's no such thing as raw data: all data is situated and created, and that's something to think about when we're thinking about the data sets we're using. As we talked about with emotions, facial expressions are proxies for emotion; they're not the emotion itself. Now, dataism
is a concept, an almost ideological belief that if you have data and you're objectively quantifying things, then you can predict anything, and that's power.
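As a toy illustration of that dataist logic, and of the earlier point that the face is only a proxy for emotion, here is a sketch in Python. The threshold, the scores, and the "ground truth" feelings are all invented; no real emotion-recognition system works from a single number like this.

```python
def emotion_from_face(smile_score):
    # The dataist move: a measurable proxy (smile intensity from pixels)
    # stands in for the thing itself (how the person actually feels).
    return "happy" if smile_score > 0.5 else "unhappy"

# Context the system never sees, which is what actually shapes emotion.
observations = [
    {"smile_score": 0.9, "actual_feeling": "nervous"},  # smiling through stress
    {"smile_score": 0.1, "actual_feeling": "content"},  # concentrating, relaxed
]

for obs in observations:
    predicted = emotion_from_face(obs["smile_score"])
    print(f"predicted={predicted}, actually={obs['actual_feeling']}")
```

The code is perfectly "objective" about the number it measures; the error is in treating that number as the emotion.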

And this belief in dataism often sets us off on the wrong track, like the emotion recognition vendors who are selling a system that is not actually based on scientific research, or on what we actually know about emotion. These systems are being applied in many places, like the school in China I mentioned previously, which has installed a facial recognition system that scans the students' behaviour in class. Or, and this is actually a visualisation of what it might look like, I'm not sure the computers actually see this, the social credit system in China, where computers use facial recognition to identify individuals and connect them to other data about them in the system. They can also do things like tracking your gestures or your behaviour. In this case I think they're just checking whether you're buying nappies or beer and giving you scores accordingly, but there are also systems for automated supermarkets with gesture recognition that assess whether you're a risky-looking customer: does it look like you're going to steal something? The funny thing is that the data set they've used to teach this system to recognise a potential shoplifter is not real shoplifters; it's actors. They got a bunch of actors to pretend to be shoplifters, they recorded that on video, and that's the data set from which the neural network learns what a shoplifter looks like. So we don't really know if that's actually what shoplifting looks like.

There are also risk prediction systems that have a huge impact on people's lives. If you read this article, you'll read about two individuals who were convicted of similar crimes, stealing items worth about 80 dollars. Vernon Prater, on the left here, had previous convictions and had spent five years in jail for robbery.

This time he stole tools from a Home Depot-type store in the United States. Brisha Borden was 18; she had never been convicted of anything. She was late picking up her niece from school, saw a kid's bike next to the road, grabbed it and got on it to ride it. A neighbour saw her and called the police, and even though she had dropped the bike, she was still arrested. Now, when these cases were run through the risk assessment system to see the likely risk of them committing another crime, that is, the risk of recidivism, the system found that Brisha Borden's risk of committing another crime was much, much higher than Vernon Prater's. The difference between the two is that she's black and lives in a black area; he's white. His crimes were clearly worse: you'd think it was more likely that someone who had already spent time in jail was going to commit another crime than an 18-year-old never convicted of anything before. But the risk assessment is based on data such as: how many of your friends have ever been arrested? How many of your friends and acquaintances have served time in jail or prison? Has your parent ever been sent to jail or prison? And in black communities, far more people have actually been in jail than in white communities, so these systems have this immense bias in them, set up to simply replicate social problems and inequalities.

What I really want you to remember here is that the data available to you is not real life. (Also, you can download this free app, Affectiva, and play with it; it's quite fun.) These systems are being brought into our countries, into Norway, around the world. In Norway, for instance, they're testing out systems for facial recognition and risk prediction
in the police and customs services. This is a police prediction system used in the US, where, based on previous crimes, the system predicts how likely a crime is to occur in a particular neighbourhood, and sends the police to certain neighbourhoods based on this risk assessment, which draws on previous crime rates but also things like what the weather is like or whether school is out today. In Chicago they've taken it even further: they have something called a strategic subject list, where they identify individual people who they think are more at risk of joining a gang, killing somebody, or being killed, and then they try to help these individuals. That could go really well with the right kind of help, but there can also be problems with the way the police, especially in the United States, actually deal with individuals.

TikTok: it's unclear exactly what TikTok does with data, and this is something that many countries, including the EU and Norway, are trying to figure out. But there are rumours that they use things like expression analysis from facial analysis, your age, et cetera, in order to make predictions and recommend you content. So it will be very interesting to find out what all these investigations into what TikTok actually does with data lead to.

Another problem with this is that you don't know what you're not seeing in social media. For instance, this is an example of a post that got an NRK journalist banned from Facebook. She manages an NRK page, but she shared this article about an increase in measles to her private profile on Facebook, and that got the whole NRK page banned, which is a bit problematic since it's the national broadcaster's page. If you look at the article she shared, it says that the World Health Organization reports a 50% increase in measles; it's not fake news.
But the picture of the child may have been interpreted as nudity and automatically flagged by an algorithm.
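A minimal sketch of that moderation failure mode: an automated filter sees only a classifier score, never the journalistic context, so a news photo can cross the same threshold as genuinely policy-violating content. The scores and the threshold below are invented.

```python
NUDITY_THRESHOLD = 0.7  # hypothetical cut-off for automatic removal

posts = [
    {"desc": "WHO reports 50% rise in measles (photo of a sick child)",
     "nudity_score": 0.81},
    {"desc": "holiday beach photo",
     "nudity_score": 0.40},
]

def moderate(post):
    # The rule has no notion of news value: over the threshold means removed.
    return "banned" if post["nudity_score"] > NUDITY_THRESHOLD else "allowed"

for p in posts:
    print(p["desc"], "->", moderate(p))
```

A human moderator would recognise the first post as public-health journalism; the threshold rule cannot.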

There are algorithms that sort out what they think is pornography, for instance, and that is sometimes problematic.

If you're interested in these questions, I invite you to read my book, which is not specifically about machine vision, but is about self-representation in visual media, in blogs and textual media, and also through data, so it has a lot of the ideas that led to the current project. We've also got a project website, we're building a database, there's going to be lots more coming out, and there will even be an exhibition at the university museum in March 2021. Thank you very much.

2020-08-24 16:10
