The Future, This Week 24 Mar 2017: hard questions go unanswered, self-driving cars, and robolawyers


The Future, This Week. Sydney Business Insights. Do we introduce ourselves? I'm Sandra Peter and Kai Riemer. Once a week we're going to get together and talk about the business news of the week. There's a whole lot I can talk about. Okay, let's do this.

Today in The Future, This Week: why the hard questions go unanswered, the road for self-driving cars seems rockier than we thought, and robo-lawyers. I'm Sandra Peter, I'm the Director of Sydney Business Insights. I'm Kai Riemer, I'm a professor here at the Business School, and I'm also the leader of the Digital Disruption Research Group. So Sandra, what happened in the future this week?

First, South by Southwest Interactive happened. South by Southwest is a huge annual conglomeration of festivals, conferences and other events around film, around interactive media, around music, but a very large part of it, South by Southwest Interactive, is focused on emerging technology, which has earned the festival a reputation as the breeding ground for new ideas and creative technologies. This is where Twitter first appeared in 2007, this is where Foursquare appeared in 2009, where we had Meerkat a couple of years ago. And South by Southwest claimed to cover a lot of ground, with a lot of sessions on workforce automation and the surveillance state and the future of the internet, and robots, and robots.

An article in The Verge reported on a series of talks this week that reflected the festival's really conflicted tone this year: the fact that there was a huge focus on creativity, on the personal life and background of people who were attending, but very little critical examination of how society will grapple with the effects of widespread automation, or discussion of the ethical dilemmas involved with ever more powerful AI taking on big roles in transportation or in medicine. Another example mentioned was that while on one panel the early democratizing force of the internet was mentioned, no mention was made of the fact that today we have white supremacist conspiracy theories rampant on the internet, nor was there a discussion about fake news and echo chambers and the way in which the internet population seems to be ever more split into subcultures that rarely ever talk to each other, or about large organizations dominating the distribution of media, with I think 40 to 50 percent of Americans getting their news on Facebook.

So Sandra, why do you think those conferences are unable to ask those hard questions? Why are those things not discussed? Well, first I think we are increasingly seeing problems that are ever more complex. Technology is changing very, very rapidly, and any conversation around the implications of technology, whether they be ethical or around changes in the workforce, is a quite complex debate. The second would be the very speed of the technology: these changes are so rapid that very few people manage to keep up with them. So you're saying it's inherent in the technology topic that a panel at a conference can only ever touch the surface of what lies beneath? No, we're saying that it is actually difficult to get at this, not that we shouldn't be doing it.

Right. I find that those conferences sometimes, because they are commercial conferences, rely on high-profile speakers to come to the conference, and often they rely on the sponsorship and goodwill of corporations to be part of the conference, and this might actually impede the questions that can be discussed and can be asked. I've been part of a few conferences that were rather disappointing in what I perceived, for this reason, because you cannot, you know, chop off the hand that feeds you. Is that a problem, do you think? Well, it is a problem, because it gets to the question of where these conversations should be taking place. So we have increasingly complex conversations, whether that's around society grappling with the effects of technology, or the negative side effects of some of the technologies we're employing, or the ethical implications of technologies we're developing. The question is who is responsible for having these conversations, or driving these conversations. Is this a theoretical responsibility? Is this the domain of academics? Is it the domain of businesses to start these conversations, or of larger societies, or even governments?

Yes, and to what extent can we expect journalists, for example, to delve deeply into how AI works, how robots work, and therefore what they can and can't do, to critically question some of the often very far-reaching implications that are being reported, like: AI will replace humans in all parts of life, robots will come and take away all our jobs, algorithms will remove biases. That's right. These are all pretty stark claims which, you know, cannot be discussed in a few hundred words, and maybe not, you know, in a couple of questions at a conference panel. So are we running the risk that the way in which the media works and those conferences work means that we cannot actually have those tough discussions? The answer to that question might increasingly be yes, these questions are too complex to answer in half-hour sessions. I think there is a huge role to play for universities. For instance, here at the University of Sydney Business School, of course we would say that, we discuss these matters at some length with our students. So developing the next generations of leaders, or empowering them to have these conversations, I think is quite important.

Or are we too positive about technology topics? Is there an issue with techno-optimism, whereby we like to look at all the positive outcomes, the feel-good outcomes, the problems that we can solve with technology, yet forget about the downside, the dystopian view, as a balancing out of the utopian claims that are often made? Is this something that you wouldn't want to have in a conference like this, which should feel good and should be looking forward, should be techno-optimist and should really be a vision for what we can do in the future, and a more critical viewpoint just gets in the way? Well, as a techno-optimist I strongly believe in the power of dreaming big and imagining the future, to empower entrepreneurs or innovators to make these bold claims, or to indeed innovate or invent in that space, so I think there's definitely a role for it. But an uncritical examination of the future I think is extremely dangerous. And certainly there's a difference between dreaming big and making claims. Right, which raises the question: who then should be part of that conversation? It can't just be left to theorists, academics and self-proclaimed
futurists. I think technology is one of the most influential megatrends that will actually shape the way we live and the way we work and the way we function as societies in the future, so I think this is a conversation that everybody needs to be part of. This won't be solved by tech conferences, or by Silicon Valley, or indeed by academia, but rather there is a need for all of us to push and ask the hard questions in public forums and create that collective understanding.

So let's take a look at a couple of topics that were discussed at the conference. The first one is self-driving cars. There's a perception that self-driving cars will be a normal part of our daily lives in the very, very near future, some people say in five to ten years even.

Indeed, and we're looking at an article in TechCrunch that actually points out that even Uber's fleet is demonstrating some fairly wild swings on measures of safety and reliability, and that there isn't steady progress in self-driving cars but rather a more jerky sort of stumbling towards the goal of self-driving reliability. And this is complicated by things such as Uber's court battle with Google over autonomous car technology, which has just started, and we haven't seen the end of that yet. So that raises the question of how we think about self-driving fleets in the near future.

Yes, indeed. So the documents that were made available show that Uber's self-driving cars have done about 20,000 miles, but that on average, about every mile, someone had to intervene because something went wrong. Not necessarily always big things that would have led to accidents, but veering off the street, or things where a driver had to disengage the computer, and then the computer could take over again. So a lot of small things to be ironed out. The companies say that the algorithms learn and become more proficient, so those things will become less and less frequent. But are we making steady progress? What's the technology like? There were a few other articles that point to things not being all that ready yet. Whilst the Uber conversation was around miles per intervention, and whether they were critical incidents or just, you know, a bad experience, not a smooth ride, there are a couple of other stories, including one raised by MIT, that look at the sort of practical progress towards autonomous vehicles, which really needs improvements also in technology, in things like the sensors that map a vehicle's 3D environment.

Here we want to talk a little bit about lidar sensors, and the fact that companies such as Alphabet, or its spin-out company Waymo, Uber, and Toyota, all of these, with the notable exception of Tesla, which is using other technologies, rely on lidar sensors to locate themselves on the map, or to get around, or to identify things like people or dogs or cats in front of the car. So these lidars are essentially devices that sit on top of the car. They look reasonably ugly at the moment, a bit like a coffee machine sitting on top of the roof, but what they essentially do is shoot lasers into the environment and then read a 3D image of the environment off the bouncing reflections, and they can create a fairly accurate picture, to within a few centimeters at a hundred meters' distance, of what the environment looks like in 3D. Now, the technology is very expensive at the moment, it's bulky, it's not a hundred percent reliable, and it is one of those things that really stand in the way of making progress to bring self-driving cars to the masses, isn't it? Indeed. You've highlighted the fact that it actually is quite expensive; it costs thousands of dollars, or even tens of thousands of dollars, apiece. It's got moving parts in it, it's got spinning mirrors that direct the laser beams. At the moment many vehicles have more than one of these things on board, and despite the relatively small number of autonomous vehicles that we have at the moment, demand has become a huge problem, so by some reports some companies wait for six months to get one of these things.
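
For a rough sense of what those spinning units are computing, here is a minimal, purely illustrative sketch of turning one lidar sweep's angle-and-range returns into 3D points. The function name and the sample readings are invented for illustration; this is not any vendor's actual processing pipeline.

```python
import numpy as np

def lidar_returns_to_points(azimuth_deg, elevation_deg, range_m):
    """Convert lidar returns (beam angles plus measured distances)
    into 3D points relative to the sensor. Illustrative geometry only."""
    az = np.radians(np.asarray(azimuth_deg))
    el = np.radians(np.asarray(elevation_deg))
    r = np.asarray(range_m)
    x = r * np.cos(el) * np.cos(az)   # forward
    y = r * np.cos(el) * np.sin(az)   # left / right
    z = r * np.sin(el)                # up / down
    return np.column_stack((x, y, z))

# A few made-up returns: pulses bouncing back from objects roughly
# 20 m and 35 m away, near ground level.
points = lidar_returns_to_points([0.0, 5.0, -3.0], [-1.0, -1.0, 0.5], [20.1, 19.8, 35.2])
print(points.round(2))
```
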
Now, there is a light at the end of the tunnel apparently, which is something called solid-state lidar technology, which should be much cheaper, much smaller and much more robust, but that hasn't eventuated yet: we haven't seen any working devices, and they're not being produced at scale anyway. Now, while those problems with lidars might be solved, the whole story points in the direction of how hard it is to make cars see. Right. And one thing I want to point to here is that as we're trying to build cars that can see like humans do, I think we're missing a point about how humans drive cars. It's not just that we're taking in sensor information with our eyes, the brain processes it, and we drive the car with our hands and feet. We're driving cars with our bodies, right? We are moving in traffic, we know where we are, and we can react quite intuitively.

We react by feeling, feeling the car, feeling the road; all of this information we take in. So it's very hard to replicate in a piece of technology, in a computer and an algorithm, what it's like to drive a car in the multiple ways that we as humans are able to sense when we drive a car in traffic. So it will be interesting to see how companies will solve this problem going forward by combining multiple sensors.

The issue of humans driving cars is also one of the reasons we are looking at autonomous vehicles in the first place, and why we have actually skipped a stage in the development of autonomous vehicles. Most companies, whether that's Google or Uber or indeed traditional car manufacturers like Ford or Mercedes, have skipped the stage of what's called level 3 autonomous capability, where you would have a human in the car who would take over in case of emergency, and are instead looking at developing fully autonomous vehicles, because we would actually need sensors inside the car to be able to tell if that human is still looking at the road, if they haven't strapped, you know, VR goggles to their head and are playing games in the car, or are doing something else. So they're going straight to full autonomy. Now this also creates the other difficulty of not having a person who is interacting with the car in any way, and the fact that it's not only technology that is stalling the development of these cars, but the fact that most of these autonomous vehicles have to interact with other cars driven by humans.

Yes, that's a really interesting topic, because presumably if you're building a self-driving car, you're programming your algorithms, you're training your algorithms, and you want those algorithms to adhere to the rules, right? It turns out, though, that humans in traffic don't. Humans do not always follow the rules: they speed up, they might break the rules at times, and sometimes for good reasons, because humans apply judgment. Humans can work with the rules; they do not have to slavishly adhere to them. And so the problem that has been observed is that self-driving cars get into trouble when drivers around them are a little bit lenient with the rules, which is what creates a traffic flow that is largely organized by human drivers. If you now enter cars into the mix that are very slavish with the rules, you're really messing with this system and you're creating dangerous situations where human drivers might not expect how a self-driving car reacts. And so you're creating unexpected side effects in a system where humans that apply judgment and self-driving cars that strictly adhere to the rules have to interact.

And indeed, so far autonomous vehicles have refused to break the law; we haven't built in any mechanism for them to break the law, even though the safest thing may be to break the law, for instance to avoid an accident. And they also can't read social cues. We often rely on eye contact, or signaling, or moving the car a little bit forward to signal to the other driver that we might take the initiative and join the traffic at an intersection, and so far autonomous vehicles have struggled to interact with what is the majority of cars. Absolutely, and we know from experience that when we drive in traffic, the rules cannot cover a hundred percent of all the situations that might arise, and so as humans we have to interact, we have to apply judgment.
We have to commit to a certain course of action, knowing that other people will anticipate and will know how we react, because we've done this for years and years, organizing and negotiating the way in which we do traffic among humans. You enter those very mechanistic self-driving cars into the mix and things will just break down, inevitably. So this is what people are concerned about when we talk about a traffic system that will gradually move towards a system where we have more and more self-driving cars, because we cannot just switch from a fully non-autonomous to a fully autonomous system. And this goes back to our conversation about the big questions, and the big questions might be: you know, are self-driving cars going to be here in five years? Maybe. Quite a few people are saying maybe not. But also, what might that technology look like? What is the infrastructure that we need to build to accommodate it, even for technologies that we are not sure what they will look like today? What are the ethical implications of having these autonomous vehicles on the road? Who gets to decide, and when?

I'm pretty certain that in the next five to ten years we will see cars being sold that have some form of assistance systems, where you can have, you know, certain autonomy in certain situations; you could get a Tesla this week. Yes, absolutely. We might see one or two companies launching fully autonomous taxi services. But will we have a traffic system in which a majority of cars are self-driving, or a situation where most new cars being sold are self-driving? I cannot see this happening anytime soon. If we look at where autonomous vehicles might show up first, leaving aside the conversation around industrial autonomous vehicles, whether in mining or in ports or in public transport, we will probably see autonomous vehicles coming up first in areas that have been very extensively mapped, probably as a transportation service in discrete areas.

Oh, that brings me to another story which showed up just recently. There's an artist by the name of James Bridle, and he's a Flickr artist, he does photography, and he has this photo project where he is trapping self-driving cars. We will put up the pictures for you to see, but what he's essentially done is drawn a circle as a solid line with a dotted line around it, and the idea is that a self-driving, rule-abiding car would know that it can drive into the circle, but it would find no way out of the circle because it cannot cross the solid line. Now, whether or not this is realistic, or just a prank or an arts project, it points to a deeper problem, which is that self-driving cars will read off the built environment certain cues as to what to do. So they rely on certain visual cues in the built environment, and if those cues are not there, they get into trouble. But it also means that once we learn how they read those visual cues, this might lead to, you know, people playing pranks on them; we might see a whole new YouTube genre of people playing pranks on self-driving cars by trapping them in cul-de-sacs or by having them veer off roads. But it also points to a serious problem: that you can hack into or otherwise derail those sensors to maliciously bring about accidents, for example.

Some of these sensors are quite good; could we build billboards where a human wouldn't see it, but which would have embedded pixels that give certain directions to the car? This is actually what is being discussed, right? Yes, this is indeed one of the ways that you could hack self-driving cars: if the sensors are reading the environment, you could actually build code that they would be able to read off large billboards, and that could be used for good, you know, driving you to the next very fancy restaurant for a free meal, but it could also be used for other purposes. And again, we haven't exhausted the discussion around autonomous vehicles, or even the problems with the technology. We haven't even discussed things like weather, you know, the bad rains we've had in Sydney, what does that do to sensor technology and to lidar, or snow, or sleet, or low light, or glare in the case of the cameras or radars that Tesla relies on. Exactly. All of this points to the fact that once released into the wild, out of controlled lab and experimental conditions, all kinds of things might happen where as humans we can employ judgment and we might make the right call.
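
As a toy illustration of the strict rule-following that Bridle's trap plays on, here is a hypothetical sketch: a car that treats a dashed marking as crossable and a solid marking as not will happily enter the circle and then find no legal way out. The names and rules are invented for illustration, not any real planner's logic.

```python
# Toy "autonomous trap": hard rules about road markings leave a
# rule-abiding car with no legal exit from a solid circle.
def may_cross(marking: str) -> bool:
    rules = {"dashed": True, "solid": False, "none": True}
    return rules[marking]

def legal_moves(surrounding_markings: dict) -> list:
    """Directions a strictly rule-abiding car is allowed to take."""
    return [d for d, m in surrounding_markings.items() if may_cross(m)]

# Outside the circle: the dashed outer ring invites the car in.
print(legal_moves({"ahead": "dashed", "left": "none", "right": "none"}))
# Inside the circle: every direction is bounded by the solid line.
print(legal_moves({"ahead": "solid", "left": "solid", "right": "solid"}))  # -> []
```
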
But algorithms that have to rely on training data, on rule-following, might not actually be in a situation where they can react appropriately. Which points to the last story I want to bring up in this context. There's an article called "When machines go rogue", and it points to the fact that with self-learning technology, deep learning, machine learning, we're now entering an age where we have algorithms that are quite different from the ones we have employed in technology so far. So if you think of planes and the way in which planes are steered by automatic technology, those algorithms are of the traditional if-then nature, which means that you can actually test the code rigorously; you can put the plane and its software through a very detailed, rigorous testing and certification scheme to be almost certain that nothing will happen under all the kinds of conditions you can imagine. But this is complicated with machine learning. Absolutely. Even with traditional algorithms you can never be a hundred percent certain, but, you know, self-flying planes, and they are largely self-flying these days, tell us that it works to a large extent. But self-learning technology is radically different: it's based on neurons self-organizing by learning from training data and then producing similar outcomes. And so when such systems read off sensor data in a real-life situation, they will react to this data in the way they were trained, and then react, presumably, in a way that will be okay. But you can never be a hundred percent certain, because the technology, the algorithm, is largely a black box, and so it will always, from time to time, throw up certain unpredictable behaviour which even the developers do not fully understand how it comes about. So all you can do is train more, train better, train in more detail, without ever being certain that nothing will happen.
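
To make that contrast concrete, here is a minimal sketch, with made-up thresholds and weights, of the difference between explicit if-then logic, which can be enumerated and tested branch by branch, and a trained model that can only be spot-checked. It stands in for no real avionics or vehicle code.

```python
import random

def rule_based_brake(obstacle_distance_m: float, speed_ms: float) -> bool:
    # Traditional if-then logic: every branch is written down, so the
    # behaviour can be enumerated and certified against a test matrix.
    if obstacle_distance_m < 5.0:
        return True
    if obstacle_distance_m < speed_ms * 2.0:  # obstacle under ~2 seconds away
        return True
    return False

# Because the rules are explicit, we can verify them systematically:
# at 30 m/s every distance under the 2-second horizon must trigger braking.
assert all(rule_based_brake(d, 30.0) for d in range(0, 60))
assert not rule_based_brake(100.0, 30.0)

# A learned model, by contrast, is just a bundle of fitted weights. This
# stand-in "trained" scorer uses arbitrary weights: there is no branch list
# to read off, so all we can do is sample its behaviour and hope the
# training data covered the situations that matter.
WEIGHTS = [-0.8, 0.35, 0.1]   # pretend these came out of training
def learned_brake(obstacle_distance_m: float, speed_ms: float) -> bool:
    score = WEIGHTS[0] * obstacle_distance_m + WEIGHTS[1] * speed_ms + WEIGHTS[2]
    return score > 0.0

for _ in range(3):
    d, v = random.uniform(0, 100), random.uniform(0, 40)
    print(f"dist={d:5.1f} m  speed={v:4.1f} m/s  ->  brake={learned_brake(d, v)}")
```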

So indeed that black box creates huge problems, not only around foreseeing the potential effects of employing that technology, but also as to how we think about ethics, or morality, or right or wrong in that space. Which leads us to our third topic: robo-lawyers.

So this is a story about a more fundamental shift in professional services. "The Rise of the Robolawyers" in The Atlantic talks about advances in artificial intelligence and how they might diminish the role of lawyers in the legal system or in some cases replace lawyers altogether, and this is part of a wider conversation about replacing doctors and lawyers and a whole bunch of other professionals. So this conversation is also about how technology changes business models entirely: whereas we used to have a one-on-one relationship with our lawyer or with our doctor or other professional services, these will now become embedded in systems that are made available to people.

Well, first of all I think we need to distinguish, because there are two types of technologies being folded into the same conversation. The first one, which we refer to as artificial intelligence as a kind of shorthand, is really pattern matching. So what we're talking about is that sophisticated pattern matching technology is used to do away with a lot of the entry-level, lower-skilled jobs in professions such as law but also accounting, where it is all about collating vast amounts of information, going through past court cases, and coming up with the kind of patterns that might actually help with the case we're dealing with. So artificial intelligence, or that pattern matching, machine learning, can do this much more reliably and faster, more efficiently than paralegals or junior lawyers would be able to. We need to make sure that we are not claiming that all of these professions are entirely creative, highly innovative professions; all of these professions can be broken down into smaller parts, and many of the tasks in those smaller parts can be better performed by machines.
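
As a rough illustration of that pattern-matching idea, here is a small sketch that ranks a few invented case summaries by textual similarity to a new matter, using scikit-learn's TF-IDF vectorizer. The case snippets are made up, and real legal research systems are of course far more sophisticated; this only illustrates the principle.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Invented case summaries standing in for a corpus of past decisions.
past_cases = {
    "Case A": "tenant disputes bond deduction for carpet damage after lease ended",
    "Case B": "driver challenges parking fine issued in a loading zone",
    "Case C": "employee claims unpaid overtime under award conditions",
}
new_matter = "client wants to contest a parking ticket received in a loading zone"

# Vectorize past cases plus the new matter, then score similarity.
vectorizer = TfidfVectorizer(stop_words="english")
matrix = vectorizer.fit_transform(list(past_cases.values()) + [new_matter])
scores = cosine_similarity(matrix[-1], matrix[:-1]).ravel()

for name, score in sorted(zip(past_cases, scores), key=lambda x: -x[1]):
    print(f"{name}: similarity {score:.2f}")
```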

And indeed it's the breaking down into those low-level tasks, vis-à-vis more senior, more expertise-based jobs that rely on judgment, that now enables companies to automate a certain number of those low-level tasks. And so there's a real threat to those entry-level jobs in those professions, that they are being done away with under the mantra of cost savings and efficiency, which obviously raises certain problems as to how lawyers are supposed to gain the skills that they need. How do they learn the trade when those entry-level jobs are no longer available, you know, when they're not coming into the profession and going into the more expertise-based jobs straight away? So that's a problem I think is not being discussed at the moment. It might indeed be about training them differently. Increasingly, lawyers will have to rely on these systems, and know and understand the systems, so maybe the entry-level training for these lawyers will be quite different, and it will be about how you learn to make the most out of the brute force you get from machines analyzing big data. Which points to the more likely outcome, which is that we will have to relearn how we do those jobs: rather than having junior lawyers or junior accountants do all of these menial tasks, we will learn the trade quite differently by employing computers and machine learning algorithms to do that work for us, and therefore develop into the profession in a very different way, where algorithms just become part of the trade, they become a tool to be used by lawyers, by accountants. Which will change the narrative, I think, from a, you know, fear-based "the robots are coming for our jobs" to a discussion about how we can actually improve legal services and make them accessible to a wider population by doing away with the bottleneck of menial work that we can employ computers to do. And indeed I think the article in The Atlantic has embedded in it a very good observation, which is that this is not a conversation about replacing jobs, or about getting those algorithms to actually deliver on affordability or efficiency of legal systems.

Rather, it is a story about changing business models in these industries, and rethinking how we do law, or indeed how we do medical services, and so on. Yes, indeed, and that points to the second technology I want to mention, which is more traditional algorithms, which people develop to cope with the complex, often bureaucratic nature of government processes or legal processes, where the process itself is actually fairly deterministic and mechanistic. It needs a lot of work, though, because a lot of information has to be collated, there are a lot of forms to be filled in, different instances have to make decisions, but the outcome is often largely predictable once you know what has to go into the process, say in disputing a parking ticket. And someone has built an app for that. This app does not need machine learning, but it needs an algorithm that has all the steps embedded in it that it takes to collect all the information and then submit the claim, a process that is rather complex and time-consuming and off-putting to people in their everyday lives, but can be solved with computers in a fairly straightforward way. And a lot of tasks are like that, in accounting, in law, in many other dealings with governments, and so computerizing those I think is a logical step in coping with the often artificial complexity that is put up by the bureaucracy around those processes.

And this indeed also speaks to a larger question, since we spoke about larger questions today, around how technology is making the boundaries of traditional industries a lot more permeable. So these are indeed some processes or tasks that can be performed outside of the traditional law firm, or outside the legal industry, and we've already seen a very sort of quiet creep of technology trying to break down these boundaries. So for instance, resolving disputes: this used to be a matter largely for the court system, but now there are about 60 million eBay disagreements being resolved online every year that never go through the court system, and there are a lot more of those than the ones that do go through the courts, and that has made the service available to millions of people. Yes, and that's all we have time for today. More questions to be asked next week. See you next week. See you next week.

This was The Future, This Week, brought to you by Sydney Business Insights and the Digital Disruption Research Group. You can subscribe to this podcast on SoundCloud, iTunes, or wherever you get your podcasts. You can follow us online, on Twitter and on Flipboard. If you have any news you want us to discuss, please send it to sbi@sydney.edu.au.
