Good afternoon, and welcome to our panel on shaping technologies. I'm David Mindell, a professor at MIT and co-chair, with David Autor and Liz Reynolds, of the Work of the Future task force. One of the key messages of the report that came out yesterday is that technology is not something that happens to us; it's something that we shape and create. President Reif's remarks opening this Congress this morning emphasized that AI embeds the values of its designers into the technology. So, at MIT, a place that both invents technology and creates inventors, the question for the next 45 minutes is: how do we roll up our sleeves and get this done? How do we shape the educational mission, the technological mission, and the innovation?

I'm joined today by Daron Acemoglu, Institute Professor at MIT, which is the highest academic rank, and a member of the Department of Economics; Daniela Rus, professor of computer science and, as you heard this morning, director of the Computer Science and Artificial Intelligence Laboratory at MIT and one of the co-hosts of this conference; and Julie Shah, a professor of robotics and a specialist in collaborative robotics, and a member of the task force, as is Daniela. I should also say that Julie's book, What to Expect When You're Expecting Robots, came out maybe two days ago, and it's an excellent exploration of what robotics looks like on the ground in the real world and what's coming. And Jeannie Margerlick, an engineering supervisor at Ford Motor Company who is deeply involved with Ford's thinking about the factory of the future.

The overriding questions for us here are: how do we shape technology to support and augment workers more than replace them, and what do we see coming down the pike that could help us do so? But maybe I'll start with a few more targeted questions for the panelists. Daniela, you're an AI researcher in your own right, you lead your own laboratory, and you lead one of the great larger laboratories in AI around the world. What are ways that research leaders can shape technologies toward larger social goals and influence the kinds of inventions that our students and others are making?

Thank you, David, for this question. Let me start by saying that in the area of artificial intelligence, machine learning, and robotics, which are the technologies that are advancing our ability to have these more powerful tools for people, all of this work is by the people and for the people. In some sense, within a research laboratory we aim to develop the science and engineering of intelligence and the science and engineering of autonomy. This means we're identifying increasingly more complex, increasingly more well-developed ideas about what machines can do for us. Now, in the process of developing these new capabilities, it is also very important to be responsible, to think about what these machines will be used for in the future. We can't stop technology from advancing and changing the world, but we can pause to ask what the consequences are and make sure that we are held accountable for those consequences. And so, increasingly, we're talking about responsible disruption, meaning the idea that researchers and companies ought to be accountable and more responsible about the technologies they create.
Now, on the technical side, we're talking a lot about artificial intelligence, or intelligence demonstrated by machines, but it's important to distinguish between the kind of specialized intelligence that we can give our machines today and general intelligence.
I'd like to underscore that today's AI, what we can do with AI systems, is essentially specialized intelligence: the ability to solve a very fixed, very limited number of specific kinds of problems. This is in contrast with general intelligence, which humans possess, and which gives us the ability to undertake a wide variety of tasks in a broad range of settings. We have machines that can do some things better than people, and we have people who can do some things better than machines; working together, we can be more empowered. If you'll give me just 30 seconds, I will give you an example from medicine, where doctors and AI systems were given the task of classifying scans of lymph node cells and declaring them cancer or not cancer. The machine made a 7.5 percent error, compared to the humans' error of 3.5 percent. But when they worked together, the humans and machines achieved an 80 percent improvement, or a 0.5 percent error, which is extraordinary. This kind of example is applicable across many disciplines. So let us look at the tools that come out of the AI, machine learning, and robotics community as tools to help people, to empower people, to support people with cognitive and physical tasks.
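[Editor's note: the study Daniela cites is not described in implementation detail here, but one simple way to wire up that kind of human-machine pairing is confidence-based deferral, where the model handles the cases it is sure about and routes the rest to the expert. The sketch below is illustrative only; `model` and `ask_pathologist` are hypothetical stand-ins, not components of the actual system.]

```python
# Illustrative sketch of human-AI deferral (editor's addition, not the cited
# study's method). The model answers only when confident; otherwise the scan
# goes to the human expert.

def classify_with_deferral(scan, model, ask_pathologist, threshold=0.9):
    """Return (is_cancer, decided_by) for one scan."""
    p_cancer = model.predict_proba(scan)         # hypothetical: P(cancer) as a float in [0, 1]
    confidence = max(p_cancer, 1.0 - p_cancer)   # how sure the model is either way
    if confidence >= threshold:
        return p_cancer >= 0.5, "model"          # confident: accept the model's call
    return ask_pathologist(scan), "pathologist"  # uncertain: defer to the human
```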
Thank you, that's a terrific example. Daron, your ideas have been very influential on the task force's work, on everything from taxation policy to this notion of so-so technology. I wonder if you might explain your idea of what so-so automation is, and whether you have ideas about how we might find our way out of it, with, of course, the acknowledgment that you're on a panel here with four engineers and you're the lone economist.

OK, I'll try to represent my discipline, although I think the problems that are afflicting us are made by all of us, not just by engineers or economists; this mess is jointly ours. What I mean by so-so technology is technology that doesn't really bring transformative productivity improvements, and that is a real problem when we are talking about automation technologies. The hope has always been that even though we are automating tasks and taking jobs away from people, we will be creating, almost automatically, jobs in other parts of the same organizations or other parts of the economy. But that doesn't happen unless you have a very large productivity benefit. So-so technologies are those that are really not creating that productivity benefit. Think of self-checkout kiosks: you may be a fan of them or you may hate them, but we are not going to transform the US economy using more and more self-checkout kiosks. At best, the productivity improvement from them is small; at worst, we're actually adopting them for other reasons, such as tax policy. And the solution to these issues relates very much to your first question, David. It's not that we can transform so-so technologies into much better technologies by putting more effort into them; it's that we need a more balanced portfolio of technologies, more ways of restructuring the economy, so that we put humans first. And, whatever anybody says, I believe we're not doing that. We are very much beholden to an automation-alone mentality, and we are just going after exactly the same things that have taken us a little astray over the last three decades: algorithms for getting rid of humans, algorithms for doing things that humans could do, algorithms for things where we think troublesome humans could be put aside. If that's our growth model, it's going to run into problems in terms of productivity growth, but worse, it's going to create a dystopic society.
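[Editor's note: a stylized back-of-the-envelope comparison, with invented numbers rather than figures from Daron's remarks, may make the distinction concrete. In both cases the worker is displaced from the task; only when the cost saving is large is there much surplus left over to show up as lower prices, higher output, and new tasks elsewhere.]

```latex
% Stylized illustration, invented numbers. Let w be the hourly cost of a worker
% on a task and c the effective hourly cost once the task is automated.
\[
  \text{task-level productivity gain} \;\approx\; \frac{w - c}{w}
\]
\[
  \text{so-so automation: } \frac{20 - 19}{20} = 5\%
  \qquad\text{vs.}\qquad
  \text{transformative automation: } \frac{20 - 8}{20} = 60\%
\]
```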
So Julie, you're actually a leader in the things that both Daniela and Daron have just talked about: human-first technology, human-first AI, and collaborative robotics. A major part of the task force work was your work going to Europe and looking at the deployment of some of these technologies, in Germany in particular, at both robotics companies and manufacturers. I wonder if you could say a little bit about what you learned, which is embodied in one of the task force research briefs, and what the implications are for how we might innovate for the work of the future.

Yes. My research team, in collaboration with others on the task force, went and interviewed a number of manufacturers, suppliers, and firms, small to large, in Germany and throughout Europe, to look, seven or eight-plus years into this megatrend of Industry 4.0, at the technologies that were being deployed, what the enablers were, what the hurdles were, and what we can learn from that. I'll highlight two key takeaways from this study that for me are particularly important to thinking about how we shape technology, as an AI researcher and as a roboticist. One is that it's not about the particular technology, say the robot or collaborative robot you develop, on its own. We saw different perspectives and different approaches for deploying and integrating those systems into the industrial environment, what we called top-down versus bottom-up approaches or mindsets. In a top-down approach, the system is developed and programmed separately from the line and from the workers with the domain expertise for doing that job, and then deployed to do the work. In contrast, in other approaches the system was programmed and developed with the engineers working side by side with the domain experts, the folks who do the work manually today, and with different results. Those two approaches span both the small firms and the large firms, but the ones that were able to leverage or incorporate the domain expertise for doing the job had better results. The technology we have today doesn't easily support that: it usually requires special expertise to program these systems, it's very time and resource intensive, and that leads to other challenges. So, as the work evolves over time, one of the questions we would ask is where they see the introduction of these technologies going, thinking about moving toward a lights-out factory. One of the quotes we got back is especially illustrative. One of our interviewees said: a lights-out factory, why would we want that? A lights-out factory is a factory that's not innovating and improving, and that's because it's humans that drive the innovation process.
But the technologies we deploy today lock in the current state of knowledge, the way we do the task, and that makes it very hard to leverage that human innovation and that human domain expertise to improve. That improvement is what every factory relies on, a well-characterized productivity or learning curve through, say, the assembly process. We see a similar trend in healthcare. In my work developing intelligent decision support for the nurses and doctors who run a labor and delivery floor at a Boston hospital, when you deploy automated planning and scheduling techniques that don't incorporate the on-the-ground implicit knowledge and preferences, they simply don't get used, and people find workarounds to them. And so, in our lab, it requires reframing the AI problem to actively learn and update the models that are used to augment human decision making for these scheduling processes, or to offload or support in other ways. All of this speaks to how we need to redefine the AI problem as an enabler toward more widespread use of these technologies, to our benefit.
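[Editor's note: Julie does not spell out the learning mechanism here, so the following is only one plausible sketch of "actively learning and updating" from the people who use a scheduling assistant: whenever a human overrides the suggestion, nudge a preference model so future suggestions better reflect the implicit, on-the-ground knowledge. The class, its weights, and the `features` function are hypothetical, not the lab's actual system.]

```python
# Minimal sketch of learning scheduling preferences from human overrides
# (editor's addition, hypothetical). `features(option)` maps a candidate
# assignment to a numeric vector (workload balance, skill match, continuity, ...).
import numpy as np

class PreferenceLearner:
    def __init__(self, n_features, lr=0.1):
        self.w = np.zeros(n_features)   # learned weights over schedule features
        self.lr = lr

    def suggest(self, options, features):
        # Recommend the candidate assignment the current model scores highest.
        return max(options, key=lambda o: float(self.w @ features(o)))

    def record_override(self, chosen, rejected, features):
        # The human picked `chosen` over our suggestion `rejected`:
        # shift the weights so the chosen option scores higher next time.
        self.w += self.lr * (features(chosen) - features(rejected))
```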
Wonderful. That segues into a nice question for Jeannie. You have the privilege of working for the Ford Motor Company, which invented the revolution in mass production more than a hundred years ago. In fact, every day I drive by the Model T assembly plant at the end of our street here in Cambridge; most people don't realize there is one here, but it's a wonderful old building. And now you're in the enviable and very important position of helping Ford conceive the factory of tomorrow. I wonder if you could talk a little bit about how you see the relative roles of machinery, automation, and robotics alongside human workers, human intelligence, and human augmentation in that factory of tomorrow initiative.

Yeah, thank you. You're right, Ford Motor Company has a long history of innovation, and the factory of tomorrow initiative is really a continued step in that direction. The factory of tomorrow initiative is really trying to reimagine manufacturing to build smart vehicles for a smart world, and through our research we know that we have to keep innovating to be a manufacturer of the future. As part of the initiative, we've developed strategies for creating smart factories using technology. We're also creating a strategy for a digital factory, and digital engineering is another big aspect. Then there's analytics: once we have all this data, how can we use smart analytics so that we can use the data in an elegant way, if you will? We also have a group, process innovation, that works very closely on advanced robotics; in particular, they do a lot of experimentation, proofs of concept, and pilots with cobots. We're starting to do a lot of experimentation with using AI in our automation systems and in vision systems; we've been using AI in vision systems for several years. Additive manufacturing is the final area, where we're still maturing that capability, and we see a lot of growth there as well. I will say that our vision is to become the world's most trusted automotive company for all of our stakeholders, and that includes our customers, the distribution network, supplier partners, and, most importantly, our employees. People are an integral part of our factory of tomorrow initiative. Last year Ford completed a $35 million renovation of the Ford UAW technical training center, and that update included a revamped curriculum to train our skilled trades workers on advanced manufacturing technologies. We're also working in lockstep with our learning and development partners on the technologies we're developing in the advanced manufacturing area. And we've developed a first of its kind for Ford, a technology and people workshop: for new model programs where we're deploying technologies, we identify the skills, the knowledge, the training, and even the organizational changes that might be required for successful adoption of these new technologies.

These days, when I visit an automotive plant, I often see it as a giant collaborative robot itself, one that employs hundreds of robots and thousands of people and other kinds of infrastructure, all working together. Daron, what do you think is the best way to ensure that we're developing technologies that meet social needs? I know you have some ideas for policy steps that could nudge us in these directions.

Thank you, David. I think that is critical, and you're right, I have written about policy steps, such as tax policy, that can nudge us in that direction, in particular emphasizing that right now our tax code subsidizes firms to automate, because we tax capital much more lightly than we tax labor. But actually, we need to step back.
Nudges are one thing, but we need a more holistic approach. We need to agree as a society, and that means businesses and consumers, about the consequences of technology, in the same way that there is now broad agreement about the consequences of pumping carbon into the atmosphere. A lot of consumers are worried about that, a lot of businesses are worried about that, especially in Europe; the US is behind, but it is a general social norm that has developed on this topic, and government policy evolves around it. When you look at climate change or nuclear energy, there is a variety of voices on the matter: no company can adopt nuclear energy without checking a lot of safety items, and there are various voices you have to hear. But when it comes to the technologies that are going to shape the future of the workforce, whether people will have jobs, and what type of people are going to benefit from economic growth, none of that is the case. I think most AI researchers have almost no recognition of the social consequences of what they do. That's why thousands of people will work, with impunity, for companies that are preparing technologies that are just for snooping on individuals and suppressing human freedom. But it doesn't end with monitoring technologies. There are social consequences when you put all of your effort into automating work rather than helping humans or creating new tasks for humans. If you look around you, most of the people you know today are performing tasks that did not exist 80 years ago. Even professors: we exist as an occupation, but what professors did 80 years ago is very, very different from what we do today. All of these things emerged because we used technology to create new capabilities for a lot of workers. But that has largely ceased, or has become much less important, over the last few decades. We need to recognize that and develop a holistic approach to it. That starts at the bottom with society, it starts at the top with the government, and companies are of course critical. But most critical, perhaps, are the researchers, and I think the researchers need to wake up.

So, speaking of which, a last question, the same for all three engineers on the panel. Daniela mentioned collaborative AI for radiological evaluation, and we've talked about collaborative robotics. What else do you see coming down the pike that's exciting to you, that has potential positive implications for human collaboration, for workers and work, and for addressing some of the issues we've discussed?

Maybe I can jump in, David, and talk about transportation in general or, more broadly, about the notion of a machine as a kind of guardian for the person. This is a concept that MIT and the Toyota Research Institute started working on together about five years ago. The idea is that you're not going to have self-driving cars with Level 5 autonomy for a while, but what you can have is a parallel autonomy system that uses extensive sensors to look at the road and prevent the human from making a mistake when that mistake is about to happen. In the same spirit, we can have systems that monitor a surgical procedure and give the surgeon an early warning that a blood vessel might be about to rupture, and this could be the difference between life and death.
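[Editor's note: a minimal sketch of the guardian idea as described, not MIT's or TRI's actual system. The human stays in control, and the machine intervenes only when its estimate of imminent risk crosses a threshold. `read_sensors`, `estimate_risk`, `warn`, and `apply_safe_action` are hypothetical stand-ins.]

```python
# Guardian / parallel-autonomy control loop (illustrative editor's sketch).
def guardian_loop(read_sensors, estimate_risk, warn, apply_safe_action,
                  warn_at=0.5, intervene_at=0.9):
    while True:
        state = read_sensors()          # road scene, patient vitals, machine telemetry, ...
        risk = estimate_risk(state)     # predicted probability of imminent harm, in [0, 1]
        if risk >= intervene_at:
            apply_safe_action(state)    # e.g. brake, pause the tool, stop the line
        elif risk >= warn_at:
            warn(state)                 # early warning; the person decides what to do
        # below warn_at: do nothing, the human remains fully in charge
```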
We can imagine applying this notion of a guardian system to many technological domains, including manufacturing: we can give an early warning that a piece of equipment might be about to get damaged, and that could save a lot of money. There are many ways in which we can help and support, and some of them are, in fact, new kinds of work. Daron was talking about how people do different tasks than they did some time ago, and I would just like us to think a little bit about how many people are employed in social media roles, with a varied level of skill and talent. Those jobs didn't exist before social media existed, and social media didn't exist until about 15 years ago. So it's exciting that we see activity and we see innovation, but at the same time we have to be very careful about how we proceed.

Julie?

Yeah, I think we're at quite an exciting time, with the examples Daniela has mentioned and many others. Maybe previously, when we thought about how to decompose work between a human and an intelligent machine or robot, it was done by task: what tasks will the machine do, what tasks will the person do.
But if we think differently about what the relative strengths of humans and machines are, the human ability that we really want to leverage maximally is our ability to structure an unstructured problem. Once we have a structured problem, a computer or machine can kind of crush it. That's a different decomposition. But the means we have for structuring problems for machines, or for AI, are still so limited today: collecting lots of data and painstakingly labeling it, which is not something people are well suited to do and is not how a person communicates or translates knowledge to another person, or, in the case of an industrial robot, painstakingly programming, line by line, how the work is done, in a way that's quite rigid. These robots, this AI, will never be infallible. The key thing is to figure out how we can match up our capabilities, and open up transparency and explainability in these systems, so that the system is a guardian for us but, in a sense, we're also a guardian for the system, and the system can leverage and benefit from our very unique human ability to structure unstructured problems. That's valuable for physical work, like in factory settings, but also for supporting cognitive tasks, in healthcare and many other domains.

Yeah. As we think about guardians, which I think is a great idea, and the parallel autonomy Daniela mentioned is really valuable, the 737 MAX is in the news today, again, for being returned to flight, and you could interpret that problem as a guardian problem where the guardian and the pilots were fighting with each other, with awful results. There were other issues there too. Jeannie, what excites you coming down the pike, from the Ford point of view and from a manufacturing point of view, as far as new technology and collaboration?

Well, adding on to what Daniela said about predictive maintenance, this is something we're really excited about, and it's going to make us more efficient. We'll be notified when a machine seems to be trending out of control and may need some maintenance, and we can schedule that at the next available window, so we don't wait for the machine to go down. Predictive maintenance is the next level beyond the preventive maintenance we do today, and it's obviously much more efficient.
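[Editor's note: Jeannie does not describe the algorithms Ford uses, so here is only a minimal illustration of "trending out of control": smooth a sensor signal with an exponential moving average and flag maintenance for the next planned window once the smoothed value drifts past a limit. The data and limit are invented.]

```python
# Minimal predictive-maintenance trend check (editor's sketch, not Ford's system).
def monitor(readings, limit, alpha=0.3):
    """Yield True for each reading once the smoothed signal exceeds `limit`."""
    ewma = None
    for x in readings:
        ewma = x if ewma is None else alpha * x + (1 - alpha) * ewma
        yield ewma > limit   # True -> raise a work order for the next planned window

# Hypothetical usage: vibration amplitude slowly drifting upward.
vibration = [1.0, 1.1, 1.0, 1.2, 1.4, 1.7, 2.1, 2.6]
print(list(monitor(vibration, limit=1.5)))  # False early on; the last two readings flag maintenance
```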
It will also help us with ordering replacement parts: if we know that a part is going bad, rather than holding that part in our inventory and tying up free cash flow in inventory, we can free up that cash flow and order it on demand, as an example. In other areas, as I mentioned, we're quite advanced and mature in using AI in vision systems, particularly for quality defects; an example would be a paint chip or a scratch. It used to take us thousands of images to train a vision system to look for a defect, and now it takes very few, and the system can look for that defect anywhere on the vehicle, which has really made us more efficient. Second, as I mentioned, we're using AI and machine learning in our automation to see if we can reduce cycle time and improve our first-time-through rate in certain applications. We recently reviewed a use case for a transmission assembly part, and the data did show that it reduced the cycle time and slightly improved first-time-through as well. We're also developing natural language for manufacturing, using voice commands to communicate with machines; think of it as a kind of Siri for manufacturing, which is really exciting for us. And finally, we're looking at deploying AI to help us evaluate quality defects that make a noise, like two connectors going together: there's an acoustic or vibration signature that goes along with making that connection, and we're using AI to learn what's a good signature and what's a bad one. And then, obviously, there are collaborative robots, and the collaborative remote team tools we're using for training on some of our new model launches as well.

Wonderful. We're getting a lot of good questions from the audience, so clearly people are really engaged in this conversation. I'll start with one for Julie. You mentioned that it's important to get the domain experts involved early in the process, and the question is: who are those domain experts? Can you go into a little more detail about where that knowledge comes from?

Yeah, there are multiple sources of the knowledge required to develop and deploy these systems, but in particular the domain expertise of the person on the shop floor who is actually doing the job today. As I've learned through our own work in the lab, many of these jobs are learned through years of apprenticeship, in some cases.
Jobs that you might think would be relatively easy for a person, and easily translatable to a robot, can be more challenging than they look at first. For example, placing stock to prepare it for heating: the parts are all different sizes, but they need to be placed so that when they go into the oven the air flowing through heats them all uniformly. You'd say, wouldn't it be great for a robot to pick and place the stock into the bin and put it into the oven? But getting that right is actually a trained skill, one that humans are quite good at and that can be challenging to program in some settings. So, first, through learning from observation and learning from demonstration, but also by directly eliciting from someone doing the task on the floor the key factors that go into doing the job successfully. And the people doing the work today are also the right ones to help envision the different ways the task could be done with a new technology, that more open space of alternate approaches, which a robotics engineer or an AI PhD is not going to be well suited to explore on their own.

Great. A question for Daron: what do you think will help researchers become more aware of the social consequences of their work? Your top three strategies.

I was just going to give three, so it's good that you asked for three. The first is training. Much more ethical and social training is necessary. I don't mean abstract ethics courses; those have been tried in MBA programs and haven't always been extremely useful. I mean a much more concrete understanding of what the social implications of different technologies are. Second, government policy. Government is the trendsetter: laws are signals, and government priorities are signals. If the government itself gives up on the agenda of creating better technologies for humans, it's natural for researchers to do so too. But the third is the most important: autonomy and independence from the corporate world. According to some estimates, a McKinsey report says two out of three dollars that go into AI come from big tech in the US and a handful of Chinese big tech companies. Those companies are the ones that also fund most AI labs, and they have their own agenda, unsurprisingly, and their agenda often is not the one that we're talking about. Most of these companies are very lean; they are built on automating work and substituting algorithms for humans, and they don't really have an organic relationship to manufacturing or to new technologies that are going to create new tasks in different sectors of the economy. But if those companies set the tune in leading AI labs, and moreover most students are getting into AI and computer science to go and work at these companies, how can we expect the AI research that comes out of that process to do anything but replicate and parrot those companies' priorities? And if you look at leading US universities, it's the opposite right now, including MIT. So this is a very, very difficult lesson, but I think it is one that we'll have to come to grips with.
When you say the opposite, what do you mean?

Meaning that we are doing the opposite of that: we're not really establishing our autonomy, we're actually saying that good research means we have to be more integrated with Google, Facebook, Microsoft, and Amazon. Of course we have to work with many companies, but autonomy for research is critical, especially in this area.

Daniela, do you have thoughts about that, about the autonomy of research vis-à-vis funding sources in AI today?

Well, I think there is a fair bit of autonomy, and there are a number of programs that support independent, curiosity-driven research. The problem is that we don't have enough funding of that form. So, in some sense, where the government is lacking, the companies are stepping in. And working with companies, I find, can also be very enriching and empowering. I'll give you an example: I'm going to come back to our collaboration with the Toyota Research Institute, which started five years ago, and whose objective was to advance the science of intelligence, that is, AI and robotics research. There are different levels at which we can connect with different potential sponsors. When I think about how universities, industry research labs, and industry development labs connect, I have this mental model where an industry development lab works on products for today, an industry research lab works on products for tomorrow, but the role of the university is to imagine the day after tomorrow. The day after tomorrow involves foundational advancements, but also connections with how those advancements matter. In the absence of applications we can generate a lot of theory, but applications also allow us to root our ideas in things that the world cares about.

So, as a final question, there's a question from the audience about unique human capabilities, as we've discussed: social relationships, and even physical manipulation to some degree. What are the potentials for AI here? The questioner particularly asks about GANs, generative adversarial networks, a more experimental kind of AI that's been used in various kinds of artistic creation. How much do we worry about what we consider today's unique human capabilities being overtaken by the next generation of AI?

I can say something on that. I think it is a choice. There isn't an iron-clad rule about what humans can do and technologies cannot do; both are fluid. It depends on what we value and how we use technology, and much of it is hard to imagine. When we were going through previous examples of major automation waves, for example the mechanization of agriculture, contemporary, thoughtful commentators couldn't see what new tasks and new human capabilities would emerge. Those were things that changed as technology was deployed in various different ways. The factory floor was completely transformed in that process, for example; today most factories employ more non-production workers than production workers. That is a major change in technology, and it uses human capabilities in a completely new way.
Take entertainment: we can use technologies to make the human element in entertainment completely unique, or we can change technologies in such a way that the human element in the entertainment industry becomes less and less important. Again, it's a choice of how we train humans, how we train and develop the technologies, and what we as a society come to value and understand. That is why I think the most important lesson of the report is that technology does not have a definite path. There isn't a path that technology itself, or its leaders, wants that is the right one; it is something that society as a whole has to forge.

Julie, or Jeannie, on that topic?

Yes, I agree with Daron's comments there. Maybe there's a sense that we develop these technologies and they're going to overtake human capability, but they are still performing very narrowly defined tasks that we define, and we determine how they're embedded and how they're used. For example, deep learning: neural nets are very fancy function approximators, a way to model things, just as algebra is, just as calculus is.
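[Editor's note: a tiny, self-contained illustration of "neural nets as function approximators," added for concreteness; the target function, data, and hyperparameters are arbitrary choices, and this assumes numpy and scikit-learn are available.]

```python
# A neural network used purely as a function approximator: it learns to mimic
# sin(x) from sampled points, nothing more. (Editor's illustrative sketch.)
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(500, 1))   # sampled inputs
y = np.sin(X).ravel()                   # the "unknown" function we want to model

net = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=5000, random_state=0)
net.fit(X, y)                           # fit the approximation

print(net.predict(np.array([[0.0], [1.0], [2.0]])))  # roughly sin(0), sin(1), sin(2)
```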
But it's how we then take those tools and use them for a purpose in real systems. There are a lot of questions: what is our end goal, how do we define success for introducing these systems, are we aiming to enhance some human capability, or is the goal to replace some aspect of what a human is doing today? None of these systems ever operates truly independently: there are inputs to them and outputs from them, and there are human decisions based on those outputs. So taking that more holistic view and asking how we can deploy these technologies to achieve our larger goals is really the critical question we need to be grappling with.

If I can add to the conversation, I would say that an important part of what we do at MIT and in the scientific community is to advance the science and engineering of intelligence, and in doing so we accomplish many things: we understand ourselves, we get a better handle on life, and we develop machine capabilities that have an increasingly broader range. In the process of developing these capabilities, we get to the point where the kinds of ideas we develop with long time horizons can be put into action today. And I, for one, am very excited about the possibilities of using the latest advancements in AI, machine learning, and robotics to bring more powerful tools to people, to make certain jobs easier, to make life easier. Look, we wouldn't be here together if not for technology; technology is enabling us to come together despite the fact that the world is in the middle of a pandemic. Technology has also accelerated the rate at which we have achieved an effective vaccine. Technology is enabling scientific discovery in so many fields. Technology is helping us better engineer, monitor, and treat disease. Technology is keeping us safer on roadways. Technology is enabling us to communicate instantaneously, no matter what language we speak. There are so many ways in which machines can take on some of the routine tasks we do, so that we can focus on broader, bigger, more creative ideas.

Wonderful. I think we're out of time. I want to thank all of our panelists for a very interesting conversation. All of our panelists have also made major contributions to the task force work and the report itself: Ford Motor Company has been very supportive and open about some of the processes they're going through, Daniela and Julie are members of the task force, and Daron, while not formally a member, has been very supportive, wrote one of our briefs, and has had a major intellectual impact
on our work as well. So thanks to all of you, and thank you for your important voices on this topic, which have found their way into the report and into additional research and innovation. With that, we'll wrap it up and go on to the next panel.

Thank you, David. Thank you, everybody.