The idea of the "Productivity J-curve" is that there's often a lull, or even a decline, in productivity before it takes off, especially when very important general-purpose technologies arrive. These are big technologies like the steam engine, electricity, information technology, and AI. Why is there this lull? Well, at first, it's not enough to simply put in place a powerful new technology.
You also need to make complementary innovations, like new business processes, new skills in the workforce, and other changes in the organization, sometimes physical complements like new types of equipment. And when you invent and install these new complements, that takes time. It's not always clear which ones are necessary. For instance, in the case of electricity, it took about 20 to 30 years before productivity started increasing, after American factories were electrified.
During that period, they invented new ways of organizing production, and eventually productivity grew by 100 or 200%. But for the first 20 or 30 years, there was very little, if any, gain in productivity. Most of these big technologies go through a J-curve, and I think we're just at the early stages of the J-curve for artificial intelligence. There have been some amazing breakthroughs in things like machine vision and robotics, but we haven't really rolled out the complementary innovations to make them productive in manufacturing, services, finance, healthcare, etc. Those are just now being invented.
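The J-curve mechanism described above can be sketched numerically. This is a toy model with made-up numbers (the investment amounts, payoff rates, and timing are my illustrative assumptions, not figures from the text): while firms pour resources into unmeasured intangible complements, measured productivity dips below the baseline, and it rises only once those complements start paying off.

```python
# Toy sketch of the Productivity J-curve. All numbers are hypothetical,
# chosen only to produce the dip-then-rise shape the text describes.

def measured_productivity(year, baseline=100.0):
    """Measured output in a given year after adopting the new technology.
    Intangible investment (process redesign, training) is real spending but
    shows up nowhere in measured output, so it depresses the statistics."""
    intangible_investment = 10.0 if year < 5 else 0.0  # unmeasured spending, years 0-4
    payoff = 4.0 * max(0, year - 5)                    # complements pay off after year 5
    return baseline - intangible_investment + payoff

curve = [measured_productivity(y) for y in range(12)]
print(curve)  # dips below the baseline of 100 at first, then climbs past it
```

The point of the sketch is only the shape: early years look like a productivity decline even though value-creating investment is happening, which is exactly why the payoff from electrification took decades to appear in the statistics.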
Based on my research, I think the next ten years are going to be an era of much more rapidly growing productivity. Between 2020 and 2030, I expect productivity to grow significantly faster than it did in the past ten years, because we're now beginning to harvest the benefits of these new technologies and get to the upward-sloping part of the J-curve. One of the disappointing things to me has been the lack of really big, successful technology companies in Europe and the lack of entrepreneurship more broadly. If you look at the trillion-dollar companies on the planet: most are in the United States, some are in China, but none are in Europe. Now why is that? It's not because Europe is lacking in smart people or educated people. I've met some of the most intelligent, best researchers in the world on my trips to Europe.
In fact, I was born in Europe. But, like many of my European friends, I moved to the United States because I found that this was a better place to apply my talents. And one of the risks for Europeans is that they aren't creating an environment where people can take their brain power, take their research, take their innovation, and turn it into new companies, new products, new services, and instead they find that the United States is a more attractive place for that kind of innovation. So it's not a lack of skills, it's not a lack of education, it's not a lack of technology. The barrier is in translating those technologies into products and services.
Part of that has to do with the regulatory environment, and part of it has to do with culture. For instance, in some countries, in Germany as I understand it, it's illegal to sell books below list price. So, what's the incentive to find a better way of distributing books? In other countries, in France for instance, it's very difficult to fire people. So, if you do a start-up and it turns out it doesn't work, you can't start over and do something new.
There are other countries in Europe, I have to say, that are very forward-looking and, I think in many ways, do things better than the United States. In Denmark, my home country, they have this concept called 'flexicurity', where they have a very generous safety net with health, retirement, and unemployment benefits, but they also make it very easy for entrepreneurs to hire and fire people. So, if you do a start-up and it doesn't work out, you can let the people go and try something new, and this creates a more dynamic, more flexible economy than you would have if everybody had to stay with the company that originally hired them. I think all these countries are experimenting with different approaches. The ones that have the more dynamic and flexible economies are the ones that are going to be best able to take advantage of these technologies. No country, certainly not the United States, has ever thrived or succeeded by locking in place all the old jobs, all the old industries, and all the old technologies.
The real security comes from being able to develop new technologies, new industries, new jobs and occupations, and have those replace the old ones as they become obsolete. I think this is a lesson that all governments need to learn and understand: you need to embrace change. Instead of trying to protect the past from the future, we should try to encourage the future and nurture it as it replaces the past. In the digital economy, not only are traditional networks important, but increasingly people are recognizing the power of what are called two-sided networks.
In a traditional network, every time a new user joins, it becomes more valuable. Think of a phone network, a fax network, or an instant messaging network. The more users, the more valuable it is to me. In a two-sided network, it's a little more subtle. There are two different products, and the value comes not from the number of people using the same product, but from the number of people using the other product. For instance: I'm a user of Uber. So, I use the app to call cars,
but the value of Uber doesn't go up as other people use the same app. It actually goes up for me as more drivers use a separate app, the one that allows drivers to find their customers. And the value to the drivers goes up the more users are using the app that calls them. So, these are two separate apps. You can go to the App Store and get one app for drivers and one app for users, and the value of each of them depends on how many people are using the opposite app.
There are more and more situations with these two-sided networks, not just Uber, but also Adobe: they have one app for people creating content and another app for people using the content. We have a lot of different phone networks. We have the App Store, and we have Google and their advertising. In each case, there are multiple different products providing value to different groups, and their value gets greater when one of the complementary products is also increasing its user base.
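The asymmetry described above can be made concrete with a tiny sketch. The linear functional form and the coefficients here are my own illustrative assumptions, not anything from the text: the point is only that each side's per-user value scales with the size of the *other* side, whereas in a one-sided network (phone, fax) it scales with the size of your own side.

```python
# Minimal sketch of two-sided network value, using Uber-style labels.
# The linear form and coefficients a, b are illustrative assumptions.

def two_sided_value(riders, drivers, a=1.0, b=1.0):
    """Per-user value on each side, driven by the opposite side's size."""
    value_per_rider = a * drivers    # more drivers -> shorter waits for riders
    value_per_driver = b * riders    # more riders -> more fares for drivers
    return value_per_rider, value_per_driver

# Doubling the rider side does nothing for riders, but a lot for drivers:
print(two_sided_value(riders=1_000, drivers=50))  # (50.0, 1000.0)
print(two_sided_value(riders=2_000, drivers=50))  # (50.0, 2000.0)
```

This is why platform operators obsess over balancing the two sides: growth on one side only shows up as value on the other.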
As two-sided networks become more important, you start seeing the rise of what I'd call platforms, which are ecosystems that allow lots and lots of applications or products to thrive and flourish. They each benefit from the number of people who are using some of the other applications in that same ecosystem. Network effects are increasingly important in the digital economy, and some of the most successful companies, like Facebook, depend heavily on network effects. But I also think it's important to learn from Joseph Schumpeter, the great Austrian economist who described the process of creative destruction. I'm seeing that happen more and more in the digital economy, where one company will rapidly grow to a big network and a big lead, but then another company will displace it. I was teaching a class at MIT in 1999, and I remember very clearly when a prominent CEO came to my class and the students asked him: what area should we be focusing on for the next big thing? And he said, well, I'm not sure what the next big thing is, but I can tell you one place to stay away from, and that's internet search.
Yahoo has it completely locked up, and no one will ever displace them. Of course, within a couple of years, Google displaced Yahoo, and it just emphasizes that even when you think a company has a big lead in networks, someone else can replace it. VisiCalc was the leader in spreadsheets when I was a boy. Then Lotus 1-2-3 surpassed it, and then Microsoft Excel. And Microsoft Excel had a dominant position on Windows that many antitrust authorities were worried about. And now people don't worry so much about Windows, because the mobile platform is much more important.
So, in each case, what you have is a company that can grow to a dominant network position but also be supplanted relatively quickly. In fact, I would say that the cycle in the tech industry is faster than it is in other industries like automobiles, where you also have changeover, but on the level of decades or even centuries in some categories. So, for those organizations and entrepreneurs who think that they are too late, I would say absolutely not. You are not too late. There are many new waves ahead of us. Artificial intelligence, in particular, is going to open up so many different opportunities that it would be a mistake to think that just because somebody has an old technology locked up, or a dominant position with networks, you couldn't displace them, or, more likely, do something new on a new platform that becomes even more important.
One of the things I've noticed among my technology friends is that they sometimes use three terms interchangeably, even though, when I think about them, they're very, very different. The terms are human-like AI, human-level AI, and artificial general intelligence. Sometimes people use them all to describe this future where AI is very powerful. But I think they're very distinct concepts.
Human-like AI is AI that's similar to what humans can do. And the reality is that artificial general intelligence could be very different from human-level AI. Artificial general intelligence should be more general than humans. Humans have a very specific kind of intelligence.
So, a machine might be able to recognize X-rays, or it might be able to understand protein folding in much more detail. And it can obviously multiply very large numbers, all in ways that are not at all human-like. In this sense, I think artificial general intelligence should be thought of as very different from human-like AI.
So, when researchers focus on making machines that are human-like or human-level, I think that's a distraction. They should make machines that have broader artificial general intelligence, machines that can do many things that none of us humans can do. Part of that is to be more ambitious, to create more value and a more powerful artificial intelligence. But part of it is also that, as an economist, I know that making machines that are very similar to humans, machines that are human-like, can actually be destructive to value creation. The reason is that when machines imitate humans, they become better substitutes for humans.
But when machines are different from humans, then they are complements to humans. If a machine is a substitute for humans, it reduces the value of humans, because now you can use a machine where a human used to be working. That drives down wages, and it tends to concentrate wealth: instead of having many people producing things, you tend to have just a small number of owners of the machines creating most of the wealth. On the other hand, if machines are complements to humans, then they increase the value of human labor, and that tends to create a more widely shared prosperity.
So, for instance, one company that I've been advising is a company called Cresta, and they're using machines to help with online chats where customers need advice. Cresta takes the approach that instead of having the machine answer the customers' questions, it has the machine give advice to a human customer service agent, and by advising the agent, they create much more value. The agent is still in the loop and learns how to answer questions better. In some cases, the human can do a better job; in some cases, the machine knows the answer; and the two of them working together create far more value than could be created by the human alone or by the machine alone.
We've done analyses of the productivity gains, and what we found was that the Cresta approach of having the human and machine work together produced significant improvements in productivity and customer satisfaction. This is a great example of how complementing humans, instead of trying to replace them, is more likely to create value. When people say that they want to make human-like AI, they are often trying to imitate what we are currently doing, but I don't think that's nearly interesting or ambitious enough.
Imagine that, 2,000 years ago, someone had looked at what the ancient Greeks were doing and said: we're going to make a machine that does everything the humans are doing today. Well, you might think, "Well, that's great!" Suppose that we replaced all human labor with machines in ancient Greece. Instead of having humans make clay pots, you had machines making clay pots. And instead of humans making chariots, you had machines making chariots.
And instead of humans doing bloodletting to cure disease, you had machines doing the bloodletting. Well, in each case, you could eliminate the labor and you could have enormous productivity gains. But if you think about it, having large numbers of clay pots and chariots and bloodletting really wouldn't be that high a standard of living. The reason that you and I have so much higher a standard of living today is not because we simply have cheaper versions of what the Greeks had in those days. It's because we have many new things, like television and penicillin and jet travel.
And these new inventions are where most of the value comes from, not from having less expensive, automated versions of the old things. By the same token, if we just look at what's being done today and ask how we can make machines do those same things, we're missing out on most of the potential value creation from inventing new goods and services. So, I would urge my technology friends: you're not being ambitious enough if you aim for human-level AI and replicate what we're doing today. Instead, you should be looking to augment
what we do and do new things that we never could have done before. It would be great if policymakers tried to set up the incentives so that we have more use of AI to augment humans rather than to replace them. I don't want policymakers to make decisions about every new technology and whether to reward it or discourage it.
Instead, what I think policymakers should do is provide some broad incentives, and then technologists and entrepreneurs can figure out how to implement their new technologies in ways that are aligned with those incentives. Specifically, right now in the United States, and in most countries, we tend to tax labor much more heavily than we tax capital. What that does is steer entrepreneurs toward using less labor and more capital. Now, that might have been a good idea 50 or 100 years ago, when we had a lot of scarcity of labor and needed to be more careful about conserving it. But today, I think it would be a better policy to encourage widely shared prosperity by having entrepreneurs invent new business models that use labor as well as capital. So, I would level the playing field and charge labor and capital the same tax rate, or maybe even go further and reward entrepreneurs who have a business model that uses a lot of labor instead of a lot of capital.
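The distortion described above is simple arithmetic, and a hedged sketch makes it concrete. The tax rates here are purely illustrative (I am not quoting actual labor or capital tax rates): with identical pre-tax inputs, the more heavily taxed one costs the firm more, which is the steering effect the passage describes.

```python
# Toy sketch of the labor-vs-capital tax wedge. Rates are hypothetical,
# chosen only to illustrate the incentive, not actual tax figures.

def after_tax_cost(pretax_cost, tax_rate):
    """Total cost to the firm of an input once its tax is included."""
    return pretax_cost * (1 + tax_rate)

labor_tax, capital_tax = 0.30, 0.10   # illustrative rates only

labor_cost = after_tax_cost(100, labor_tax)      # $100 of labor
capital_cost = after_tax_cost(100, capital_tax)  # $100 of equipment

# Identical pre-tax inputs, but the machine is cheaper after tax. Equalizing
# the two rates ("leveling the playing field") removes this distortion.
print(labor_cost, capital_cost)
```

Under equal rates, `after_tax_cost` returns the same number for both inputs, so the entrepreneur's choice between labor and capital is driven by productivity rather than by the tax code.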
The entrepreneurs will figure out which technologies to implement to line up with those goals. All the policymakers have to do is have some broad policies. In some ways, I compare this to a carbon tax or other kinds of what economists call 'Pigovian taxes', that encourage certain activities and discourage other activities. One of the ideas that I've heard kicked around by people like Bill Gates and others is the idea of a robot tax.
And frankly, I'm a little confused by what that exactly means. There are two ways I can think of it. One is, as I was saying earlier, to have higher taxes on capital and lower taxes on labor, so the two are more even. That's a very good idea: it encourages more labor-intensive ways of production and less capital-intensive ways of production. However, I wouldn't single out just robots or high technology.
That might discourage technological innovation, and I don't think we should slow down technological innovation. Instead, I think we should speed up our adaptation to technology. So, I wouldn't single out robots as a thing to tax.
Instead, what I would do, more broadly, is create a level playing field where capital and labor are taxed more evenly than they are today. You could call that a robot tax, but I would simply call it even taxation of capital and labor. One of the most important questions is where we should be investing in our skills. There is more human capital, over $2 trillion in the United States alone, than any other kind of asset.
But we're not investing well in those new skills. One of the areas where we're going to need more new skills is tech skills: understanding machine learning, and so on. But that's only a small part of the workforce, and I wouldn't want to overemphasize it. As technology becomes more important, we're also going to need more human skills.
In particular, human interactions: having the social skills to coach people, to encourage people, to lead people, to sell to and persuade people. So, there's a real need for those kinds of skills. But most importantly, I think the kinds of skills that will be of the greatest value are creativity and innovation, because these technologies can be used in so many different ways. And the real question will be: how do we think of the new possibilities? Pablo Picasso once said that he wasn't very impressed with computers, because all they do is give you answers.
Well, giving answers is not a bad thing, but I think what he really meant was that even more important than giving answers is asking the right questions. So, going forward, teaching our children, and teaching adults, how to ask questions more intelligently is going to be the real superpower, the real skill that's needed in the 21st century. There needs to be a fundamental change in the way we do education to keep up with the changing technologies. It needs to be life-long learning; we can't just expect to learn a few things in the first twelve years of education and then rely on those for the next 30, 40, or 50 years.
Instead, we need life-long learning, and the nature of education needs to fundamentally change. The current education system is in many ways adapted for an industrial economy of mass production. It teaches children to sit quietly, to follow instructions, to sit in rows of desks and do what they're told.
That might have been good for Henry Ford's assembly line and factories, but that's not what we need in a digital economy, where people need to be more creative and fluid, and where new types of products, processes, goods, and services are constantly being invented. We need people who can work in teams, people who can be creative, people who can work on their own, people who can work on projects. That's a different way of teaching people than the rote memorization that was so common in schools up until now.
So, the Metaverse is the idea that we spend more and more of our time in a digital world instead of one made out of atoms. And it's not hard to imagine a near future where the typical person spends the majority of their time looking at screens. In fact, already today (I checked the data), the average American spends about seven hours and eleven minutes per day looking at screens. Those could be computer screens at work, phone screens, or television screens. So, we're already spending close to half of our waking time looking at screens. Going forward, instead of just looking
at two-dimensional screens, like the ones I just mentioned, we could imagine having a virtual reality headset on and seeing a three-dimensional digital world. When we're in this digital world, our work might be more productive, and we have new options for entertainment; those of us who have tried these headsets have found them exhilarating: you can feel like you're skiing or flying or swimming, or experiencing an entirely different world from what you see in the physical world. The economics of the Metaverse are also very different, because the things you're interacting with are, by and large, created out of bits instead of atoms. And you can create experiences with bits, and replicate them, at close to zero marginal cost, unlike things made out of atoms, which can be very expensive.
So, I think over the next ten years or so, we're going to see a big transition in the way we experience the world and in the economics of that world. I can't say any of us fully understand it, but it's not hard to imagine that this will become the dominant way that we interact with our friends, our colleagues at work, and even our family, because more and more of our life is already in a digital world, and the Metaverse is just a natural evolution of that. One of the biggest catalysts for the move toward a digital workplace has been the pandemic. Of course, the pandemic has been tragic for many people. Millions of people have died, and many more have become sick.
But it's also been something that has accelerated the adoption of digital technologies. According to my research, about one in six Americans worked remotely in early 2020, but six months later, over 50% of the American workforce was working remotely because of the pandemic. So, in those 20 weeks or so, I think we compressed about 20 years' worth of digital transformation. And I don't think we'll go all the way back.
As the pandemic recedes, more and more people are going back into the regular workplace, but many of us have learned new ways to work. We are teaching remotely, we're interacting with colleagues remotely, we're working digitally in meetings, and we're going to keep some of those habits and keep using tools and technologies like Zoom and Slack. So, we have a new way of interacting that is much more digital and allows for more rapid innovation in these digital processes. In some cases, that's more productive.
In some cases, it allows for the end of geography. In some places, it's led to more creativity. I don't think it's a universal benefit; there are also some downsides. Some people are feeling more isolated or lonely, and we need to find a way to navigate so that we get the benefits from these new technologies without as many of the downsides. The companies that are most successful over the next five years are going to be the ones that understand how to harness the benefits of this increasingly digital world while overcoming some of the drawbacks. The digital economy is creating some incredibly powerful new tools, and with that come some enormous risks.
As Paul Romer has noted, there's a risk of increased monopoly. I believe there's also a tremendous security risk, and there are risks of privacy violations and increased bias. I'm particularly concerned about some of the effects on income inequality and on wages in different groups. All of those are big problems that we have to grapple with. We can't ignore them. We have to take them on directly, as individuals, as managers, as voters, and as policymakers. We certainly don't have to accept monopolies in order to gain productivity.
We are going to have to rethink and reinvent our antitrust policy. The kinds of strategies that were effective with the steel industry and the oil industry aren't necessarily the ones that work for digital networks and social networks. So, we're going to have to reinvent our antitrust policy, but we have to be careful that we don't destroy the networks that are creating the value by breaking them up. Instead, I think there's a set of policies, for instance involving interoperability and data sharing, that can get the benefits of scale and the benefits of networks while also encouraging more competition.
Just as we have reinvented the technologies, and those have led to the reinvention of business processes and organizations, we also have to reinvent our regulatory structure. The same tools and techniques that worked in the past aren't going to work in any of those spheres without a lot more creativity. Our success in harnessing the enormous benefits of these new technologies, and in having widely shared prosperity, greater productivity, and greater health and wellbeing, will depend on our ability to reinvent our society alongside reinventing the technologies. My research right now at Stanford, at the Digital Economy Lab, is focused on two main areas: one is better measurement, and the other is the future of work.
In the area of better measurement, we're trying to go beyond traditional GDP and productivity statistics, which measure how much people spend on things, toward a new tool that we call GDP-B. In the GDP-B framework, the B stands for benefits, and what that means is that we're measuring how much value is created by new goods and services, not how much we spend on them. In the digital world, more and more things are available at zero price, like Wikipedia, but they can create enormous value. And as the world becomes more digital, we need a way of measuring the value, not just the cost, of things. Ultimately, this framework will give us more insight into how we can enhance the well-being of humans and where we should be focusing our investments and our resources.
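The core accounting idea behind GDP-B can be sketched in a few lines. This is my stylized rendering with entirely made-up survey numbers, not the actual GDP-B methodology: value a zero-price good by what users say they would need to be paid to give it up (a willingness-to-accept measure), rather than by spending, which is zero.

```python
# Stylized sketch of valuing a free digital good by willingness to accept
# (WTA) rather than by spending. Survey numbers are hypothetical.

from statistics import median

def gdpb_benefit(wta_responses, num_users):
    """Aggregate monthly benefit: median WTA per user-month times user base.
    wta_responses: surveyed dollar amounts users would demand to forgo
    the good for a month."""
    return median(wta_responses) * num_users

# Hypothetical survey: dollars per month to give up a free encyclopedia.
survey = [0, 5, 10, 15, 20, 40, 100]
print(gdpb_benefit(survey, num_users=1_000_000))  # 15000000: $15M/month of
# benefit from a good that contributes $0 of measured GDP
```

The median (rather than the mean) is used here so that a few extreme responses don't dominate the estimate; either choice is defensible, and the point is only that spending-based statistics would record zero for this good.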
So GDP-B is a big project of mine at the lab. The other big area is the future of work: understanding how artificial intelligence is changing the demand for different skills and the different kinds of activities that people are doing. And I have a company called 'Workhelix' that is implementing some of these tools in a way that allows companies to make better decisions about their human capital and their workforce. So, they not only have
the skills they need today, but can also, as hockey players say, skate to where the puck is going to be: understand which skills are going to be important in three or five or seven years, invest in those skills in the workforce, hire the people who have them, and do the training for them, so that workers will complement the technologies that have already been invented but are just now being implemented.
2023-02-09