MITRE Grand Challenges Power Hour: The Future of the Internet and 6G
- Good afternoon. Welcome all to MITRE's second instance of our Grand Challenge Power Hour. For the next two hours, we will be speaking on the topic of 6G and the future of the internet. And for those that are new to the event, this is a series we're holding monthly here at MITRE looking at major challenges in the science and technology landscape and sort of exploring the intersections between technology, policy, business and really looking at how we can come up with constructive solutions for some of these big problems. So I am honored to be joined for our opening remarks here. My name is Charles Clancy, I'm the Senior Vice President and General Manager of MITRE Labs here at the MITRE Corporation.
And for our opening segment, Vint Cerf and I will be discussing these topics with the help of our moderator, Rachel Azafrani. And the discussion here is gonna be a range of different topics; we've got about 45 minutes of dialogue and then we're gonna transition to a panel session moderated by Nadia Schadlow, sort of diving deeper on this range of topics. So with that, let's go ahead and get started. - [Rachel] Wonderful. Thanks so much, Charles.
Hello, everyone. Good afternoon and thank you for joining us. We're hoping that this will be an exciting discussion and wanna do something a little bit different.
And so instead of holding Q&A at the very end, I really would like to invite the audience to pose questions in the chat throughout the discussion so I can work those into this discussion about the future of the internet. We're going to be exploring different scenarios in a few thematic areas, and for our first theme we're gonna start with connectivity. And I'd like to start with a little bit of a premise here. The nature and availability of connectivity has evolved significantly over the last few decades, although it still has plenty of ground to cover.
Even with around 59% of the world's population using the internet at the end of 2020, there's really still markedly distinct access and network coverage globally between urban and rural areas, and disparities in the gender gap in internet use. And so I'd like to pose a first question about connectivity. What technologies will be key to enabling connectivity over the next 10 years, and what are some of the ramifications? - And are you looking for an answer from Charles or from me? - [Rachel] Go ahead. Why don't we start with you, Vint? - I'm happy to jump in. I'm Vint Cerf, I'm Vice President and Chief Internet Evangelist at Google since 2005 and one of the original co-inventors of the internet.
So the connectivity picture is wide open now. We're seeing rapid undersea cable development, much more undersea cable than I ever anticipated, linking the continents together. We're also seeing a rapid evolution of mobile access to the network, which really took off in terms of utility around 2007 with the arrival of the iPhone and the ability to reach the internet from things you have in your purse or your pocket. Since that time, of course, the mobile technology has evolved from 2G to 3G, to 4G, to 5G and whatever the heck 6G is.
And the two issues there, one of them is that you require towers and things like that in order to make use of these mobile technologies. So concurrent with that, we're also seeing an increasing influx of wifi, notably in retail establishments, coffee shops and things like that, McDonald's and so on, which give people more or less continuous access. However, that's only true in urban and suburban areas; in the rural parts of the world the internet is much less available, and that problem is being addressed with low earth orbiting satellites. Since I would say about 10 years ago, we started to see some medium orbit satellites like the O3b system, which is at about 8,000 kilometers and was actually pretty good; it's capable of delivering 400 megabits a second from the 8,000 kilometer orbit with about a 50 millisecond delay. But the lower earth orbit satellites like Starlink, for example, and the several others, Kuiper and so on, and OneWeb, are all in the 700 to 1,100 kilometer range.
So you have much lower latency and higher signal to noise ratio and potentially significant bandwidth. The information that I have for the Starlink program coming out of SpaceX is that a reliable 300 megabits per second channel seems to be feasible. If Elon manages to get all 24,000 satellites up, in theory it will be impossible to avoid internet access, because some of these things will even be in polar orbit, so even if you're at the North or the South Pole you can't claim that you can't do your homework. The interesting question of course will be economic. How much will it cost? And will it be sustainable? I think the SpaceX guys have a very interesting dynamic going for them: when they get the 24,000 satellites up, the earliest ones will be falling out of the sky and they'll be launching more rockets to put more satellites up. And so it's a self-sustaining business, it's quite a fascinating business model.
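The latency advantage of low orbits that Vint describes follows directly from geometry. A minimal sketch of the physics, assuming radio propagation at the vacuum speed of light and a satellite directly overhead (real paths are longer, and processing adds delay):

```python
# Minimum physical round-trip latency for a ground-satellite-ground hop.
# This is a lower bound: slant paths, queuing, and processing add delay.

C = 299_792_458  # speed of light in vacuum, m/s

def min_round_trip_ms(orbit_altitude_km: float) -> float:
    """Round trip is up to the satellite and back down: 2 * altitude."""
    return 2 * orbit_altitude_km * 1000 / C * 1000

for name, alt_km in [("MEO, ~8,000 km (O3b-class)", 8000),
                     ("LEO, ~1,100 km (Starlink-class)", 1100),
                     ("LEO, ~700 km (low end)", 700)]:
    print(f"{name}: {min_round_trip_ms(alt_km):.1f} ms minimum")
```

The 8,000 km MEO case works out to roughly 53 ms, consistent with the "about a 50 millisecond delay" figure above, while the 700 to 1,100 km LEO orbits bound the round trip under 10 ms.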
So I'm very excited about those technologies. I think as we move into the 5G space, something Charles, you and I might wanna talk about. The interfaces to 5G and 6G are quite different from the interfaces that are available in 4G and they have some implications for the way in which the service is presented to users or to implementers of applications. So let me stop there because I'm sure Charles has much to say on this topic and then we can perhaps discuss implications. - Yeah, no. And as much as we're not quite sure what 6G looks like, it's kind of already time to start thinking about it, right? If you look at the pipeline of technology, particularly in the cellular space, it takes about a decade to do the R&D and then another decade to really figure out the standards and the use cases and the systems engineering and actually build the systems. Right?
And then at that point you're ready to start deploying these networks. And so essentially if you look at every decade as a new generation of mobile technology, 2020 is really the rollout of 5G. We're not gonna see 6G until 2030 but it's gonna be based on research that's happening today at our universities and standards that we need to start thinking about and requirements we need to start thinking about. And it's really interesting if you extrapolate. So usually when you look at the next generation of mobile technology, you start with what are the sort of the key performance indicators which are at the simplest level, the data rates that you can sustain and the latencies that you see through the network, how long it takes data to traverse the network.
And so 5G is taking us up to gigabit per second speeds. 6G probably will be five to 10 gigabit per second data rates. Accomplishing that requires us to bend some laws of physics around spectral efficiency, around instantaneous bandwidth of signals. Again, we'll have to see how to bend the laws of physics to get there. The next is in latency, right? So you see in 4G we had latencies in the 50 to 100 milliseconds, which were perfectly fine for voice communication.
In 5G we're trying to get down into the one to 10 millisecond range, which is what's necessary for sort of industrial automation, industrial control kinds of applications. But again, if you wanna improve that by another order of magnitude in 6G, we're looking at, I don't know, 200 microsecond sort of latencies, which means that, even just looking at the speed of light, the speed of light becomes a barrier to the latency in your cellular network. And you need data centers and compute that are literally within 20 kilometers of the cell tower, or there's just no way for the data to get there and back fast enough, even if you have no other delays in the network.
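The 20 kilometer figure can be checked with a quick back-of-envelope calculation. This sketch assumes the signal travels over fiber at roughly two-thirds the vacuum speed of light, an assumption on my part rather than anything stated in the discussion:

```python
# How far away can the compute be if the entire round-trip latency budget
# is spent on propagation? Assumes fiber propagation at ~2/3 of c and
# ignores all queuing, switching, and processing delays.

C_VACUUM = 299_792_458          # speed of light in vacuum, m/s
C_FIBER = C_VACUUM * 2 / 3      # approximate propagation speed in fiber, m/s

def max_one_way_distance_km(round_trip_latency_s: float,
                            propagation_speed: float = C_FIBER) -> float:
    one_way_time = round_trip_latency_s / 2
    return propagation_speed * one_way_time / 1000

# A 200 microsecond round trip leaves ~20 km between tower and data center.
print(f"{max_one_way_distance_km(200e-6):.0f} km")   # -> 20 km
```

Any real network spends much of its budget on queuing and processing, so the practical radius is tighter still, which is the point being made about data centers near the tower.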
So if you think about this, again, it leads you on an interesting trajectory of the really hard research problems that need to be solved as we really think about that next epoch of technology. - So, Charles, I wonder if I can jump in now and react to a couple of things. The first one is that any kind of bandwidth of the sort you're talking about requires you to be up in much higher frequencies. And the side effect of that typically is that you need to have towers that are closer together, which implies they need to be bound together somehow, either by point-to-point lasers or fiber. And so there is a significant potential cost of moving into those more dense configurations, and of course it gets worse as you move out into the rural parts of the country trying to pursue that kind of connectivity.
That could turn out to be especially expensive, to say nothing of the cost of access to the frequencies in the first place. A huge amount of money, over a hundred billion dollars, has been spent (coughs) just in the last auction. And it raises questions about, is there any money left to actually pay for the rest of the infrastructure? The implication of running at higher frequencies is that you may be able to use MIMO, for example, more effectively, because the physical size of the transceivers is smaller and smaller at those frequencies, so that might be helpful. I know that we have Milo Medin who's gonna be on the panel, he has a lot to say about some of these things.
So I'm looking forward to hearing from him as well. - Yeah. I think if you look at 5G, there's a hundred megahertz channel bandwidth in low and mid band, and then up at millimeter wave we're talking about 400 megahertz channel bandwidth, is what the specification implies.
I mean if we want to start hitting those data rates, we need probably a 50% increase in spectral efficiency, which is gonna be MIMO, that's the dimension we can squeeze more efficiency out of. But then yeah, you need one gigahertz wide channel bandwidth in order to be able to do this. So that's either carrier aggregation, which we have today in 4G, but in millimeter wave, or it's something new. And again, it's gonna drive the design of transceiver technologies that need to have linearity across these massive bandwidths. - Well, so Ted Rappaport's group up at NYU is running tests and experiments in the 60 to 125 gigahertz range to see what's possible there, and that certainly gives you plenty of percentage bandwidth in order to achieve those data rates. Rachel, I'm sorry we seem to have cut you off, so why don't you go ahead.
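The rate-versus-bandwidth trade-off behind the one-gigahertz figure is just the identity that data rate is spectral efficiency times channel bandwidth. A minimal sketch, with illustrative efficiency numbers of my own choosing rather than anything from a 3GPP specification:

```python
# rate (bit/s) ~= spectral efficiency (bit/s/Hz) * channel bandwidth (Hz)

def required_bandwidth_hz(target_rate_bps: float,
                          spectral_efficiency_bps_per_hz: float) -> float:
    """Channel bandwidth needed to sustain a target data rate."""
    return target_rate_bps / spectral_efficiency_bps_per_hz

# If MIMO and modulation advances deliver around 10 bit/s/Hz, a 10 Gbps
# 6G target still needs on the order of a gigahertz of channel bandwidth.
bw = required_bandwidth_hz(10e9, 10)
print(f"{bw / 1e9:.1f} GHz")   # -> 1.0 GHz
```

This is why the discussion turns to millimeter wave and carrier aggregation: gigahertz-wide channels are only plausible at frequencies where that much contiguous spectrum exists.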
- No, no, no, I thought there was a good question in the chat that actually touched on some of these challenges and the implications for position, navigation and timing. And I was wondering if we might work that into some of these challenges with tower clusters, tower clusters in space, what might be required, and some challenges these pose to PNT. - I don't think you need a tower cluster in space exactly; we've already experimented with stratospheric 4G with the Loon project, which has been shut down recently, but at least the feasibility of doing that in the stratosphere was demonstrated. I think I'm not good enough on the navigation and timing sector to be very responsive to that question. Charles, are you more knowledgeable than I am about that? - Well, I mean, right now GPS is our standard for position, navigation and timing, at least here in the U.S. You've got a whole range of different constellations up there, but the design specification for them is really based on precision: the satellites have to know with high precision when and where they are, and so that's being driven by chip-scale atomic clocks, and you've got detailed ephemeris data about the satellite locations.
All of that adds noise, right, and so you get down to sort of these 10 nanosecond level timing accuracies and, with a lot of extra effort, down to like one to 10 meter localization resolutions. Even if you invented a better chip-scale atomic clock, you still have noise in the ionosphere that causes basically the inability to be more precise than that. So I don't know that there's anything we can do with GPS that's gonna give us orders of magnitude improvements in location and timing without much better real-time knowledge of, say, how our ionosphere is working.
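The link between the 10 nanosecond timing figure and the meter-scale localization figure is that GPS measures position by timing signals that travel at the speed of light, so ranging error is roughly clock error times c. A quick check:

```python
# GPS ranging error is approximately (timing error) * (speed of light).

C = 299_792_458  # speed of light in vacuum, m/s

def ranging_error_m(clock_error_s: float) -> float:
    """Range error induced by a given receiver/satellite clock error."""
    return clock_error_s * C

# 10 nanoseconds of timing error is about 3 meters of range error,
# consistent with the 1-10 meter localization resolutions cited above.
print(f"{ranging_error_m(10e-9):.1f} m")   # -> 3.0 m
```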
But I think as we look at dense deployment of, say, 6G towers on the ground, they themselves can be a source of localization, because they know with more precision when and where they are and can be used for more fine-grained localization. But again, that only works as long as they're not relying exclusively on GPS for that gold-standard knowledge. - So there is an interesting issue here with regard to the need for timing, because the 4G systems and the 5G systems schedule access to the channel by telling the mobiles when they are permitted to transmit, which is quite different from, say, wifi. So accuracy of time is very important. You could supply the time from the tower presumably, and you could also supply the tower with time over optical channels. On the fiber-based systems we have atomic clocks built into the network, as opposed to relying on the GPS.
You still have the problem. So, anyway, I mean, I think there may be ways of delivering time to mobile units in addition to GPS, which is a very weak signal; as everybody knows it's easy to jam, and that is a cause for some concern. - We've got one more question here about implications before we maybe move on to our geopolitical theme next, but this is something I believe you touched on, Vint, a little earlier in the discussion, about power. With current costs being driven up by higher electricity costs, how much will sustainability and green energy play a role in this connectivity transformation, and are there fundamental limits on power that need to be bent to continue to increase frequency and bandwidth at the rate we are? - Well, that's an interesting question. I think several things are happening all at the same time. The first one is the chipsets are getting lower and lower power; lower-current chipsets are becoming available.
You still have the transmission power to worry about, but with the towers closer and closer together, transmit power is actually less of an issue. And in fact, what you wanna do is to control how much power you use in order to avoid interference. So that, plus improvements in battery technology, which are coming partly because of electric vehicles, could turn out to make this a relatively solvable problem.
- Fantastic. So with that, going to our next sort of thematic topic here about geopolitics and the future of internet governance. To preface this part, division over the future of internet protocols could really transform the structure of the internet as we know it today. There are some experts that predict a bifurcation of the internet, sometimes termed the splinternet, that might result in separate China-led and U.S.-led Internets emerging in the context of a broader set of U.S.-China relations developments that is now being termed the "digital iron curtain." Russia and Iran have also been moving forward with their projects to build domestic Internets. So the question here is, what future internet structures could realistically come from competition between IPv6 and China's New IP proposal, or the success of domestic Internets more broadly? I'll go ahead and maybe ask Charles to comment on that one first. - Sure. I mean, there's a lot there, right?
I think the notion of splinternets or bifurcation or fragmentation is sort of happening for different reasons at different levels of the stack. You have, I think, lots of bifurcation happening at, let's say, layer seven, or the services offered by the internet, where you see region-specific social media platforms, for example, and it's really about catering to the culture and needs of local populations. I think we've seen China really amp that up to the next level, particularly with some of their social media platforms that, again, much like the U.S. tech giant platforms, are seeking to be as all-inclusive and cover as many facets of your needs as possible. So you've got the WeChats and the Alibaba infrastructure of China stacked up against the Googles and the Facebooks and the Twitters of the West. But you see that same pattern happening now at other layers, right? There's now sort of a tug of war happening over the basic internetworking layer of the internet, not just the services and application layer, with efforts from China to promote an alternative standard for the internet protocol, which is the basic protocol that wires the whole internet together.
And to a certain extent, things like the Great Firewall of China have already created some discontinuity in the internet, really more at the application layer with content filtering. But if China is successful with New IP, that represents a split at the internetworking layer, and I'm sure as Vint will remind us, what was critical to really cohering the internet in the first place was having a common network protocol that was agnostic to the underlying connectivity technologies and agnostic to the higher level application technologies. And then of course at the lower layers, again, we still see market forces primarily driving to common standards, right? It's not as though there's a separate 5G we're seeing in China or in the EU; in fact, we've seen consolidation. If you recall, back in the 2G and 3G days we had competing wireless standards, say between CDMA and GSM. When we got to 4G, basically we saw that all combined together with Long Term Evolution, or LTE, and the same thing with 5G. But again, there's concern that we may be on a future path to divergence at the underlying connectivity level if the fights in the standards bodies continue to really take a geopolitical view instead of a best-technology view.
- So let's see, first of all, New IP turned out to be a dud, it made no sense technically. It was briefed to the Internet Research Task Force and the Internet Engineering Task Force. The Internet Research Task Force offered to host some of the discussions that the Chinese were asking for, but not all of them, because they thought some of them were outside the remit of the IRTF, and the Chinese delegation decided that was rejection and went to the ITU. So as near as I can tell, the New IP thing is not technically sound; it is simply an attempt to move the discussion about the internet over into a different venue, which I think isn't helpful. So my view is we should stick with the IRTF for further evolution of IP, including IPv6, which is still only at about 30% penetration on average. I would like to see IPv6-only capable software running so that we can be reassured that when we are all forced onto IPv6, it'll all work.
The U.S. government has taken the view that IPv6 is important and should be in place by 2025. We've been pushing it since 1996; it has taken a long time, but it's the only avenue forward in terms of having a larger address space. I think the Chinese and the Russians and the others would shoot themselves in the foot by trying to isolate themselves from the rest of the internet, for the simple reason that if you want to do e-commerce you're gonna have to be able to compatibly exchange information with everyone else in the world. And so even if you decide to build penetrable firewalls, that is, porous firewalls, which is what the Chinese have done, I don't think that they will be well served by being totally incompatible. My guess is that there will continue to be a compatible internet everywhere, except that there will be attempts to filter at various layers in the architecture, as Charles has implied, whether it's attempts to interfere with the domain name system or interfere with the border gateway protocols.
And those various systems, which are core to the internet's operation, are being reinforced now with cryptography: strong authentication, digital signatures and other things, in order to prevent them from being subverted from their primary function, which is to keep the internet uniformly interconnected. (multiple speakers) - Oh, sorry. I was gonna ask Vint a question here. You talk about e-commerce as the glue; I've often heard this argument that it's the thing that's gonna keep the internet from fracturing entirely.
How do you see, really, I guess, the pullback from globalization based on anxiety over supply chains exacerbated by COVID-19? Do you see a shift to more local supply chains reducing e-commerce as the biggest driver there, or do you still think it's gonna be so dominant? - Good question. First of all, I think supply chain resilience is absolutely an issue. We learned a big lesson from the pandemic, and I think there will be a great deal of attention paid to building more resilient supply chains, including domestic and local ones. However, any one nation's economy is, relative to the world economy, small, although certainly the U.S. and Chinese economies are significant portions of the global economy. But nonetheless, if you're interested in maintaining healthy economic growth you need to speak to global markets. And the only way to do that, increasingly, is to be online and be available and accessible and discoverable on the internet.
I don't imagine, for example, that there would be any appetite at all for building distinct and separate networks, one for the Chinese that everybody has to connect to, and one for the U.S., and one for the Europeans. That would take us back to 1910, when the telephone network was not a single network, it was many different networks. If you wanted to communicate with someone using the telephone you might have a dozen telephones on your desk, and you had to know which service your correspondent subscribed to so you knew which telephone to pick up for the call. And it was the roll-up by Theodore Vail that ended that fragmentation. And I believe that, in spite of the interference that we're likely to see and already are seeing, especially in China but also Russia and Iran and elsewhere, attempts to interfere with data flows that would otherwise be unimpeded.
I think that we will not end up with a completely bifurcated network, because it's not in anyone's interest to do that. - Thank you. I'd also like to note that a number of other people have joined since we first started the discussion, and the audience is welcome to jump in and ask questions in the chat here so we can ask them live. There is one that we have here about the implications of Chinese ownership of a relatively large percentage of undersea fiber-optic cables, and how that might impact internet divergence.
- My first reaction is that the cost of building undersea cable has been dropping over time. The consequence of that is that we're seeing more and more cables being built. So if you don't wanna be on one that might put you at risk because of Chinese ownership or relationships, you may be able to find another cable to use. Google, for example, has gone to building its own cables now in order to interconnect its data centers together, not even having to partner with others in order to afford the cost of the undersea cable.
So my sense here is that that's not likely to be a major issue, but Charles, do you have a different perspective on that? - I think it depends on how we formulate policy around this, right? The State Department had come out with a series of recommendations around the Clean Network initiative, right? This was toward the end of the last administration, and it included things like explicitly barring cable landings from Chinese-owned undersea cables on the West Coast. And I'm not quite sure how to enforce that, given the number of cables that already exist. There were also rules in there that appeared to bar peering relationships between Chinese ISPs and U.S. ISPs. But again, not sure how to enforce that; certainly we haven't seen that come to pass. I think Lumen is still peering a massive amount of backbone traffic with China Telecom. So I don't know, I think it becomes an issue if we have blunt policy that is crafted around it, but not otherwise.
- An unimplementable policy is generally a bad idea. - That is true. (laughs) So moving on to our next and final theme here, talking about the economic and programmatic implications of the future of the internet.
So to preface this, enabling dramatic increases in connectivity and meeting increased expectations for speed and bandwidth and latency, at the right price, necessitates harnessing economies of scale. Hyperscaler technology companies are making inroads into the telecom sector with their own internet services and their own network infrastructure projects, which are challenging traditional ISPs and cable and satellite providers. So what could the progression of hyperscalers toward becoming tier one ISPs mean for the topology of the internet? And I'll pose this one to Vint first. - So several things are happening, and it's an interesting phenomenon. There is an economy of scale that's associated with cloud computing, and so you're seeing Google and Amazon and Microsoft's Azure building substantial networks that link the data centers together, and that's done for efficiency reasons. Also it's done because the amount of data that moves back and forth between the data centers exceeds the amount of traffic that is in the public internet.
So at least speaking for Google, we've been forced to build these large scale networks in order to link the data centers together, partly to replicate data to make sure that it doesn't get lost even if a whole data center goes away. But at the same time, in order to reduce latency for service to users, a lot of the data centers are connected by yet another network into the public network, into the ISPs that serve the customers directly. And so that interconnection, which I think is a phenomenon showing up in all of the companies that offer cloud services, has the effect essentially of interconnecting the data center networks almost directly with the local access networks, so that the services can be delivered very quickly to consumers.
You'll see similar kinds of behaviors or mechanisms showing up with content distribution systems like Akamai, that are placing their facilities close to or even in the central offices and the like of the telcos and the cablecos, again to reduce the latency and reduce the need for transmitting data all the way across the network. That activity is completely understandable from the performance and business model point of view. It doesn't remove, however, the importance of the last mile connectivity to consumers; it just means that there's increasing amounts of connectivity directly from the large scale cloud providers to the ISPs that are directly connected to customers. - Yeah. So I think the important thing to understand here is that the structure, the topology of the internet is fundamentally evolving, right? If you look at the internet 10, 15 years ago, you had sort of different tiers of internet service providers.
You had the tier one ISPs: they were the transit providers, they made up the backbone of the internet. Then you had the tier two providers, which were what I would call the wholesale internet: the regional interstitial networks that connect the transit providers to local internet service providers and enterprises and universities and hospitals and so forth that are larger participants in the internet. And then you had tier three, which was the retail internet service providers; these early on were the dial-up phone banks where people would call in, and then ultimately became local internet service providers.
But what we've seen is a whole bunch of consolidation. I remember, probably in the mid 1990s, shopping around in my local computer newspaper in Indianapolis, where I was at the time, looking at all the different internet service providers for dial-up service, and they had different prices and you'd try to find the one that was best. All that's gone, right? We don't have the heterogeneity of different last mile internet service providers at this point, right? If you're a consumer and you're getting internet for your home, it's Verizon FiOS or it's Comcast cable, at least here in the Mid-Atlantic region. Yeah, Vint.
- I was just gonna observe, to give you some hard data, there were something like eight to 9,000 dial-up internet service providers prior to the time that dedicated services arose. And now you're quite right: if you're a consumer and the question is, what choices do I have for connectivity to the internet from my residence or my business, the answer is zero, one, or two, maybe three if we include 4G and 5G as an additional access method, but it's either fiber or DSL, and often only one or two competitors in any particular area. - Right. And again, I think you see the same consolidation, well, the consolidation takes a little bit of a different shape, but essentially what you see is this: the hyperscalers, who are now trying to deliver low latency, high bandwidth content globally, have built out this massive infrastructure to deliver on that need to their customers and have supplanted transit, right? It's not as though I need a transit internet service provider to get to Google, right? Whoever's providing my last mile internet is plugged in at some internet exchange point and has mainline access straight into Google.
And so everything's kind of one hop away at this point, and we call this notion sort of the flattening of the internet. And it creates an environment where you have the last mile providers and you have the hyperscalers, and just the full mesh of interconnectivity between those is probably 80% of all internet traffic at this point, which raises some really interesting questions from an internet governance perspective, because a lot of our internet governance separates the content providers, which are all kind of managed by the Federal Trade Commission, and then you've got the telcos that are managed through the Federal Communications Commission. But the hyperscalers are kind of all of it in one, and that's kind of where we're headed with the internet.
And I think it poses big questions for how we think about internet governance in the future. - Well, certainly if you were to look at the BGP routing, for example, you'd probably find exactly as you say that there's a smaller number of hops to a service, a cloud service, now than there would have been in the past, because you don't need as much transit. - So I guess, I don't know whether this is a good thing or a bad thing; certainly it's a good thing for connectivity and low latency, high bandwidth communications. I would wonder, are we creating governance issues? Are there sort of lack of diversity issues? Right. So what happens if there's some massive fault in one of the hyperscalers' global networks? What effect does that have on internet resilience overall? - Not much, because the bulk of those interconnects are for cloud access primarily.
With the addition of the LEO satellite systems there'll be additional options for getting connectivity, so we'll have four or five different ways of getting from a consumer to any particular cloud service provider: you'll have either satellite hops, or you'll have access from cable or from telco services, and you'll have 4G and 5G. So there will be quite a variety of ways for the customer to get access to any particular cloud service provider. If one of the cloud service provider networks goes away, yes, it's likely that that means the customer can't get to that cloud, but they can get to the other clouds.
- Here's an internet governance question. What do you see as the role of the ITU going forward with this kind of transition to hyperscalers, especially given that the UN Security Council with China sitting on it might be providing some kind of oversight or guidance for international telecoms? - I'm not sure that, I mean, the ITU has three main pieces, as everybody knows: there's ITU-R for radio, there's ITU-D for development and there's ITU-T for standards. And historically the ITU standards have tended to be in the relatively lower layers of the internet architecture. The assignment of radio capacity is very much at the forefront of ITU-R and still extremely important, and I don't see any change in that in the near term. So I'm not persuaded that the Chinese involvement in the ITU is necessarily alarming, if that's the intent of the question. It's still the case that the bulk of the internet's protocol work is done either in the IETF or in the World Wide Web Consortium for web-based applications.
And for a lot of the wireless applications you see IEEE, especially on Wi-Fi and related access, and then you start to see the 4G and 5G stuff, which, if I'm remembering correctly, Charles, is still 3GPP more than it is ITU? - Yes. Although the ITU sets the high-level requirements. So the 5G standards were based on the IMT-2020 set of requirements that were ratified by the ITU.
But I think one of the concerns is, here's the doomsday scenario, right? We're just now starting to talk about IMT-2030, which is gonna be the set of requirements for 6G, and are there essentially ways that China could leverage its voting bloc of Belt and Road countries to put things into the requirements for IMT-2030 that would have downstream effects on the cellular standards developed within 3GPP and the internetworking protocols that support that through the IETF? - Well, so, yeah, that's a very good question. And one counter tactic anyway is to say, well, let's not be so totally dependent on 5G and 6G; let's make sure that the Wi-Fi capability continues to evolve and other local wireless capabilities evolve. Don't forget there is satellite capability, which could also be used to offset that. So I think it's important to make sure that there are alternatives to those particular access methods; the internet needs to work over all of them. We don't wanna end up with an IP layer which is only gonna work over 6G, that would be a serious mistake. - Right. Right.
And that's my concern; essentially, this is the genesis of New IP, right? So China in particular put forward a set of requirements for 6G and a particular set of use cases that they wanted to be able to implement. They postulated that, okay, the actual radio technology isn't really important, it's the software-defined core, the virtualization in the core, that is really taking us beyond the radio networks into the telecom infrastructure itself. Right. And then in order to support those use cases they came up with these ideas of mandate-driven architectures, to move away from service-oriented architectures.
And in order to implement the mandate-driven architectures, now we need to restructure the IP header to accommodate that. Oh, and if we're gonna restructure the IP header, let's add some other stuff in there that the rest of the world probably won't like, and now you've got New IP. So it's a whole progression that started at one point but ended up somewhere very different. - Well, so this is a fair thing to bring up, right.
This is the hazard of "while you're at it," perhaps the most dangerous phrase in the English language. One of my big concerns about 5G and probably 6G as well is this whole slicing idea, which essentially runs backwards in time to the narrowcasting attempt to control and manage, in some terribly precise way, each little slice of access to the capacity of the system, whereas the whole idea behind packet switching was to take really fast aggregated bandwidth and switch the packets like crazy. And before somebody tells you that a holographic system requires X, Y, and Z, I have to remind people that while we were experimenting with voice and video in the early 1980s with very modest bandwidth, we were intending to include that capability. And you'll notice that as time has gone on and capacities have increased, packet switching has been able to let us do what we are doing right this moment.
- Well, we're gonna discover that 6G's actually ATM and we're all gonna be very confused. (Vint laughs loudly) - Oh dear. I hope not. (laughs) - One question here about some of the impacts of data-centric networks.
This person listed NDN as an example, versus the present location-centric, IP-based network of today. - Let me try to interpret that and make sure that I'm getting this right. One possibility might be the idea that you were doing routing based on content. And I have never fully understood how to scale that. I mean, the idea is that you announce, with some identifiers, the information you're interested in, and this propagates through the network, and now anybody that has generated something which has the content of interest to you reaches you through the network.
That's the way I interpret content-defined networking. What I don't understand is the size of the routing table, which could be infinite, because the number of things that you might be interested in isn't trivially compressed into a small number of identifiers. It's like the Dewey decimal system in the library. So I haven't been a big fan of that particular tactic, even though I suspect there might be some smaller-scale examples where it might work, like send me the next map square because I've just entered into a region and you've identified that very, very clearly, but that's a pretty narrow example. Charles, do you have some sense of where to go with that? - Well, so I think it has a sort of flag-day challenge, right? There's no way to switch our whole internet and economy over to a fundamentally different way of approaching it.
And I also think that to a certain extent we've addressed a lot of the NDN aspirations through things like content delivery networks, which already are doing the caching and localization, building on top of IP as a separate sort of thing rather than building it fundamentally into the architecture. So I mean, I think every study I've been involved in on the future of the internet over the last couple of years has always tossed it out as something we need to look at, but then we look at it a little bit and decide that it's not really worth pursuing. - However, just to make a point, if you look technically at what the forwarding tables look like, there's nothing that would stop you from building a parallel forwarding table that was indexed by content, whatever the heck that means.
There's nothing technically impossible about that, just like it's not impossible to run a different domain name system than the one we currently have in parallel. The problem, of course, is that as soon as you start doing that, your point is made: suddenly people who can't do everything end up not being able to communicate because they didn't do everything. You did X and I did Y and we go like this. You remember X.25? One of the problems with X.25 was that it had so many alternatives that sometimes when you got done negotiating you discovered that you had nothing in common.
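Vint's scaling worry and Charles's parallel-forwarding-table remark can be made concrete with a toy sketch. This is illustrative only, not any real NDN implementation; the names, routes, and "face" labels below are invented. A content-indexed forwarding table looks structurally like an IP forwarding table with longest-prefix match, except the match runs over hierarchical name components, and the key space is the space of all content names, which is what makes its size hard to bound:

```python
class NameFIB:
    """Toy Forwarding Information Base keyed by content-name prefixes
    (NDN-style), rather than by IP address prefixes."""

    def __init__(self):
        # name prefix (tuple of path components) -> outgoing interface ("face")
        self.routes = {}

    def add_route(self, name_prefix, next_hop):
        self.routes[tuple(name_prefix.strip("/").split("/"))] = next_hop

    def lookup(self, name):
        # Longest-prefix match over name components, e.g. /maps/region42/tile7.
        parts = tuple(name.strip("/").split("/"))
        for i in range(len(parts), 0, -1):
            hop = self.routes.get(parts[:i])
            if hop is not None:
                return hop
        # No route: the scaling question is how large `routes` must grow
        # to cover everything anyone might ask for.
        return None


fib = NameFIB()
fib.add_route("/maps/region42", "faceA")
fib.add_route("/maps", "faceB")
print(fib.lookup("/maps/region42/tile7"))  # -> faceA (most specific prefix wins)
print(fib.lookup("/maps/region9/tile1"))   # -> faceB (falls back to /maps)
```

The CDN point made in the discussion amounts to leaving the IP forwarding table alone and doing this kind of name lookup at the application layer instead.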
- And with that, that is just about all the time we have for the discussion. I'll leave it to our speakers to offer some final comments and then turn it over to you. - So, Vint, I have a question. I heard this sort of folk tale about you trying to get residential internet service, and you wanted two IP addresses rather than one. And the poor technician you were talking to at Comcast or Verizon or wherever was unwilling to grant you a second IP address, not knowing of your founding role in the existence of IP addresses.
And I wanna know, is this story true or not? - Not quite, but it's close. This was Verizon FiOS service, and what I actually wanted was not a dynamic IP address but a fixed IP address, and they said the only way to do that was to order business service. So I ended up ordering business service. The funnier situation was when my service stopped working, and eventually it turned out that somebody had failed to renew my service, and then they decided to renew it internally. But that meant that they hadn't gotten my voice recording saying that I had asked for service, and so they had a bit set saying that no TCP connections would be permitted, even though I was paying for service. I was saying, "How come I can ping but I can't open a TCP connection?"
And eventually it turned out that the commercial people had not gotten my voice recording to authorize service, and this caused the engineers a certain amount of pain 'cause I was hollering at them all the time. Anyway, it all worked out fine and I'm now a happy customer running at somewhere on the order of 300 to 400 megabits a second up and down. - With a static IP address, no less. All right. Well, thank you very much for your remarks, Vint, and thank you, Rachel, for your expert moderation here and keeping us on time and on topic. So with that, I think we're gonna switch over to the second half of our program.
So it is my pleasure to introduce Nadia Schadlow, who is our panel moderator. Nadia is a Visiting Fellow here at the MITRE Corporation and most recently served on the National Security Council as Deputy National Security Advisor, where she was a principal author of the last National Security Strategy. So she has been dealing with the rise of China in technology for quite some time now and is really an expert in that area, and she is gracious enough to serve as our moderator, and I will hand it over to her.
- Thanks so much, Charles. And it was really interesting to listen to you and Vint and Rachel, and I'm looking forward to hearing what our panelists have to say about the topics that were just raised. So what I'll do now is briefly introduce the four experts that we have with us today, and then I'll turn to them and essentially follow the same format that was just followed, so that they have a chance to comment on the themes that were just discussed: the themes of connectivity, of geopolitics and the bifurcation of the internet, and finally some of the hyperscaler issues. So we're lucky enough today to have Diane Rinaldo with us. She is at Beacon Global Strategies and is one of the country's leading authorities on 5G, telecommunications, and the internet.
We also have Milo Medin, who is currently the Vice President of Access Services at Google and most recently has spent a lot of time writing about which parts of the electromagnetic spectrum should be used for 5G. So he's monitored and navigated a lot of these debates. We have Michelle Connolly, who's a Professor of the Practice in the Economics Department at Duke; previously she served as Chief Economist at the FCC. And last but not least, we have Sam Visner, who's a cybersecurity and national security expert; he's currently serving at MITRE as a Tech Fellow and previously held the position of Director of the National Cybersecurity FFRDC there. So we have a really highly qualified panel and I'm interested in hearing from them.
So first we'll start a little bit, I think, with the issue of connectivity. I think Rachel did a really good job in capturing some of the fundamental issues. She pointed out the importance of the issue. There were queries and questions about the haves and have-nots.
Vint commented on the rapid evolution of connectivity with undersea cables and more. We heard Charles talk about the technical challenges also of connectivity and the implications of 5G and 6G. So I'm sure all of you as experts were probably taking notes and are eager to comment, so why don't we go ahead and start. Does anyone wanna start first? Maybe just for ease of running this, so if one of my panelists raised their finger that might be the easiest way and I can call on you. So Diane, thanks.
- Thanks, Nadia. No, I'm happy to take this question first and talk a little bit about connectivity. The entire world, including the United States, has been struggling with connectivity, but I think with the advent of 5G, and as we look to 6G, it's going to alleviate some of these issues. New business cases are being developed, and smart agriculture is something that you often hear about as one of the use cases for 5G. If we're able to create a use case in rural America to build out 5G, there's going to be extra bandwidth, and that will allow for in-home broadband, so we can move towards a wireless broadband connection and provide services to areas that haven't had great connectivity over the years.
I know in the UK, Cisco is currently building out in more rural areas using Open RAN solutions, which is the concept of standardizing the interfaces between sub-components and allowing more competition at the sub-component level. And that's giving them a business case; it's costing less to build out in these areas because of the technological innovations that are occurring, as well as the different use cases in these more rural areas. So it's a great time to be in this space and to talk about these issues and to see how technology is further advancing the lives of so many people in this world.
- Thanks, Diane. Would any of the other panelists like to comment on this? Sam. - I'd have to agree with Diane, as I've watched over the last couple of years. I think we've seen how this connectivity is revolutionary.
It's not just a few more lines; as someone once said, quantity has a quality all its own. And in this case I think this will change economies, it will change learning, it will create more transnational communities of interest, for better and perhaps in some cases for worse. In addition to working at MITRE, I'm an adjunct at Georgetown, and I was able to teach what I think was a pretty good course to 40 undergraduates last semester, and we did it all virtually. That would not have been possible before, and interestingly enough, the course was about cybersecurity and global connectivity. So if one thinks about the level of collaboration, which I think perhaps was useful even in the development of these new vaccines from which a number of us are benefiting, I think this is terrifically exciting, but the excitement also is tied, is anchored, to some of the incredibly interesting challenges. How do we impose governance and instill good behavior in essentially a global connectivity environment in which people can self-identify and self-affiliate? And we're seeing some of the negative consequences of that, both overseas and more recently in the United States.
So as a friend of mine likes to say, "This is a wicked problem." It means it's a lot of fun, but it doesn't mean it's easy and it doesn't mean it's all good. - Thanks, Sam. Milo. - Yeah, I would agree that connectivity has moved out of the realm of something that is nice to have to something that is now essential.
If you think about the services that we use and how we communicate with each other, how we engage with each other, it imposes a huge tax if not everybody is connected. And so the price for being disconnected is at an all-time high, and that's motivating a whole set of investments, from mobile networks to satellite to new systems architectures, et cetera. I would say, though, I'm an engineer, so I have certain limitations, right? One of the challenges, if we look at the network, is that there's only one real network, and that's the wired network. The wireless network is just an extension, a little bit of distance off of fiber-optic networks.
And so if you are not a leader in fiber deployment, you're not gonna be a leader in wireless deployment. And that's a challenge especially for the United States, because going to 5G and getting real benefit from it requires the deployment of much higher-frequency networks than we are used to. One of the benefits of 5G is that we can aggregate up to 500 megahertz of spectrum at a time instead of a hundred for LTE, but that only helps you if you've got that much spectrum, and to get that much spectrum you can't do it at the same frequencies that we used for our normal LTE networks. If you put 5G on the same spectrum that 4G is on, you can get maybe a 20% performance improvement.
That's what we see today with the T-Mobiles and the Verizons deploying in sort of traditional cellular spectrum. But if you really want a 10X improvement, something that gives you a game-changing speed difference and enables new applications, that's gonna require you to deploy up in the three or four gigahertz range or even the millimeter-wave range. And that will require probably at least three times the number of base stations that we have deployed in the United States today, even just to get 70% of Americans covered. And so that's a challenge with our infrastructure, especially when you compare it with our competitors, where in Japan you have utility fiber, you can get pretty much dark fiber anywhere for about 50 bucks a month, and in China, where I think they have north of 400 million fiber-to-the-home customers all connected, that base of fiber enables you to extend wireless in all sorts of new ways. - Thanks, Milo.
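Milo's point about why channel width matters more than the air-interface generation can be illustrated with back-of-envelope Shannon arithmetic. This is a sketch with illustrative numbers, not a measurement of any real deployment: at a fixed signal-to-noise ratio, the capacity bound scales linearly with bandwidth, which is why aggregating 500 MHz instead of 100 MHz is the big lever.

```python
import math

def shannon_capacity_gbps(bandwidth_mhz, snr_db):
    """Shannon bound C = B * log2(1 + SNR); real links achieve a fraction of this."""
    snr_linear = 10 ** (snr_db / 10)
    bits_per_sec = bandwidth_mhz * 1e6 * math.log2(1 + snr_linear)
    return bits_per_sec / 1e9

# Same link quality (20 dB SNR, an arbitrary illustrative value), different widths:
lte_like = shannon_capacity_gbps(100, 20)  # ~100 MHz aggregated, LTE-era
nr_like = shannon_capacity_gbps(500, 20)   # ~500 MHz aggregated, 5G NR at higher bands
print(nr_like / lte_like)  # capacity bound grows 5x with 5x the bandwidth
```

Getting from that 5x toward a 10x experience is then about finding contiguous spectrum, which, as the discussion notes, pushes deployments into the 3-4 GHz and millimeter-wave ranges.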
We have a couple of questions now from the audience, so I'll turn to them. The first one is: isn't there a dichotomy between telecom consolidation into hyperscalers and the goals of a diverse supply chain? So how do we reconcile, essentially, these two tensions? Does anyone wanna take that question? - Sure. I can try and address a little bit of that. It's interesting. You're right, there's a divergence: the cellular industry is dominated by Chinese and European suppliers.
There are no American suppliers of cellular technology to speak of. On the other hand, the hyperscalers are all dominated by American suppliers, vertically integrated, and American companies' leadership in cloud gives American companies and American values a great opportunity to expand and to influence how the internet is used across the world. So you're right that the supply chains are very different, but it's not clear to me which one is the problem.
- Right 'cause they're interconnected. Yeah. - Let me respond to that if I could. I think Milo is right but I think the situation might be changing.
I think a new kind of hyperscaler may emerge, and we're beginning to see how that can happen. What China is doing with One Belt, One Road, placing along that One Belt, One Road a digital silk road, is building a new IT infrastructure, with Smart Cities like Alibaba's City Brain, an IT infrastructure of Chinese products, Chinese technology, and Chinese governance of the cyberspace that they're creating. So these Smart Cities, spread out along this digital silk road, dominated by Chinese companies, some of which are Chinese national champion firms working in concert with the Chinese government, could, I think, emerge as a new kind of hyperscaler and try at the same time to create a new kind of governable cyberspace through their hyperscale operations.
And I think we're beginning to see this with the export of things like City Brain to countries like Malaysia, and potentially to countries in other parts of Asia and into Southern Europe. - So in a way we might be able to circle back on that a little bit during our bifurcation and geopolitics discussion. I would like to ask one more question from the audience. The question is: if 4G enabled the social mobile internet and 5G enabled the internet of things, what's the use case and business rationale for 6G? And that goes to Diane's earlier point about use cases and agriculture and sort of spurring connectivity for some of those use cases.
- Sorry. - Sorry, go on. - So I was working on Capitol Hill at the advent of 4G, and back then we did not know what it would produce, right? And so we got the app economy from it. And I think as we look ahead to the 2030 timeframe, which is kind of what we're thinking for 6G, we can talk about autonomous vehicles and mass drone use, things of that nature, but the real benefit is that the creators are thinking right now about what they can do with higher speeds and latency next to zero. So I'm not an engineer, I'm a policy person, but I think it's fascinating to watch the entire mobile ecosystem, from nuts and bolts to the innovative side. Right now with 5G we focus so much on let's just get it up and going, but it's the backend, the economic side of things, that's really gonna change our economy and the world's economy, the next big tech innovation that will lift us all up. - Thanks, Diane.
Would anyone like to comment on ideas about future use cases for 6G? Milo, I'm sure you have an idea too. - Yeah. I mean, I would say a couple things. I think a lot of the use cases that are being talked about, autonomous vehicles, et cetera, are actually never gonna happen, at least not driven by 5G or 6G.
Part of the reason is, if you think about all the small cells that have to be deployed to provide this kind of speed, how many of those cells that are sitting on light posts or on sides of buildings have emergency power? When the power goes out, all your high-capacity networks go away, and you can't have autonomous vehicles just stopping because there's a power failure. So there's a whole class of problems that we have to solve if we wanna enable those kinds of applications. On the other hand, if you think about the combination of low latency and high speed, think about personal robotics. The vacuum cleaner robot right now has a fair amount of smarts built into it, but if all you needed was cameras and sensors, with that data going to the cloud and the cloud controlling the servos and moving the thing around, whether it's robotics in the home or industrial automation, that enables not just a dramatic reduction in costs but fleet-wide learning.
So then you've got new AI models and the rest that constantly evolve, and the entire fleet gets smarter at once. So I think that you're right, there's a whole set of things that are going to happen with augmented reality and robotics and the rest of these things. I don't personally think a lot of the use cases that people have talked about now are actually likely gonna be the things that happen. - If I could just add onto what Milo said, as somebody who lives in Annapolis, Maryland, I'm really hoping that autonomous vehicles work. (laughs) - They will work, but they won't depend on any 5G network.
- There are a lot of car companies that are gonna be upset with you, Milo. Okay. Final question in this domain and then we'll switch to our next subject, geopolitics. With all this new connectivity, how do we address the topic of equity and reduce the digital divide? Are there policy or regulatory things that we should be thinking about? How can regulation impact this domain of connectivity? - This is an area where there is a precedent. I mean, if one goes back, and Vint Cerf was touching on this, to Theodore Vail and the old Bell System.
When AT&T was granted their monopoly, and I'm certainly not arguing for anything like that, there were certain conditions associated with it. Among those was the right of universal access: that some basic telephony would be made available to everybody regardless of their means. I don't know that they ever got to 100% penetration, but I think it caused telephone penetration in the United States to be about the best in the world, given both the technological innovation and the fact that the government insisted on universal access. One question that we have to ask, and we tend to shy away from this question because of ideological problems, is whether or not at some point this is a public good, and a public good means that there is a public interest in providing it and it doesn't necessarily have to turn a profit in every case. I would say that we're facing an interesting world.
We have 328 million people in the United States; the Chinese have some 900 million people online. So they may have some competitive advantage. I'm not saying that that's a bad thing, but I am saying that we may need to address this by, in essence, printing some money, or, I don't wanna use the term printing money, but regarding this as in fact a public good and saying that there must be some aspect of universal access. The large companies, the hyperscalers, I think may have a responsibility, along with their tremendous economic firepower, to help us achieve this.
- Right. I'm gonna give Michelle a chance to weigh in. She spends a lot of time navigating regulatory issues. - Yeah. So I think, I mean, there are a couple of things to mention, partially in response to Sam. One is that there was the universal service fund for telephony, so it wasn't simply conditions imposed on AT&T that helped increase that. And there's also been a lot of debate as to whether even the universal service fund was efficient at achieving that, in terms of the price that was paid. And the fact is that the universal service fund is becoming larger and larger, and we're running into difficulties because of the way that it is taxed, with that base becoming even smaller.
And that has switched to broadband deployment, with the use of universal service funds to allow for reverse auctions for groups to promise to build out in unserved, or sometimes underserved, primarily unserved, areas, to avoid overbuilding. So it's not the case that this issue hasn't been (clears throat) seriously considered; there are actually a lot of government funds currently. In fact, it might be a little strange, because there are agriculture programs for rural areas doing some subsidies for this, as well as the FCC. But one issue that doesn't seem to come up enough: initially we were always arguing that the digital divide was really being driven by a lack of access.
And what is happening is that access is not the only dimension driving the digital divide. So to Sam's point, it may be an issue of, well, maybe a household doesn't have enough money to get this, and then there can be an issue of whether the government wants to consider that. But there are also households that, at least currently, even in COVID times, may not find accessing the internet to be a useful proposition. If they don't know how to use a computer, if they're not working in a job that uses these kinds of things, if you're 80 years old and you've never done this, you're not going to see the value. I think that portion of the population is going to become smaller and smaller over time, clearly, especially as older generations are aging out, but it's not something that I would argue we'd want to consider a utility in the sense of having government step in and try to be a sole provider or try to create some monopolies. I think that would be a bad system.
- We probably should switch to the next topic, although I'm sure we could actually spend a lot of time just on that subject of the internet as a public utility, an idea for a future panel. So I will switch so we have enough time, unless, I mean, it is a pretty important topic. So does anyone have a last word on this topic? Milo? - I would just say, if what you really want is broadband deployment, policy should be optimized for that. So this last auction on C-band, I think, is a disaster: you're sending 85-plus billion dollars into the U.S. Treasury that could have been used for network deployment. And if you had made build-out, or at least putting construction funds into escrow, a bidding qualification for that spectrum, it would have greatly depressed the value to the Treasury, but you would have at least incented the deployment of infrastructure, deployment of fiber, deployment of American companies' technology, and other things. So I think we need to think about how to align policy around deployment, and that will mean changes in the way we do things and in the coherency of government action.
- Could I respond to that? - Sure. - I mean, the auctions have deployment requirements, though I'm not entirely sure. - Not enough to force build-out in reasonable times or over large sections; substantial service is a checkbox. - Well, I mean, different auctions have different levels, but there are generally deployment components. And I would argue that the auction system is very crucial in terms of making sure that the entities who are most likely to be able to deploy are getting access to it. So I think it's a fundamentally important mechanism by which we guarantee, or we attempt, attempt to guarantee, that this very finite resource is being offered for use to the best participants, the ones most able to deploy.
(multiple speakers) - Michelle, there's no money left to deploy after this last auction, unfortunately; that's the problem. So we can fix that in how auctions are structured, but we've got a problem here. - I'd like to come back to a point that Michelle raised earlier, a point with which I agree. Even if we make connectivity available, and I hope we do, there are people who don't have the technical literacy, or who don't, within their, as we say, pattern of life, find information technology of this type useful to them. Someone once said, "Never let a crisis go to waste."
I think it may have been Rahm Emanuel, but whoever said it, we have a crisis now called the COVID pandemic, and I think one thing we can be doing is using the process of getting Chromebooks and other low-cost computers out to students and getting them connectivity, not only to give them the technology but to show them how to use it, and through them, show their families. I don't know what mechanism will work for providing this kind of technological literacy to seniors who are not familiar with it. That's a problem I'm clearly not smart enough or young enough to solve, but I think we ought to try to solve it. And I think the current pandemic gives us, if anything, more incentive to do so than we've ever had. So hopefully some of the people in this session will go home and, as we say, lose some sleep over this problem.
- That's a good way to end the session and you can already see formulations of public private approaches to addressing some of those points. So I'll switch over now to geopolitics and internet governance. I'll recap because I think with COVID everyone's short-term memory seems to be affected at least mine. So I'm gonna recap a little bit the comments that Vint and Charles made previously, some of the themes. One was how competition will play out on the internet.
How will it be manifested? Will it become bifurcated because of the ongoing geopolitical competition with China? Will e-commerce serve as a glue, or will the pullback from globalization have a greater effect? On the other hand, Vint made the point about undersea cables and lower costs of building; those actually could create a motivation for easier separation. Vint, I think, argued there was no real appetite for two distinct networks