Whatever Happened to Millimeter-Wave 5G?


5G marks yet another wireless technology transition. One part of the transition promised immense bandwidth and super-fast speeds: millimeter wave. It is a fascinating technology. But a few years into the 5G rollout, difficult technical and economic challenges remain. If semiconductors are black magic, then Radio Frequency Integrated Circuits, or RFICs, are the darkest of the dark arts.

Let us take our first Defense Against the Dark Arts class. No cursing allowed. In this video, we dip our toes into RFICs and the 5G mmWave deployment.

## Beginnings

Imagine a wave traveling through open space. The distance that wave travels in a single cycle is called the wavelength. It is measured in meters.

How many cycles pass through a given point each second is the frequency, and it is measured in hertz - named after the American car rental company based in Florida. Not really, don't leave a comment about that.

A megahertz means a million cycles passing in a second. Gigahertz, a billion. Tera, a trillion.

Frequency and wavelength are inversely related. So as frequencies rise, wavelengths fall.

Some of the highest frequency waves are unholy things like gamma rays. Their wavelengths are measured in picometers, and they have so much energy we use them to kill cancer cells.
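To put numbers on that inverse relationship, here is a quick Python sketch - my own illustration, with hand-picked example frequencies - computing wavelength as the speed of light divided by frequency:

```python
# Wavelength = speed of light / frequency (lambda = c / f)
C = 299_792_458  # speed of light, meters per second

for label, freq_hz in [("600 MHz (low-band)", 600e6),
                       ("3.5 GHz (mid-band)", 3.5e9),
                       ("28 GHz (mmWave)", 28e9)]:
    wavelength_mm = C / freq_hz * 1000  # convert meters to millimeters
    print(f"{label}: {wavelength_mm:.1f} mm")
```

At 28 gigahertz, the wavelength comes out to about 10.7 millimeters - right at the edge of the millimeter-wave neighborhood we will visit shortly.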

Countries allocate parts of their spectrum to certain use cases like radar, cellular, or WiFi. Spectrum is fundamentally limited, and making the most of what you have is a challenge.

The last big wireless generation was 4G. And in that generation, the world's telecoms converged on the LTE standard due to its superior data downlink speeds.

But being a world standard, LTE had to accommodate many different frequency slices - anywhere from 450 megahertz to the 3.6 gigahertz neighborhood.

This made developing an LTE "world phone" exceptionally difficult. But a larger issue concerned the capacity constraints when you can only use these small slices of frequency.

The 1G through 4G systems have always used frequency bands below 7.125 gigahertz.

But over the years those neighborhoods had gotten very crowded. Typically, a cell operator in a particular geography has only around 200 megahertz of spectrum to work with. And that is not enough.

## 5G

So in that context, engineers began pondering the philosophical foundations of 5G. What should 5G try to be?

Yes, 5G wireless networks must be able to handle yet more data traffic - mostly mobile video, it seems. This section of the 5G standard is called Enhanced Mobile Broadband.

But its creators also envisioned 5G enabling other types of communication. One is ultra-reliable low-latency communication, or URLLC. URLLC is for handling super important data that must be delivered quickly and reliably.

Imagine self-driving cars communicating with each other to avoid collisions. Or a robot arm being wirelessly controlled. In both cases, we need to deliver signals in less than 10 milliseconds.

The other use case is Massive Machine Type Communications - meant for lots of low-cost machines that transmit data infrequently but for a long time.

Think sensors, actuators, vending machines, and the like.

These three cases involve different requirements. For example, streaming video on your phone does not require the same latency or reliability as autonomous cars exchanging data to determine how to avoid a collision.

Because these philosophical goals for the 5G standard are so broad, 5G was designed to be more flexible, particularly in what frequency bands it can use. For the first time, 5G opens up a whole new slice of the spectrum for us.

## Millimeter Wave

In 2011, Zhouyue Pi and Farooq Khan from Samsung Electronics wrote an influential article in IEEE Communications Magazine.

Pi and Khan looked at all the tricks that LTE had to do to get the most out of the limited spectrum and concluded that something else was necessary.

So Pi and Khan suggested entering the then-largely unexplored mmWave regions - pointing to several mmWave bands in the 70/80/90 gigahertz range.

These are called millimeter wave bands because their wavelengths are in the 1-10 millimeter range. The FCC opened them up in 2003 for new economic development.

Before that, the spectrum was only used for military radars, satellite communications, or cellular backhaul: basically point-to-point connections where a big fixed antenna fires narrow mmWave beams at another big fixed antenna several kilometers away.

The beam tolerances are so tight that strong winds can sway the towers and misalign the beams. So you can guess why mmWave has not been much used for consumer wireless.

Probably the first consumer use case after the FCC decision was automotive radar. The 77 gigahertz band was opened up to help the advanced driver assistance systems in cars identify dangerous situations and prevent crashes.

Another use case was the WiGig WiFi standard, which uses a 60 gigahertz band. The group behind it first formed in 2009, but I can't really say that WiGig has caught on.

But after Samsung's paper, the idea gained traction. And in 2014, published research at both NYU and UT Austin showed it was technically feasible to bring mmWave to consumer wireless applications.

So in the end, 5G gained support for a set of bands in the range from 24.25 to 52.6 gigahertz. This is in addition to two other sets of bands located in more traditional sub-7 gigahertz neighborhoods.

## RFICs

Here, we should take a pause and talk about RFICs and how they work.

Your slick, sexy iPhone 16 and your grandpa's two-way radio might seem light years apart. But at the end of it, they are both radio transceivers - a transceiver being a combo device that works as both transmitter and receiver.

Let us sequentially step through how they work. And bear with me, there is going to be a lot of terminology. But these big words convey relatively simple ideas.

Imagine a phone that wants to send a message. First, the phone's processor sends text data to the transceiver in the form of bits - 1s and 0s.

The RFIC begins by converting those digital bits into a continuous analog signal. The analog signal you get as a result is called the "baseband" - just an analog encoding of your digital data.

## Upconversion

Great. We can transmit the signal now, right?

No. The problem is that the baseband is low frequency, and that will cause problems with the antenna. The rule of thumb is that the antenna should be about half the wavelength. A low frequency like the baseband's means a big wavelength, and thus an impractically large antenna for a phone.

So we - and by we, I mean the RFIC, not literally us - employ a mixer to mix the baseband signal with a higher frequency signal called the carrier frequency.

This unholy ritual is known as "upconversion", and it transmogrifies the baseband signal into a passband signal.
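To make the mixer feel less like dark magic, here is a toy Python sketch - my own illustration with made-up, unrealistically low frequencies - showing that mixing really is just multiplication, and that it shifts the signal up to sit around the carrier:

```python
import numpy as np

# Toy upconversion: mix a 1 kHz baseband tone up to a 100 kHz carrier.
fs = 1_000_000                             # sample rate, Hz
t = np.arange(0, 0.01, 1 / fs)             # 10 ms of samples
baseband = np.cos(2 * np.pi * 1e3 * t)     # the "message"
carrier = np.cos(2 * np.pi * 100e3 * t)    # the carrier wave
passband = baseband * carrier              # the mixer is just a multiplier

# The spectrum of the result peaks at 99 kHz and 101 kHz,
# i.e. the carrier frequency plus and minus the baseband frequency.
spectrum = np.abs(np.fft.rfft(passband))
freqs = np.fft.rfftfreq(len(passband), 1 / fs)
print(freqs[spectrum > 0.5 * spectrum.max()])  # -> [ 99000. 101000.]
```

Those two peaks on either side of the carrier are exactly the sidebands we are about to discuss.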

The cell operator chooses the carrier frequency based on what was allotted to them by the regulator, as well as what is supported by the wireless technology standard.

When we create the passband signal, we also create a small band of frequencies higher and/or lower than the carrier frequency. These are called "sidebands", and this is where the actual data from the baseband is encoded.
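Where do the sidebands come from? It falls straight out of the product-to-sum identity behind the mixer's multiplication. For a baseband tone at frequency $f_b$ and a carrier at $f_c$:

```latex
\cos(2\pi f_b t)\,\cos(2\pi f_c t)
  = \tfrac{1}{2}\cos\big(2\pi (f_c - f_b)\,t\big)
  + \tfrac{1}{2}\cos\big(2\pi (f_c + f_b)\,t\big)
```

The two terms at $f_c \pm f_b$ are the lower and upper sidebands.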

The whole slice of frequency from the lowest to the highest sideband is called the bandwidth.

The size of the sidebands scales as a percentage of the carrier frequency. And that explains why a higher carrier frequency - like that of mmWave - gets you more bandwidth. 10% of 25 gigahertz is way more than 10% of 7 gigahertz, and that means more data capacity.
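Running those numbers makes the gap concrete - assuming, purely for illustration, the same 10% fractional bandwidth at both carriers:

```latex
0.10 \times 25\ \text{GHz} = 2.5\ \text{GHz}
\qquad \text{vs.} \qquad
0.10 \times 7\ \text{GHz} = 0.7\ \text{GHz}
```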

## Path Loss

Now the passband signal is more suitable to send - but not quite yet.

Your iPhone needs to communicate with the cell tower. But the nearest traditional tower is usually about 2 kilometers away from the iPhone, and 30-100 meters above it. The signal needs to reach that.

Imagine a wave traveling, or propagating, through plain air outwards from the phone. You might imagine it as a spherical wave moving outwards from the mobile device. Kind of like a force field or some bubble in a sci-fi movie.

The formal name for a source radiating this expanding bubble is an "isotropic radiator". In the real world, a perfect isotropic radiator - like a perfect marriage on Instagram - does not exist. At a distance, in certain areas, the "bubble" will flatten out. But I think it is a helpful way to visualize it.

Anyway, so we have this bubble expanding outward from the mobile device in every direction. The mobile phone used some amount of power to output that bubble. That power is spread out across the bubble's "surface area".

So as the bubble travels outward, its surface area expands - diluting the wave's power density. This dilution is known as "path loss", and it is measured using the Friis equation.
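For reference, the free-space form of the Friis transmission equation relates received power $P_r$ to transmitted power $P_t$ via the antenna gains $G_t$ and $G_r$, the wavelength $\lambda$, and the distance $d$:

```latex
\frac{P_r}{P_t} = G_t\, G_r \left( \frac{\lambda}{4\pi d} \right)^{2}
```

Note the wavelength squared in the numerator - it will matter in a moment when we get to antennas.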

Path loss in turn leads to "attenuation" - which in the wireless world refers to the gradual reduction of signal strength.

Not all attenuation is due to path loss - rain, obstruction by objects, and the like are other causes - but path loss is indeed a significant contributor to attenuation.

Thanks to this path loss and other attenuation-causing factors, if we want the passband signal to reach the tower, then we must amplify its power. This is the job of the power amplifier.

## Power Amplifier

The power amplifier is the RFIC's most power-hungry module.

It is also one of the most difficult modules to design. Designers must achieve the highest possible output power to reach the tower, while also maintaining good efficiency and linearity.

A wireless device is a power-constrained environment.

Plus, pulling power also generates heat. So we want to be as efficient as possible.

We can measure this with a metric called drain efficiency. If the amplifier draws 2 watts of power from the battery and outputs 1 watt of radio signal, then it has 50% drain efficiency.

Another thing that power amplifier designers have to consider is something called "linearity". Linearity describes how faithfully the signal's shape is preserved as it travels through the amplifier.

Finally, after exiting the power amplifier, the boosted passband signal is transferred to the antennas.

## Antennas

Antennas. We are finally at antennas. So we are done, right? We transmit, right? No man, that would be too easy. I have one more thing to add.

If you were paying attention to the section about path loss, then you might have been wondering. Per the Friis equation, the received power is proportional to the wavelength squared - so path loss gets worse as the wavelength shrinks.

So high frequency waves travel less well than lower frequency ones, right? If we go from 3 gigahertz to 30 gigahertz spectrum, the path loss gets 100 times worse, and we would need immense power to send any signal, right?

This is only the case with a pair of identically sized antennas. But since the wavelengths are smaller with mmWave, we can also make the antennas smaller. They also have this gnarly spiral shape.

And so we can stuff whole arrays of them into the same area - in some cases as many as 32. These antenna arrays can generate directional beams aimed at the tower or base station, and vice versa.
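Here is a toy Python sketch of how such an array forms a beam - a simplified uniform linear array with illustrative parameters, not a model of any real handset:

```python
import numpy as np

# Toy phased array: 32 antennas spaced half a wavelength apart.
# Feeding each element a matched phase shift steers the beam.
N = 32                                    # number of elements
steer = np.radians(30)                    # aim 30 degrees off broadside
n = np.arange(N)
weights = np.exp(-1j * np.pi * n * np.sin(steer))  # per-element phases

# Sweep all directions and see where the elements add up coherently.
angles = np.radians(np.linspace(-90, 90, 721))
af = np.array([np.abs(np.sum(weights * np.exp(1j * np.pi * n * np.sin(a))))
               for a in angles])

peak = np.degrees(angles[np.argmax(af)])
print(f"beam peaks at {peak:.1f} degrees, {af.max():.0f}x one antenna's amplitude")
```

In the steered direction, the 32 elements add up coherently; in most other directions, they largely cancel. That is the beam.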

I am strangely reminded of the Eye of Sauron in those Lord of the Rings movies. And that was on a tower too!

So that was transmitting to the tower. When the tower sends a response, it forms another beam towards the handset.

Then inside the handset, it is pretty much the same process in reverse. We receive and isolate the transmission, amplify it, and then use a mixer to pull out the analog signal for converting back into digital.

## Line of Sight

One of the big issues with mmWave - one most people are familiar with - is that the signal cannot easily penetrate certain objects: buildings, foliage, and even the human body.

Bricks, trees, and outdoor tinted glass are particularly good at attenuating mmWave signals. AT&T flagged the latter as a major issue during their trials.

On the other hand, heavy rainfall does not seem to be a significant factor. Researchers concluded it has no major effect for users within 200 meters of a base station.

Even just facing away from the base station can hurt performance. A 2022 experiment done at a mmWave deployment in Boston found 10 times better downlink and 3 times better uplink as compared to LTE. Nice.

But if you just turn your back to the base station, then you suffer a drastic drop in both metrics.

So the general rule of thumb is that mmWave works best in extremely busy areas with "line of sight": sports arenas, malls, airports.

I suppose that I should mention here the 5G conspiracy theories, like the supposed link to COVID. I have seen plenty of rhetoric and fear-mongering, but little credible scientific evidence that these mmWave rollouts cause damaging health issues.

I am no doctor. But it seems to me that at worst, the waves hit your skin and imperceptibly warm it without penetrating very deep. I reckon the Sun's UV rays - a considerably more ubiquitous and energetic electromagnetic wave - do significantly more damage.

## Densification

So what does all this mean for the telecoms? Previously, with 1G through 4G, telecoms installed large macro-cells with ranges of several kilometers. With mmWave, they will need to install many densely packed small cells - perhaps as little as 50-100 meters apart.

Acquiring this many 5G sites is a challenging, time-consuming task. These sites need power that is available 24/7 in order to provide emergency services, as well as a backhaul connection to the larger network.

And they also have to be aesthetically inconspicuous, because nobody likes to look at a cell tower. Not to mention the environmental permitting and all that.

Telecoms have considered leveraging reflectors - integrated into billboards or exterior glass - to bounce mmWave signals off of buildings. Kind of like how sunlight reflects off windows to heat-ray birds and cars. It sounds a bit crazy, but it saves money as opposed to acquiring another cell site.

## Deployment

The transition from LTE to 5G has been complicated. Very complicated.

The definition of 5G is any device using the 5G New Radio standard. As I mentioned, the 5G New Radio standard supports three sets of 5G bands: low, mid, and high.

The low and mid bands are between 410 megahertz and 7.1 gigahertz. This is more like 4G: good coverage but weaker capacity. So they - particularly in the low bands - are basically not that much different from 4G LTE.

The high band is where the mmWave spectrum is, and where the most capacity and highest data rates are promised. However, deploying it would be the most work, due to the propagation and densification challenges.

Now, telecoms can't just abandon their expensive LTE networks, so they have two ways to deploy 5G: Standalone and Non-standalone.

The Standalone option is to build and deploy a true 5G network: 5G base stations attached to a 5G core network and connected to 5G New Radio consumer devices. It is completely new.

The Non-standalone option is a middle-ground stepping stone between 4G LTE and Standalone 5G. There is a 5G New Radio access network, but it connects to a 4G LTE core.

This gets you somewhat faster speeds as well as LTE fallback, but the improvement isn't great, especially at lower bands.

## United States

The big American telecoms Verizon and AT&T were the first to go into mmWave 5G.

It made sense for them. The low and mid band 5G frequencies were already occupied by their legacy 3G and 4G networks. So for those telecoms, the high bands were greenfield territory.

In 2018, Verizon started deploying Non-standalone 5G for mobile and for their fixed wireless access service, which is like internet broadband but delivered using a wireless connection.

The fixed wireless access stuff works well. Other companies like US Cellular and T-Mobile offer it too, but it is a niche service.

The issue had to do with the mobile service part of the mmWave rollout. By the time 5G came around, the smartphone ecosystem had matured. Most customers were not super-willing to pay a big premium for faster rates.

The super-confusing branding of what is or is not 5G did not help much either. Lots of people had no idea if they were connected to LTE, the LTE-5G middle ground, the "real" 5G, or the actually super-fast 5G.

## Pullback

The biggest hope came in late 2020, when Apple released the iPhone 12 in the United States. The US version of the iPhone 12 had 5G mmWave support.

Unfortunately, that still did not change things much. Analysts expected that mmWave would carry about 5% of mobile traffic in 2021.

And Verizon said that "over time, 50% of urban traffic will be on mmWave".

But in 2021, the analytics company OpenSignal reported that just about 2.9% of mobile traffic in US urban areas was consumed on mmWave 5G. A miss.

Another study at the time, of San Diego by EJL Wireless Research, hinted at why. It found that after connection, any slight movement would cause the phone to fall back onto 4G LTE. They were skeptical that Verizon could get a good return on its investment if it tried to cover a large city.

Verizon and Qualcomm - the latter a major backer of mmWave - argued that coverage mattered less than how much data traffic is shifted to mmWave. That would alleviate pressure on the existing 3G and 4G networks, and thus be a net positive for the telecoms.

Nevertheless, Wall Street was not pleased, especially as the insurgent third telecom T-Mobile came out with low and mid-band 5G deployments.

Since then, it seems like Verizon and AT&T have tweaked their strategy. In February 2021, the two companies bought licenses for 5G mid-band spectrum in the "C-band", around 3.7 gigahertz. A prime band. Verizon by itself spent over $45 billion on this spectrum.

The two companies have since focused on expanding their mid-band offerings, including their Standalone 5G networks.

Meanwhile, device support - especially on the Apple front - has been weird. A few years in, it is still only the US iPhones that support mmWave, and Apple's iPads even recently dropped support for it entirely.

## Conclusion

Before we conclude, I want to thank PhD student Tal Elazar for his help in walking me through this complicated ecosystem. The video would not be possible without him, and any mistakes are mine, not his.

The telecoms are still building out mmWave, but mostly for fixed wireless access in rural areas and in high-density urban venues like airports and sports stadiums. And honestly, that is where mmWave works best, unless something changes.

Is there a super-compelling, super-broad killer app for mmWave out there? Many of the "internet of things" and Industry 4.0 initiatives that 5G was supposed to be good for have not emerged.

In the United States, deployments have not met expectations. And so the FCC has started looking at more creative ways to use this mmWave spectrum - which remains largely unused.

Outside of the United States, the rest of the world hasn't really jumped onto mmWave either. One significant exception is Japan, which set up a mmWave network ahead of the 2020 Tokyo Olympics.

But technology continues to evolve. Telecoms continue to move towards Standalone 5G, the true 5G, bolstered by continuing improvements from Qualcomm and others. But will mmWave benefit from that progression? We shall see.
