Toward Autonomous, Software-Defined Networks of Wireless Drones (Tommaso Melodia Northeastern Univ.)
SHYAM GOLLAKOTA: So let's all welcome Tommaso. He is a professor at Northeastern University in Boston. He's done an incredibly diverse set of work.
He was the founding director of the Institute for the Wireless Internet of Things. And also the director of research for the PAWR project, which is an NSF-funded project about deploying large-scale wireless testbeds in cities and so on. There's a lot of things which he has done. He's an IEEE fellow. He's a recipient of the NSF CAREER award. And there's so much more I could keep talking about.
But Tommaso is going to share with us some of the work which he has been doing on software-defined networking and creating software-defined networks of wireless drones. So with no further delay, Tommaso, the floor is yours. TOMMASO MELODIA: Thank you very much for a very kind introduction. Good afternoon, I guess morning for you guys at this point.
A very big pleasure to be here. I'm going to be sharing a little bit of some of the work we've been doing in the past, I would say, two or three years with a great group of people. You see them all listed here. I hope I haven't forgotten anybody.
These are mostly PhD students and postdocs at Northeastern, some collaborators at AT&T. The work that I'm going to be discussing has been primarily sponsored by the Air Force Research Laboratory, as well as the National Science Foundation. And partly done in collaboration with AT&T. So, autonomous software-defined networks of wireless drones. That's the idea of developing networks that have intelligence and have the flexibility to adapt their behavior to many different operational environments.
Why would we want to have networked drones? Well, UAVs can be thought of as flying wireless network nodes. Right? So you can think of UAVs that fly in isolation to perform a lot of different tasks. That can include things like tactical sensing and reporting: going to a certain area, collecting data, sending it back wirelessly or physically.
Increasing the coverage of a network, so basically putting base stations on top of drones to extend the connectivity of a network. And you can think of drones as aerial cellular users that are doing things like streaming video and sending this video to a fixed base station. Now, think also of networked drones as multi-hop extensions of cellular networks, or as multi-hop networks that operate in isolation with respect to existing networks, right? These could be used in disaster recovery scenarios, areas in which connectivity has been lost for whatever reason. Or in situations where the fixed ground cellular network is not trustworthy for whatever reason and you want to do something different. Now, in military and tactical contexts, networked drones can have a lot of different applications that have to do with search and rescue, identifying specific targets. And in these situations, the network may operate off the grid; the drones are not connected in any way with an infrastructure-based cellular network.
You can think also of surveillance applications in civilian contexts, in which you want to have smart monitoring, identifying specific objects or moving vehicles and the like. Now obviously, this has significant privacy implications and concerns, so we don't want to get into a surveillance society. But from a technical point of view, this is a possible application. Now there's a number of core challenges in developing networks of connected drones. One of them has to do with prototyping, right? How do we develop prototypes that are fully programmable? We want to be able to control their behavior in terms of the communication, in terms of the control, in terms of their sensing. At the same time, we want to enable stable and repeatable experimentation.
There are some questions about experimental facilities: how do we develop facilities, testbeds, and platforms that can enable experimentation with these scenarios, with drones? What are some of the right trade-offs, and how do we make these platforms available for experimentation? I'm going to touch on some of these problems a little bit towards the end of my talk. There's a lot of interesting problems from an algorithmic perspective. How do we design new algorithms to control the behavior of drones? Again, the sensing, the communication, the motion capabilities. And how do we do it in a way that considers all the complexity of the sensing, the control, the networking functionalities? And how do we do this by exploring the nexus between innovation in algorithms and softwarization? We want to be able to control everything, but we also need to have the right abstractions so that we don't have to start from scratch every time that we want to develop a new application or a new concept. OK? So I'm going to go through some of the work that we've been doing in the past couple of years in this space, and look at some of the frameworks and algorithms that we've been developing.
And I'm going to start with some work on what we like to call interference minimizing UAV control. This is work done in collaboration with AT&T, so it's looking at one of the aspects that I mentioned at the beginning. So the application of UAVs in commercial wireless systems. Then we're going to talk about completely different scenarios, in which we want to look at networking in disaster scenarios where we have lost infrastructure. And then we're going to talk about what is, in my very personal opinion, the most interesting part of the work, which has to do with developing a framework for software-defined principled wireless control for drones. Right, so one thing that we need to do if we want to look at connected drones is to look at the architectures for those.
We've developed a few different prototypes over the years. But the general idea is we've been using DJI drones or Intel Aero drones, so drones that have a certain degree of openness and programmability, and interfacing them with a general-purpose processing board, a board like an Intel NUC, et cetera.
We have typically connected the processing board with a software-defined radio. We've used quite a bit the USRP B210 and USRP B205 mini. These are software-defined radios with small form factors.
They can be interfaced with drones, typically. And then we've typically implemented the software protocol stack on the drones. For example, in the example that you see here, we are running on the Intel NUC an srsLTE protocol stack. This is a fully softwarized, standards-compliant base station for cellular networks. So if you're running an srsLTE base station on the Intel NUC with a USRP device, you can create a fully compliant base station that can connect smartphones, and still have the flexibility to experiment with the protocol stack, because it's all open and softwarized.
And these are just a couple of examples of how we have realized this prototype with the Intel NUC on a DJI M600. In this case, connected with the USRP base station. And you see the two antennas here that provide the wireless connectivity. And this is just what it looks like from a cellular perspective: in this specific work we have put both the eNB, the base station, and the core network on the drone. But there's a similar setup in which you implement only the eNB, the base station, on the drone. And the core network can be implemented on the ground and wirelessly connected with the drone.
OK, so I just wanted to give you a little bit of a sense of the platforms that we're going to be considering. And the first part of the work has to do with enabling high data rates for drone video streaming applications. OK now, why do we want to connect drones to the cellular infrastructure in the first place? Well, this has a number of applications for live broadcasting and surveillance, as you can imagine. So getting high-resolution video, streaming it from drones, again, for live broadcasting or for getting specific scenes of interest. There's a number of other applications that probably don't require as high a data rate as video streaming.
But video streaming is clearly a core and very important application. And what's the problem with flying users, from a cellular perspective? UE stands for user equipment, and it's basically the device that connects to the base station in a cellular network, right? So when you have a ground UE, typically, a ground UE has a certain signal through which it connects to a serving base station, a serving cell.
And then it also generates interference for other cells. But especially if we are in an urban environment, or an environment with a number of objects, the interference is somewhat limited, because the signal is often attenuated by bouncing off objects that are in the environment, right? The problem is that when you are instead considering UEs that are carried on unmanned aerial vehicles, you generally have great service, because the drone has visibility to a number of different cells, right? But it's also creating interference to cells that are not providing service to it. And this is a well-known problem in the industry, whereby when you have drones flying at altitude that are streaming video especially, what you get is a significant amount of interference to other cells. And the industry is trying to look for solutions to this.
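To make the intuition concrete, here is a toy path-loss calculation. The 1-meter reference loss, the path-loss exponents, and the distances are illustrative assumptions, not the measured values from the talk: with a near line-of-sight channel, an aerial UE leaks tens of dB more power into a neighboring cell than an obstructed ground UE at the same distance.

```python
import math

def path_loss_db(d_m, exponent):
    # Log-distance path loss with an assumed 38.5 dB loss at the 1 m
    # reference distance (roughly free space at 2 GHz); the exponent is
    # ~2 for line-of-sight links and higher for obstructed ground links.
    return 38.5 + 10.0 * exponent * math.log10(max(d_m, 1.0))

def interference_dbm(tx_dbm, d_m, aerial):
    # An aerial UE sees a near line-of-sight channel (low exponent) toward
    # every surrounding cell, so it delivers far more interference power
    # to a neighboring cell than a ground UE at the same distance.
    exponent = 2.0 if aerial else 3.5
    return tx_dbm - path_loss_db(d_m, exponent)

# Same 23 dBm uplink transmission, same 500 m to a neighboring cell:
ground = interference_dbm(23.0, 500.0, aerial=False)
aerial = interference_dbm(23.0, 500.0, aerial=True)
```

With these assumed exponents the gap is roughly 40 dB, which is the mechanism behind the throughput degradation discussed next.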
Just to give you a sense, this is an experiment that was done, again with AT&T, in a parking lot. Basically, there is a commercial LTE modem that is connecting with an LTE base station. And you can see in this plot here on the right, which shows throughput versus time, that you have a significant throughput degradation for the ground nodes in the presence of the UAV. So a potential patch is to, you know, just limit the capability of drones to have high-speed uplink connections.
But a better solution is to try and do something different. So what we try to do is to come up with a framework through which we can determine and control, with a certain level of granularity, the directionality as well as the location of the drones, to minimize interference with ground nodes. So again, intuitively, it's what you see here in this cartoon on the top. The red lines represent interference from the drone to specific base stations, and the blue lines represent the useful signal, from your drone to the base station that it's connected to, and from the UEs to their serving base stations.
What you see is that there is significant variability: depending on whether I move the drone between these five different positions (and this is experimental data), one, two, three, four, five, the level of interference that I'm generating to the UEs varies quite a bit. And it has a significant effect on the performance of the network. The other aspect is the level of directionality. Now, typically, in cellular communication, antennas have, for the most part, been omnidirectional.
So the signal propagates in all directions at the same time. You can, however, put directional antennas on drones. And when you do that, one convenient aspect of drones is that you can rotate them, right? So they can rotate and transmit the signal in different directions. And you can control that pretty easily. So what you see in the experiment here below is a similar experiment to the one above.
But instead of changing the location, we are just changing the angle at which we send the signal by means of the directional antenna. And that has an effect on the throughput that we obtain, but also on the interference that we generate to the other nodes. OK? So what we see in this experimental campaign is that by controlling the location and directionality of the transmitter, you can get an improvement of up to 40% in the performance that you obtain. So then we looked at designing and stating a problem in which we want to basically optimize the video streaming capabilities of the drone.
By controlling the drone's location and directionality. There are constraints, because we assume that there is a point of interest, something that the drone is trying to observe, right? And instead of, as in classical systems, the drone just moving above the point of interest, we have some constraints in the sense that we want to get close to the point of interest, but there's a certain margin. There's a certain area around the point of interest that is fine for us to be in. And we need to have a certain minimum uplink quality of service. So we want the video that we're streaming to the base station to have a minimum quality. We have a constraint on the maximum distance from this point of interest.
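The placement problem just described (minimize uplink interference to non-serving cells, subject to a maximum distance from the point of interest and a minimum uplink quality) can be illustrated with a brute-force search over candidate positions and antenna headings. The 2-D geometry, the cardioid gain pattern, and the toy link budget are all simplifying assumptions for illustration, not the formulation used in the actual work:

```python
import math

def best_placement(candidates, poi, base_stations, serving, max_dist, min_snr_db):
    """Brute-force search over candidate drone positions and 16 antenna
    headings: minimize total interference leaked to non-serving cells,
    subject to staying within max_dist of the point of interest and
    keeping the uplink SNR to the serving cell above min_snr_db."""
    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])

    def gain(pos, heading, target):
        # Cardioid antenna pattern: unit gain toward `heading`, zero behind.
        ang = math.atan2(target[1] - pos[1], target[0] - pos[0])
        return 0.5 * (1.0 + math.cos(ang - heading))

    best, best_interf = None, float("inf")
    for pos in candidates:
        if dist(pos, poi) > max_dist:
            continue  # violates the point-of-interest distance constraint
        for heading in (k * math.pi / 8 for k in range(16)):
            g = gain(pos, heading, serving)
            if g <= 0.0:
                continue  # antenna pointed directly away from the serving cell
            # Toy link budget: gain over squared distance, scaled so typical
            # geometries land around 0-30 dB.
            snr_db = 10.0 * math.log10(g * 1e6 / dist(pos, serving) ** 2)
            if snr_db < min_snr_db:
                continue  # violates the minimum uplink quality constraint
            interf = sum(gain(pos, heading, bs) / dist(pos, bs) ** 2
                         for bs in base_stations if bs != serving)
            if interf < best_interf:
                best, best_interf = (pos, heading), interf
    return best
```

A real formulation would use measured antenna patterns and 3-D geometry, and would be solved rather than enumerated; the point here is only the shape of the objective and constraints.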
And then we want to find the best location and the best transmission directionality for the drone that would minimize the uplink interference to the other ground users. And from an architectural perspective, the great news is that the way that 5G networks are evolving, they are exposing a number of APIs and control knobs through a concept that is known as open-RAN. I'm not sure if the folks in this class are familiar with the concept of open-RAN. But it's basically something that is happening in the 5G industry and that will probably change the wireless industry for years to come.
And it has basically two main ideas. Number one is the idea of disaggregating, and in some cases virtualizing, the classical base stations used in wireless systems. So in the past, you had all the functionalities in a monolithic piece of hardware, your base station. Now the different functionalities that the base station has to perform are sent to the cloud or to the edge, and disaggregated into a number of different functional units that are connected with one another through standardized interfaces. OK? So now you have this level of softwarization and disaggregation that enables, first of all, multi-vendor operation, which will drive innovation in the industry.
But a second aspect is also enabled: it makes it much easier to expose control knobs, as well as information that can be used for decision making in the network. The second big idea in open-RAN is the idea of exposing a centralized control interface for the network, so that you can bring specific control applications into the network that can be executed to control specific aspects of the network.
So in the past, the control actions were completely limited to what your monolithic base station could do, right? You couldn't access information. You couldn't change the functionalities implemented in the base station. Those were in the hardware or in the firmware.
And you needed to work with the manufacturer of the base station to change anything or to control anything. Now, through open-RAN, through a concept that is known as the RAN Intelligent Controller, you can bring in control applications, potentially from third parties, right? So you can have an operator like AT&T deploy a third-party application that is specifically designed to control the behavior of drones. Right? With open-RAN, you're going to be able to implement these open control loops in the network. So the idea is that this specific controller will be running on the RAN Intelligent Controller for a certain area of the network, and it has access, by means of these open interfaces, to information related to the load of the various cells, the position of the various nodes, et cetera. And based on this information, it can run a control loop that in real time controls the position of the drone, as well as the directionality of its antennas, to solve the problem that we discussed in the previous slide.
So minimizing the interference while satisfying the constraints: staying within the maximum distance from the point of interest and, obviously, meeting the quality of service in the uplink. OK? So that's the idea. I'm not going to bore you with the details, but you can formulate this as an optimization problem, solve it in real time, and get the solution that will determine how your open-RAN controller will control the behavior of the drone.
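The loop just described, an application on the RAN Intelligent Controller that repeatedly reads network state over open interfaces and re-steers the drone, can be sketched as follows. The callables standing in for the O-RAN interfaces and the drone command channel are hypothetical names, not real RIC APIs:

```python
# The callables below stand in for real O-RAN interfaces (state reports
# over E2-style interfaces, a command link to the drone); their names and
# signatures are invented for this sketch.

def ric_control_loop(get_cell_report, solve_placement, send_drone_command, steps):
    """Near-real-time loop: read network state, re-solve the interference-
    minimization problem, push a new waypoint and antenna heading."""
    history = []
    for _ in range(steps):
        report = get_cell_report()              # e.g. {'cell_load': [...], 'ue_pos': [...]}
        pos, heading = solve_placement(report)  # any solver for the placement problem
        send_drone_command(pos, heading)        # actuate over the command link
        history.append((pos, heading))
    return history
```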
We have prototyped this concept with DJI drones and with softwarized base stations running on the drone. We have conducted some small-scale experiments with these softwarized base stations, where we compare different control strategies: what we call NC, which means basically no control, just get as close as possible to the point of interest and send your transmission in the best direction; a scheme that basically finds the best angle only; and OPT, which is the optimization scheme that we discussed. Then we can show an increase with respect to NC that is quite significant as far as cellular operators go, right? An improvement of 20% in cellular operations is a significant saving for the operator.
We've also conducted a number of different large-scale simulations to see the impact of scaling on this problem. And these basically just confirmed a lot of the findings of the experimentation. All right, so this was an application of some of these ideas, some of this softwarization, to cellular networks. I mentioned that we're going to talk briefly about mesh networking and robotic networking for internet sharing in disaster scenarios. This is work that was done in my group by a group of great PhD students.
And the idea is, clearly there are situations in which communication links can be disrupted. And these are pictures of the impact of Hurricane Irma and Hurricane Maria, which happened in 2017. This is September 2017. Hurricane Irma hit on September 11. And what you see in this slide, in this evolution, is that basically there was a significant impact on cellular connectivity until a week later.
OK? So significant parts of Florida didn't have cellular connectivity for about a week before everything was restored. Even worse when Hurricane Maria hit Puerto Rico. You see the entire island was pretty much without cellular connectivity on 9/21. And on 9/23, still no connectivity. On 9/27, it started getting somewhat better. So it took over a week to restore connectivity to the island.
And clearly connectivity is needed, especially when there is a disaster scenario, right? You need to be able to connect people for their daily lives, as well as provide connectivity for disaster relief operations. So can drones help in these situations? We believe that they can. We tried to put together a framework that tries to answer three main questions. How can you find survivors? How can you provide connectivity? And how can you maximize the network lifetime? We refer to this framework as HIRO-NET. And it's based on a couple of ideas.
Idea number one is to create a number of spontaneous mesh networks that are generated by using device-to-device connectivity between devices that are nearby. And we specifically used BLE, Bluetooth LE, to demonstrate the idea. But clearly, whatever wireless interface you have on your smartphone could be used for that. Right? So combinations of Bluetooth LE and Wi-Fi make a lot of sense.
And then use robotic devices, specifically drones, but with this framework also ground robots, to find and identify these mesh networks, and provide connectivity services by means of long-range links obtained with VHF and UHF radios, to create an upper-tier mesh that can connect the various devices. So one of the challenges was to develop procedures to quickly and spontaneously create these mesh networks with Bluetooth LE, and to make sure that you can route messages, and provide basically internet connectivity, through these mesh networks based on Bluetooth LE. So imagine if one of the drones has connectivity to a cellular network, then that connectivity needs to be shared with all of the other nodes in the network, including the nodes of the lower tier.
And we specifically looked at providing an architecture that can enable sending emails and sending tweets in this way. And then the second part of the work had to do with developing algorithms with two different phases. In the first phase, we want to look at a certain territory and cover it as much as possible, to quickly identify the various spontaneously created meshes. And the second phase is how you connect these meshes to provide maximum connectivity.
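The relaying part of the first idea, getting a message from a stranded phone to whichever node still has backhaul, can be sketched as a breadth-first flood over the local mesh. The adjacency dict standing in for BLE links and the node names are illustrative, not the actual HIRO-NET routing protocol:

```python
from collections import deque

def relay_to_gateway(mesh, source, gateways):
    """Breadth-first flood from `source` over the mesh (adjacency dict);
    returns the hop-by-hop path to the first reachable gateway node (e.g.
    a drone with a live cellular link), or None if the mesh is partitioned
    from all gateways."""
    parent = {source: None}
    queue = deque([source])
    while queue:
        node = queue.popleft()
        if node in gateways:
            # Reconstruct the path by walking back through the parents.
            path = []
            while node is not None:
                path.append(node)
                node = parent[node]
            return path[::-1]
        for nbr in mesh.get(node, []):
            if nbr not in parent:
                parent[nbr] = node
                queue.append(nbr)
    return None
```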
Details are in the paper, if you're interested. But the second part was primarily an algorithmic framework to move the drones in an effective way to do that. This leads me to the third part of the talk. So we looked at connectivity in disaster areas. We looked at connectivity for commercial applications. Now I want to guide you through the way we've been thinking about developing control frameworks for self-optimizing meshes of drones.
OK? So the assumption here is that we have a mesh network of drones that is not necessarily connected to the network, right? They could be operating completely autonomously. We assume that we can control the protocol stack of the drones in the network. So we can control their connectivity functionalities. And we want to have a flexible framework through which we can program the behavior of the network. I am going to qualify what I mean by programming the behavior of the network, but I want to be able to define different specific objectives in terms of how the network should behave.
OK? And I want the network to be able to follow the behavior that I have defined, and do it in a distributed way. OK? What I want to do is obtain an elastic and programmable network. And I want to do it in a distributed way, but I don't want, every time that I change the objective, every time that I want to change something in my network, to have to write code that does that for me based on a distributed abstraction. Right? I want something a little more flexible than having to put my distributed piece of code on each individual drone and then hope that the general outcome of the distributed behavior of the different nodes would give me what I want.
I also don't want centralized control, right? I don't want old-fashioned centralized control, because if I try to do that, well, there's a control loop: I need information from each individual drone to be able to provide control from a centralized abstraction, and then I need to be able to send control directives to each individual drone, OK? And that's not great. It leads to latency and a single point of failure. And it's not good from many different points of view. So with what we call SwarmControl, our framework, we certainly try to provide distributed control.
So the actions need to be taken by each individual drone. But I also want to be able to send control objectives to my drones. OK? And I want to be able to send these control objectives once, or maybe when the objective of my mission changes. OK? Now, even better, I want to be able to, not only send the control objective, but to define specific objectives, maybe specific constraints on top of a centralized abstraction.
Right? I want to be able to look at my network from a centralized abstraction, have a single point where I write some code that defines the behavior that I want for the network. And then I have a framework that takes this behavior that I define in the centralized abstraction, decomposes it automatically for me, and generates distributed control programs that are executed at each individual drone. And those give me, through their distributed interaction, the behavior that I want, that I have defined on a centralized abstraction.
OK? And the question is, can we guarantee optimality? Well it depends. It depends, certainly, on what we mean by optimality. But it also depends on the specific problem. But the idea would be to try and automatically generate distributed optimization algorithms to be executed on each individual drone. That will be executed in real time based on local state, right? And try to obtain the behavior that I want, that I have defined in my centralized abstraction on the high level. Right? So I define my desired behavior at a high level or a centralized abstraction.
I have this magic framework. We'll talk about how we realize this; it's nothing magic. I have a framework that takes this behavior and generates distributed, low-level control programs that control the behavior of the individual drones to obtain the behavior that I want. OK? And then, if I change my objective, or I change some of the constraints that I want in the desired behavior, then the framework generates a different optimization problem and sends it to the drones to be executed. OK? All right, so what we are trying to develop here is a framework for automated distributed control.
Well, let's see, automated generation of distributed control programs for self optimizing UAV networks. OK? All right, so how does the SwarmControl framework work? It has a number of different components. From an architectural perspective, it has a centralized component that is executed by the network operator, whomever that is, that provides, basically, a centralized abstraction where I can define different control problems. And the control problems are defined with a language that resembles the language of mathematical optimization. So I can define a system metric that I want to obtain.
For example, I want to maximize the throughput of my network. I can define a protocol canvas for my protocol stack, and I can define some constraints. OK? For example, I want certain nodes to have their power limited to a certain bound or range.
Then the framework automatically generates a network control problem, an abstract mathematical representation of the control behavior that I want. OK? And then it decomposes this problem to generate distributed algorithms to be executed at different layers of the protocol stack. And then it sends these distributed algorithms to the drones. These distributed algorithms are executed on what we call a drone programmable protocol stack. This is basically a software framework that has a data plane, based on the protocol canvas that you can define in your centralized abstraction, a register plane, and a decision plane that executes the algorithms that control the specific behavior of the drones.
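The pipeline just described, a declarative problem definition at the centralized abstraction that gets split into per-drone pieces, might look roughly like this. The function names and the dictionary-based representation are invented for illustration; the real framework is described in the SwarmControl paper:

```python
def define_problem(objective, variables, constraints):
    # Centralized abstraction: the operator states *what* the network
    # should do, not how each drone should do it.
    return {"objective": objective, "variables": variables,
            "constraints": constraints}

def decompose(problem, drones):
    # Split the centralized problem into per-drone subproblems: each drone
    # keeps the shared objective plus only its own variables/constraints.
    # (Coupling between drones would be handled by a price or consensus
    # mechanism in a full implementation.)
    return {d: {"objective": problem["objective"],
                "variables": [v for v in problem["variables"] if v[0] == d],
                "constraints": [c for c in problem["constraints"] if c[0] == d]}
            for d in drones}

prob = define_problem(
    objective="maximize_sum_throughput",
    variables=[("d1", "tx_power"), ("d1", "cwnd"), ("d2", "tx_power")],
    constraints=[("d1", "tx_power <= 20dBm"), ("d2", "tx_power <= 20dBm")])
local = decompose(prob, ["d1", "d2"])
```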
OK? And then these distributed solution algorithms are sent to and executed on your drones, and they control their behavior. Let's see a few examples. Going into a little more detail, we have a control interface that provides this high-level API through which we can define the behavior of the network. We have this abstraction layer that generates the centralized representation.
And then we spent quite a bit of time on developing this framework, to automatically decompose the problems. I'm not going to go into a lot of the details. They are in the paper. And if you're interested in learning more, I'll be glad to discuss that offline.
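To make the decomposition step less abstract, here is the textbook mechanism that this kind of framework can build on: dual decomposition of a toy network utility maximization problem. Maximize the sum of log(x_i) subject to the sum of x_i staying below a capacity C; with a shared price, each drone independently solves its own one-variable problem (the closed form is x_i = 1/price), and a lightweight price update coordinates them. This is the classical technique, not necessarily the exact algorithm SwarmControl generates:

```python
def local_rate(price):
    # Each drone's distributed program: maximize log(x) - price * x,
    # whose closed-form solution is x = 1 / price.
    return 1.0 / price

def run_dual_decomposition(capacity, n_nodes, iters=500, step=0.1):
    """Coordinate n_nodes senders sharing a capacity constraint purely
    through a scalar price: nodes best-respond locally, and the price
    moves by subgradient ascent on the constraint violation."""
    price = 1.0
    for _ in range(iters):
        total = sum(local_rate(price) for _ in range(n_nodes))
        price = max(1e-6, price + step * (total - capacity))
    return [local_rate(price) for _ in range(n_nodes)]
```

For log utilities the optimum is an equal split (capacity / n_nodes), which the price iteration converges to without any node ever seeing the full problem.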
And then this control framework wirelessly sends the information to the various drones. The drones are connected to software-defined radios, B205 minis, and they have some specific modules that they can control. So they have flight control functionalities.
And in the drone protocol stack, you have a protocol repository. Then you have some numerical solution algorithms that can be executed to control the behavior of the drone, and a data plane that is connected to the USRP driver to provide all the networking functionalities in software. We also considered what we call here, for lack of a better term, layer zero. That is a motion layer that is connected with the flight controller. OK? And the specific hardware prototype that we utilized for this work is based on an Intel Aero drone.
This has been discontinued, unfortunately. They were nice pieces of hardware. You could control quite a few things and they were quite inexpensive. So we bought quite a few in the lab, and we were sad to see them discontinued. But then we connected them with a B205 mini.
Basically, the protocol stack was completely implemented with custom libraries and interfaced with GNU Radio for the physical layer. Now, to give you a sense of some of the things that you can do with this framework. We've tested it in indoor scenarios, right? Considering eight drones. The control parameters are the transmission power, the routing tables, and the congestion window at the transport layer.
Here we didn't consider the MAC layer. The MAC layer can also be controlled with this framework, but it's very hard to implement in real time with SDRs and the little computational power that you have on drones. It's quite hard today to run meaningful experiments in which you control the MAC layer as well. And the control schemes that we compare are SwarmControl, which is basically the entire framework, and best response, in which, instead of developing automatically these distributed control programs, we just let each drone greedily optimize the parameters in a completely independent way.
And no control, which basically uses what we found to be good average parameters. And again, in these experiments we controlled the physical, routing, and transport layers. The position of the different drones in the various scenarios is fixed. We tested them in an indoor space in our lab, as well as in a former church that has sort of an open indoor space.
And there are multi-hop flows that go from sources to destinations and that are also interfering with other multi-hop flows. The details of the various settings used are here. For example, in these scenarios that I mentioned earlier, you see represented here the useful signal as well as the interference.
Comparing SwarmControl, best response, and no control, you see that depending on the scenarios, you get pretty significant improvements in the performance by using this more flexible control framework. Now, maybe more importantly from my perspective, and from the perspective of our sponsor, yes, you can run these experiments in which you obtain good optimized performance. But the most important thing is that you can completely change the behavior of the network, mind you, by changing one line of code. OK? So now if I go to my centralized abstraction, instead of saying that I want to maximize my sum rate, I modify my objective and I say now I want to minimize the sum power. The SwarmControl framework generates different algorithms that optimize the behavior of the network in a completely different way. What you see here is a comparison of a number of parameters, like throughput, transmission power, et cetera, and TCP rate, for two different situations.
There are two sessions that are operating in parallel, and then at some point, one of the two sessions stops transmitting and the other one continues. Now, if the objective is maximizing the sum rate, the session continues to transmit data at somewhat high power to maximize the throughput. But if instead my objective is to minimize the sum power of the nodes, with a constraint on the minimum throughput, what happens is that instead of transmitting at a high power, the second session, you see here, has a completely different behavior. It transmits at a very low power, to provide the minimum throughput that it's programmed to have. OK? This long discussion is just to say that what this framework provides is a flexible way to control the behavior of a distributed protocol stack by changing some simple parameters in a centralized abstraction. Right? This is another experiment that was done in an anechoic chamber that we have at Northeastern.
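The "one line of code" change mentioned above, sketched with a hypothetical problem-definition API: only the objective string changes, and regenerating the distributed algorithms from the new problem is the framework's job, not the operator's.

```python
def make_problem(objective):
    # Everything except the objective stays fixed; the minimum-throughput
    # constraint is what keeps the min-power variant from going silent.
    return {"objective": objective,
            "constraints": ["throughput >= min_rate"]}

max_rate = make_problem("maximize_sum_rate")    # sessions push high power
min_power = make_problem("minimize_sum_power")  # sessions back off to the floor
```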
It's an interesting facility. Basically, imagine the size of a theater. In fact, it was a theater in the past. It's 50 by 50 by 22. And it's been equipped to be a complete anechoic chamber.
OK? So you have foam cones on the ceiling and on the walls that absorb the signals. So you're not going to have multi-path propagation. That's good for doing wireless experiments if you want to understand the behavior of what you're doing without having a lot of interference that you don't know where it's coming from.
And it also has a number of antennas connected to software defined radios on the ceiling. So we have 16 antennas on the ceiling that are connected to software radios and computational power. A number of cameras, an OptiTrack tracking system, can give you the coordinates of the various nodes with very high precision. And it's a Faraday cage, so it doesn't get interference from outside.
And we have tested a couple of different drone scenarios with flying drones in this environment that you see represented here on the right. And we've done experiments in the chamber in which, in addition to the various layers of the protocol stack, we also controlled the motion functionality. Right? So now we're able to modify the position of the drones to maximize or minimize, depending on what performance objective you choose, the behavior of the network.
OK? So again, these are the plots. But what we're seeing is that the degree of freedom that you obtain from the motion gives you the opportunity to obtain even higher increases in throughput. The problem in this example is, again, max sum throughput, with two different flows that are being transmitted over multiple paths in the network. Some flying drones, some static drones. And maybe you see it better from this little video here, which shows an experiment in exactly the scenario that I described. So the idea is you choose your optimization problem.
And then in a couple of seconds, the framework generates the control problems and sends them to the drones. And then the drones that need to take off take off. And now they generate the mesh and they start transmitting data. So there's different flows that are being transmitted in parallel on multi-hop paths.
And the UAVs start observing the environment, executing the algorithms that they have received, and controlling the transport rate, the routing tables, the transmission power, as well as their location, to maximize the network throughput. And what you see here, and you'll see it better in a minute from a different angle, from a different camera, is that the drones that belong to different flows start separating from one another. Right? So they get far away from one another, the drones at the top from the drones at the bottom, to minimize the mutual interference between the various flows. OK? And at some point, they converge to a more stable state where they have a high level of throughput. And clearly, it's an experiment in a somewhat controlled environment. But I believe it gives you an idea of what is the potential of this framework and where this can go.
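The separation behavior in the video can be caricatured with a tiny Python sketch. This is purely illustrative, a toy model with a made-up utility, not the actual distributed algorithms the framework generates (which also adapt transport rate, routing, and power): each relay greedily moves in whichever direction improves a locally measured utility that penalizes proximity to a relay of the other flow.

```python
class UAVRelay:
    """Toy relay that adjusts only its (1-D) position."""
    def __init__(self, x):
        self.x = x

    def utility(self, other_x):
        # locally measured utility: less interference with more separation
        return -1.0 / (0.01 + abs(self.x - other_x))

    def step(self, other_x, delta=0.1):
        """Greedy local move: keep whichever small step improves utility."""
        base = self.utility(other_x)
        for move in (delta, -delta):
            self.x += move
            if self.utility(other_x) > base:
                return       # improving move: keep it
            self.x -= move   # otherwise revert

# Two relays serving different flows, starting close together.
a, b = UAVRelay(0.0), UAVRelay(0.1)
for _ in range(10):   # a few control iterations
    a.step(b.x)
    b.step(a.x)
# The relays drift apart, the same qualitative behavior as the
# drones of different flows separating in the experiment.
```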
Now, what is a clear limitation of this framework? A clear limitation is that the performance that you obtain is as good as the models, and these are deterministic optimization models that are sort of hard-coded in the framework. OK? So the decomposition happens based on models that we know to be good and that are implemented in the framework. But the performance that you obtain in a real environment is as good as the models that you implement in the framework. There's no way around that. Right? And when I talk about optimality, I need to qualify that, because there are some problems, some models, that our framework knows how to decompose in a way that will lead to optimal performance.
There are some models for which we don't know how to obtain, maybe it's not even possible to attain, optimal performance in a distributed way. OK? So we obtain a performance that is, indeed, sub-optimal. In the experiments that we've done, it's usually better than if you don't do anything, but it's not optimal. So what we've been doing, and this is work in progress, we don't have any publications on this at this point, is something that we're working on as we speak.
We want to extend this SwarmControl framework to basically be able to decompose our optimization problem and provide control by means of data driven algorithms. And one key challenge is, again we want to move away from the model-and-optimize approach. Of course, we don't want to do worse than modeling.
So modeling is sort of our baseline. But we want to see if we can do better by taking data and using data to generate controllers that will perform better than our model-based approach. OK? And so the idea is to improve our SwarmControl framework by including a training phase in a virtual environment. And then the last step, once we've been able to train agents, is to send the agents to control the programmable protocol stack. Now, a major problem in this phase is that the battery life of drones is limited. It depends on the drones, it depends on what you're trying to do with them, but it's usually tens of minutes.
Right? And collecting data to train complex agents for sequential decision making in complex networks is something that requires a lot of data. OK? So how do you collect these large amounts of data, when it takes one person to fly one drone, and you can fly the drone for 20 minutes, and then you have to wait a long time to recharge the batteries? So we have been working, and this is work done with sponsorship from the Air Force Research Laboratory, to develop an emulation environment that emulates the drone stack plus the wireless channel, where we can train multi-agent deep reinforcement learning models on the emulator itself. So we are able to generate networks, and when I say networks here I mean deep reinforcement learning networks, that perform well in the emulator. And once we've trained these deep reinforcement learning agents, we use them on real drones and learn further from real environments, in what is sometimes known as transfer learning.
You train the agent in a certain environment, then you move it to a different environment and you continue to train it. So this is what, I'm not going to go into all the details, but basically we have developed an emulation architecture that is based on CORE and EMANE.
EMANE is an emulator that was developed by the US Navy. And it's basically a pretty sophisticated network emulator in which the various nodes in the network are implemented as virtualized environments, as individual containers. So they have a complete protocol stack that is fully executed. And we have ported the entire drone programmable protocol stack into EMANE.
And then the wireless channels are emulated in EMANE. And we're using this environment to train. And we have generated a number of different techniques to train the multiple agents in this virtual environment. And we have developed techniques to parallelize the training, so that we are able, with one specific environment, to train multiple different reinforcement learning agents, and then average the training of the various agents to obtain one that performs well in multiple environments.
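The "train many agents in parallel, then average" idea resembles federated averaging. Here is a minimal, stdlib-only Python sketch of just the averaging mechanics; `train_agent` is a stand-in stub (noisy gradient descent toward a per-scenario optimum), not the actual multi-agent DRL training that runs inside EMANE:

```python
import random

def train_agent(env_seed, n_params=8, steps=100):
    """Stub for DRL training in one emulated scenario: returns a
    'trained' weight vector pulled toward a scenario-specific optimum."""
    rng = random.Random(env_seed)
    target = [1.0 + rng.gauss(0.0, 0.1) for _ in range(n_params)]
    w = [0.0] * n_params
    for _ in range(steps):
        # noisy gradient step toward this scenario's optimum
        w = [wi - 0.1 * ((wi - ti) + rng.gauss(0.0, 0.01))
             for wi, ti in zip(w, target)]
    return w

# One agent per emulated environment (trainable in parallel) ...
agents = [train_agent(seed) for seed in range(4)]

# ... then average the trained weights into a single agent that
# should perform reasonably across all the emulated environments.
global_agent = [sum(ws) / len(ws) for ws in zip(*agents)]
```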
Anyway, this is again a work in progress. The results so far are pretty promising. We have specifically looked at the max sum throughput objective, and we see that we can obtain throughput that is significantly higher than with other classical strategies. But the objective is to be able to port this testing from the emulation and training environment into real operational environments with drones and software defined radios. And this is just a very, very simple demo of the deep reinforcement learning. But again, this is an emulated environment.
So it's a cartoon, in a sense, but you see that there are two relays, the blue nodes, that are learning to move themselves to the optimal position to maximize the throughput of a multi-hop network, with two different flows operating in parallel. OK? All right, this is the idea. I think this research at the intersection between wireless, drones, sensing, and softwarization is, to me at least, extremely exciting.
But it's not research that is easy to conduct in university environments. One of the reasons is that the testing facilities and the platforms that are necessary to do this kind of research are expensive to build. It takes a lot of manpower, and the relationship between effort and reward is often not the best. As Shyam mentioned at the beginning of the presentation, I've been involved with a program that is spearheaded by the National Science Foundation that is called Platforms for Advanced Wireless Research. The Platforms for Advanced Wireless Research program is trying to create shared community platforms, testbeds for experimentation in various different areas of wireless. And I wanted to just say one thing.
One of these platforms that is being-- it was awarded about a year ago, and it's now being deployed and developed. It's called AERPAW and it's being developed in the towns of Raleigh and Cary in North Carolina. It's a consortium of universities and industry players that is led by North Carolina State University.
[INAUDIBLE] This is going to be specifically focused on the nexus between drones, softwarization, and new advanced vertical applications related to drones. So I invite you, if you're interested in this area, to follow what happens with AERPAW. It should become available for experimentation over the summer. And you would be able to find a lot of different toys, a lot of different base stations. And the idea would be that you can basically generate your experiment.
Define your experiment, define the software and the scenarios for your experiment. And there will be a bunch of folks in North Carolina State that will conduct the experiment for you. Or you can, obviously, visit them and conduct experiments in person. But there's going to be different modalities of interaction with them.
And this concludes my talk and I'll be more than glad to answer any questions. SHYAM GOLLAKOTA: Looks like we have a number of questions in the chat. TOMMASO MELODIA: Oh my gosh.
SHYAM GOLLAKOTA: Thanks a lot, Tommaso, for giving this talk to us. So we'll start with a couple of questions. Svesha has a question about, in the videos that you showed, the drones are tethered with wires. Are they for power or are they for control of the drones themselves? TOMMASO MELODIA: Neither one. So I showed two videos, right? The second one was an outdoor video in which the drones had tethers.
Those were for power. They had a 60 gigahertz base station, which is very power hungry, so the power was-- in the indoor video, those wires are only for safety, basically. Because there are people in the room, they are there to prevent the drones from going, you know, where they shouldn't.
But yeah, no. The control and the communication is all happening wirelessly. SHYAM GOLLAKOTA: So is there a path for-- you said that 60 gigahertz millimeter wave radios or base stations are power hungry, but they also provide high bandwidth for data transmission.
So is there a path for making them untethered if you're using 60 gigahertz base stations on these drones? TOMMASO MELODIA: Well, I would say that will depend. That specific base station we've used, Terragraph, we could not find a way to operate without the tether on those drones. But in general, yes. It's a matter of identifying base stations that have-- and this is going to be a challenge for a while with millimeter wave, because the amount of power that they consume is certainly an issue. But there are improvements happening in the industry these days.
In sub-six gigahertz, I don't think it's a problem. There's plenty of radios and base stations that you can put on drones and not have any specific problem with power. SHYAM GOLLAKOTA: So that brings up a related question which Svesha had. Can you speak a little bit about the constraints of the drone itself, the payload and the power constraints? The drones you used and the Intel Aero bots take a significant amount of payload. But looking into the future, what do you think about scaling this down to maybe smaller drones? Maybe there's no need to scale to smaller drones, but can you talk more broadly about the weight constraints and the power constraints going forward? TOMMASO MELODIA: Yeah, so again, there's a number of different-- you can see my screen now, right? SHYAM GOLLAKOTA: Yeah. Yeah, yeah.
TOMMASO MELODIA: So these drones here, these are DJI M600. I forgot the exact number, but they can carry a few kilos. So the drones that cost a few thousand dollars, their size is something like this.
They can carry a few kilos, and can fly for, I don't know, 40 minutes, something like that. There's a lot that you can do with those. The Intel Aero, these have been discontinued, sadly. But they were in a sort of different space, about $1,000. So I would say 5, 6x less than the DJI M600.
They could carry, I believe, a little less than 1 kilo, OK? So once you put smaller SDR and some sensors and maybe antennas and a few other things, you're pretty much done. So you don't have a lot of capabilities of carrying. They were appealing for their programmability, the fact that they had an Intel board already on board, so you didn't have to do a lot of engineering to add additional boards. And also the fact that, being inexpensive, you could get quite a few and get larger networks without significant expense.
But I would say that, for most of the applications that I can think of, carrying a few kilos is quite a bit. Being able to carry a base station and a few sensors and a board, you don't need much more than that. In fact, you can put an additional board with GPUs if you want to do AI.
And you're fine. In fact, we've done it in some other cases. So yeah, I would say with drones in that space, there is a significant limitation for payload. Of course-- SHYAM GOLLAKOTA: I want to change the topic a little bit, to the composition of the distributed algorithms. The first question from Don is, how are the algorithms transferred across the network, and how reliable is it? Is it using the 60 gigahertz,
is it using some other-- what is the transfer mechanism? TOMMASO MELODIA: Yeah. You have software defined radios, so you can basically choose the-- but that's a good question. It's not something that we have put a lot of effort into investigating, but it is a good question. I mean, if you need to be able to do that, you need to have either a reliable control channel or a reliable link that you can use between your centralized controller and at least one connected drone.
So that is an issue which, in a practical application, you would want to investigate, to make sure that you have a super reliable link to be able to do that. But it's not something we put a lot of effort into. We just used the channel that we had and we distributed the algorithms there. SHYAM GOLLAKOTA: So a little bit more about the distributed nature of this. It seems pretty much like a magical thing, because it's very powerful, honestly. So can you talk a little bit about-- not all operations, I mean. You are able to provide some optimization functions that can be decomposed into different pieces of code.
They are distributed on each of these drones. I'm assuming that there are certain optimization functions which cannot be decomposed that way. So can you speak a little bit more about what classes of things can be done in the current framework versus what cannot be done? TOMMASO MELODIA: Yeah. So, if the control program that we have defined can be expressed as, basically, a convex optimization problem--
There's some pretty sophisticated machinery that you can use to decompose them, and in many situations, be sure that you would get the optimal solution with the right number of iterations and exchange of messages between neighbors. OK? Now, for non-convex problems, there are still things that you can do to decompose the problem. OK? And one approach is, for example, to do some partial linearization or partial convexification of the problem.
And then decompose that. There are other approaches. You don't have a guarantee that you would get the optimal solution. OK? So that's, in a sense, a limitation. Although, you know, even with centralized control, in many situations, you have the same problem. Right? You don't know how to find the optimal solution in a finite amount of time.
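For readers who want the flavor of the convex machinery being described, here is a minimal Python sketch of dual decomposition on a toy network utility maximization problem: two flows with log utilities sharing one link of capacity C. The specific problem, capacity, and step size are illustrative assumptions, not the framework's actual models. Each source solves a purely local subproblem given the link "price", and the link updates its price from local information only; for this convex problem, the iteration converges to the global optimum.

```python
C = 2.0        # capacity of the shared link (assumed)
lam = 0.5      # price (dual variable) of the shared link
step = 0.1     # subgradient step size
x = [0.0, 0.0] # rates of the two flows

for _ in range(500):
    # Each source independently maximizes log(x) - lam * x  =>  x = 1/lam
    x = [1.0 / lam, 1.0 / lam]
    # The link raises its price if oversubscribed, lowers it otherwise
    lam = max(1e-6, lam + step * (x[0] + x[1] - C))

# Converges to the fair optimum x1 = x2 = 1.0 with price lam = 1.0
```

The design point is that no node ever needs global knowledge: sources see only the price, and the link sees only the total traffic through it, which is exactly the kind of neighbor-message exchange described above.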
So that's, in a sense, an unsolved problem that we have to deal with. What we would like to do is what I described at the end: with the deep reinforcement learning component, we can basically learn from the environment complex functions and complex realities that could be lost with the deterministic decomposition that we do now. We don't have a comparison at this point, so I'm unable to tell you whether we can indeed do better than that. What I can tell you is that we can decompose a broad class of problems, for sure.
And for some of them, we can tell you that we can decompose them optimally. For others, we don't know. SHYAM GOLLAKOTA: So on a related note, on the same topic, Don had another question on the training environment. How do you evaluate the accuracy of your system? Is it just solving for a single metric, or is it multidimensional, in terms of the various objectives you want to meet? TOMMASO MELODIA: We would like to meet various objectives. At this point, we've looked at throughput. That's just what we got to. But in terms of how we evaluate the-- well, again, it's hard to say.
Because you don't have a ground truth of what is the optimal, best possible way to control the drones in these scenarios. So you can compare with respect to other existing strategies, does that make sense? It's complicated to be able to say, for the very general class of problems we are solving, that you are doing the best possible thing in absolute terms. But yeah. SHYAM GOLLAKOTA: So the other question is a little bit about scalability. Alex had this question: the prototypes right now are around seven to eight drones at the same time.
And there's all this networking architecture being created, all these multi-hops being created. What are your thoughts on the path for scaling it to hundreds of drones? Do you need those, first? And if you need that kind of scalability, how would it scale? TOMMASO MELODIA: These are all great questions. On whether you need those, I don't know.
I'll just be very honest. I haven't been able to-- I mean, let me rephrase. I can think of military applications that would-- they could use 100 drones, for sure.
Right? Civilian applications, I'm not 100% sure. It's possible. You know, often, we don't know until somebody does it. And then we see. But I don't know.
Now, let me separate the answer in terms of what you can do experimentally and what you can do in simulation and emulation. OK? In simulation and emulation, this can certainly be scaled to 100 drones. As of today in the emulator, I think we've done something on the order of 20.
And it's primarily computational capability that has limited us, but you know, with more computational power, you can do more than that in EMANE. I don't see significant problems there. The exchange of information is localized, so that means that there isn't a global exchange of messages going back and forth that would really limit your scalability. Second part of the answer, experimentally. Doing what we did in that experiment with eight drones, in which four are flying and four are fixed, was not easy. And you need to have one person per drone, pretty much.
You don't see them because they're hiding in the corner that is not in the camera angle. But there is somebody with an emergency remote control, controlling each individual drone. It takes a very long time to prepare the drones, to do the experiments. Then they fail, something goes wrong, and then you have to redo it. It's very time consuming.
That's the thing-- and we haven't been able to do anything with more than 10 drones. SHYAM GOLLAKOTA: Is it time consuming because the drone technology is not yet there? Or is it about the networking aspects you're building? TOMMASO MELODIA: Well, I would say it's a combination. The drones are not the most reliable, per se, and they require quite a bit of maintenance and tweaking.
And then, on top of that, you're adding an additional board. You're adding a software radio which, per se, is not the most reliable thing. Right? So you're putting together multiple things that are somewhat unstable, and you're trying to put them all together in a system whose stability is what it is. OK, that's part of the nature of research.
That happens in a lot of the things that we do, otherwise you would be working on products, and not-- which is great, but it's a whole different set of-- SHYAM GOLLAKOTA: That's fine. So before we let you go, we have two more minutes. But I want to ask you a question which actually stood out to me, which is that when you put the layering in the network stack, you call mobility layer zero. This goes to Ian's question of, OK, does this framework work for other areas? Like gliders or blimps, and so on.
But making it a little broader, I wanted to hear your thoughts a little bit more on layering. In particular, do you need to abstract out mobility? Because clearly the mobility of your drone affects everything: the application layer, the transport, it affects everything. So is the right way to move forward to abstract it out and make it stable enough, like the physical layer? Or to jointly design things with the applications? TOMMASO MELODIA: Yeah, this is a great question.
So let me say, and this is something that I haven't really discussed, but there are two levels of mobility, right? There's some mobility that we can call exogenous, right? That we can't control, because it's part of the application. Maybe the drone needs to do something, and you don't have the capability to control that level of mobility. That's what happens in classical, I don't know, ad hoc networks, right? The nodes are mobile, but you don't control them.
I don't know if putting it at layer zero is the right thing to do. What we wanted to convey in doing that, however, is the fact that now, in this problem that we're looking at, mobility is endogenous, in a sense. We generate the mobility, and the mobility is at the service of connectivity, throughput, whatever we want it to be.
But it's at the service of the network in this specific framework, right? And that may or may not work, depending on the application. But this is something that is part of the networking problem for us and that we can control. And as you were saying, very correctly, it affects everything. In fact, I didn't spend a lot of time discussing that, but the degree of freedom of mobility affects the throughput even more than our control of the other layers. OK? The major effect that you see in that experiment in which the drones are separating from one another is that they are trying to reduce interference.
Yes, they are controlling their powers to reduce interference, but what really matters is the fact that they're getting far away from one another. And so that has a huge effect on the application. The throughput has a huge effect on everything, pretty much. Also, the routing is not affected in this experiment, but it could be affected by the mobility. You may want to move a node to a different position because it affects interference, but then you may have to reroute information.
So what's the net benefit of these two things? Yeah. SHYAM GOLLAKOTA: Perfect. I think we are running out of time. Thanks a lot for staying on for much longer than we agreed on.
But this is really great, because we had a really nice conversation here. And it was an excellent presentation. So thank you so much for your time, Tommaso. TOMMASO MELODIA: Thank you, Shyam.
Thank you for inviting me. It was a pleasure. And thanks everybody for listening.
I hope it was informative and enjoyable. SHYAM GOLLAKOTA: Definitely. Thank you so much, bye bye. TOMMASO MELODIA: Thank you. Have a good day.