Hello, my name is Erkki Harjula, and you are welcome to follow my talk on 6G-enabled digital healthcare, which is a highly relevant topic at the moment, as the healthcare sector is one of the key verticals benefiting from the newest developments in wireless systems. I'm working as a tenure track assistant professor in the CWC Networks and Systems research group.
And with my team, which is shown here, we are working on wireless system-level architectures for future digital healthcare. So let's start with why we are doing our work. In the coming years, the healthcare domain is facing many challenges on a global scale.
So first of all, the population is getting older, which means that more beneficiaries cause higher costs to be covered by fewer taxpayers. The lack of workforce is another challenge: for example, according to studies, 170,000 nurses in Finland are going to retire in the coming decade. Then, the growing prevalence of chronic and lifestyle diseases is a further challenge, which follows from the fact that the population is aging and lifestyles are more sedentary. Furthermore, there are different kinds of new threats, even so-called black swans, where no one can predict the global consequences or even the appearance of these events.
These appear every now and then; the COVID pandemic was a good example. The technical challenges for healthcare communication systems include, first of all, lack of integration. For example, vendor lock-in is very common these days, which makes it difficult to develop universally working services for healthcare. Then there is the complexity of technology.
It causes challenges in interoperability, usability, and maintainability, and overall these hinder the efficient delivery of healthcare. Then there is security and privacy. In healthcare systems we are dealing with sensitive patient data, and there are also critical life-maintaining functions, such as ventilation machines, so attacks on these kinds of systems are safety issues as well. Then, regulation and legislation are very strict in the healthcare domain, so all technical solutions require multistage verification before being taken into use, which delays technology adoption.
Then, cost and resource inefficiency is a known challenge of the current systems, and the importance of this point is growing due to the worsening dependency ratio that we discussed on the previous slide. Novel technologies also set ever-growing requirements, for example real-time requirements and growing data volumes. Regarding that last point, the following trends in healthcare technology development set strict requirements for the communications and computing architecture. First is the rapidly developing sensing and medical instrumentation technology.
There is wider, more detailed, and more up-to-date patient data available from this new instrumentation, which generates a need for better integration with the service infrastructure. Then there are novel user interfaces, including, for example, virtual and augmented reality, that is, extended reality applications.
These generate a need for ultra-low-latency solutions capable of running real-time user interfaces. Then AI, machine learning, big data, explainability, and so forth, used in data-driven analysis and diagnostics, generate a need for high computational performance, scalability, and interfacing to third-party systems. Okay, then let's move on to our focus areas. In a nutshell, our research focuses on developing and optimizing the communication and computing architecture for healthcare services and applications. The key goals include, first of all, ensuring the dependable flow and processing of data between the critical components of digital healthcare systems, as shown in the figure on the left.
First of all, we need to sense this data from the patients using health sensing and actuation, then communicate it through the networks and analyze it with advanced analytics. Then we need storage systems, such as electronic health records (EHRs). Finally, this data is consumed via services and user interfaces so that the key stakeholders (the patient, healthcare personnel, healthcare administration, and service providers) are satisfied. Another goal is the integration of these parts into a logically manageable distributed digital healthcare system using central technology enablers, which are shown in the lower part of the figure: for example, 5G and 6G communication technologies, edge computing, AI, machine learning, and so forth. The key technical requirements include real-time communications, high reliability and resilience, and interoperability between systems and components using, for example, open interfaces. And then, of course, sustainability, basically resource and energy efficiency, which is important today.
And then scalability, and, of course, the security, privacy, and trust that we discussed on the previous slides. One foundational cornerstone of our research is the decentralized service architecture, where services are composed as a set of nanoservices deployed onto the computing platform. By nanoservices, we mean lightweight microservices that each run a focused, granular task.
Let's take heart monitoring as an example. One nanoservice there could be the sensor, which takes the data from the patient and sends it forward into the system. Another nanoservice could be the analysis algorithm, analyzing this data and, for example, creating alarms based on it or providing more advanced functionality. And a third nanoservice could be the user interface, showing the results to the doctor or to the patient himself or herself. "Nanoservices" is our own term.
There are also other names for the same concept, such as tasklets. These nanoservices are deployed in a distributed manner, so basically any suitable node on any tier of the computing continuum can host them.
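To make the heart-monitoring example above a bit more concrete, the three nanoservices could be sketched as a simple pipeline. Everything here (the class, function names, and alarm threshold) is a hypothetical illustration, not an actual framework API.

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Nanoservice:
    """A lightweight microservice running one focused, granular task."""
    name: str
    task: Callable[[object], object]

    def run(self, data):
        return self.task(data)

def sensor_task(_):
    # Stand-in for reading heart-rate samples from a wearable sensor.
    return [72, 75, 71, 140, 73]

def analysis_task(samples):
    # Raise an alarm if any sample exceeds a (hypothetical) threshold.
    return {"max_hr": max(samples), "alarm": max(samples) > 120}

def ui_task(result):
    # Stand-in for showing the result to the doctor or the patient.
    return f"max HR {result['max_hr']} bpm, alarm={result['alarm']}"

def compose(services: List[Nanoservice], data=None):
    """Run nanoservices as a pipeline; each output feeds the next."""
    for svc in services:
        data = svc.run(data)
    return data

pipeline = [
    Nanoservice("hr-sensor", sensor_task),
    Nanoservice("hr-analysis", analysis_task),
    Nanoservice("hr-ui", ui_task),
]
```

In a real deployment each of these would run on a different node of the continuum, which is exactly what the migration and orchestration discussed next are for.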
And these nanoservices can migrate between nodes based on need. For example, if a patient is first treated at an accident site, then moved to the ambulance, and then from the ambulance to the hospital, we can have a monitoring service that follows the patient from the site to the ambulance and on to the hospital, using the hardware available at each stage. This system then needs to be orchestrated: an intelligent orchestrator takes care of the deployment, redeployment, and undeployment of these nanoservices, as shown in the figure at the bottom. Another cornerstone of our research is the three-tier edge-cloud computing continuum, on which the different components of healthcare services and applications are deployed as nanoservices. This continuum consists of cloud, edge, and local tiers, shown in the figure on the left.
The basic principle in deploying these components and tasks is to fulfill the given application requirements while optimizing system-level efficiency. One example of such a requirement is heavy computing: if we have heavy computing, the cloud tier is usually the most optimal place for it.
If we have tasks requiring low latency or network efficiency, the nodes close to the need, that is, the local and edge nodes, are the most optimal. If the application requires cost and resource efficiency, the nodes that incur the lowest cost, energy use, or, let's say, carbon dioxide emissions can be considered the most optimal. As another example, if we require high security and privacy, we need to consider nodes that have sufficient cryptographic capacity and, on the other hand, a low attack surface. The challenges here include mutually conflicting requirements.
As you can see from the requirements above, some of them can conflict with each other, so we need to deal with that in some way. Then, resources are often limited: computational and networking resources can be constrained continuously or at times. We also need to consider the mobility of the nodes and the availability of serving nodes in different situations. All in all, we need intelligent orchestration that deals with these challenges, for example by prioritizing requirements and managing the resource competition between different tasks.
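One way to picture this kind of requirement prioritization is as weighted scoring over candidate tiers. The node properties, weights, and linear scoring rule below are illustrative assumptions only, not our actual orchestration algorithm.

```python
# Toy node profiles for the three tiers (all numbers are made up).
NODES = {
    "local": {"compute": 1,  "latency_ms": 5,   "energy_cost": 1, "attack_surface": 1},
    "edge":  {"compute": 4,  "latency_ms": 20,  "energy_cost": 2, "attack_surface": 2},
    "cloud": {"compute": 10, "latency_ms": 120, "energy_cost": 3, "attack_surface": 3},
}

def score(node, weights):
    """Higher is better: reward compute, penalize latency, cost, exposure.

    The weights express how an application prioritizes its (possibly
    conflicting) requirements.
    """
    return (weights.get("compute", 0) * node["compute"]
            - weights.get("latency", 0) * node["latency_ms"]
            - weights.get("energy", 0) * node["energy_cost"]
            - weights.get("security", 0) * node["attack_surface"])

def select_tier(weights):
    """Pick the tier whose node profile best matches the weighted needs."""
    return max(NODES, key=lambda name: score(NODES[name], weights))
```

With a compute-heavy weighting this picks the cloud tier, while a latency-dominated weighting picks the local tier, mirroring the deployment principles described above.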
It also needs to continuously learn and adapt to varying requirements and resource availability. Our doctoral researcher Johirul Islam is working on this topic. An important part of our research is intelligence in this computing continuum, which basically means combining the edge-cloud architecture with artificial intelligence and machine learning.
For this, there are two viewpoints on the joint operation of the platform and these algorithms: AI for edge cloud, and edge cloud for AI. The first, AI for edge cloud, means the use of AI and machine learning algorithms to optimize the computing continuum itself. The other, edge cloud for AI, means that the computing continuum provides a computing platform for AI and machine learning algorithms related to, let's say, healthcare applications. The next slides cover these viewpoints in more detail. Following the AI for edge cloud viewpoint, we are working on a resource-aware computing orchestration concept, which manages service and task deployment in the computing continuum. It is based, first, on application and service requirements: what kind of requirements do these applications and algorithms have regarding computing performance, communication latency, and storage, what kind of security policies are needed, and so forth.
Another input is available capacity: what kind of capacity is available for running these tasks? How much CPU, GPU, or TPU capacity do we have? How much communication link capacity, what kind of storage capacity, and how much energy is available at different points of the system? Then mobility: we need to consider client-node mobility, that is, the user devices, but also serving-node mobility in new scenarios where the serving nodes can actually be user nodes. And then we need to consider qualitative data, for example, the observed performance of a link, a node, or the end-to-end (E2E) performance.
What is the resource cost at different nodes? What is the energy price at a given moment? What is the availability, and so forth? So the orchestration platform requires resource profiling and energy profiling. We need a hierarchical orchestration scheme that is able to operate in this architecture of three computational tiers, where we have local nodes, edge nodes, and cloud data centers or servers. The orchestrator components need to communicate with the orchestrators at other tiers, exchange context data, migrate tasks, and so forth.
Our doctoral researcher Afahim Sahid is working on this area. Then, according to the edge cloud for AI viewpoint, distributed machine learning and AI have high potential to optimally exploit the advantages of the different computational tiers. We can, for example, significantly improve computational performance by using parallelism between nodes at the same, or even at different, computational tiers. Another example is to deploy the algorithms as close to the need as possible, thereby improving latency and reducing the burden on network connections. Computing in local and private networks can also improve security and privacy, because the sensitive data can be kept local in these scenarios.
In particular, we are working with federated and split learning schemes. These enable privacy, since the patient data can be kept local while only the machine learning model updates are shared upward from the local level. They also enable collaboration: cross-institution learning without sharing the raw data, which improves privacy. Performance can be improved through fast local model updates and local data processing, and efficiency is improved through the reduced need for data transfers, better local resource utilization, and so forth.
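As a minimal sketch of the federated idea, each site trains on its own data and shares only model updates (here, plain weight vectors), never the raw patient data. This is a textbook federated-averaging step under simple gradient-descent assumptions, not our exact scheme.

```python
def local_update(weights, local_gradient, lr=0.1):
    """One local training step; the raw data never leaves the site."""
    return [w - lr * g for w, g in zip(weights, local_gradient)]

def federated_average(site_weights):
    """Server averages the model updates received from the sites."""
    n = len(site_weights)
    return [sum(ws) / n for ws in zip(*site_weights)]

# Two hospitals compute gradients on local data and share only updates.
global_model = [0.0, 0.0]
site_grads = [[1.0, 2.0], [3.0, 4.0]]  # computed from local patient data
updates = [local_update(global_model, g) for g in site_grads]
new_global = federated_average(updates)
```

Only `updates` crosses institutional boundaries here, which is the privacy property the slide describes; split learning partitions the model itself in a similar spirit.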
Our doctoral researcher Vilkehan Akdemir is working on this topic. Then, ensuring patient data privacy and the safety of medical health applications also requires sophisticated security management. There can be high variation in the security-related requirements. For example, on the application side, the healthcare domain emphasizes patient data privacy: we need to comply with regulations, and the safety of the patient is also important, because attacks may compromise the patient's health or even life, as well as the related data.
The security solutions also need to be aware of the basic characteristics of nodes running on the different computational tiers. For example, at the cloud tier we have high computational resources, so the cryptographic capacity is very high and we can run very advanced cryptographic algorithms.
But at the same time, the exposure to attacks is high, as there are a lot of potential nodes that can attack these public servers. In contrast, at the local tier we have lower computational resources, so we may not be able to run equally advanced cryptographic operations there, but the exposure to attacks is also lower: because we are talking about local deployments, the visibility of these local services is limited to the nodes operating in the local domain. The conclusion is that the threat landscape is highly dynamic.
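One simple way to picture this capacity-aware adaptation is to pick the strongest cryptographic profile a node can afford. The profile names and capacity units below are purely illustrative assumptions.

```python
# Security profiles ordered strongest first; the required-capacity
# numbers are invented for illustration.
PROFILES = [
    ("advanced", 8),     # heavyweight schemes feasible at the cloud tier
    ("standard", 4),     # mid-range schemes for edge nodes
    ("lightweight", 1),  # constrained local-tier devices
]

def pick_profile(crypto_capacity):
    """Return the strongest profile the node's capacity allows."""
    for name, required in PROFILES:
        if crypto_capacity >= required:
            return name
    raise ValueError("node cannot run any supported profile")
```

A real scheme would also weigh the exposure side discussed above (a low-capacity local node may tolerate a lighter profile precisely because its attack surface is smaller), but the selection mechanics look like this.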
It requires, for example, security measures that adapt to the relevant security threats and to the available capacity for security operations, and secondly, real-time handling of potential security incidents. Our doctoral researcher Ijaz Ahmad is working on this topic. Next, I will introduce a couple of our projects as case examples of the concrete output of our research. In the Business Finland funded TomoHead project, we collaborate with another research unit at the University of Oulu, HST from the medical faculty, and with the University of Helsinki, as well as a number of related companies, including, for example, Planmeca, Detection Technology, Nokia, Nvidia, and some smaller companies.
With these partners, we are working on augmenting the radiology workflow using the edge-cloud architecture. From our perspective, the project focuses on addressing the challenges of traditional computed tomography (CT) image reconstruction, and particularly the workload of this reconstruction. The traditional reconstruction workflow, illustrated on the left, has some challenges.
The first of these is the dimensioning problem. Reconstruction capacity is typically a separate computer next to the scanner. If we size that reconstruction computer for peak load, the investment cost is high. If its capacity is sized for average load, it normally serves well, but during high demand, when, let's say, many scanners require reconstruction at the same time, performance suffers. An alternative workflow today could be based on cloud-based reconstruction.
However, this is quite inefficient in its resource usage, because all the raw data, which can be on the order of gigabytes per scan, needs to be sent to the data center. What we are proposing in this project is edge-cloud-based reconstruction. We have a scalable local swarm of GPU-enabled nodes forming a shared pool for reconstruction tasks, serving multiple CT scanners. During peak load, the workload can be offloaded to the MEC or cloud tier, as shown in the figure on the right, so all three tiers are available for these tasks. Because of this ability to offload to the edge or even to the cloud level, we get a system that is very scalable and very resource-efficient at the same time.
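The overflow behavior of the shared pool can be sketched as a toy placement rule: reconstruction jobs fill the local GPU pool first, and anything beyond its capacity is offloaded upward. The pool size and job names are hypothetical.

```python
def assign_reconstruction_jobs(jobs, local_slots):
    """Place CT reconstruction jobs: local GPU pool first, then offload.

    Returns a mapping from job name to either 'local-gpu-pool' or
    'mec/cloud' (the latter used only during peak load).
    """
    placement = {}
    for i, job in enumerate(jobs):
        placement[job] = "local-gpu-pool" if i < local_slots else "mec/cloud"
    return placement
```

A real orchestrator would of course consider queue lengths, transfer costs of the raw data, and deadlines rather than simple arrival order, but this captures why the system stays both scalable and resource-efficient.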
In the Business Finland funded Eware 6G project, we work with VTT towards more sustainable wireless networks, which has also been identified as a central goal in the healthcare domain. In this project, we focus on optimizing the end-to-end energy efficiency of communication and maximizing the use of sustainable local energy sources through energy-aware solutions in communications and computing. The project considers various components of the communication path, including intelligent end-user and IoT devices, and the mobile radio access network, including the edge computing nodes.
It also covers the cloud backend, renewable energy sources that are also available locally, and the energy measurement infrastructure. Our team's special focus is on energy-aware task deployment and orchestration, which is exemplified in the figure on the right. Here we have different types of tasks: some are time-sensitive, and some are not that delay-sensitive. Of course, we need to deploy the delay-sensitive tasks for execution immediately.
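This deployment logic, running delay-sensitive tasks immediately and pushing delay-tolerant background tasks to the cheapest-energy hour, could be sketched as follows. The hours and energy prices are invented for illustration.

```python
def schedule(tasks, prices, now_hour):
    """Toy energy-aware scheduler.

    tasks:  list of (name, delay_sensitive) tuples.
    prices: mapping hour -> energy price for that hour.
    Delay-sensitive tasks run now; others wait for the cheapest hour.
    """
    cheapest = min(prices, key=prices.get)
    plan = {}
    for name, delay_sensitive in tasks:
        plan[name] = now_hour if delay_sensitive else cheapest
    return plan
```

For example, with a high daytime price and a low night-time price, an alarm task runs immediately while a batch analytics task is deferred to the night, which is exactly the trade-off the figure illustrates.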
But if we have tasks that can run in the background, we can even consider delaying them to run at times when the energy cost is lower, for example during the night. Or we can manage task deployment based on the energy profiles of the alternative nodes, which is another dimension of this optimization. Then, in the European Union funded Hola 5G project, we participate in building a private 5G network for research purposes at Oulu University Hospital.
The consortium includes Boldyn Networks, which is building the network infrastructure based on Nokia hardware and software. From the University of Oulu, we have CWC, but also a research unit from the medical faculty. We also have Pohde as one of the partners, and WICOAR, which is providing end-user devices and systems to evaluate what we are developing. The overall motivation for this project has been bringing a private 5G network into the hospital: to bring the benefits of the newest radio access network (RAN) technologies to hospital use, while maintaining full control over the network management and the related policies.
And also to define the technical requirements for these private 5G networks to support future hospital use cases. As for the specific goals we have in the project: first of all, we are conducting radio frequency measurements to detect potential interference between the 5G radios and the hospital instrumentation, and also to dispel the unfortunate rumors about 5G safety issues by showing that indoor 5G radios stay within the safe zones of the radio spectrum.
Then there is the system evaluation, which is the main role of our team. We are evaluating the computational infrastructure from the viewpoints of performance, reliability, and security. For example, we are switching off base stations, generating fault conditions, and creating artificial denial-of-service attacks to see the system's response to these kinds of unexpected situations. In other words, we are going to stress test this private 5G deployment. Another goal is to evaluate the feasibility of the overall 5G standalone installation for extended reality and wearable use cases in a real hospital environment; WICOAR is providing this part of the study.
And then, overall, to create guidelines and instructions for interference and exposure limits in the hospital environment. One important thing here is that after the project, this private 5G standalone infrastructure will remain on the hospital premises for future research purposes. So if there are continuation projects, or new projects arising from the needs of, let's say, industry partners, we are able to use this private 5G installation at the hospital to provide real-world use cases. In addition to the current project activities, we are continuously exploring future topics. One of these is using large language models and collaborative generative AI, meaning the collaboration of humans and AI to generate services.
So far, such services have included, for example, generating text or images from human language input, but these services could also be something else, and that is what we are exploring. We are considering, among others, a scenario where we define a digital care pathway using this co-gen AI. On this slide, we see an example scenario related to emergency response.
Here, the attending physician can define a digital care path, including on-site first aid, ambulance transportation to the hospital, and the hospital care, including critical care and recovery at the ward. This scenario is illustrated in the figure on the left. In this scenario, an intent-aware orchestrator can deploy the needed service components for each phase, so that the digital care follows the patient through the available devices and instrumentation along this care path.
These service components, depicted as hexagons in the figure, can be, for example, medical sensors in an ambulance reserved for the patient during the ambulance transportation, recording, analyzing, and sending data to the hospital, and so forth. In this scenario, after the patient has arrived at the hospital, the service component is undeployed from the sensor in the ambulance and released for the next use, for example the next ambulance ride for the next patient. This is illustrated by the crossed-over hexagons in the middle part of the figure. Similarly, the next service components can then be deployed on the hospital's ICU instruments, and so forth.
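The deploy/undeploy cycle along the care path can be sketched as a tiny state machine. The phase names follow the scenario on the slide, but the class and method are hypothetical illustrations of the intent-aware orchestrator, not a real API.

```python
# Care-path phases taken from the emergency-response scenario.
CARE_PATH = ["on-site", "ambulance", "hospital-icu", "ward"]

class Orchestrator:
    """Toy intent-aware orchestrator: one service component follows the
    patient, being undeployed from each phase as the next one begins."""

    def __init__(self):
        self.deployed = {}  # component name -> current care-path phase

    def advance(self, component, phase):
        """Deploy on the new phase; return the phase just vacated
        (whose hardware is freed, e.g. for the next ambulance ride)."""
        assert phase in CARE_PATH, "unknown care-path phase"
        previous = self.deployed.get(component)
        self.deployed[component] = phase
        return previous
```

For instance, advancing a monitoring component from the ambulance to the ICU frees the ambulance sensor for the next patient, matching the crossed-over hexagons in the figure.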
Okay, this was an illustrative example of the kinds of uses we could have for these new types of AI-related technologies in digital healthcare. To finish, here are a couple of further medical scenarios, just to give some understanding of how we are deploying the technologies we are developing into real-world healthcare and medical use cases. Surgical navigation is one example, where a real-time extended reality (XR) user interface is used for precise navigation of surgical instruments, enabling better treatment outcomes and less tissue damage through better precision and accuracy. Here we make extensive use of edge computing, which provides the low latency needed for a truly real-time user interface for this navigation. Another example is clinical monitoring, where continuous, multi-parameter, non-invasive wearable monitoring is used to detect clinical deterioration. The analysis functions are flexibly deployed on the edge-cloud architecture to achieve high reliability, performance, and efficiency.
Here, for example, the analysis can, through sensor fusion, create an overview of the patient's status, show the personnel only the meaningful data, basically to prevent flooding the personnel with irrelevant data, and give alarms if the overall condition of the patient requires attention. Okay, and that concludes my talk.
If you would like to hear more about our research, feel free to contact me. There is my email address, and through the links below you can find out more about our research. Thank you.
2025-01-20 19:26