AI-driven IGT Technologies: Augmented Reality for Therapy Delivery with Nissan Elimelech (Augmedics)


- Okay, so good evening everyone. Welcome again to the Spring 2021 INOVAIT Lecture Series. Who wasn't excited to play Pokemon Go when it came out in 2016? There is a common belief that augmented reality, or AR, was invented only for gaming, but the technology has quickly entered many industries, including real estate, retail, education, and even the automotive industry.

And as we will see today, AR has a growing place in healthcare, with significant potential to completely change everyday medicine for physicians and patients. My name is Ahmed Nasef. I'm the Programs Manager for Training, Outreach, and Networking at INOVAIT, and it's my privilege and pleasure, on behalf of the INOVAIT team, to welcome you here today. I'm very delighted to introduce our guest speaker tonight, Nissan Elimelech, the Founder and Chief Executive Officer of Augmedics.

He will be providing an overview of the current state of artificial intelligence in image-guided therapy, with a special focus on combining AI and augmented reality to advance image-guided surgery. But before getting into the specific introduction of today's session, I'd like to make a couple of housekeeping announcements. For those who missed last week's talk by Dr. Anne Martel,

it has been archived on YouTube, and we will shortly be archiving it on our website as well. In the meantime, I'll be posting the link in the chat, but I highly encourage you to subscribe to our e-newsletter, since this is the main way we communicate with our followers to provide updates on our news and programs. Attendance is free for these sessions, but registration is required.

You can now register through our website for the next two sessions. Registration for the other lectures will be available on our website soon as well. As for the format: each lecture consists of a 45- to 50-minute talk, followed by a 10- to 15-minute Q&A session. Please use the Q&A chat box in Zoom to post any questions you have for the speaker.

Please don't use the general chat box, since it's not being monitored by our panelists. We will be recording all of these lectures, and they will be available for later viewing with French subtitles. If you have any questions, please feel free to email me or the INOVAIT team.

So in today's session, Nissan will discuss key points in starting a company and commercializing a technology in the AR and surgical navigation space, drawing on his experience founding Augmedics, a company that is developing the first AR navigation technology to be used in surgery, allowing surgeons to see the patient's anatomy through skin and tissue. A truly exciting and phenomenally disruptive technology.

Some of the topics we will hopefully cover today include the role of AI in image-guided surgery and surgical navigation, the current and potential AR applications and technological developments in image-guided surgery, and some of the potential commercialization gaps, challenges, and rewards. Our guest speaker, Nissan, is a visionary entrepreneur with 10 years of experience in various medical device markets. Again, he's the Founder and Chief Executive Officer of Augmedics. Prior to Augmedics, he worked at Medtronic in the spine surgery unit and at Neopharm in the general surgery unit. He also co-founded, and was the inventor at, another medical device startup called Medizn, which developed a smart surgical hernia mesh. Nissan holds a biomedical engineering certificate and also holds an MBA.

So without further ado, please join me in welcoming Nissan. Nissan, thank you very much for being our guest speaker this evening. We know you are currently in Israel, where it's late right now, and we really appreciate you taking the time to deliver this lecture for us.

We are very much looking forward to your talk. So, passing this over to you. - Thank you very much, Ahmed. Thank you very much for inviting me. It's really a privilege and honour to present Augmedics to all of you here. Let me share my screen and we'll jump right into the presentation.

Okay. So in my talk, I'm gonna walk you through the stages of Augmedics, the company that I founded in 2014. As Ahmed said, I'm the CEO and the founder, and I have a background in biomedical engineering.

I worked many years in the medical device industry, where I learned about technologies and about surgeons' needs for new technologies in their practice. In this presentation I will show you the technology that we're using and how we developed it, and also explain the artificial intelligence that we used to develop our AR product. Our product is called the XVision.

It's actually the first, and currently the only, augmented reality surgical navigation system in the world. We are aware that there are some competitors out there trying to trail us; we know we're not gonna be the only one forever, but we feel satisfied that we reached the market first. A surgical navigation system is basically a stereotactic system that was invented about 25 years ago, mainly for brain surgery. Companies like Brainlab and Medtronic started with this technology to treat patients and perform brain surgeries with higher accuracy. There's a very high adoption rate in brain surgery.

Nearly 100% of surgeons actually use stereotactic systems to navigate their instruments inside a patient's brain. An infrared camera tracks the surgeon's instrument relative to the MRI or CT scan of the patient, and the surgeon needs to look at a distant screen to navigate the instrument inside the brain. There are also other applications for navigation: spine surgery, joint reconstruction, knee and hip replacement, ENT surgery, and basically anything where the surgeon would like better accuracy when navigating instruments and implants inside a patient.
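For readers curious about the geometry behind that tracking step: mapping a tool tip measured in camera coordinates into the patient's CT coordinates amounts to chaining rigid transforms through a reference marker. A minimal sketch in Python; the matrices and numbers here are invented for illustration and are not from any real system:

```python
def mat_vec(T, p):
    """Apply a 4x4 homogeneous transform T to a 3-D point p."""
    v = (p[0], p[1], p[2], 1.0)
    return tuple(sum(T[r][c] * v[c] for c in range(4)) for r in range(3))

def compose(A, B):
    """4x4 matrix product A @ B (apply B first, then A)."""
    return [[sum(A[r][k] * B[k][c] for k in range(4)) for c in range(4)]
            for r in range(4)]

# Invented example transforms:
# marker_T_cam: pose of the camera frame in the reference-marker frame.
marker_T_cam = [[1, 0, 0,    0.0],
                [0, 1, 0,    0.0],
                [0, 0, 1, -300.0],
                [0, 0, 0,    1.0]]
# ct_T_marker: marker pose in CT coordinates, found during registration.
ct_T_marker = [[0, -1, 0,  10.0],
               [1,  0, 0,   0.0],
               [0,  0, 1, -50.0],
               [0,  0, 0,   1.0]]

# A tool tip seen 250 mm in front of the camera, mapped into CT space:
ct_T_cam = compose(ct_T_marker, marker_T_cam)
tip_ct = mat_vec(ct_T_cam, (0.0, 0.0, 250.0))  # -> (10.0, 0.0, -100.0)
```

The navigation view then draws the virtual tool at `tip_ct` over the CT-derived anatomy; a real system refreshes this chain every frame as the marker and tool move.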

Over the years, there has been very good experience with these systems. They were proven to be very effective, with a high success rate; they reduce procedure time and, in some cases, reduce radiation, because surgeons don't have to take fluoro shots, or x-ray images, during the surgery. They can just look at the screen and navigate their instruments. And all the big companies, such as Medtronic, Stryker, and Brainlab, have navigation systems. However, these systems are very expensive.

They cost about a quarter to half a million dollars, and they're not very comfortable to use. When surgeons use a system like that, they need to look at a distant screen to navigate an instrument, which causes an attention shift. It also takes a lot of time to learn how to use these systems: there's a long learning curve, and there's line-of-sight interference between the camera that is tracking the surgical instrument and the instrument itself. Basically, somebody who passes their hands through just blocks the line of sight.

And then no navigation happens. In spine surgery, there's a very low adoption rate, and over the years people have tried to figure out why. The AO Foundation did a study that asked about 800 spine surgeons whether they held a positive opinion of computer-assisted surgery, a navigation system for the spine, and 80% held a positive opinion of computer-assisted surgery.

However, most of them don't use navigation, because it doesn't meet the surgeon's expectations: it is too expensive, it demands a long learning curve, and it needs better integration into the existing workflow. So if you look at the real world, only 15% of spine surgeons actually use navigation during surgery; the others just use fluoroscopy, x-ray radiation, during the surgery, and the rest operate purely freehand.

No radiation, no monitoring. They know where to place the implants, the screws, inside a patient's back, and that's what they do.

This results in between 10 and 23% inaccurate screw positioning when they treat spine patients. And there are a lot of spine surgeries performed in the United States and in the world in general. That's actually the scene that I witnessed myself when I was working at Medtronic. As I said, I worked many years in the field, working closely with surgeons, and that's what I saw when I was at Medtronic selling these kinds of systems.

When surgeons need to navigate the implant, the screw, they need to take their eyes away from the patient and look aside at a distant screen to see the visualization of the virtual implant or tool relative to the patient's CT scan. As I said, there's a long learning curve until they understand what they see and can coordinate between their eyes and their hands. Every time they take their eyes away from the patient to look at the screen and back, it causes an attention shift. I also mentioned the line-of-sight interference: the camera is positioned relatively far from the patient, and if anybody passes their hands or body through and blocks the line of sight, the navigation stops and you don't see anything on the screen. And on top of all this, the systems are just too expensive to buy. Not many hospitals, or countries, can afford a navigation system.

All of that brings us to a very low overall adoption rate of 15%. So my vision was x-ray vision. That's what I thought would be the solution to all this.

If surgeons could see the patient as if they were fully transparent, that would probably lead to a higher adoption rate. So I pictured surgeons as Superman, able to look through the patient's skin and tissue and see the anatomy of the spine and other anatomical landmarks directly, without taking their eyes away. That is the moment I realized that there's a need, and that there's a way to solve it with augmented reality.

The XVision is what Augmedics developed. That was the vision that I had in 2014. I'll take you six years forward now: this is the system that we have developed to give surgeons that x-ray vision. It's basically a self-contained navigation system that has all the components of a traditional navigation system built into a very small headset. We have the tracking unit in the front piece, which is highly accurate: it's an infrared tracker with 0.3-millimetre accuracy.

We have the processor, at the back of the head, which processes all the data and all the tracking information received from the markers on the tools and on the patient. It's a fully wireless system, so surgeons can walk around the OR and move from side to side of the patient without being tethered to anything. And we have our own see-through lenses, which are of course the augmented reality portion of the headset; they project all the images directly onto the surgeon's retina while the surgeon looks straight at the patient and gets all the navigation data without any need to turn their head. The image here on the left is an illustration of exactly what a surgeon sees when using our system.

Now, we tried to capture images from behind the lenses to show you how it looks, but it doesn't look so appealing, because it's only a two-dimensional photo that we can take from one lens. So it's not gonna look as nice, but I thought it was necessary to show you how it looks behind the lenses. Basically, this is the real thing.

This is how we project the image; that's the rendering of the images. Obviously in reality it looks much more beautiful, because you have the 3D, you have the depth perception. The information that we project is the 3D anatomy in the centre, inside the green circle in the centre of your view.

Then we have the conventional views of the traditional navigation, the axial and sagittal cuts of the CT scan, based on where the tool is positioned relative to the patient. Let me show you a quick video of some of the feedback that we received from potential customers. (bright upbeat music begins) - XVision was pretty fancy when I first saw it, because it was the first time that you could really see three-dimensionally through a phantom or a fake spine. I was pretty surprised at how fast I was able to adopt the techniques with the XVision in the short period of time that I used it. Typically, what we have to do is look away from where we're working, but this has all of the image-guided information directly in front of you, within the goggles that you're wearing, while you're placing the instrumentation.

- As I turned my head it looked like a normal person would look at a patient. I could actually see the details of this three-dimensional anatomy basically through the patient. - This gives you added confidence I think and real time confidence that you're placing the instrumentation in the correct location. And I think that's really pretty cool.

- Imagine if you were driving a vehicle and you have GPS which is what image guidance is, would you want that GPS intuitively displayed translucently onto your windshield so that you're still always looking at the road or versus what you do now, where you're looking at a GPS down in the console or on your phone? - So the XVision setup with an optics onlay is really no different than wearing some type of shield, which is often what surgeons do. This was actually lightweight and easy to use and was translucent. You can see through the actual image. - [Participant] Anything that improves your surgical efficiency in many ways, is gonna be better. - If we can increase the information we gather this is all gonna lead to efficiency.

This is gonna lead to better outcomes and it's gonna lead to better safety. - The number one benefit for XVision is that you are always looking at the patient. You are never distracted from the patient.

- The patient's in the operating room for less time, which means it's safer. And it's also easier for the surgeon. - If we take this technology at face value right now, it's a game changer. - So how do we make it happen? Well, it all started as I said, about six years ago. Oh my God. So almost seven years ago now, it's 2021.

So yeah, seven years ago, when I left Medtronic, I started Augmedics in 2014. We started with three people; as a side note, the company now has almost 100 people working at Augmedics. But we started with three people, three entrepreneurs, and we grew to five people a year later. We started as a small seed company in an incubator in Israel. Our first prototype really didn't look very nice.

As you can see here, it actually looked like a suicide vest, with all the wires attached to the waist. And we went through many iteration rounds. It was not only about the look of it, but rather whether it was effective, whether it could work. So we tried many iterations. We didn't know what would work best for surgeons: how we project the image, whether the projection comes top-down or from the sides, whether we'd have one tracking camera or two, and whether we'd have only tracking cameras or RGB cameras as well. These are all things that we worked through over the years.

We developed some prototypes that included the computer. Initially it was a very big one, and then we turned it into a small mobile system, which was not small at all, but was smaller than the big cart, the heavy computer, that we had back then. And to validate the technology, to see if it actually works and if surgeons would actually want to use it, we did a lot of cadaver labs. We connected with top surgeons in the US, mainly Johns Hopkins surgeons.

And we went there every few months to test our products. That's the first-ever video that any surgeon saw; it was in the middle of 2016. - Nothing permanent. - So basically, all we did was visualize a 3D spine over the real patient. That was it.

And then obviously surgeons got really excited about the idea, and we started to get traction; many other surgeons, and industry, all wanted to see what we were developing. It was very new and very innovative.

So we had crowded cadaver labs every time we scheduled something. We continued to develop and improve our system, and in August 2018 we finally got our system ready to be tested on a human being, on a real live patient. We did the first-in-human case in Israel, which is close to where we are; of course, we developed it in Israel. And that's footage from the first-ever surgery that we did here.

(faintly speaking) So as you can see, we had a prototype that was wired, connected to the computer. We obviously didn't have the wireless version then, and because we only had the headset connected by a wire to a computer, we couldn't have more than one headset connected at a time: you cannot just pass cables to the other side of the table.

So we knew we would have to develop a wireless headset; that was gonna be inevitable. We did this, and in the middle of 2018 we worked very fast to turn the wired version into a wireless version. We had a few versions of that as well, a few concepts, and the real challenge was seeing which is most comfortable to wear, more than anything else. The accuracy stays the same; it's more about comfort of use. We had the first lab with the wireless headset, also in 2018.

So we always worked closely with surgeons, and we made sure that whatever we designed would fit the needs of surgeons perfectly. Then, in 2019, we had the first surgery with the wireless version. And once we had a wireless version, we could obviously give headsets to two surgeons to work together. That's again a real surgery conducted in Israel. And that is the current design that we have.

Over the years, as you saw in the presentation and the various models, we changed the model many times. That is the model that was submitted to the FDA and eventually got clearance, and that's what we're selling right now. We didn't only work on the headset and the hardware; we worked a lot on the algorithms and all the visualization.

We started by showing only two dots, as you can see on the right side of the screen. Initially we had only red and green dots that indicated the trajectory, showing where the surgeon needs to go. That was the first visualization we had on the headset. Then we improved it over time, and we added the 3D models of the spine and also the sagittal and axial cuts. We also worked on the user interface, the cart, and everything there, and we are continuing to work on the next generations. And I think this is the first time that I've ever presented it; I thought it would be cool to present here the next generation of the headset that we will have very soon, I would say.

Other than the headset and all the mechanics and hardware, there's a lot of work that we did on the tracking unit. All the navigation systems that currently exist on the market have an infrared stereoscopic camera that tracks the markers, the patient, and the instruments. But we didn't want to use that bulky camera, because we wanted something that fits the headset.

So we had to develop our own sensors and tracking unit. We built everything from scratch: we built our camera, we developed the algorithms, and we also developed and manufactured the markers. The tracking system that we have is extremely accurate because of the short working distance from the camera to the markers. We actually get an end-to-end maximum position error of about 0.76 millimetres and an angular error of one degree. Compared to the other systems out there, that is actually at least twice as good as any other navigation system available on the market. You can see the results in the FDA summary that we submitted and got clearance for; there's a table that shows all the accuracy figures for the system, which are very good compared to all the other solutions. We also tested our accuracy in cadaver trials: we placed 93 screws in five cadavers, percutaneously through the skin.

And we actually got 99.1% accuracy. This was submitted to and published in the Journal of Neurosurgery. These are phenomenal results, and they also brought us the clearance from the FDA.
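For context on how such an accuracy figure is typically computed: pedicle-screw studies usually grade each screw, for example on the Gertzbein-Robbins scale, and report the share of clinically acceptable grades (conventionally A and B). A sketch in Python with invented tallies, not the published data:

```python
# Hypothetical grade counts for a cadaver study (illustrative only).
grades = {"A": 88, "B": 4, "C": 1, "D": 0, "E": 0}

total = sum(grades.values())             # screws placed
acceptable = grades["A"] + grades["B"]   # clinically acceptable screws
accuracy = 100.0 * acceptable / total    # reported accuracy rate

print(f"{acceptable}/{total} screws acceptable = {accuracy:.1f}%")
```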

Now let's talk about some of the augmented reality technology that we're using here. Augmented reality is nothing new; it basically started a long time ago, maybe 50 years ago.

Some of you may or may not know the history of augmented reality, but I'm gonna share with you some very old pictures that were taken decades ago. Henry Fuchs tried using augmented reality in 1992. There's Baillot and Rolland in 1998, for orthopedic surgeries, and even Professor Nassir Navab has used a lot of augmented reality and image-guided overlays on patients.

He conducted a lot of studies over the years. There's actually one publication, from 2012, using commercially available Epson glasses as augmented reality for spine surgery. So even what we're doing ourselves at Augmedics was already done and published a long time ago.

There are a lot of augmented reality technologies available on the market. We are currently using augmented reality by Lumus, an Israeli company that has developed augmented reality lenses, optical engines, for pilots. But there's also the Microsoft HoloLens, the Epson, the Vuzix.

There were a lot of companies in the past; some of them were shut down, and some are emerging right now. Augmented reality has also been used in medicine, in the surgical field as well.

Sony developed a headset that projects the laparoscopic images directly to the surgeon's eyes. And Stryker bought the OptiView, which did exactly the same thing, but that was like 20 years ago. So all these technologies are relatively old.

But the question is: why has no technology ever evolved to become the standard of care for surgery? All the new technologies that are currently available using the HoloLens or the Magic Leap and others are considered not surgical navigation systems but rather visualization systems. These are all near-eye displays that project data directly to the surgeon's eyes, but they are not, by themselves, navigation systems. There's also a difference between optic-based augmented reality and video-based augmented reality. Optic-based augmented reality means that we project images directly onto reality: we can still see reality, just augmented with this data. Video-based augmented reality, however, means that we capture reality and then project it back to the eyes together with the augmented data. Currently, there's no other augmented reality system for surgical navigation, because there are challenges involved.

Optic-based augmented reality is very comfortable to use: surgeons can still see reality and get additional information added into it. But you don't get a lot of accuracy, because it's really demanding, actually almost impossible, to overlay virtual objects on reality with very high precision. The error you get there is more than two centimetres, and that's just not acceptable for spine surgery, or almost any surgery at all.

In addition, in the OR you have very intense lighting, so there's a limitation on the projectors of the augmented reality in a very bright environment: you can hardly see any projected image coming from the lenses. Video-based augmented reality, on the other hand, is very accurate, and surgeons can use it. However, it's not comfortable.

It's not comfortable: it can cause disorientation and nausea, and you don't see the real patient, which can also be risky, of course. So the XVision technology actually combines optic-based augmented reality with video-based. What we've done is include in our headset an occlusion mask that blocks just a portion of reality and replaces that portion with video-based augmented reality. That way we get the benefit of both worlds.

First, we use transparent lenses, so surgeons can see the real patient in front of them. And in the portion where we want to navigate with high precision, we replace that portion, that region of interest, with video-based augmented reality, which is extremely accurate. A user who looks at and concentrates within this region of interest is not bothered by the small movement of that region of interest relative to reality; it's unnoticeable and tolerable.

Nobody can actually see that there's a slight deviation between the region of interest and where it's actually supposed to be. That way we can actually trick the surgeon's eyes and let them feel that they're operating with optic-based augmented reality, not video-based augmented reality. This is also real footage that was shot behind the lenses. As you can see here, the occlusion mask appears as the black circle: we cover the real scene of the patient with that black occlusion mask and replace that anatomy with the rendered anatomy of the spine or any other organ.
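At its core, the occlusion-mask idea described above is a per-pixel selection: inside the masked region of interest, show the rendered video-based view; everywhere else, let the optical view through. A minimal sketch in Python with tiny made-up "images" as nested lists (a real headset does this per frame, in the display pipeline):

```python
def composite(optical, rendered, mask):
    """Per-pixel select: where mask is 1, show the rendered (video-based)
    region of interest; elsewhere pass the see-through optical view."""
    rows, cols = len(optical), len(optical[0])
    return [[rendered[r][c] if mask[r][c] else optical[r][c]
             for c in range(cols)]
            for r in range(rows)]

# Toy 3x3 frame: 0 = optical scene, 9 = rendered spine pixels.
optical  = [[0, 0, 0], [0, 0, 0], [0, 0, 0]]
rendered = [[9, 9, 9], [9, 9, 9], [9, 9, 9]]
mask     = [[0, 1, 0], [1, 1, 1], [0, 1, 0]]  # round-ish occlusion mask

frame = composite(optical, rendered, mask)
# -> [[0, 9, 0], [9, 9, 9], [0, 9, 0]]
```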

So that's how it looks. It looks pretty accurate, and it looks real; it looks as if you're using optic-based augmented reality. Now I'm gonna explain a little bit about how we render the images and how we use artificial intelligence in the augmented reality technology that we use. Just before I dive into this,

I will tell you that the concept of how we do what we do is basically the same as any other navigation system. To navigate on patients, we need to use pre-op or intra-op CT scans, or MRI scans, and we basically navigate on the DICOM images of the patient.

So we get the DICOM images and upload them to the system. Once we upload them, we see the coronal and sagittal views of the patient's CT scan. The orientation of the patient is, of course, encoded in the DICOM tags of the CT scan. Then, once it's uploaded, we segment the vertebrae. The segmentation is done automatically; our system segments it.

We use deep learning to do the segmentation. We trained a convolutional neural network on about 350 labeled CT scans, so the system learned to recognize every single vertebra of the patient, including the iliac bones and also the skull. The convolutional neural network is based on the U-Net design that we chose. After we upload the scans and do the segmentation of the spine, we need to register the segmented spine to the real patient lying on the surgical table.
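As an aside on how such a segmentation network is usually evaluated against the labeled scans: the standard metric is the Dice overlap, computed per vertebra label. A minimal sketch in Python (the flattened label maps here are invented, not the company's training data):

```python
def dice(pred, truth, label):
    """Dice overlap for one vertebra label between a predicted and a
    ground-truth voxel label map (both flattened to 1-D lists)."""
    inter = sum(1 for p, t in zip(pred, truth) if p == label and t == label)
    size_p = sum(1 for p in pred if p == label)
    size_t = sum(1 for t in truth if t == label)
    return 2.0 * inter / (size_p + size_t) if (size_p + size_t) else 1.0

# Toy 4-voxel example: labels 1 and 2 are two vertebrae, 0 is background.
pred  = [1, 1, 0, 2]
truth = [1, 0, 0, 2]
print(dice(pred, truth, 1))  # 2*1 / (2+1) = 0.666...
print(dice(pred, truth, 2))  # perfect overlap -> 1.0
```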

We do this by taking fluoro shots, x-ray images of the patient on the surgical table. Obviously, we need to register how the real patient is lying on the surgical table relative to the pre-op CT, since the patient was probably lying in a different position when the CT scan was taken.

We do that by detecting the marker rotation. We create a DRR, a digitally reconstructed radiograph, from the CT scan. We have a marker that we place on the patient during the procedure, and after we take the scan, we know the angles and the position of the patient lying on the table based on that marker. And we rotate the CT scan based on that marker.

Then the user matches, as the initial guess that we start from, the vertebrae of the CT scan to the x-ray, to the DRR of the fluoro image. The registration is then performed per vertebra; we do it individually, vertebra by vertebra.

It's not like a global registration; we do this one by one, starting from the vertebra selected by the user. So first, as you can see on the left side, we highlight one of the vertebrae, which is segmented.

That's the CT, of course, and we select the corresponding vertebra on the x-ray shot on the right side. Now each vertebra, as I said, once registered, serves as an initial guess for the next vertebra's registration, and that's how we register all the vertebrae that were captured in the x-ray shot. The registration itself is done by calculating the DRR from each segmented vertebra of the CT.

The vertebra is then moved and rotated until the DRR matches the x-rays. We basically know and verify that it matches based on the visualization: the highlighted overlay of the DRR matches the x-ray that you see over here, in green. Then, eventually, the user, the surgeon, needs to confirm the registration based on that view. After that happens, we can start the procedure, and we create the 3D model.
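The per-vertebra matching described above, moving and rotating a vertebra until its DRR lines up with the X-ray, can be sketched as a one-parameter toy in Python. Everything here is invented for illustration: a handful of 2-D points stands in for a segmented vertebra, a Gaussian splat stands in for integrating CT density along the ray direction, and only a single rotation angle is searched, whereas the real problem optimizes a full 3-D pose:

```python
import math

def rotate(points, theta):
    """Rotate 2-D points by theta (stand-in for moving the vertebra)."""
    c, s = math.cos(theta), math.sin(theta)
    return [(c * x - s * y, s * x + c * y) for x, y in points]

def project(points, bins=40, lo=-2.0, hi=2.0, sigma=0.15):
    """Toy DRR: splat each point as a Gaussian onto one axis, standing in
    for integrating density along the X-ray direction."""
    step = (hi - lo) / bins
    centers = [lo + (i + 0.5) * step for i in range(bins)]
    return [sum(math.exp(-((c - x) / sigma) ** 2) for x, _ in points)
            for c in centers]

def register(points, xray, candidates):
    """Coarse search for the rotation whose DRR best matches the X-ray."""
    def cost(theta):
        drr = project(rotate(points, theta))
        return sum((d - t) ** 2 for d, t in zip(drr, xray))
    return min(candidates, key=cost)

# A toy 'vertebra' and an X-ray taken at a known 30-degree pose.
vertebra = [(-1.5, 0.0), (-0.5, 0.4), (0.5, -0.4), (1.5, 0.0)]
true_theta = math.radians(30)
xray = project(rotate(vertebra, true_theta))

found = register(vertebra, xray,
                 [math.radians(a) for a in range(0, 91, 5)])
# found recovers the 30-degree pose, since 30 lies on the search grid.
```

Chaining vertebrae as the talk describes would simply seed each vertebra's search with the previous vertebra's result instead of a full grid.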

Eventually, the 3D model that the surgeon sees is basically just a 3D mesh of the surface of the reconstructed vertebra that we have. And with that, I will conclude. I hope that I gave you a little taste of the technology that we're using, and I'll be happy to answer questions if you have any. - Thank you very much, Nissan. That was an excellent talk, and a special thanks for highlighting the difference between optic-based and video-based AR; I had no idea about that.

And I think the XVision is a truly phenomenal intraoperative imaging technology for minimally invasive surgery. It was very interesting to hear the story of how it started and your development journey. So thank you very much again for sharing; this is truly inspiring.

So I guess I'll open it up for questions now. I don't see any questions yet, so perhaps I can start. I would love to hear from you: what advice would you give to image-guided therapy researchers and scientists who are looking to integrate AI or machine-learning-based modalities into their products? Based on your experience in the field, what are the right questions to ask, or the things you should consider, before you take the step of merging with AI or machine learning in general? - I think there's a lot of innovation that has yet to arrive in image-guided surgery. There are so many things that I wish I could implement and add to the systems that we have, but it's gonna take time until we can do this.

For example, let's go back: when I thought about starting Augmedics, I really wanted to create a robot. I'm familiar with robotic surgery, but everybody knows that robots right now do not operate by themselves, and there's a reason for that. Although they are more accurate, and they know the orientation of the patient and the instruments and the implants more precisely than surgeons do, there's still a gap in the level of confidence that surgeons can give to robots to operate by themselves. So obviously it will take time until robotics takes its place.

But I think the main thing that AI can bring is basically the knowledge that computers have, or can have, which is greater than what one individual can have at any point in time. That's the true power of AI. But we cannot use these technologies yet, because they are not well adopted in the surgical field. So my number one goal was to increase the adoption of these computer-assisted surgery devices, such as augmented reality, and to give these computerized tools to surgeons.

Once we reach a level of adoption which is significant, then we will be able to add more layers of augmented reality. That's basically the next step. Right now we're only providing them data: we show the surgeons the data and say to the surgeon, listen, your tool is right here. It's right there.

We don't give them any advice. We don't tell them how to operate. We don't tell them, listen surgeon, you may want to avoid going in this direction, or maybe this way is better than that way. So I think that's the next step.

And that's where AI is gonna break through, because this is gonna make the difference between providing surgeons with information only, just the data, and how you're gonna use that data, and what suggestions or alerts you want to provide surgeons. That's the next step. I think only by using AI, and teaching a computerized system how to use the data and how to use it correctly, will we get the breakthrough that will eventually allow machines to operate in some cases.

I don't wanna be too optimistic here, but in some cases, to operate by themselves, or at least to suggest to surgeons what to do. Because the mechanics is not the problem. Everybody knows that robots are more accurate than a human hand; it's all about the knowledge: where to go, which trajectory to pick, and what to do next. And that's the role of AI in that field. So I wish I'll be able to incorporate more AI into our systems, not only to provide the data, the visual data and positioning, but rather to tell surgeons where to go, or what's the preferred action they need to take to treat their patients better. - Perfect.

Thank you so much, Nissan. We have a few questions that came in, but I'd just like to remind people to please post your questions in the Q&A chat box, and not in the general chat box, since the general chat is not currently being monitored by the panelists. So again, please use the Q&A chat box in Zoom to post your questions. So yeah, a few questions came in.

So yeah. Someone is asking if you could elaborate on where AI is being used. The only part they heard related to deep learning was for image segmentation. I guess you talked a little bit about that. - Yes. I mean, I just answered that in my last answer here.

But yes, currently we use AI for segmentation of the spine, and we're doing it ourselves. And we're not stopping there. As I just suggested, we are looking into incorporating AI in various other features that we will launch, but it will take time. I mean, even if we have a feature that is ready and can, for example, suggest the best trajectory to place a screw inside a vertebra, it just takes time until surgeons will fully trust a system like that.

So there's a lack of confidence right now in treating patients with artificial intelligence, I would say. We have the capability. We have the ways to convey that technology directly to the eyes of surgeons, but it's too early right now. So we're still learning. We're still using it.

We're learning how to use it, and in time it will be implemented inside our system as well. - Absolutely, yeah. Thanks Nissan. Another question, from Dr. Bradley Strauss: "Very interesting technology.

"How do you validate better clinical outcomes in clinical trials?" - So we use precision analysis. We place the screws, and after we place a screw, we know the location where we placed it based on the system. So we save this data, and then we scan the patient using intra-op CT scans. And we compare the position of the real screw to the virtual screw that we placed with our system.

That's how we can analyze the accuracy of our navigation. That's the precision analysis that we're doing. Clinically, we can assess the accuracy as well: the clinical accuracy is driven by where the screw is supposed to be inside the vertebra.

And that is analyzed by a radiologist. So often, just after the surgery, we give the post-op scans to radiologists, and then they analyze and actually score the position of the screws inside the patient. And that's how we get the clinical accuracy. That's the validation that we're doing there. - Sounds great.
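As an aside for technically-minded readers, the precision analysis Nissan describes, comparing the virtual screw pose recorded by the navigation system against the real screw measured on the intra-op CT, boils down to a tip-offset and an axis-angle comparison. The sketch below is illustrative only; the function name and the example numbers are ours, not Augmedics' actual method or tolerances:

```python
import math

def screw_deviation(planned_tip, planned_dir, actual_tip, actual_dir):
    """Compare a navigated (virtual) screw pose against the real screw
    pose measured on an intra-op CT.

    Each pose is a tip position (x, y, z) plus an axis direction vector.
    Returns (tip offset in the scan's units, angular deviation in degrees).
    """
    # Translational error: distance between the two screw-tip positions.
    tip_offset = math.dist(actual_tip, planned_tip)

    # Angular error: angle between the two screw axes.
    def norm(v):
        return math.sqrt(sum(x * x for x in v))
    dot = sum(p * a for p, a in zip(planned_dir, actual_dir))
    cos_angle = dot / (norm(planned_dir) * norm(actual_dir))
    # Clamp to [-1, 1] to guard against floating-point drift.
    angle_deg = math.degrees(math.acos(max(-1.0, min(1.0, cos_angle))))
    return tip_offset, angle_deg

# Hypothetical numbers: real tip 1 unit away, axis tilted slightly.
offset, angle = screw_deviation([0, 0, 0], [0, 0, 1], [1, 0, 0], [0, 0.05, 1.0])
```

Aggregating these two numbers over many screws is one simple way to report navigation precision; the clinical scoring by radiologists that Nissan mentions is a separate, human step.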

Another question, from Khosrow from U Waterloo: "Do you work with universities for technology co-development and/or licensing?" - We tried in the past, but no, we are a private company, a private startup funded by VCs. So there's a bit of a challenge in licensing technologies from academic centres. But we may; I mean, we're not saying no, we just didn't find the right formula for working together with universities. So we are developing everything in-house. - Okay, great.

And another question: "What are the current limitations of the AI software or hardware to make more progress in the surgical field? Is it more on the software side, where coding experience is needed, or hardware creation that needs to be more creative?" - I don't think it's the hardware; it's mainly time and data. To train the neural network, we just need a lot of data. I mean, it's relatively simple to get enough CT scans to train a neural network to segment a spine. But it's definitely more challenging to get the data and to train the system on what to avoid.

For example, if there's a tumour and we want to get the best approach for how to get the tumour out, or what we need to avoid, or the techniques of one surgeon versus another, it just takes a lot of data. The anatomy changes from one patient to the other, especially with tumours; they are definitely different. So I think the most challenging thing is to gather the data and to train the system on the very specific cases.
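For context on how segmentation quality is commonly scored once that training data exists, one standard overlap metric is the Dice coefficient. This is a generic textbook sketch of the metric, not a description of Augmedics' evaluation pipeline:

```python
def dice_coefficient(pred, truth):
    """Dice overlap between two binary masks, given as flat 0/1 voxel lists.

    Returns 1.0 for perfect overlap and 0.0 for no overlap; the empty case
    (no voxels labeled in either mask) is conventionally scored 1.0.
    """
    intersection = sum(p * t for p, t in zip(pred, truth))
    total = sum(pred) + sum(truth)
    return 2.0 * intersection / total if total else 1.0

# Toy example: the prediction agrees with ground truth on 1 labeled voxel,
# but labels one extra voxel, so Dice = 2*1 / (2 + 1).
score = dice_coefficient([1, 1, 0, 0], [1, 0, 0, 0])
```

A spine segmentation network would typically be trained and validated against radiologist-annotated CT volumes using a metric like this, which is exactly why gathering annotated data for rarer anatomy, such as tumours, is the bottleneck Nissan describes.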

But we're on it, and that's definitely what we're focusing on as the next step for us. - Perfect. Thanks Nissan. So another question: "Other than the high-accuracy tracking system built in, what else makes the XVision system more suitable for surgical navigation than off-the-shelf AR glasses, like, for example, the Microsoft HoloLens?" - Right. So the Microsoft HoloLens and other augmented reality eyewear do not fit the needs of surgery, because of a few elements that I mentioned.

First is the projection, the angle of projection of the image. All consumer products such as the HoloLens project the image to infinity. Basically, the surgeon or the physician needs to look straight at the horizon to see the image.

Usually when surgeons operate, they tilt their heads down, 30 to 40 degrees. And if the image doesn't come from 40 degrees downwards, then surgeons need to bend their necks a lot, and then it hurts. So we have all the consumer products in our office, and we tried them, and they just don't fit. Believe me, if there was an easier way for us to develop what we do now, by just taking something off the shelf and using it, we would have done it.

But it's not that easy. There's no solution right now that actually fits the core needs of surgery. That's one.

The second one, as I mentioned, is about the accuracy that you get when you overlay images on reality. Our technology, which is patented (we have a granted patent on the occlusion mask), replaces a portion of reality with a video-based reality. This is something that doesn't exist in the HoloLens or other products.

The third thing is the high illumination, the intense illumination that you have in the OR. Usually surgeons operate under the OR lighting, which is very bright, around 150,000 lux. So this is a very bright environment, and with other consumer products you don't see anything under that ambient light. Our occlusion mask serves that purpose as well.

So these are the main features that we have in our system that are different from any other augmented reality hardware available today. - Perfect. Thanks Nissan. Another question about the limitations, if there are any limitations to using the system. For example, is the segmentation algorithm trained on, say, non-adult spines or younger populations? - No, we haven't trained the system on a younger population. We only did it on adults.

But right now, the results that we see are no different between adults and pediatric patients. But yeah, we haven't trained on them. - Okay. Another question about accuracy. "How do you verify the 3D images you see on patients are really accurate in terms of depth and distance, 100% of the time?" Is there a potential that it could fail sometimes? - No. We have many safety measures; it can never, I mean, the only failure point is if a user takes the marker that is attached to the patient and moves it away from the patient itself.

The marker is fixed. It's fixated to the patient's bony anatomy, so it cannot move. But if for any reason it moves, then there will be inaccuracy. Other than that, there's no software glitch or anything like that that can happen there.

We have measures to make sure that nothing is gonna be inaccurate, such as verification of the tool tip every time they switch tools, and we have a timeout on navigating tools: after a certain time, they need to re-verify and recalibrate. There are a lot of safety measures. - Okay, perfect. Another question. "Could you comment on the interaction of the user with the system, like the hand or audio gestures, for example, and is this interaction a hindrance to clinical adoption? And how does multi-system user communication work?" - So our system is controlled by a user interface, a workstation that is inside the OR.

The headset communicates with the workstation, and there's somebody who presses the buttons on the workstation and operates the headset while the surgeon is sterile and obviously cannot touch the headset or the computer. That's how we operate.

That's how we work the headsets in the OR. There are no hand gestures or voice commands. The only voice command is the surgeon telling the technician what to do. Other than that, it's just a workstation that's wirelessly connected to the headset and operates it.

- Okay, perfect. Thanks Nissan. "Can you elaborate on the cost and time of your system? Particularly, does an operation using your system take longer than the traditional surgery?" - So the cost of the system is cheaper than any other navigation system on the market, conventional navigation. It's roughly $100,000 for a system that has two headsets and all the instruments that come with it.

As for the time that it takes to navigate, our system is almost seamless to the surgeon. The set-up time is minimal. We've made the system really intuitive to use.

So there's no camera that you need to position or orient. The only thing the surgeon needs to do is put the headset on. As I mentioned before, there's somebody in the room, it can be a technician, it can even be the surgeon himself before the surgery, who takes the pre-op CT, loads it into the system, and registers it. I showed you the registration process. And from that point, they just put a headset on and start navigating.

So it's not really extending the surgery itself, but I can tell you that we haven't measured it. So it's not something we actually measured, so I can't tell you whether it extends or actually reduces the time. There are other publications about computer-assisted surgery and surgical navigation that have demonstrated a reduction in procedure time from using a navigation system, just because surgeons can now place an implant or navigate a tool without needing to take x-ray fluoro shots during the procedure. So that eliminates some of the time they usually spend without navigation. - Thank you very much, Nissan. I'll just take one more final question.

I'm just mindful of the time. "Is the occlusion mask placement dynamic?" So, for example, can any regions in the field of view be masked, or does the display have fixed occluded regions? - It's dynamic.

It's a dynamic occlusion mask. It moves as the surgeon moves their head, so it always stays aligned with reality.

- Okay, perfect. Well, with that, we are, sorry, beyond the time. So again, if anyone has any questions, please feel free to email them to me, and I'll be happy to forward them or connect you with Nissan. Nissan, thank you very much. It was a pleasure to have you with us tonight.

Thank you to our attendees for joining us this evening. Please note that our next lecture is taking place on Thursday, April 15th at 5:00 p.m. Eastern Time. Our next guest speaker will be Dr. Jordan Engbers, a neuroscientist and the Chief Executive Officer and co-founder of Cohesic, with a very interesting talk on the integration of AI in clinical prediction and treatment planning. So please mark your calendars and register for the event on our website.

And please take the time to fill in the survey; you will be prompted to do so after this session ends. Your feedback will really help us improve future events. Thank you again, and see you next week. Have a good night.

2021-04-13 18:02
