[musical chime] PRESENTER: Good morning. And welcome to AB's CI 2021 breakfast symposium. We're thrilled to have you join us for a cup of coffee, a croissant, and a look at some of the technology in the Sky CI M sound processor. Sky CI M was designed with the lives of pediatric recipients in mind. And today, we've invited two guest speakers to join us.
Our first speaker is Dr. Jace Wolfe. Dr. Wolfe is the chief of audiology and research at the Hearts For Hearing Foundation in Oklahoma City, Oklahoma. He is also an adjunct assistant professor in the audiology departments at the University of Oklahoma Health Sciences Center and Salus University. His primary areas of interest are cochlear implantation and pediatric audiology. He provides clinical services for children and adults with hearing loss, and is also actively engaged in audiological research.
Dr. Wolfe is going to talk about the AutoSense and Roger technology that's offered with the Sky CI M sound processor. Welcome, Dr. Wolfe.
JACE WOLFE: Hello. I'm so excited to be here today to discuss our experiences at Hearts For Hearing with AutoSense. We truly believe that this is a technology that can allow all of our recipients, including our children with Advanced Bionics cochlear implants, to have their cake and eat it too. Before I talk about some of our experiences, I do want to acknowledge the efforts of several of my collaborators at Hearts For Hearing. They have all played a really integral part in the collection of the data that I'm going to share with you today.
At Hearts For Hearing, we operate, unofficially at least, on the motto to shoot for the moon, which came from the motivational author and speaker Norman Vincent Peale. In one of his more famous books, The Power of Positive Thinking, he has a quote that we should shoot for the moon, and even if we miss, we will land amongst the stars. Now, that's probably not quite astronomically correct. The last time I checked, the moon was a few million miles closer to the Earth than the nearest star. But I think you get the gist of what he's saying: if we set our goals and targets really, really high and do everything we can to try to hit them, we'll probably be satisfied with the outcomes.
And I think today, more so than ever, we can shoot for the moon for the outcomes that we strive to achieve for our children with cochlear implants. And there are so many steps involved in shooting for the moon and achieving that best outcome. We know about the importance of early identification and intervention, and we know about the importance of a language-rich listening environment if we're going to really optimize listening and spoken language outcomes. But I think one of the factors that's critically important is making sure that our children with hearing loss have access to all the important sounds that they would want to hear in all the challenging acoustical environments in which they find themselves day in and day out. And that was our interest in evaluating AutoSense technology in the hearing aids of the children that we serve here at Hearts For Hearing.
We have, over the years, noticed that they struggle in difficult listening environments with a lot of noise and reverberation. And we felt that these technologies really had the potential to improve listening experiences and hearing performance in these challenging environments. But there was very little in the way of research that evaluated these technologies and demonstrated that benefit.
So we set about evaluating the AutoSense technology in Phonak hearing aids in some of the children that we serve. And that's what I'm going to share with you today: the results of some of the studies we've done at Hearts For Hearing to look at the potential benefits and limitations of AutoSense for pediatric hearing aid users. We have a couple of different studies that we've conducted over the last two or three years that I want to share with you today. The first study was with 12 school-age children with moderate to severe hearing loss. They were all experienced hearing aid users. We evaluated their performance in three listening conditions. The first was the default pediatric program, which used the Real Ear Sound microphone mode, which seeks to mimic the natural directivity of the external ear.
The second was the AutoSense program, which has an acoustic scene classifier that monitors the environment and selects the type of noise management technology designed to optimize hearing performance and listening experience for a particular environment. So it'll automatically select an adaptive directional microphone or digital noise reduction for a particular environment. And the third condition used manually selected noise management programs that had different types of directional technology and/or digital noise reduction. We had a couple of different phases of this study.
In phase one, we evaluated speech recognition in a laboratory environment that sought to simulate real-life listening conditions for children with hearing loss. They were all fitted with Phonak Audeo V90 hearing aids that were fitted to DSL 5.0 targets and verified with real-ear testing. They wore these hearing aids for two to four weeks in their day-to-day listening situations. During that two-to-four-week period, we had a real-world trial in which they journaled their listening experiences as they switched back and forth between the default pediatric program, with Real Ear Sound and minimal noise management technology, and AutoSense. And we inquired about the potential benefits and limitations of these technologies in their real-world environments.
As I mentioned before, in the laboratory environment we evaluated performance with three hearing aid programs. The Calm program was the typical pediatric default, with the microphone mode set to Real Ear Sound and very minimal noise reduction. It's the typical, historical kind of pediatric program with very minimal noise management technology available. AutoSense is the environmental scene classifier, which selects the noise management technology designed to optimize hearing performance for a particular situation. And then we tested several different manual programs: a Speech in Quiet program with minimal noise reduction and Real Ear Sound; a Speech in Noise program, which had a first-order dual-microphone beamformer; and a Speech in Loud Noise program, which had a third-order binaural beamformer. That beamformer uses the binaural voice streaming technology in Phonak and Advanced Bionics hearing aids and sound processors, which allows for the use of StereoZoom, providing a more specific directional focus on sounds of interest coming from the front. There's also more noise reduction in the Speech in Loud Noise program.
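[Editor's note: to make the program-selection idea concrete, here is a minimal sketch of how an acoustic scene classifier can map estimated listening conditions to a noise-management program. It is purely illustrative: the scene labels, thresholds, feature names, and program settings are assumptions, not Phonak's or AB's actual AutoSense algorithm.]

```python
# A minimal, illustrative sketch of scene-classifier-driven program
# selection, in the spirit of AutoSense. The scene labels, thresholds,
# and program settings are hypothetical, not the actual algorithm.
from dataclasses import dataclass

@dataclass
class Program:
    name: str
    microphone_mode: str  # e.g., "real_ear_sound" or "adaptive_directional"
    noise_reduction: str  # e.g., "minimal", "moderate", "strong"

PROGRAMS = {
    "quiet": Program("Speech in Quiet", "real_ear_sound", "minimal"),
    "noise": Program("Speech in Noise", "adaptive_directional", "moderate"),
    "loud_noise": Program("Speech in Loud Noise", "binaural_beamformer", "strong"),
    "car": Program("Speech in Car", "real_ear_sound", "moderate"),
}

def classify_scene(level_db: float, snr_db: float, car_detected: bool) -> str:
    """Toy classifier: pick a scene label from simple acoustic features."""
    if car_detected:
        return "car"  # stay in the pinna-mimicking mode, as described above
    if level_db < 55:
        return "quiet"
    return "loud_noise" if snr_db < 0 else "noise"

# Example: loud classroom noise at an adverse SNR selects the strongest setup.
scene = classify_scene(level_db=75.0, snr_db=-3.0, car_detected=False)
print(PROGRAMS[scene].name)  # -> Speech in Loud Noise
```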
We evaluated this in realistic listening situations, with multiple loudspeakers used to present noise. We had classroom noise for the Speech in Noise and Speech in Loud Noise situations at pretty adverse signal-to-noise ratios. We had recorded real-life car and traffic noise presented from these speakers, and the subjects listened to speech in the presence of this car noise. And then they also listened to speech in quiet.
We evaluated across the three programs. Both the examiner and the child were blinded to the technologies being used, and all the technologies and situations were counterbalanced to try to prevent an order effect. If we look at the results when speech comes from the front and noise comes from other directions, you can see a pretty considerable improvement, even in these realistic listening situations with real-life reverberation present and real-life noise coming from all directions at a pretty unfavorable signal-to-noise ratio.
We see almost 30 percentage points of improvement in the directional mode and the AutoSense mode compared to the default pediatric mode. And you can see the AutoSense mode compares very favorably to the manual programs that were selected to be specific for each situation we evaluated. In the car, because AutoSense recognized the car noise as car noise, it didn't switch to directional mode. So there's no difference in speech recognition between the AutoSense program and the pediatric default, because AutoSense remained in the Real Ear Sound microphone mode.
You do see a minimal bump that might be attributed to the noise management technology, the digital noise reduction, and potentially the prevention of upward spread of masking. But for the most part, speech recognition appears to be similar in that car condition.
And in quiet, there's no difference in speech recognition either. When we compare the two conditions where the speech comes from the front in the presence of noise or loud noise versus where the speech comes from behind, again what we see is almost 30 percentage points of improvement when speech comes from the front in the AutoSense condition. When speech comes from behind, we do see some directional detriment, but it's on the order of about 10 percentage points.
So the directional benefit far outweighs the directional detriment. And that might not generalize to all types of automatic scene classifiers; we've only evaluated this with AutoSense. But there is something specific in the way AutoSense responds in these conditions that allows for more directional benefit relative to the directional detriment we see when speech comes from other directions. We also had the children rate how intelligible speech was when using these different technologies. And one thing you can see here is that, once again, when speech comes from behind, there is some detriment.
The children do notice that speech is not quite as intelligible. But when we allow them to face whatever direction they want while they make these speech intelligibility ratings, you can see that they typically face toward the signal of interest, toward the direction from which the speech is originating. And they rate the AutoSense program as having the highest intelligibility compared to the pediatric default or the manual programs. If we look at their reports from the journals they completed during the real-world listening phase of the study, any positive number indicates a preference for AutoSense over the pediatric default. And you can see that for all the situations, for the cafeteria, for home, we typically have positive numbers indicating a preference for the AutoSense program.
And that's also true in the car and in restaurants: there's typically a preference for the AutoSense program. And in spite of the fact that all the children who entered the study had not used AutoSense before, they'd used the pediatric default, by the study's end not a single child preferred the pediatric default program over AutoSense. So a really impressive finding there. We conducted a follow-up study just three years ago in which we looked at a newer version of AutoSense that was designed more specifically for children. Fourteen school-age children with moderate to moderately severe hearing loss were all fitted with Phonak Sky V90 hearing aids, once again fitted to DSL targets.
This time, we had five different programs, each with a little bit more noise management technology: first, an omnidirectional microphone with the quiet frequency response and no noise reduction; second, still omnidirectional, but with the DSL noise frequency response and noise reduction on; then we add in UltraZoom; then we add in Real Ear Sound; and then everything together, with UltraZoom, noise reduction on, and the noise frequency response. And we evaluated it in three different microphone modes again.
And this last program is what you would find in the AutoSense technology that is in Advanced Bionics sound processors now, so that's technology condition number five. We evaluated speech recognition in noise, once again in a realistic listening situation with speech coming from in front of or behind the child and noise coming from loudspeakers surrounding the child, and once again in a simulated classroom environment. And what you'll see here is similar to before: when we had the AutoSense programs with the adaptive directional microphone active, we see about a 25-percentage-point improvement in performance. With Real Ear Sound, we see about a 10-percentage-point improvement, not as much as what we see with AutoSense's UltraZoom directionality enabled.
When speech comes from behind, once again we do see some directional detriment for sounds that aren't arriving from the front when we're in AutoSense. But the directional detriment in the AutoSense program is still on the order of about 10 to 15 percentage points, so not as great as the directional benefit we see when AutoSense is active and the speech comes from the front. As I mentioned, we also evaluated localization with an eight-loudspeaker array. A dog bark was presented from the different loudspeakers, and the children had to point to the loudspeaker from which the dog bark originated. And what we can see, and this might surprise some, is that localization performance was better in the directional mode than it was in omnidirectional mode.
And it was best in Real Ear Sound mode. And remember, in AutoSense, if it's quiet, it'll stay in Real Ear Sound mode. So that's what the child's going to be using. If it gets really noisy, then it'll go into directional mode.
And you might think, well, gosh, I would have thought it would have been better in omnidirectional mode. But remember, when worn on the head with a behind-the-ear instrument, omnidirectional mode makes the device most sensitive to sounds that come from the side of and behind the listener. And that probably explains why localization is poorer in that situation.
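[Editor's note: as a rough illustration of the directional side of that comparison, here is a sketch of the polar response of a first-order, two-microphone differential beamformer of the kind described earlier, showing the front emphasis and rear null that drive both the directional benefit and the detriment. The port spacing, frequency, and delay values are illustrative assumptions, not the parameters of any actual Phonak or AB device.]

```python
import numpy as np

# Sketch: polar response of a first-order differential (dual-microphone)
# beamformer. Spacing, frequency, and delay are illustrative values only.
c = 343.0    # speed of sound, m/s
d = 0.012    # 12 mm port spacing, a plausible order of magnitude for a BTE
f = 1000.0   # evaluate the pattern at 1 kHz
tau = d / c  # internal delay placing the null at 180 degrees (cardioid)

omega = 2.0 * np.pi * f
angles = np.radians(np.arange(0, 360, 30))
# Output = front mic minus internally delayed rear mic; for a plane wave from
# angle theta (0 deg = front), the rear mic lags the front by (d/c)*cos(theta).
response = np.abs(1.0 - np.exp(-1j * omega * (tau + (d / c) * np.cos(angles))))
response /= response.max()

for theta, r in zip(np.degrees(angles), response):
    print(f"{theta:5.0f} deg: {20.0 * np.log10(max(r, 1e-6)):7.1f} dB")
# Near 0 dB toward the front, a deep null toward the rear: sounds from
# behind are attenuated, which helps in noise but costs rear-arriving speech.
```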
We also did a MUSHRA task in which we subjectively evaluated the children's preference for these different technologies. They were blinded, again, to what they were choosing. And we asked them to rank-order the five programs we evaluated, from their most favorite to their least favorite.
And this is one of the most surprising findings from any of the research studies that we've conducted. We had them complete this MUSHRA testing for how comfortable the listening situation was, how well they were able to understand speech, and their overall favorite. And what you see, almost universally, is that the children ranked the AutoSense Sky program as the best for all listening situations. And that's true not only when speech comes from the front, but also when speech comes from behind. And you might think, well, why is that the case, since that's probably reducing their ability to understand speech? The reason, I believe, is that children hate noise.
We've seen from Ben Hornsby's and Erin Picou's research at Vanderbilt that children really experience listening fatigue in noisy situations. And they really welcome the use of these noise management technologies, both for comfort and for speech understanding, and as their overall favorite in noisy situations, especially as implemented in the AutoSense that's in Phonak hearing aids and Advanced Bionics sound processors. If you're still not convinced and you think I'm all wet making this kind of recommendation, I will refer you to Harvey Dillon, who's far smarter than I am, and to his textbook and its chapter on pediatric amplification, where he speaks about adaptive directional technologies. He mentions that, based on their research at NAL, infants and young children should routinely be fitted with advanced directional microphones. And that's part of the guideline or protocol for fitting hearing aids in children at Australian Hearing Services, for the children with hearing loss who are served by AHS in Australia.
So there's evidence to back this up, not just our evidence, but from other studies and researchers as well. And with that said, I thank you for your attention. I do say that great outcomes are probable when we do what it takes to shoot for the moon. And a big part of that is using advanced technology for the children we serve, so they can hear in the most challenging listening environments where they find themselves.
Thank you. PRESENTER: Thanks, Dr. Wolfe, for those insights. I'm sure many of us are excited to bring these technologies to pediatric CI recipients. And that brings us to our first two CEU questions.
First, RogerDirect requires no additional boots or receivers attached to the Marvel CI sound processor. A, true. B, false. If you said A, true, you're correct. [musical chime] Marvel CI users don't need any extra equipment attached to the processor, keeping the sound processor small and lightweight for classroom and playground use.
And for question number two, what listening environments were factored into AutoSense Sky OS specifically for pediatric users: A, conference rooms; B, noisy bars; C, classrooms and playgrounds; or D, dance clubs? When thinking of our younger recipients, classrooms and playgrounds, C, are probably the environments that come to mind. And now we're going to hear from a Sky CI M user. Emmy Cartwright is an AB CI recipient and a Sky CI M user. Emmy, thanks for joining us today to share some of your experiences. Can you tell us a little bit about yourself? EMMY CARTWRIGHT: Hi.
My name is Emmy Cartwright. I am a bilateral AB recipient currently using the Marvel sound processors. I was implanted in my right ear at 13 months old.
And my second ear was done at six years old. I'm currently 20 years old and a junior at Northern Arizona University, majoring in elementary and special education. PRESENTER: Thanks for joining us today, Emmy. So how did you find the process of upgrading to Marvel CI? EMMY CARTWRIGHT: I personally found the process of upgrading to the Marvel processors to be very seamless and exciting.
I have been loving the curved aspect of the processor and the batteries. The batteries are smaller, yet they have a longer lifespan, which has been amazing. And the app has been something I've been immensely enjoying. The integrated Roger and Bluetooth capabilities have been huge, especially right now while everything is online.
That, and I just like to stream music all day. But that's just me. As for my transition to AutoSense-- historically, as someone who's been implanted for 19 years, I have been notorious for not being a fan of any software upgrades. AutoSense, however, has kind of been the exception to that.
I have been loving AutoSense a lot. PRESENTER: Wow. That's great to hear.
So are you using AutoSense Sky OS as your default program? EMMY CARTWRIGHT: Yeah. So when I had my first audiology appointment, I was getting everything set up, and I knew myself. I knew that in the past, I have not liked any software changes, definitely not drastic ones.
I'm very much someone who, after 19 years of being implanted, needs that sense of control. So I wanted to try AutoSense, but I knew I didn't want it as my default program. So I asked my audiologist to set it up as an additional program. I went home that day, and I kept switching between my default program and AutoSense, just trying to figure out what the difference was and which one I preferred. I did this for the next day or so, and then I emailed my audiologist about 24 hours after I had been fitted with the Marvels. And I asked her for another appointment, because every time I put on my processors, I would automatically switch to AutoSense.
I automatically had to be in that program. So I set up another appointment and I got AutoSense as my default setting. I had to go home from college for my audiology appointment.
And I didn't come back until I had that second one where AutoSense was my default setting. PRESENTER: Marvel CI has a number of different connectivity options. Can you tell us about how you're using those to connect to other devices? EMMY CARTWRIGHT: It is no secret that I am very connected with my processors. I am always streaming something.
I am either on my computer watching a show, using the Bluetooth capabilities to stream it, or I'm on my phone listening to music or taking phone calls, once again using the processor's Bluetooth capability. And for phone calls, I called my mom and I was able to hear her very clearly. I left my phone in one room and walked around the apartment talking to her.
And it was crystal clear the entire time. And on her end, she has told me that it is also very clear, which is a big improvement and something that I'm really loving. And the same goes for music. The music is very defined. And once again, just clear.
I can hear everything that's going on, all the details in the music. And to be able to do that hands-free has been amazing. PRESENTER: And then, are you using the AB Remote app too? EMMY CARTWRIGHT: Yeah. So I've been loving the app for many reasons. I love being able to see the battery life. My nervous habit has always been checking my batteries before I do a presentation at school, before concerts in high school, whatever it was.
I always had to take my ears off and check the battery, check it five times, and then put it back on. Being able to see the battery life on the app is reassuring. It's just nice to see where you're at. And I also will change the volume as needed.
But the biggest use for me out of the app is the mixing ratios for the audio and the streaming, or whatever it may be. I use this when I'm streaming with the Bluetooth capabilities, or, largely, when I'm in class and using the Roger Select to stream my classes. In my apartment setup, it gets loud. I have three roommates, and we're in a four-bed, two-bath with a living room and a kitchen. And it's basically still a dorm. It gets loud.
And that can be hard when you're trying to attend classes. And so we have a roommate group chat, and I can honestly say that one of my biggest contributions, if not my main contribution to that chat, has been: hey, can you turn the TV down? When I'm in classes and they're listening to the TV, I can hear what they're watching. I can hear it so well that I know what they're watching.
And it's so distracting when you're trying to focus on class. So being able to go into the app and change that mixing ratio, to make the environmental sounds quieter and err more on the side of the streaming from the Roger, has been huge. And for me, a lot of it too is not having to ask my roommates to do something or change something for me, but being able to have that sense of control and change it on my own, without relying on someone else.
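[Editor's note: as a minimal sketch of the mixing-ratio idea Emmy describes, streamed audio (for example, from a Roger Select) and the processor microphones can be combined with a user-controlled balance. The function name, the linear crossfade law, and the signals below are assumptions for illustration, not how the AB Remote app or the processor actually implements environmental balance.]

```python
import numpy as np

def apply_environmental_balance(streamed: np.ndarray,
                                mic: np.ndarray,
                                balance: float) -> np.ndarray:
    """Mix streamed audio with microphone (environmental) audio.

    balance = 0.0 -> microphones only; balance = 1.0 -> streamed audio only.
    A simple linear crossfade; a real device likely uses loudness-matched,
    frequency-dependent mixing rather than this toy law.
    """
    balance = float(np.clip(balance, 0.0, 1.0))
    return balance * streamed + (1.0 - balance) * mic

# Example: favor the streamed lecture 80/20 over a noisy apartment.
fs = 16_000
t = np.arange(fs) / fs
lecture = 0.3 * np.sin(2 * np.pi * 220.0 * t)              # stand-in for the Roger stream
room = 0.1 * np.random.default_rng(0).standard_normal(fs)  # stand-in for mic pickup
mixed = apply_environmental_balance(lecture, room, balance=0.8)
```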
PRESENTER: Yeah. Environmental balance, for adjusting the mix of streaming and microphones, is incorporated in the AB Remote app, and it can also be adjusted from the multifunction button or the Phonak RemoteControl. So you mentioned attending classes from your dorm room. And I think you've also started classroom learning again.
What's been your experience with these different learning environments while using Sky CI M? EMMY CARTWRIGHT: So for classes, right now there are kind of two different settings to talk about. Most of this time has been online, during which I've been using the Roger Select to stream the audio of class and using the app to change the mixing ratios as needed. And that has been really helpful. I have recently had the opportunity to go to classes in person. And I will say that I was a little worried about it, just because everyone was wearing masks, and that can add an extra layer of difficulty at times. But I went to class with AutoSense and my Marvel processors, and I felt I didn't struggle at all.
I feel like AutoSense really helped me in that situation. I was able to really hear my professors and classmates as they were talking. And the way that our classes are set up, they're hybrid.
So even if you're in person, you have half the class attending online. So you're getting the computer audio from the Zoom class coming in. But you also have your peers around you.
And so switching between those two different audio sources was very interesting. But I found that I really didn't struggle with it, and I thought that to some level I might. So that was really reassuring.
And it was nice to have the processors and the technology to help me with that. PRESENTER: Great. Thanks so much. Can you tell me a little bit about how you're hearing in other difficult listening situations? EMMY CARTWRIGHT: So since getting the Marvel processors, there have been a few louder environments that I've been in. I would say that, for me, the most notable situation so far where the Marvel processors and AutoSense have really helped was when I went to get my second COVID vaccine.
They were running the vaccine site on campus, and it's actually in a gym. And gyms, just with the echo, are not the easiest situation. So I went to get the vaccine, and they had booths set up where you could sit down and someone would give you your shot.
Well, at the booth that I went to, the person had their music playing, in a gym with a bunch of other people in it. So once I heard that music, I just thought, oh man, here we go. And we were all wearing masks. But I sat down, and I never had to ask her to repeat what she was saying. I never said, what? I was just able to hear everything that she was saying without any additional struggle from, one, the music, two, the gym, and, three, all the other people.
It was really seamless. And I didn't have to do anything. I didn't have to change programs. I didn't have to use Roger. I was able to go from outside in a quiet situation, walk into a gym where it was loud, and walk right back out.
And I didn't have to do anything. And that was really big for me. It was just very helpful, and not something that I've been able to do in the past.
PRESENTER: Wow. That's very cool. It's great to hear that Sky CI M offers the flexibility that you need for the adventure that is attending college in the midst of a pandemic. Thanks again, Emmy, for sharing your experiences with us.
Keep us posted on how you're doing. And best of luck with your classes this semester. So now, as we wrap up, we have one more CEU question for you. Recipients are able to adjust environmental balance while streaming using: A, the multifunction button; B, the AB Remote mobile app; C, the Phonak RemoteControl; or D, the multifunction button, the AB Remote mobile app, and the Phonak RemoteControl. And if you guessed D, you are correct. [musical chime] Environmental balance can be adjusted via all of those options: the multifunction button, the AB Remote mobile app, and the Phonak RemoteControl.
From all of us at AB, we'd like to thank you for your time and attention during our breakfast symposium about Sky CI M. [musical chimes]