NeuroTechX Boston + BrainMind: A Conversation on Ethical Neuroinnovation
Hello, welcome everybody. For any of the stragglers, please just come in and grab a seat. Everybody got so quiet so quickly.
All right, well, thank you all for coming to this amazing neuro panel event. I kind of want to start things off by welcoming all of you and giving a huge thanks to the Dana Foundation, who's sponsoring this event, as well as our partner BrainMind and our co-host the MIT Press. My name is Alex Higuera.
And I'm the lead organizer at NeuroTech Boston, which is the Boston chapter of NeuroTechX, which is an international community of almost 20,000 neurotech enthusiasts. And with that, I'm going to pass it on to Adam Sefler. Like I mentioned, he is the CEO of NeuroTechX Services. And thank you all for joining. Thank you, Alex. [APPLAUSE] I'll be very quick.
So, yes, I am part of NTX Services. But I'm also a board member of NeuroTechX. And what is great today is to think about how the topic of ethics has evolved throughout the years.
So back in 2015, '16, '17, it was a casual conversation at the occasional event, people meeting up. And if we think about where we are today, there's an event like the one we have. We have actual initiatives. We're talking about it. We've published chapters of books about this.
And we're also working on a NeuroTech credential. So the first program about NeuroTech, with Queen's University-- so thank you, Susan, for being here. Susan is our partner in crime on the Queen's side. And there's a whole course dedicated to neuroethics. So I just wanted to frame today's event in a much broader context, where the neurotechnology industry is evolving quite a bit and ethics is really becoming an integral part of every conversation. And the reason why we chose to have a course about it is that as much as people need to understand about neuromodulation, neurofeedback, and so on, they should also think about neuroethics as being a foundation of neurotechnology.
So with that in mind, Bob Prior from MIT Press will say a few words. Thank you. [APPLAUSE] Thank you, Adam.
Good evening. I'm Bob Prior for the MIT Press. We're one of the sponsors this evening. MIT Press is 60 years old. We just celebrated our 60th anniversary.
And in 1962, when we were founded, one of the first books we published was in neuroscience. So our roots with the field of neuroscience run very, very deep. We are also very heavily involved in publishing in neuroethics. I have to look at my notes.
Martha Farah's book on neuroethics was published in 2010. That was probably the first book on neuroethics. And we have several others coming on bioethics and brains and another by Walter Glannon at the University of Calgary, which is going to be a textbook on neuroethics. So this is an important area for us. And we are very privileged to be part of this community. And I hope very much you enjoy this evening's event.
And now I introduce Diana Saville, from BrainMind. [APPLAUSE] All right, that's a lot of introductions. So I'm going to keep this very short. I'm Diana Saville, co-founder and CEO of BrainMind.
And we are a non-profit organization. And we ask this question, what neurotechnology would you create if you could take profit entirely out of the equation and only think about benefit to humanity? So we bring together funders, researchers, entrepreneurs around that question. And we bring resources into that space. But you cannot talk about impact, the effect of neurotechnology on the future of humanity, without ethics, without a strong ethical framework. And that's what brought us to neuroethics. And we were really excited to find that very smart people, for multiple decades, have been thinking about this topic.
And actually, 20 high-level guidelines have been published, nationally and internationally, to address these questions. So we have some answers, but still many open questions about right and wrong when it comes to neurotechnology. So we have this incredible panel here to discuss the organ from which our sense of being human really derives. And when we're talking about modulating, and sensing, and decoding that organ, there are major societal and ethical questions that emerge. So without further ado, I ask you to lend your precious brains for the next 90 minutes to Anna Wexler-- please come up-- our facilitator for the evening.
She is one of us. [APPLAUSE] Anna is faculty at UPenn and the Director of the Wexler Lab. Next, we have Dr. Rachel Conrad. Welcome.
[APPLAUSE] Rachel is faculty at Harvard Medical School. She teaches neuroethics to undergraduates at Harvard. And she is the Director of Youth Mental Health at Brigham and Women's Hospital. Welcome.
Then we have Adam Molnar, who's the Co-Founder-- [APPLAUSE] --and Head of Partnerships at Neurable, a BCI company. Pardon the cheers. And finally, we have Ben Stecher, who is a patient advocate and Head of the Patient Advisory Board for Rune Labs, a brain data company. [APPLAUSE] And we are also joined by Ana Maiques. Ana, thank you for staying up late in Barcelona to join us. Her flight was canceled.
[CHEERS] But she's making it work. Ana is the CEO of Neuroelectrics, which provides EEG solutions to labs all over the world. And she's going to also be speaking from the entrepreneur perspective.
So thank you all for joining. Great. Thanks, everybody. We really have a fantastic panel for you here today.
So as Diana said, my name is Anna Wexler. I'm a Professor of Medical Ethics and Health Policy at the University of Pennsylvania. I study ethical, legal, and social issues surrounding neurotechnology.
And I wanted to kick things off by asking each of the panelists to give a bit-- more of an in-depth introduction. So if you could introduce yourself and just talk a bit briefly about your experience related to neurotechnology. My name is Rachel Conrad.
And I am a child and adolescent psychiatrist. And I continue to practice psychiatry. And I'm also trained in bioethics at Harvard Medical School. And right now, I'm really spending a lot of my time teaching.
And so I'm teaching from freshman year of undergraduates to fifth year residents and fellows in medical training. It's really interesting to work with students at such different levels of their training and experiences in personal growth. But I think what good ethics education brings is teaching people to sit with things that are uncomfortable and teaching them to have conversations that they otherwise avoid, particularly with physicians, drawing out topics that no one wants to talk about in the hospital. And I think ethics begins when everyone's uncomfortable and no one knows what the answer is. And so that's really what we work on in the ethics curriculum at Harvard. Hi, everyone.
I'm Adam. I helped co-found a company called Neurable about seven years ago. And I work primarily on the business side.
And in doing so, interface with so many different types of organizations, that I realized early on that getting ahead of some of these concepts is really critical to technology that's going to change the-- everything. So, yeah, I've been talking about this for a little while now. And I've been in a unique position where I've been at the crossroads of, I think, very important ethical decisions, at least on a small scale, that permeate to greater ramifications. And I'm excited to speak with these lovely people here today. All right.
Thank you, Adam. Hi, everybody. My name is Benjamin Stecher. I'm from Toronto, Canada, by way of Nairobi, Kenya, and Israel, and Denmark, and a bunch of other places in between as well. So I'm here today because my foot is shaking, kind of. I was diagnosed with Parkinson's disease about 10 years ago.
But last year, I actually underwent a pretty radical, kind of new treatment for this particular disease. It's called deep brain stimulation therapy. It's been done about 200,000 times all around the world.
But what was unique about mine was that mine is actually an adaptive system as well. And what I'm going to do maybe-- I don't know if now's a good time? Sure. OK.
So now I'm actually going to be demonstrating to you what exactly that looks like here-- so Adam. Thank you. This is a Samsung programmer from Medtronic. If you look on the screen above you, you'll see that was me about one year ago today-- or the first one was June 1st of last year. That lovely looking cap that I had on my skull was a frame that got bolted into a much larger contraption, that then held me in place as the deep brain stimulator was implanted within me. That next picture that you see was three days later in the hospital, after the battery was inserted in my chest and the wires were tunneled through my neck.
If you go to the next picture though-- so that was during the operation itself. If anyone is-- I should have said this before-- but if anyone is squeamish, they might want to look away right now. Too late, though-- I'm sorry. But in that picture, holes are being drilled through the top of my head.
And I was awake for that entire surgery as well. It was about an eight-hour procedure. And this is basically the result of that. Now before I show you this next thing, please go to the next slide. And I want to show you what it sounds like inside your own [INAUDIBLE]. [CRACKLING] So what do you think that was? Any guesses as to what that sound might have been? Well, obviously, it's a part of my brain.
But which part-- well, actually, it's more or less what your brain sounds like as well, if you go deep, deep, deep inside of it. Any idea what that structure is called though, that mine-- it's the most common structure that DBS is implanted in, in people with Parkinson's disease. [INAUDIBLE] Very good. It's up there.
So, yes, that was the subthalamic nucleus. And that's more or less what yours sounds like as well. Now, I don't know about you guys. But to me, that sounds like 10,000 dolphins all chattering, all at once, very quickly.
If we can play it one more time while I set up this next thing? And please zoom in on this if you have a minute. [CRACKLING] Thank you. All right.
Now what's happening is-- I don't know if you can see this right now. Yeah, good. OK. All right. So the communication is in progress. So what's happening is there's a message going from here to this transponder thing, to the battery, to the wires, and then back out again.
And what I'm going to do now is I'm going to show you the three things that are really unique about-- well, kind of unique about this thing. So first, there's the adaptive setting, which is currently on right now. I'm going to turn this off momentarily. A couple of things I need to say first-- well, actually I'm going to hold off on some of these things for now. But first, I just want to let you guys know that I'm just going to pause the adaptive therapy. And you'll see a slight change within me as it reverts back to the continuous DBS settings.
Come on. It's loading something. There you go. All right.
So now, in an instant-- and I don't know how much you guys were able to perceive, to be honest. But I feel it pretty instantaneously, as my tremor tends to get a little bit worse. But there's also a bunch of things happening within me as well, that are somewhat imperceptible to you guys because you guys don't know what it's like to be in my brain, obviously. I'm going to turn this back on now. But then what I'm going to do next is going to be a little bit more dramatic as well. Hold on.
And was it totally off just now? It's on continuous. So that was just the continuous settings as well. However, I have a bone to pick also with Medtronic.
I hope there's no Medtronic reps here right now. But I don't know if anybody could just see that there's two words written, left and right. What do you think left and right actually means though? Well, neither I nor my doctors really knew at the beginning, because the right side of your brain controls the left side of your body and vice versa. It was not properly labeled on this lovely little Samsung thing-- and so, yeah.
But now it's going to be a little bit more dramatic. And for some people, it might be a little bit uncomfortable to see this next part. I'm going to turn off the whole device. And you'll see what the whole effect of DBS is like.
And then I'll turn it back on as well. A couple of things to note-- I might not be able to speak properly during it. I'll continue to talk. I'll try to talk as much as I can. But you'll see how my ability to speak will be limited in those times as well.
Yeah. Without further ado, let me just turn this baby off. OK.
Yeah. Can you zoom in, so I can see it? Come on. All right. So now what's happening is, in an instant, the electricity from the battery was turned off. One of the most difficult parts for me is something called dystonia.
Does anybody know what dystonia is? It's like a-- it's when your muscles become very, very tense. And, again, this happens pretty instantaneously because of this wonderful little DBS device that I have implanted inside of me. So it's something that you can't see. In my foot, in particular, it's a little bit painful to use sometimes because it's curled up. Like, it's even hard for me to actually let go of the grip on this transponder here.
Anyway, when I turn this back on-- and hopefully this will turn itself back on as well. Communication in progress. Hold on.
Still-- OK. Ah, OK. So I feel it instantaneously. But usually it takes a few minutes for the effects to then wash through my whole body. I'm going to stop talking for now though and hand it over-- back over to our moderator over there.
Thank you, Ben. [APPLAUSE] Thank you for your willingness to share that with us. And I think that's a very moving and very strong demonstration of the power of neurotechnology.
I mean, right before-- right before our eyes here. Ana, it's a bit hard to follow this, for your intro. But you do incredible things as well. Would you give us a brief intro and talk a little bit about your work in neurotech? You're on.
But right now, what I'm wearing is a non-invasive device-- not only EEG, but also a sort of neuromodulation. So Neuroelectrics has developed this technology, which is an EEG, but also a non-invasive electrical stimulation. So, of course, we cannot get to deep areas of the brain.
But we still think we can modulate brain activity in cortical areas and help millions of patients that are in need. So as a CEO and founder of this company, which is in clinical trials with the FDA on epilepsy and depression, the first time I met a neuroethicist I was so shocked by the questions they were asking. I had never thought of them in my technology development.
So I became an entrepreneur super interested in questions that neither the FDA, nor all the quality measures as a medical device manufacturer-- nobody was asking me. So I think that knowing Anna and so many people in the field made me really rethink the way we are developing this new therapy for patients at Neuroelectrics. And I often think of issues of identity, autonomy, and privacy in ways I would never have thought if I wasn't exposed to this world. So I think that this event today is amazing.
I'm sorry I cannot be there. I'll be there tomorrow. But I think it's great that the audience out there at MIT, the researchers and the people that are developing these technologies, are aware of these challenges, and that we are dealing with the most sophisticated, yet most unknown, part of our body and what defines us as humans. So I think that in the debate, we can think of whether there should be neurorights, as Rafael Yuste at Columbia put it, pushing for them to be regulated-- so that I, as a company, should have stronger regulation-- or whether there should be more guidelines by the industry. So I just leave a few items there for the discussion.
But I'm always very happy to join your ethics discussions. And I honestly embed them in my day-to-day work as a CEO. Thank you. And that's actually a great segue to our next question for the panel because we're not just here to talk about neurotech.
We're here to talk about ethics and neurotech. So this is a question for all the panelists. What do you think are the main ethical issues in neurotech? And I know there's many. And I know we can spend many hours thinking about them. But if you had to pick your top one or two things, that for you are the most pressing, what would those be? And a related question, if you want to touch upon it, is do you think that these issues are unique to neurotech or are they issues that we want to think about generally with other sorts of technology? And we could just go around. So Rachel, you can kick things off.
So one thing that I think about a lot in my clinical work with psychiatry, as well as in the neuroethics course that I teach, is what is a human's identity and, kind of aligned with that, when are they responsible for their behavior? And so I think that a really pragmatic example of this is if someone has a computer/brain interface, some type of technology in their brain, and they commit a crime when they have that computer in their brain, how do we know if that person is responsible for that crime, or is the person who programmed that computer responsible for the crime, or is no one responsible for the crime? We don't have legal precedents for any of these things. And further, ethics is actually not necessarily always aligned with the law. And as Anna mentioned, regulatory bodies and legal precedents follow far behind novel technologies.
And so that's something that I think about a lot, like where does human identity end and where does something else begin, and when are people responsible for their actions, and who is making those decisions once we have computers so closely aligned with our minds? Hi, everyone. I had a really interesting meeting early on in my career where I met a woman from Europe. She's part of the largest think tank there. It's a German name. I wouldn't dare try to pronounce it.
It's like 20 characters. And she was working on data ethics. And she had reached out to me, seeing that I was part of a neurotechnology company. And I thought it was really interesting that she reached out.
And she and I had a long conversation about the implications of the technology, implications of security, and ethics in general. And one of the, I guess, conclusions that I came to in that conversation was that a lot of the issues of potential neurotechnology are very similar to the challenges of data. I think data ethics is the same, if not the bigger, umbrella that neuroethics falls under.
And one of the challenges that we face as a field is somewhat of a semantic argument of this versus that. And so I think, from my perspective, being able to educate end users or future end users about what is brain data, why is it important, how do you understand it, what are the implications-- getting that kind of literacy out there, in my opinion, is one of the most important things that we can do, especially because even if you just look at your smartphone data, most people truly don't understand the extent of it. Actually, if you look in the news just today, TikTok is in supreme hot water because it's been found that it's funneling a whole ton of information to China, that a lot of the data that we thought was private is actually not. And what is it-- the FAA, FCC-- one of those agencies was like, we need to remove TikTok today-- so, yeah.
So that's my unpopular opinion, that neuroethics really is just data ethics and that data literacy is one of the most important things that we can do as academics, as entrepreneurs, et cetera. So there's a lot of different things that we should be talking about, a lot of different things that could be talked about as well. The two things that my fellow panelists just mentioned are also very important.
But for me, honestly as a patient, this isn't very much related to the brain per se. But I think it is the informed consent as well. At the beginning of every clinical trial, there's an informed consent sheet that every patient has to sign.
And you're very much bound to what's in there for the duration of that trial. However, it's basically just a piece of paper that I signed over a year ago. I have no idea today what is in that piece of paper or that document. And so I think that informed consent should be a process of educating patients. They should have regular-- I don't know if testing is the right solution. But it should be a process whereby I am continually reminded about what's in that informed consent document.
And then I'm always re-educated about that as well. Because I have no idea what's in there and I'm completely legally bound by basically everything that's written in that document. And I'm expected to know it as well, even though I don't know it. And I consider myself a fairly well-educated patient. But I don't remember what I signed over a year ago.
So, yeah, that informed consent document should be part of this discussion as well. Ana? Yeah. I feel your pain as a patient because, with our 190 patients in our pivotal phase III study, we are also doing informed consent. And I totally agree. I mean, even our first indication is epilepsy. And this is noninvasive.
Brain stimulation is pretty new. So there is a huge education to do, not only on the patient, but also on the neurologist or, in the case of depression, on the psychiatrist. So it's a relatively new technique. It is not neurosurgeons, like with DBS, who are applying this technology; it's really neurologists or psychiatrists. So I really feel your pain. And I think that sponsors and, in general, companies, we should make a bigger effort to constantly educate patients and their physicians.
But on the other hand-- I mean, I understand the data issue. But to me, I am more concerned because, for the EEG side of the technology, for the reading of the brain, I agree that is huge. But for brain stimulation, for neuromodulation, there are still a lot of open questions. When an entrepreneur from Silicon Valley comes to me and says, Ana, I want to do a similar technology like yours in the consumer space, I often ask them, do we understand the long-term effects of stimulating the brain constantly, day after day after day, in the consumer space? And the answer is we don't know.
We don't know the long-term effects. So that's kind of like an open question, that I think should be further analyzed. And the other is judgment.
I understand the data privacy. But when we are stimulating the brain of a healthy person or a patient, are we changing their judgment, for example? So I think that's why neurotech, and especially brain stimulation, is so unique in terms of ethics. I think it's another dimension than general data or artificial intelligence, in terms of ethics. Ready for some arguments? Sure. Let me-- you can respond. And then I want to pick up on the patient engagement point.
For sure. Ana, do you use any form of social media, LinkedIn, Instagram, Facebook? I'm friends with you on LinkedIn. And I knew that.
Yes, I do. I do. I would argue that all of those are things that are neurally stimulating and have an effect on our personality. And that's why I say it's a battle of semantics when we're talking about neuroethics versus data ethics. It's a lot of the same problem, just couched in a different, emotionally charged feeling or some-- I think that people put brains on a pedestal almost, to some extent.
But, yeah, in my opinion, there are so many things that are administered to us in our environment on a daily basis that are intentionally trying to change our neurochemical processes, et cetera. So neuroethics really is data ethics at the end of the day. But, yeah, I throw it back to one of you, and Ana. I'll leave it to any of the panelists if they want to jump in.
So I think that actually we can tie all of these pieces in with patient consent, change in behavior, and what kind of closed-loop technologies mean. So, for example, say someone is developing some kind of neuropsychiatric illness. And they say in their kind of living will, stating their future medical wishes, if Ana develops a technology that's going to cure my disease, please give it to me.
So a decade later, they're totally demented, bed-ridden, haven't spoken for years. And Ana's new technology is approved. And so they have their pre-stated wishes that this person wants this technology. And so you put Ana's fancy cap on and the patient wakes up after a decade. And then they decide that they don't want that treatment anymore.
So who is the patient? Is the patient the person a decade before, who made this planning for what they would want when they developed a neurodegenerative disease? Is the patient the person who's unable to speak and lying in a bed? Or is the patient the one who's now wearing the cap and has arisen from the dead? And so then it becomes questions of identity, who's responsible for behavior and who's allowed to make decisions. Yeah. In my opinion, I think that if you're a patient that's signing away that right, you should be educated, to everyone's points so far, as to what the potential implications are. Are you signing away that in 10 years you may wake up and you may have a different response? Which direction are you going to go? And obviously some of these things are not possible to discern. Like, I think an actual very real challenge with neuroethics is that there are so many unknown unknowns.
There are unknowns that stem from the unknown policies, applications, capabilities that are about to be put out there. So it's interesting. So I'll just tie this back to some of the debate that goes on in neuroethics, which is, I think, something that all the panelists were sort of hinting at, which was whether there's something unique about the brain. And so in neuroethics or philosophy, we often call it neuroexceptionalism. Is there something unique here? Should we be treating the brain differently when we think about ethics than when we think about the ethics of pharmaceutical drugs, or the ethics of cardiology or psychiatry? Well, psychiatry gets to the brain-- but you know. So I think that's one main question.
And I would say there is no answer. As you could see here, this was a great example of debates going either way. Adam saying these are the same issues that we have in other areas. This is just-- I think if I understand you correctly, right, that this is a different instantiation really of issues that we talk about more broadly in other fields, like data ethics.
And there are those-- like, I think Ana, you'd probably fall along this line, and Rachel-- who think that there is something fundamentally different here about the brain. Where I do think we all agree is that these issues are really important to talk about, and that I think we can learn from the fields that have explored and grappled with similar problems. So I hope I portrayed people's comments accurately.
But I want to move to a point that Ben, you picked up on, and that I think your perspective is really powerful here. And that's with regards to patient engagement, especially in trials. So you're in an experimental trial right now.
And there's been a lot of discussion recently from the National Institutes of Health and in the ethics literature about really the lack of patient engagement and the lack of adequate patient education when it comes to trials, especially of implantable devices. It's not like you're taking a drug and you stop taking the drug, and you don't have the drug in your body. There is an implant that's in your body, perhaps permanently. And so it raises a really different set of questions.
And I know you serve on a patient advisory board. And you've been really a patient advocate. And so I wondered if you could share with us from your experience, what are-- so you mentioned, one, about informed consent.
What are other things that you see are the main issues in this space? What are things that you would like to-- if you could speak to developers of neurotechnology what would you tell them are points they should be considering about patient engagement and the patient perspective? OK. So first off-- so, yeah, I'm currently building a patient advisory board for this company called Rune Labs. It's going to be the fifth one that I've helped to create. Two were for two giant pharmaceutical companies and two were for two clinics up in Canada. It's amazing that you've done all that. However, the reason why I'm doing everything that I'm doing-- I also have a website called tomorrowedition.com--
just some free pub for me, I guess. But the reason why I'm doing everything that I'm doing is because I noticed very early on in the course of my disease, all the gaps that were there. And I think it actually stems and starts a lot from the researchers and the academics because they write for other academics and for other researchers. And yet, they also are responsible for creating a body of knowledge about Parkinson's or whatever condition might be out there.
And yet, they only speak to one another really. I mean, they're only really talking in their papers to each other. And they only really care about the recognition of their peers and their colleagues. And I saw that as a huge problem early on because I looked for like information about Parkinson's disease. And you get through the first few pages. And then you suddenly see that you're in this minefield of this academic jargon.
And it took me a long time to navigate it. It took me a long time to understand what was going on inside of me. And to be honest, I don't think anybody still really understands this particular disease very well. So I created this resource, Tomorrow Edition, in kind of a hope or an effort to try and make some of this language a little bit more digestible to regular individuals who might not be academics or who don't have that research background.
However, I would love it if researchers the world over-- and I'm sure there are some here-- would, at the beginning of every paper that you submit to a journal, just write like two or three sentences at the top for the lay audience. And you should start with that as well. I think not only would your paper be far more impactful that way, but you'd be able to communicate what you're trying to do to the world and in a language that they would understand. I think it's very important that you speak to a general audience when you're creating a body of knowledge, creating any kind of knowledge really. And you have to speak to regular, everyday people who are not in your field.
So that would be my one-- my next request, I guess. I have a lot of them though, so stay tuned. So Rachel, I was wondering, from your perspective as both a physician and an ethicist, what do you think physicians and physician researchers don't know that they should know in terms of thinking about ethics in this space and thinking about patient engagement considerations? Yeah, I would say I'm kind of overall pretty alarmed at the lack of ethics-- medical ethics and neuroethics-- education across medicine. It's very recently become-- I mean, I guess this answers your question. It's only very recently become a training requirement that physicians are trained in any sort of ethics. And there's no specifics of what that constitutes at any level of education.
So just like-- we're actually trying to do a publication right now on the ethics curriculum at Harvard and just trying to find citations of other similar programs in the country. We were just like emailing all of our contacts. And basically, we really can't find very many ethics training programs at medical schools in the entire United States. So I would say that there's enormous gaps.
And I think that there's cultural problems in the field of medicine. There's something that's often called the hidden curriculum about assimilating and submitting to the dominant medical culture and not questioning things. And so a lot of what I do-- I teach medical ethics to the first-year medical students and to the fourth-year medical students. And the first year, I mean they're delightful. They're so enthusiastic, and warm, and open, and human. And when we talk about something, they talk about their grandparents.
And they cry when we talk about end-of-life or difficult medical decision-making. And then the fourth year, they're very shut down because they've been exposed to all these distressing situations in the hospital, and told that they can't talk about it, and they can't feel about it, and to go see the next patient. And so we think about it as kind of like trying to bring back out their distress, trying to help them get into contact with those feelings and recognize that they should be upset by the things they're seeing. And so-- but it's really hard to hold that level of nuance and complexity with the intense demands, sleep deprivation, patient volumes, and rapidly changing technologies, to actually create space to think and feel about the complexity of these decisions.
I want to move to two questions for the entrepreneurs on our panel, Adam and Ana, so shifting gears just a little bit here. So both of you have founded, co-founded, neurotech companies. And I know, from talking to both of you, that both of you really think carefully about ethics.
So I have a two-part question here. One is, have you encountered practical challenges when trying to incorporate ethics or ethical thinking into the work that you-- your day-to-day work at the company that you run? And then a second, broader question that actually-- I should say that these questions, I culled them from some of the audience. I know you guys all submitted questions beforehand.
So some of these are-- and actually many of you had this next question, which is, do you think that the goals of ethics are fundamentally at odds with the goals of the companies that are more profit-driven? And you have certain responsibilities to your investors and your shareholders. So two-part question-- one is really about practical challenges. And the second is about this tension between-- potential tension, say, between ethics and business? Let's actually switch things up.
So let's-- Ana. I was just going to say, Ana. Yeah. Go ahead Ana, first. Thank you.
So I think it's interesting. I mean, when I first was exposed to neuroethics, I thought it was fascinating. But I didn't know how to apply it to my company. And I remember being invited to Korea, to the International Brain Initiative, kind of neuroethics global summit. And it was so impressive, as Rachel was mentioning, how apparent the cultural differences were across the brain research projects in Korea, in Japan, in the US, in Europe, and so on. But to summarize, I got a lot of inspiration from the guidelines of brain research.
And I did write a paper myself, to try to come up with specific questions that I could maybe bring into my own hardware, or software, or clinical development process. So it is not perfect. And I know there are some efforts in the industry to try to develop those guidelines. But to me, going step by step through the requirements on neuroethics that the brain research projects had to comply with was a super useful exercise.
And as Ben was saying, it has to do a lot with how you inform the patient. It's like a set of questions that you have to go after. So that was very useful. In terms of the conflict with investors, I'm very fortunate to have amazing Boston-based investors. And I did raise a significant Series A a year ago.
But I also think that the fact that most of my investment is going to clinical trials under the FDA also builds a lot of these ethical questions into the patient side and the execution of the clinical trials. So we are really driven to approval of these new therapies. And I think there are blind spots in the FDA process.
But I don't think there is misalignment right now between the investors, the FDA process, and the ethical issues. So I'm sure there are things that will be debatable. And as Rachel was saying, or Ben, maybe there are things that will come five or 10 years from now that we couldn't anticipate. But on the bright side, when you see DBS, or what Medtronic has achieved with many patients suffering from Parkinson's, or what we can do non-invasively to deal with brain diseases-- if you see the huge need in mental health, in neurodegeneration, and the amazing opportunity for entrepreneurs and new technologies to deal with the brain, I'm always optimistic. I know there are challenges.
But the benefits for patients, if you do it right, are just amazing. So I will keep on fighting hard to do our best. But I think the opportunity to help patients is gigantic.
Yeah, I totally agree with Ana. And if the medical space has holes, the consumer space, which is where we operate, is a black hole. It's truly the Wild West in terms of ethics and regulation. And I'll give a boring answer, and then the fun, entertaining one, that's also a little bit of a case study. So the boring one is with two of my colleagues, who are actually in the audience, Dr. [? Virani ?] and Dr. Stanley.
We're co-authoring a chapter of a book on neuroethics. And we're actually the only company that's writing a chapter. Everyone else is an academic from a lab around the world.
And what our submission is on is mapping the roles of various stakeholders. So what is the push or pull from investors, or a board, or customers, or the media? All of these different groups have different incentives and push decision-making relative to the organization in a different way. So if you can map that trail of influence, you can essentially see how an organization makes decisions. And if a government entity or some other agency has policies put in place, that's another stakeholder, that's another set of influence.
So that's the boring answer. The fun answer is I want to talk about a case study of Neurable when we-- we initially had tried to commercialize neurotechnology into virtual reality, especially when it was really, really exciting in 2016. And we created an amazing technology and an awesome demo, a game which we demoed right here in 2017, that we got the cover of The New York Times for-- a game you can control with your mind. So awesome technology, but we struggled with the product-market fit. We had to go back to the drawing board and see where people were paying for neurotechnology and how our capabilities could help solve a problem. And in doing a whole bunch of interviews-- I went to I can't tell you how many different conferences, changing ideas and business models-- we finally got a contract with a very large check size. And that was from consumer packaged goods.
It was to do neuromarketing. We were offered a lot of money, which, as I can tell you, for a venture-backed startup is a very important thing to have, to bring to your investors. We finally had a check in front of us for a not insignificant sum of money.
And my co-founder and I, and my team, we had a series of conversations. And ultimately, we turned it down. And that comes down to mapping the various stakeholders, mapping the various influences.
It wasn't in my company's culture to sell data for profit. It wasn't in my company's culture to only work on applications that take advantage of people, in my humble opinion, as opposed to helping people. And there's also a very practical consideration, where if we wanted to create this technology that could change the world, it needs to be with a brand and an initiative that people trust and want to encourage as opposed to a brand or initiative that people might actually fear-- or worse, are ignorant and unaware of. So that's the first real, like crazy ethical crossroads that my company had to face.
And I'm very proud and supportive of the investors that helped us navigate that difficult decision. And the other funny story was actually just several months ago. I got a cold email from a very, very, very large tobacco company that you all know. And they're like, we will pay you if you can show that our products are calming.
And I'm like, wow, you are evil. And, again, as a company that needs-- literally needs-- to show revenue in order to grow, a company that has employees and their families to think of, because in a startup you grow closer than you would in a large organization-- like, my employees have started to have kids and build families. So as a co-founder, you're looking out for their well-being. So saying no to money is hard. And it's somewhat of an ethical-- what's called a thin line where, do I make the ethical decision for my team or do I make the ethical decision for humanity or whatever larger humanitarian initiative you're embarking on? But I think, especially for a small company that's stripped of resources and having to make hard decisions every day, neuroethics is a very interesting topic. And the way to address that, and to say whether they're at odds or not, is to ask, I think, are the influences at odds? Thank you for that.
And I should say both-- I want to commend both Adam and Ana. I think you two are two of the few people who have actually written about neuroethics, actually done the work to write and publish in this space. Adam, I may or may not have been one of the reviewers for your chapter on neuroethics. But I really want to commend that. And I want to encourage other people in the audience, if anybody is an entrepreneur or working in this space, to really get involved and think about ethical issues. I want to circle back to a conversation we started, and one of the opening questions, which was about responsibility, identity, impacts to autonomy and agency-- big questions, right.
And I'll throw this one to anyone on the panel who wants to respond. But one question is, do we think these are real? I think at least some of us on the panel think these are real issues and real concerns to be grappling with. But thinking about practical steps that we can take to anticipate and address these issues, what should we be doing? Is there anything that neurotechnology developers, or researchers, or physicians-- is there anything we can do to anticipate and potentially mitigate some of these questions? So, yeah. So when I turn on my DBS, as I just did right now, I always get like a small burst of joy. Now, I do know as well, just from experimenting with my own doctor back in Toronto, if I turn it up too high, too quickly, I can quickly become very manic.
What that means for me-- well, I get labeled as being a manic individual in those circumstances. But that's because psychiatry still has a way to go, I think, in really getting down to the individual and seeing what's happening on an individual basis. What I mean by that is I would explain myself in those moments as feeling like a child. All of my senses also feel heightened. I don't know why that is. It could be just the neurophysiological response from being able to move properly again or it could be something happening deep in my brain.
But I do feel like a child in those moments. And, like, everything tastes better, as you might be able to see. Everything feels better. The drinks-- like every single thing-- like all of my senses feel heightened.
And actually, there's somebody here in the audience. I don't know if he wants to talk right now, but I'll give him the chance right now. His name is [? Felic. ?] I don't know if [? Felic ?] is here somewhere. Do you want to speak now? Is now a good time for you to speak? We actually have the table afterwards, with the patient engagement perspective.
Sure. But there's one question I'd love to ask him that might be relevant, I think, to this whole discussion about identity in particular. Because I can't say that I feel like the same person that I was beforehand, before I did this DBS surgery. And I'd like to ask [? Felic-- ?] because [? Felic ?] has been there through my entire journey. He saw me-- he's known me my whole life.
He's my uncle. I'm staying in his house right now in Brookline. And I wanted to ask him if he could give me his perspective right now in front of everybody here on how my identity or personality may have changed over the course of getting this DBS implanted deep inside of me. And then maybe give him some time just to reflect on how he felt watching his nephew go through this whole procedure as well. So yeah. [? Felic, ?] the floor is yours. [INAUDIBLE] [LAUGHS] Start us off.
[CLEARS THROAT] Two comments first. I think I was shocked to hear items related to the ethics of the kind of, let's say, very significant intervention into the human body treated as a data problem only, equating the problem with how we deal with data at, like, TikTok or Facebook. I mean, let's consider what we are really doing here.
OK? Very frequently, that's an irreversible intervention into the human body, and we really don't even know what will happen tomorrow. We may know what happens today, and we don't really know what happens tomorrow. One. I think it was Ben who mentioned that very, very critical element here-- namely, advocating for a patient consent to be a living document. OK? Especially from my perspective, watching the experience that Ben had for the last year, I think that is critically important.
The reaction of the patient keeps changing. The reaction of the family or the people around the patient also keeps changing. OK? It is not constant.
It's not steady. It's a very dynamic process that needs to be readjusted and adjusted continuously. And let me go back to the question that Ben kind of addressed. What he demonstrated, and we have seen all of it on the screen-- the DBS in his case has a profound impact on his physical well-being. The dystonia, the tremors, and so on.
Even the way you speak can be affected. What is not visible is that his behavior is affected as well, and we really don't understand it very well. Sometimes more, sometimes less. Sometimes he's happy, sometimes he's less happy. Sometimes he's very agitated.
Sometimes his filters are less-- let's call them social filters-- and sometimes they don't seem to be changing. And we really don't know how to deal with it, OK? They create a number of profound ethical questions. How do you deal with all of this? We really don't know. We don't even have a good language.
We don't have vocabulary to address it very well, but the people that are looking into this, especially from the ethics point of view, I think have to involve patients. OK? He is one of very few in this field that is looking at the issue of kind of a neurointervention. OK? From the patient point of view, from the subject point of view.
Everybody else is looking at it from a different point of view-- treating the patient, research into the therapy. But I think there has to be a very dynamic and very involved kind of input from the patients that are affected. I don't know if this helps. [APPLAUSE] Thank you for that. Again, I think we've heard this from both Ben, you, and your uncle, that there really needs to be more patient involvement throughout this process. I would say from my perspective there's at least now starting to be broader recognition that this is a major issue, I think.
Ana. Yeah. Go ahead.
One comment, because you're aware-- I mean, Ben, one issue is, for example, when we started our epilepsy trial at Boston Children's. I remember that-- I don't know how, but one of the patients in the study found my cell phone number as the sponsor. Right? Which I don't know how they did, but they called me, and I totally remember this patient. She was a nurse in one of the hospitals in the Boston area and had a remarkable improvement in seizure reduction.
And she called me and said, can I buy your device? And I said, no. You are on a clinical trial with the FDA. I'm super sorry. We need to wait until this therapy is approved by the FDA. And then I remember she said, OK. I've seen that online on Amazon there are some transcranial current stimulation devices that I can buy.
And so if you don't sell me yours, I'll go on Amazon. And I'm bringing this up as an ethical debate because I remember, with Anna, in some conversations we found out that on Reddit there were 11,000 people doing brain stimulation on themselves. And today I don't know how many, right? So I don't think we can look the other way on how ethically we can help patients that are in need whilst the technologies are being developed. And that's to me a very strong ethical issue. So I don't know whether Rachel, Ben, or Anna have any questions. With noninvasive, it's much easier to access other technologies than with invasive, right? But this is happening today.
Yeah. There's actually one thing I want to say. So I assume, Ana, at the end of your trial that the patient had to give her device back and she wanted to keep it. But there's actually something that I find quite alarming that's happening in the DBS clinical trials, which is that-- Which clinical trial? The-- DBS.
--brain stimulation. Sorry. The deep brain stimulation trials, like what Ben has, is that at the end of the trials sometimes because the trial is finished, they take it out. And so sometimes the patients have had a remarkable response, and then suddenly the trial is over.
And sometimes the trial has been completed. Other times, the device company has just decided it's not a lucrative strategy, and they want to invest elsewhere. And so they're stopping a trial early for patients who've had a positive effect and then taking the device out of that patient's brain. Then there's a lot of other permutations of things that can happen. It can be very hard to recruit for these trials, and so maybe they don't get enough patients enrolled. Maybe they don't get enough patients enrolled to get the statistical significance to prove that it has the level of efficacy that they need to get past the FDA threshold.
So maybe it's a trial of 10 people, and Ben and someone else have a miraculous response, but they don't have a high enough number. So then the FDA doesn't pass it, and the devices get removed. So obviously with Ana's device, that's external. That's really disappointing and frustrating to the patients. But if you imagine that you were Ben and the machine that had changed your life got removed from your brain, that could be quite traumatic and feel quite unfair. I just want to say yeah.
I might go out and-- I don't know what I would do in that situation per se, but it wouldn't be good. I can tell you that much. [LAUGHTER] Ben, just speaking about informed consent and these post-trial responsibilities-- right? Like, what happens after the trial ends? Do you recall speaking to your researchers about that? What did they tell you about what's going to happen after the trial that you're in is over? So, in many ways, I'm a special case, but one of the best ones-- one of the easiest ways for me to explain is that I have a very close relationship with all of my doctors as well and my whole clinical team.
They basically are on standby for me. Now, that puts them in some ethical quandaries. It puts me in some ethical quandaries. But that's the relationship that I have with my team: we're very, very close with one another. And I think that in DBS in particular, your programmer and the person who implants it, the person who deals with you day to day, they have to be very close to you.
So I know he would fight for me. I have no doubt about that. Back to Medtronic as well.
Like, I don't think that I'm in danger in any way of them removing anything from my brain. But I get how most patients out there wouldn't have that option. And most patients don't have that relationship with their doctor either, and yet it's been so critical for me at every stage that I have a close personal relationship with my neurologist and programmer, because they have very literal control over who I am and what I am, which is a whole other topic that I don't know if we have time to get into now. But every patient must have somebody who they are so-- they have to be like your best friend.
Like, there has to be somebody who really gets who you are, and what you are, and everything about your personality, and everything that goes into what it means to be a human being, if they're going to be your programmer. And they have to be willing to fight for you, not for the device manufacturer, not for anybody else. They have to be on your side, no matter what happens. So I wouldn't have gone through this trial if I didn't have that relationship with my doctor. I was just going to ask why you think that is. Why are you special or getting this kind of treatment, whereas other people might not? Well, one is just my standing in the community.
I'm a very vocal advocate. I championed for myself the whole way through, and I champion for others as well. I think really that is the main reason why I'm getting so much special treatment is because of that simple fact that I'm a very vocal person in this community. I'm like a leader in the community. Yeah, that's basically it. I think obviously having a committed physician-- as a physician, I believe it to be hugely important to patients' outcomes.
But there's also a whole infrastructure that's beyond the control of the physician, right? So if it's a small trial, they don't enroll enough patients to get statistical significance to get passed by the FDA. So then you have a non-approved device in someone's brain, and then the insurance companies are going to say this was never approved by the FDA. I'm not paying for this. So say a line gets infected.
Every day in the ICU is $20,000, and an insurance company is going to say, why am I paying to maintain a device that doesn't have FDA approval? Right? And so maybe you have a beloved neurosurgeon who's like, OK, I'll do the case. But an operating room easily costs $100,000. And so who is covering all of these costs? The clinical trial sponsors, who have felt like it was a flop and are not interested in investing any more money? And what's their obligation? What if that company has already gone under? So people are left in these situations where no one's clearly responsible for these devices. And I'm sorry-- I think I'm the naysayer here, but to me the lobotomy wasn't that long ago.
And so when we start to think about doing these deeply invasive things to people's brains, what is our responsibility to that human being in the future and the potential repercussions? Another good reason, though, to move to Canada, where-- [LAUGHTER] --we don't have some of these same health care problems. [APPLAUSE] Yeah. And imagine all the trouble-- [LAUGHS] we are going through in this discussion.
And we are mostly talking medical, so let's-- just to provoke the audience, what about when we're thinking consumer? What about if companies like Neuralink or implanted devices are used for brain-computer interaction or communication? Right? That's a whole different ballgame, right? Implanted devices for communication. So I think it's-- I don't know how the panel feels about these implanted, or non-implanted, devices for consumer communication applications, and whether the audience today would feel that this should be reflected as a neuroright, as some people are claiming-- a constitutional right to defend the autonomy and identity of your brain, which defines you as human-- or not. So I'm just throwing that provocation into the panel and the audience, of course. And I was going to say, I knew that Neuralink was going to get brought up tonight. I am shocked that it came from you and not the audience. [LAUGHS] First of all, we should recognize that we are very medically top-heavy right now-- that there is a whole consumer perspective, especially non-invasive but also invasive, that we're not necessarily giving a lot of light to.
And I think that there's not something that's necessarily better or worse. I think they're just different. I think if you think about vertical reach versus intensity of a problem, that's a loose way to characterize medical versus consumer applications. But there are a whole bunch of ramifications. We still don't understand the implications of giving your data to 23andMe or having systems that are tracking your accelerometer on a daily basis. We don't know if that information is going to be sold to insurance companies to make it so that if you're predisposed to disease they're going to increase your rate.
So I think that neuroethics or data ethics-- whatever ethics you want to talk about-- should be considered holistically, both medically, where a lot of this comes from, especially because it is a very academic field, but even more so in consumer, where it's not so commonplace. And so I'll draw this back to some of the questions that you guys submitted as part of the audience, related to enhancement technology. So there's a lot of investment now in consumer devices that are currently marketed for enhancement or will be marketed for enhancement in the future. Right? And a lot of you in the audience raised questions about inequalities, right? So, how are these technologies-- if they are successful, right? That's an assumption here, right? If they do work to enhance--