Neuroethics Engagement Session from 2021 BRAIN Investigators Meeting
- Okay, hi everybody. I think that we are going to go ahead and get started with today's session. I wanted to welcome you all to today's session, entitled "Perspectives on the Complexities of Improving Neuroethics Engagement Among Writers, Neuroethicists and Scientists in Academia and Industry." This is a part of the Seventh Annual BRAIN Initiative Investigators Meeting, which is an all-virtual event this year.
For those of you who don't know me, my name is Nina Hsu. I am a health science policy analyst in the Office of Neuroscience Communications and Engagement at NINDS, as well as science committee specialist for the BRAIN Initiative Neuroethics Working Group. We're thrilled that you all have joined us here today for this discussion on neuroethics engagement. And ideally, we would love to be having this discussion in person, especially for this complex topic.
And hopefully it's not too long before those in-person interactions are happening again. Before we get started, I do want to offer a couple of housekeeping notes. So closed captions are available for today's session. To enable them, you want to click on the Closed Caption button, which is at the bottom of your Zoom screen, and select Show Subtitles. Parts of this session will be recorded and made available as on-demand content within the meeting virtual space.
The portions of the meeting that are taking place in this main room will be recorded. The breakout rooms will not be recorded, to facilitate a safe space for honest and open discussion. Each breakout room has a facilitator and a moderator; the facilitator will be taking notes, and after those breakout rooms, we'll come back into the main session for report out and group discussion. And so when we think about neuroethics engagement, it can have a lot of different flavors. Broadly, we might consider it as how neuroethicists, neuroscientists, and the public and scientific communities share knowledge and raise awareness about the ethical, legal, and societal implications of neuroscience research.
But part of today's session goal is to also learn more about how each of us considers neuroethics engagement, based on our backgrounds, experiences, and expertise. So what it means, how we do it, and what success in it would look like. To help stimulate some of this discussion, I'm now excited to introduce our featured speaker for today's session, Dr. Kafui Dzirasa. Kaf is the K. Ranga Rama Krishnan Endowed Associate Professor at Duke University, with appointments in the Departments of Psychiatry and Behavioral Sciences, Neurobiology, Biomedical Engineering, and Neurosurgery.
His research interests focus on understanding how changes in the brain produce neurological and mental illness. His accomplishments are far too extensive to cover, so I recommend reading the bio that we sent to meeting attendees last week. He's also served on the working group for the advisory committee to the NIH director, on outlining steps for the second half of the BRAIN Initiative, and currently serves as a member of the BRAIN Initiative Multi-Council Working Group. And so with that, I'm going to turn it over to Kaf. - Well, thanks so much, Nina. It's a tremendous honor and pleasure to be here.
Thanks everyone for your patience in getting this going today. We've all dealt with an incredible amount of uncertainty in the world in the last year. And so certainly, challenges with Zoom are (laughs) nothing new to any of us. So as Nina mentioned, I am a biomedical engineer and a neuroscientist, and a psychiatrist.
And I've never actually thought of myself as a neuroethicist, but as a psychiatrist I consider an important part of my career to be essentially being a professional listener (laughs). And so I've spent a lot of time listening in the last year. And I think some of the things that I've heard are really important pillars that we can use to frame how we think about this question of neuroethics. And so I'll tell you about some of the conversations that I've had, and some of the listening I've done in these conversations across the last year. And so, as you all know, our planet was struck with this incredible pandemic.
And part of our incredible national response was generating vaccines, which many of us have taken up until this point. And as I was talking to folks in different arenas and different communities, around taking these vaccines, an interesting set of themes came up. The first set of themes that I heard around some of the communities I was talking to, was this idea that there was like stuff in the vaccines, right? And so there was sort of this plan that there was technology in the vaccine, and that technology was going to be used for a series of things. And I'll mention here again, I'm just talking about listening.
These things (laughs) are not my perspective on the vaccines, but I think it's really important to share and frame around some of these themes that I heard around this idea of technology. The second theme that I heard, and it was repeated quite often, was that the innovation had just happened too fast, right? How can you trust something that we hadn't thought about in the last six or seven months, and all of a sudden there's this population-level or global-level innovation that has come along. And of course there's a great reason not to trust that. And then there was this third set of themes that I heard over and over again in other communities, which is the medical community and the research community has always taken advantage of people and harmed us in the process of creating things. And I don't want to be a guinea pig, right? And so I sat listening to these themes over and over again, in the context of this incredible innovation that had been created, this incredible timely innovation that was going to be important for our societies moving forward on the other side of this pandemic.
And in thinking about those challenges, those issues that were raised, right: there's some sort of mass scheme behind this technology, it happened way too fast, and there's this history of abuse. These themes really stuck with me as being important in the framing of how we think about neuroethics as well. And so neuroethics, unlike the vaccine, does have some unique features that I think are really important. And the first one is, we're fundamentally wrestling with what it means to be human, right? And so you can take all of those challenges that I've raised that would underlie what we call vaccine hesitancy, and then integrate them with the fact that now we're actually talking about things that may impact what it fundamentally means to be human, across all of the societal, historical, and religious contexts in which people decide where the boundaries of what it means to be human lie.
And you can certainly appreciate, if I take those same three examples that I raised in the context of vaccine hesitancy, you can see exactly how they would apply (laughs) to the challenges we're talking about with regards to neuroethics, right? And so it certainly wasn't long after I got into residency before I repeatedly heard my patients say that somebody had implanted microchips or nanotechnology in their brain and was stealing their thoughts. I've certainly heard about the pace of innovation in neuroscience, and that it's going way too fast. These are themes that you can hear not only in the neuroscience space, but also in the artificial intelligence space.
And then certainly, there are many communities which will articulate clear histories of abuse, and how that intersects with thinking about the neuroscience enterprise as well, and how neuroscience in and of itself has been used in the past to justify many of the abuses that have gone on in society. So these three challenges have been there in the context of how we think about vaccine hesitancy. Certainly they're there with neuroscience as well. I sort of became a neuroethicist by accident.
So (laughs), as Nina mentioned, I was encouraged to join the BRAIN 2.0 Work Group, the group that was thinking through how to develop this next round of technological innovation to understand the human brain. And for me, I was extremely passionate about the BRAIN Initiative and its promise. As I mentioned, I'm a psychiatrist, but I'm also a family member. And so for me, I am extremely excited about these tools that will help us to understand the human brain, but also to treat the devastating neuropsychiatric disorders, things like Alzheimer's and depression and schizophrenia. And so that's what brought me to the BRAIN Initiative, right, this idea that we can understand the human brain and treat illnesses like the ones that so deeply affect my family.
And I'm a biomedical engineer and a neuroscientist, so it would seem like this amazing space that all of these tools and technologies could come together to give us a new understanding of the human brain. But of course, as I started to think about the promise of these technologies, I also thought about the potential risks of these technologies, right? And we could frame this in the same way we think about the vaccines, right? We could think about our population, having challenges with things going too fast, the potential for abuse, and that people might have reasonable concerns about the negative side effects of those technologies. And like the case that I mentioned with the vaccines, I think what I quickly appreciated was that as we were developing these technologies at a rapid pace, we needed to do a better job of, one, bringing the population along with us, and then also integrating the broader population into making the decisions about how that technology would emerge. As I looked across the room of my colleagues, both on the 2.0 Work Group side,
so the side of us developing and thinking through framing these technologies, the leadership of the BRAIN Initiative, at the time led by the directors of the National Institute of Neurological Disorders and Stroke and the National Institute of Mental Health, and also looking across to our partner group working on neuroethics, one of the things that was quite clear was that there was a filter mechanism for who ended up in those groups. And it was typically people who had, at a minimum, graduated from high school, likely graduated from college, and likely spent even more time in school after college as well. And when I thought more broadly about our population in the United States, certainly it's a minority of people who graduate from college, right? We're talking about a quarter to a third of the population that finish high school and go off to college. So there's something happening in how these groups were being formed, groups thinking about how to frame this challenge of what the brain is, where the boundaries are, and how far we should push this, in which the broader U.S. population was in some ways separated from those sitting at the table and framing this problem.
And so for me, I immediately became aware and wanted to really take time to think about how we as a group of scientists could make sure that we were creating a process that would constantly hear from our population, and that not only would that feedback come in, but the population would also take an important role in developing the technologies, such that they were brought along and the outcomes were shaped by them. So I define neuroethics broadly in that way, right? How can we bring individuals along in the process, both to generate a better technology, and a technology that our population is more ready to access and to utilize, one that reflects all of the diversity of thoughts and opinions that are present in our country? Certainly, if we get to the other side of developing amazing technology and 51% of our country thinks it is bad, or presents a greater risk than reward, implementation would be a major problem down the line.
So I've constantly been thinking about how we bring new populations into the process of developing technologies, beyond the filtering mechanisms that generally determine who gets onto these panels. I'll talk a little bit about one project that is described in the BRAIN Initiative, and then encourage you all to join our session on Thursday to hear more about the details surrounding that framework. So, as I was sitting in this room, and I'll say this with a bit of jest and humor, right? I'm always sort of left thinking that perhaps, you know, the generations that brought us climate change shouldn't be the only ones setting (laughs) the boundaries for every new technology going forward, and thinking about the implications there. So I became really interested in the idea of bringing in younger people to develop and think through some of the ethical frameworks for the technology being generated. And one of the things we built into the BRAIN Initiative was this framework for engaging high school students, right? Certainly this is a generation of young folks who grew up, unlike us (laughs), with both cell phones and iPads.
They have an integration with technology that's very different from how our brains developed. And many of us thought that there could be a really useful framework in hearing from these young people, both for getting better ideas of how the technologies should be developed, and for engaging them and getting them excited about the future, or the present, of the field as well. And one of the things that came out of wonderful work done by Nina, and Samantha, and Lyric, and other individuals, was building in a BRAIN Initiative challenge. And the idea here was to reach out to high school students throughout the country to have them weigh in on neuroethics issues. And I'm hoping you all will tune in on Thursday. It was extremely exciting to see these young folks, and the challenges that they were raising, as well as the potential solutions, were spot on with some of the issues that the BRAIN Initiative neuroethics group came up with.
And again, I'll say this in jest, but my colleagues on the BRAIN Initiative Working Group spent an extra eight years in school figuring out (laughs) how to come to those conclusions. So, for me, it was just such a clear example of the importance of engaging the population more broadly, right, bringing together science communication, public engagement, neuroscientists, all into one space with the population, such that we can drive better solutions. So I'm extremely grateful for you all being here today. I'm looking forward to the opportunity to break out into sessions. Again, we're constantly thinking about ways to engage the population.
If you look at the BRAIN 2.0 Work Group report, there are also examples there of how we, through conversations with the Association of Science and Technology Centers, at the time led by Cristin Dorgelo and now led by Christopher Nelson, were thinking about how to use our national infrastructure of museums and science technology centers to engage the population more broadly. So we're excited for all of you to be here, and we're gonna hop into some breakout rooms, to really dig into this idea and framework around neuroethics, and what it could potentially look like, to really enhance the work of the BRAIN Initiative.
So thank you all again for having me. - Awesome, thank you, Kaf. I think before we transition, we may have a couple of questions. There is one question in the chat, asking about persons with disabilities. - Yeah, and what is the question with regards to persons with disabilities? - I don't know if the individual would like to unmute or turn on their camera. - Hi there, I'm sorry. Can you hear me all right? - Yeah, yes. - Yes. - [Attendee] Okay, thank you so much for taking my question, and I apologize, I didn't frame it exactly.
I just kind of threw it up there, but I work with an organization that largely represents disability advocacy groups, and we have a neuroabilities initiative that I'm supporting right now, where we're trying to bring together different stakeholders, both in neuroscience and in disability advocacy organizations. And I was wondering how the speaker saw that integration as well, because I loved the idea of involving the high schoolers as a younger population, but I'm coming from this space of disability advocacy organizations. So I was wondering if the speaker could comment from maybe that perspective, or maybe, I mean, I know that conceptually we want everyone, so I just was wondering about the speaker's experience from that angle as well. Thank you. - Yeah, no, it's such a great point and perspective. So we built examples into the BRAIN 2.0 document,
to frame the types of engagement that we think would be really important, but certainly those weren't comprehensive. It certainly wasn't a comprehensive list. For me as a physician, I think about patient groups, patient family members, and advocacy groups all of the time. And it is certainly the case that we would want individuals with disabilities also framing how these technologies were or should be developed, and how they could be optimally developed and integrated into our population more broadly. So I think the general rule of thumb is that we want to hear quite broadly from the population, both in terms of framing the technologies, deciding where the legal and the ethical boundaries are and what it means to be human, and figuring out how to enhance integration of the technologies that we seek to develop down the line.
- [Attendee] Thank you so much. I really appreciate you taking my question. - Yeah, absolutely. Thanks for asking it, and thanks for the work that you're doing.
- [Attendee] Oh, thank you, I really enjoyed your remarks. Thank you so much. - I think before we transition into our breakout rooms, we, I think, do have time for one or two other questions.
But, if anybody would like to pose one to Kaf. - Hi Nina, I have a question if that's okay. Thank you so much for speaking with us; I'm really excited about this learning opportunity. I was wondering, and I know this is probably a much longer discussion than we have time for right now, do you have any ideas about how we target patients versus the general population?
And even though I know these intersect, I have done a lot of work with the Parkinson's Foundation as a volunteer when I was a scientist. Everything that you just said, I felt firsthand from the patients, you know, especially about DBS: "What are you putting into my brain," and "I'm scared," and "What else is gonna happen?" And then there's the general population that might not have an illness that we know of, or that can be classified, right, by the medical community. So do you have any ideas how we can maybe bring them together, but then help them individually? Or would that separate them too much? - Yeah, no, no, no, no. I think we really need to be thinking about how to bring everyone in, and I certainly don't want to say this in a way that by any stretch of the imagination dismisses the need to target, right? I think our goal is to include everybody, right? And that means those who generally have not been included need increased targeting. I mean, I certainly think about myself as a physician, but I'm also a family member.
And, you know, I'm also a patient (laughs), right? All of us, at some point in time, end up on the other side of the doctor as well. So I think about the importance of bringing all of those stakeholders into an arena and an environment where they can weigh in on the technologies being developed and where the boundaries are, right? This is of particular importance in our society because it's a democracy (laughs). And so in some ways we all weigh in on the legal boundaries of where tools and technologies go, right? So it's extra important to do that in a democratic society, because we can certainly see the risk to the whole society if everybody isn't seeing the value of science and helping to shape and frame science as it goes forward. And even thinking about the example that I used around vaccinations and vaccines and masks, right, I think the challenges in the neuroscience space are that much greater, because in this case, we are literally framing what it means to be human. And we all have a right to participate and weigh in on that.
So, yeah, I agree. I think the patient advocacy groups need to be brought in. Part of the reason for convening this forum is for us to actually strategize about how to do that.
So I think that's just perfect framing for the breakout sessions that we'll be doing. And I really hope that some of those strategies become built into the readout reports that we're generating from our breakout sessions. So I'm really excited about that.
- I think we'll take one more question before we go into the breakout rooms. This one is from Ricardo: "How do we ensure that this engagement is sustained throughout the research and innovation life cycle?" - Yeah, so let me make two comments, right? I saw something in the chat that said, "Nothing about us without us," or something along those lines. And I think that's a really important principle for all of us to take on with regards to the BRAIN Initiative, right? We should not be developing technologies for the human brain without thinking about broad human involvement in that. And certainly as the NIH wrestles more broadly with how to make things more ethical for all members of our population, and with overcoming classic barriers that have prevented that, things along the axes of gender and race and disability, we should be thinking about how to target those populations and build their inclusion into these processes.
Secondly, I think sustaining that framework, right, one that brings the population in as part of the innovation cycle, is going to be really important. Its importance is easy to explain in a democracy (laughs), where the population keeps weighing in on the legal framework. Our society is built such that the population continuously weighs in on the direction of our country.
And I think we should really be thinking about that as we're developing technologies, particularly those that impact what it fundamentally means to be human. So it's a great question. - Great, thank you Kaf. So I think now we're gonna transition, you know, and take all of these initial remarks and thoughts with us. I can already tell that it's gonna be a great discussion.
We're gonna be transitioning now to small breakout rooms. So in our initial plans, each room was going to be led by a subject matter expert in neuroethics, science communications, or neuroscience, as well as a discussion facilitator. I'm just going to share my slides briefly to introduce them to you. So for our neuroethicists we have Timothy Brown, Ricardo Chavarriaga, and Anna Wexler. Our science communication breakout room moderators are Caitlin Shure, Elaine Snell, and Jamie Talan.
And our neuroscience, excuse me, breakout room moderators are Kaf, who you've heard from, Russell Poldrack, Stephanie Naufel, and Karen Rommelfanger. And we sent backgrounds of these individuals to those who registered for the session last week. Because of our current session turnout, we will be condensing a handful of the rooms. So you might see one or two of the moderators and one or two of the facilitators in your room. We really encourage you to be candid; the breakout rooms are not recorded, to facilitate transparent discussion. And to kick things off, we have a suggested question for everybody to think about as you're whisked away to your breakout rooms: what helps to promote the success of neuroethics engagement, and what barriers exist? And so with that, in a moment you should be transitioning to breakout rooms.
And we'll see you back here in the main session shortly. All right, so it looks like everybody is back into our main session. As a reminder, this report out and discussion portion of the meeting is being recorded and will be made available as on-demand content within the BRAIN Initiative Investigator Meeting virtual space. So I think what we'd like to do now is to hear from each of the breakout rooms. I believe that we had six in total. So I think what we'll do is we'll give each group, let's say two minutes to report out sort of the key themes and takeaways from each of their rooms.
And then if there's some time at the end, it'd be great to kind of collectively have a group discussion on what people heard from the different rooms. So in no particular order, I am gonna see if Anna and Jay's room would like to go first. - Sure, and Christina has offered to report back. - Hello everyone, okay.
So we started with: who are the people to engage? Well, that's (laughs) everybody, everybody who considers themselves a human. And then, you know, within those buckets, there are of course scientists, neuroethicists, lawyers. I would like to expand the discipline of neuroethics to also include historians, especially historians of science and medicine, sociologists of science, disability studies scholars, feminist scholars of science and technology studies, the corporate world, private companies, people who are developing tech, users of tech, the younger ones, the older ones, advocacy groups for patient populations. I can't remember if I mentioned regulators already, but regulators and funders, of course, are really important groups.
The press, science writers, scientists. And what are some of the barriers that exist? Well, I think the number one issue is getting us all on the same page: are we all speaking the same language? What are the important questions? How do we want to pursue thinking about neuroethics moving forward, and how do we engage with members of the community, especially people who historically have not been easy to reach, to really understand what they think? There's also just the imbalance of power between the biomedical community and the average everyday citizen. Those are some of the things that we need to make sure we don't reproduce, and be better at moving forward.
And another barrier would be metrics of success. What is a successful engagement? How do we create those metrics together? You know, and also just realizing that some people value things differently; thinking about engineers versus neuroethicists, they might be on different pages, and do we necessarily need to get on the same page, or just, you know, figure out how we all talk to each other and how we do that? Strategies: certainly, incentivizing funding of collaborations for scientists to include neuroethicists and other people, and I'm thinking about that at the beginning of the research endeavor, and maybe bringing in billionaires (laughs) as a source of funding, and of course, you know, Congress and the other more traditional funding streams.
Maybe doing surveys, like a large Pew-type survey, to try to get at a societal understanding of what our values are, and move from there. There's a lot more, but I'll stop there. - Great, thank you so much, Christina. And it's interesting to hear, because I think some of those themes also came up, at least in the room that I was a part of, as well.
So hearing some of the similarities. So I think we'll now switch to the room that had Tim and Ricardo. - Hi there, since we had a lot of facilitators and moderators in our room, I'm up to summarize what happened. So we spent a lot of time thinking about the barriers to neuroethics engagement, in particular the recognition that any framework for thinking about how the brain sciences interact with the humanities, communicate with a broader public, or conduct work in the first place will be laden with values.
And those values will be different across different disciplines. And so coming together to create things like neuroethics frameworks will have to take account of a broad number of values, not only the values of the people within the disciplines working on those frameworks, but values that might or might not reflect those of the different publics that we engage. We also thought about a lot of the differences between the disciplines, and with the broader public, beyond values. We thought quite a bit about this question of whether we are all speaking different languages. How do we come together to communicate? One example that got raised was just coming up with some way of talking about agency (laughs). The way someone in the humanities, in particular philosophy, would talk about agency is different from the way someone in the brain sciences would talk about agency. In particular, there may be more reductive ways of talking about agency in scientific contexts that might not even be broached in philosophy, or anthropology, and so on and so forth, but lots and lots of differences there.
We talked quite a bit about the frameworks that underlie our language. And so when we talk about agency, we have theories of agency, and then all of these disciplinary commitments and such. We talked about different problems with engaging a variety of publics. So we thought about the classic framing of the problem of engaging publics that are distrustful, some of the things that Kaf raised earlier, and how some of those seem like the result of racialized disparities or gender-based disparities.
And we thought about the possibility that that might be too thin a way of thinking about it, right? There is a power differential between the folks who are giving information and the folks who we think of as receiving information. Just the way that's framed, as giving and receiving, is a little bit indicative of the power dynamic there, and to move forward, we have to rebalance that power differential. We have to give some of our power up, and all of us in the session have that kind of power, or most of us do.
And so rebalancing it means, you know, giving a variety of communities the choice, or listening to them with regard to how they want to be engaged, or what role they want to play in that engagement, ceding the floor, giving people a seat at the table. And yeah, those are the things that came up in our session. - Wonderful, thank you for recapping that, Tim. Up next, Caitlin and Elaine. - Hi there, this is Cassandra. I was in Caitlin's group, and I elected to be the person reporting.
Is my audio okay for everyone? Okay. - Yes, we can hear you. - Okay, great. I saw some nodding heads. Okay, I was in the group with Caitlin, Elaine Taylor, Carl, Jessica, and Carla in Peru, although I think she might have had to leave early. So one of the things that we talked about was listening to other people, rather than the traditional model of experts giving the facts, which was an older model of public engagement and communication that Elaine mentioned.
And we still rely on experts for the facts and expertise, but we want to be able to listen more to the population, and maybe have a different sort of paradigm for how that takes place. And certainly with more dialogue events like the ones that we're having today, we're listening and communicating with each other. But again, that's echoing the keynote speaker's remarks.
It's a growing field, and many of the people in our group already knew each other. And so there are a lot of different barriers in terms of making sure that everyone is involved, from different parts of our population, both nationally and globally. And then we talked a little bit about digital media and non-digital media; there are always different people who use different venues to communicate and get information, whether digital or non-digital. And then, if nothing else, the pandemic has taught people, no matter what they might actually agree or disagree about, to understand on a fundamental level how important it is to communicate about science, with these complex ideas having to be turned into news blurbs and social media blurbs and plain language and that sort of thing. So we talked about that a little bit, and about being proactive in communication, and then taking into account that, people being human, there are emotional and certain faith attachments that people have, because it is their bodies and their families, and such. If I missed anything from my group, please let me know.
But I think that was basically the gist of it. Thanks very much. - Great, thank you so much, Cassandra. Up next is Jamie and Nina L's room.
- Hi, I'm Rachel Wurzman. I'm gonna be reporting for the science communication room. So, you know, we really just started with the premise that, really, for engagement it's about forming connections, bridging gaps with the public, and that involves a two-way exchange. So we also determined really there's a need for language and common frameworks between the people who are being communicated with and the people who are seeking to communicate.
And there also needs to be that breaking down of that power differential of who's giving and who's receiving. There needs to be more active listening, and as much listening as speaking in exchanging these ideas. And part of that, we talked about as needing, to a certain extent, genuine community membership of various neuroethicists in these different types of communities with different perspectives. We thought the comparison with vaccine hesitancy was incredibly apt: to appreciate the identity, and particularly the social identity, of any neuroethics authority, so to speak, or information gatherer, and at the same time to appreciate how that person will be received by the individuals or communities that are being engaged.
So one example of, you know, genuine community engagement and membership was Anna Wexler's work with that particular group. But also, with some of these things where science tends to be a little bit more hostile to communities that have world views that seem to be in conflict with science, there's a need for actual, genuine, authentic connection between neuroethicists, neuroscientists, and these communities. We also talked about the need for science communications preparation to be included and funded in any kind of tool or drug development. For instance, with the issue of advances in Alzheimer's treatments, nobody had anticipated the question of, okay, well, who's gonna have access, and who's actually gonna want to pay for this if it's probably not actually going to help people? And there are physicians, and sort of those intermediate dispensers of these technologies, who are and are going to continue to be asked questions, and it would be very useful if neuroethicists and science communicators could work together and come up with a briefing for these things, as part of a product during any funded initiative. One of the things we mentioned was, you know, yes, online discussions or public forums with real-time social media platforms; there are some new ones.
I've seen some interesting neuroethics stuff happening on things like Clubhouse, and a question of, you know, where are physicians, where are, you know, the professionals administering neurofeedback, right, these people who are sort of at these levels where they're actually interacting with people and actually getting asked these questions? Where are they communicating with each other? And can we as neuroethicists become part of those communities as well? From a pure communication standpoint, having access within these communities to dissociate things like hype versus truthful information, and having specific funding for experimenting with different types of messaging to remedy oversimplification and misinformation, and to counter misinformation. I think we can learn a lot from the examples of, sort of, the vaccine information warfare, so to speak. And yeah, I think that covers what we talked about.
- Wonderful, thank you so much for that recap, Rachel. Next up is Karen and Kaf's room. - Okay, so that's me, I'm Stephanie. So we spent the first bit talking a bit about what neuroethics engagement is, what it should be.
I think we all agreed on, it should be a bi-directional discussion between all the stakeholders, and really help guide progress, guide where the money is spent, and really, you know, involve sort of communication and getting all the stakeholders' input. Kind of the model that we liked to think about stakeholders was sort of concentric circles building out. So for example, you know, scientists would be sort of in the middle, those developing the technology. But you would also have a circle that would be the patients that would be most directly impacted by any technology or science, you know, patient advocacy groups. And that maybe the outer circle would be just everyone in the public, because we are all, I think Karen used the term "patients in waiting."
We then spent a lot of time talking about sort of neuroscientists and how they engage and don't engage with neuroethics. There seems to be a broad spectrum of how scientists think about neuroethics and, you know, incorporate it in how they do their research, and we discussed that maybe the value proposition for the scientists just wasn't clear, right, that neuroethics should enhance the work of the scientists and not be just compliance or regulation. It shouldn't be a burden. It shouldn't be thought of as a negative that would slow down the science, though we did discuss that there is probably a bandwidth issue that needs to be explored. But to help, you know, develop that value proposition and combat that view that neuroethics is really just gonna slow things down: scientists like data. They like evidence.
So maybe we do need to be better about highlighting examples and end results of how you can get better impact from your technology, from your science. If you think about neuroethics, right, that will help you get a better product in the end. And, you know, it can be used to make your science better. We talked about incentive structures for scientists, which are papers and grants, and how maybe things can be framed differently to incentivize people to think about incorporating neuroethics into their technology development.
Maybe there would be enhanced opportunities for funding, and there's definitely opportunities for messaging there. And I guess the one thing I'm just gonna end on, which we talked a little bit about but I think would be really cool to discuss further, is sort of thinking about the ethical framework in terms of equity, and how important that is to sort of give all the stakeholders a voice, and how that, I think scientists also, maybe some of them would be more enthusiastic if things are framed a little bit more in terms of equity and not just scary, you know, ethics. So I don't know Karen, if there's anything you want to add. - I think that was great. I don't know if any of the other group members have something they wanted to add. - Thanks, Stephanie.
- Thank you so much, Stephanie. Last, certainly not least, is Russ and Steph's room. - Hi everyone, I'll introduce Joyce Liu as our representative. - Hi everyone, in our group, we discussed a few questions.
And so the first one is, we discussed a balance between minimizing risks and maximizing benefits. So we discussed how often in research we are focused on the risks and just the potential harms that may occur, but sometimes to the exclusion of, or not really to the exclusion, but we don't focus that much on the potential benefits that a project may produce. And so a few questions that may be helpful for us to consider are: why is a certain project important, significant or beneficial? And additionally, also just knowing that managing the risk is really important, but so is making sure that the research that we conduct actually has its real-life benefits for others in terms of its effects. We also talked about just sharing data and how often people have privacy concerns over sharing their data. And so that raises some ethical issues. But we also see that, on the other end of the spectrum, sharing data is quite essential to what a scientist does.
So this is kind of just a fine line that we need to balance, just knowing what is appropriate and what is not appropriate to share. We also discussed a question that was raised earlier here in this breakout room, on what does it mean to be human? And I think an example would be disorders. And so an example brought up was just kind of the balance between correcting a psychiatric disorder versus how much that disorder defines a person. And if we correct that disorder, how would this person's personality turn out, and would their personality still be intact? So there is that balance between those two things, and especially, how can we protect the population in terms of this? I believe a study was also mentioned here in this discussion on how, for example, biogenetic explanations of psychiatric disorders have an impact: they will often cause people to blame the individual less for having a psychiatric disorder, while on the flip side, they will also tend to think that the individuals are less able to change or to have their disorder corrected.
And so this question on what does it mean to be human, in terms of this balance between attempting to correct a disorder and still having the patient's personality be intact, is a question that we discussed. And lastly, we talked about the collaboration between scientists and ethicists, and how important it is to work together to conduct research. And even before beginning a project, it's also important for us to consider the ethical implications that the project will bring. So for example, finding an answer to a question: if we are actually able to find that answer, it is quite powerful, but we need to consider how these answers will be utilized, and whether or not it is actually better for this question to be answered or not. And I think a quote that was mentioned that summarizes this last issue on ethical foresight quite well is that, "You can use an eraser on the drafting table, or a sledgehammer on the construction site." And so those four things, minimizing risks and maximizing benefits, sharing data, what it means to be human, and finally, the collaboration of scientists and ethicists, were a few of the points that we covered in our discussion today.
- Wonderful, thank you so much, Joyce, for recapping the remarks from your room. I see that we are just a couple of minutes over. So if there are folks who need to leave or have other obligations, please feel free. I do want to mention that we'll be keeping this Zoom room open, in case others would like to further discuss some of the really rich themes that have emerged.
It was really interesting to hear how some of the breakout rooms independently discussed some very similar themes, and also some of the sort of unique perspectives that we heard from the different rooms on neuroethicists, neuroscientists, as well as the science communicators. So I think, yeah, we'll leave the room open. I think for those who might want to stay and discuss, I know one question that I posed to our room as the breakout room had maybe 10 seconds left is to think about, you know, as we've identified what promotes engagement and what barriers also exist, there are cultural changes. There are power differentials.
In the end, what would be a measure of success? I know a couple of breakout rooms discussed this, but for those who would like to stay to discuss or chat about other things, I'd love to hear your thoughts. For those who might be leaving us now, again, thank you all so much for taking the time to join us for the session today. - Maybe if I take your question, metrics of success.
We came back to a comment that was made earlier, the need to show the added value, because the strong assumption here is that neuroethics engagement will bring impact. And I truly believe that, but we seem to be lacking the numbers to reflect that, or the structured processes to support this argument. And I think this is something that could be interesting to look at.
When strategies for large projects in research and development have introduced these mechanisms, what have they looked at? And there are some examples in emerging technologies, for instance, with stakeholder involvement and the adoption of technology: acceptance of certain interventions based on their use of community participation, based on methods for (indistinct), and how these have been of benefit. Speaking of (indistinct), it would be interesting for us to take similar pathways to try to better document the engagement that we are already doing, not only in research but also in tech translation and development of products, to try to make this more tangible. We think there is some recent data in the science that is not well-structured, is not well-documented, and is probably scarce, just like the data that we have about the brain.
So we need to grow more formal and open ways of collecting data, not only for the scientific research but also on assessment of the impact of these neuroethical interventions on patients. - Can I, is this just, everyone can hop in whenever they want (laughs)? So I think that's a great point. I wanna pick up on that and just add something, which is that when we think about engagement, I think thinking about the goals of engagement, and specifically related to who the audience is, is really important, 'cause we might have different goals for engagement with different audiences.
And so I think that's a needed step towards measuring the impact, right? 'Cause you can't define impact without defining what your goal is, or what you want to get out of that to begin with, right? So I think having some kind of goal or target or something very practical and concrete defined for engagement, I think is important. And then I think the impact is also important, and that'll help us take that step. - I just wanted to jump in and say I think I want to push back on the idea that neuroethics engagement is this sort of (sighs), this sort of, you know, people meeting in the street and having a collision.
And I don't want to think of neuroethics engagement as a collision-type model. I want to think of it as more of a participatory collaborative model, right? So instead of thinking about, okay, so a neuroethicist meets a scientist, and then they, one comments on the other's work. And then at some point, the research reaches the public, and then that's a collision. We really need to be thinking about how to, before the research begins or before the projects begin and throughout the entire process, how all of the stakeholders are deeply engaged with one another. So I want to push us to think about how success could be framed as more: What are the products, what are the possible products of this deep, deep collaboration, or these deep, deep collaborations? Karen, sorry (laughs), we collided a little bit. - I think that was really well said, and I'll just build on that, 'cause absolutely, and that also builds on, and I'll echo what Anna said, that if you're going on metrics, you have to set up some goals.
And the missing piece in between the goals and the metrics, of course, is that you have to have a methodology that makes sense with those goals, kinda like what Tim is talking about. And there are actually a lot of good resources out there. This is the same issue I have with all the neuroethics guidelines that no one reads, you know. There are over 20 of them. I've been involved in a lot of them, and no one reads them.
The people in public engagement have been doing this forever, and we don't read those either. So there's this thing called "Action Catalog" that I only learned about a couple of years ago. And that's because of my own ignorance, going for a long time saying, "Oh, we need neuroethics engagement," when I didn't know what that meant. And I also (laughs) didn't really know what engagement was, and barely know what neuroethics is, because everyone defines it so differently. But there's this catalog, "Action Catalog," that was created in collaboration with the Danish Board of Technology Foundation, who, as many of you know, have been pioneers in a lot of public engagement in Europe.
And you can go in that catalog, and there are matrices and hundreds of ways; you click on what your goal is, and then there's a method and there are all these pieces. But I think the take-home from all the ethics guidelines that no one reads, and the take-home from maybe this great catalog that no one really uses (laughs), might be that maybe there's something else we need to devise that's much simpler and tailored. I'm not exactly sure what the answer is, but part of it involves what Anna and Tim said, which is thinking very carefully about what our specific goal is, other than broad engagement, and then matching it with a methodology that experts have developed, and then thinking about some metrics that would be meaningful for us, that actually translate into what we think is impactful. And one of the things I said would be really sad is if impact was translated into, now we have a Nature paper instead of a neuroscience paper. It's such a low bar, looking at the peer-reviewed paper.
I would hate to say that my impact in the world was a paper that maybe very few people read. And (laughs), if we're talking about society being the broad public, then to Kaf's point, how many of them actually have a PhD? This group is so rare already. - Yeah.
This is actually something I worry about a lot, as someone who is trained as a philosopher. Our journals are the least read and the most prestigious (laughs). So it's like, writing something that nobody will read is always on the forefront of my mind. - I think that Ricardo left a comment in the chat about developing your ethics engagement plans.
Ricardo, I don't know if you wanted to elaborate on that a bit? - I guess it's linked to the point that was mentioned before, on having the goals and the methodology. When we plan projects, the activities and the tasks, we now have as a common practice to ask: how are we gonna manage the data across the life cycle of the project? But we don't have some sustained neuroethics engagement plan. I think that part is most often reduced to IRB approvals, and that's pretty much it. So I think we should go beyond this conception, at least in research, of ethics being the approval around experiments, on to thinking: what is the end outcome of this research? And of course, taking into consideration that this is still the more fundamental and applied research, but the outcome shouldn't be papers. And I think, maybe as I first shared with this common emotion before, we don't want to give as a legacy just a bunch of papers, because we know that this has little impact in the world. But you can take the knowledge that is in the paper, and it should go on, should ascend.
And even if I'm doing fundamental research in neuroscience, there are many ethical aspects that need to be taken into account early on in the process. We need to foresee them when we start doing the research, even if we're not thinking about the product right now. So, beyond methodological approaches, (indistinct) there is a need for incentives. So the funding agencies need to ask for these types of plans as a sort of requirement, not just a checklist, but a proper requirement for serious projects in neuroscience. - So I'll just jump in.
And while you were talking, I was just thinking about, right, thinking about the end plan. And I was thinking that on NIH grants, and I think on other grants, there's typically a dissemination plan at the end. And so it's interesting to think about the word "dissemination" as related to the word "engagement," right? Dissemination is like, you have something and then you just put it out. It's like, it's usually like journal articles, conferences.
It's disseminating usually to the bubble of people who you're writing to in the same journals, right, that not a lot, it's not disseminating to the general public. And so this idea of engagement is much different, right? So it's like bringing in people and actually like there's some notion of sustainability that I get when I hear the word "engagement," whereas "dissemination" just sounds to me like this one-way street, whereas engagement feels like it's more of a continuous engagement with people. So, I don't know, maybe that's something to think about. - I also want to share a worry that I've been mulling over for the last few months, or maybe more likely the last three years. So say I agree, Ricardo. I think that neuroethics engagement should be, using Elaine's language from the chat, integral to the development and funding applications for neuroethics projects.
I really do believe that's the case. There should be a section that people should have to write for an R01, for example, or any of the other funding mechanisms through NIH. And this should be common practice for, you know, any grant-giving organization. It should be standard practice. But one of the things that I've seen in lots of people's applications is that a lot of the language about, like, the protection of human subjects or inclusion of minorities turns into boilerplate. "We'll get a representative sample this way.
We'll make sure that it happens," with no real plan. And certainly it is up to the teams of people who evaluate these applications to enforce some standard, and hopefully a high standard, for the language used to describe neuroethics engagement for specific projects. But a lot of times boilerplate cuts it, you know? Boilerplate is adequate, and people will get grants through their boilerplate. And one of the things I want us all to think about is, how do we keep people from doing, like, cookie-cutter boilerplate? The same way you would get through your IRB would be the same way you would get the funding. How do we encourage that deeper engagement, right? And how do we encourage that deep, sustained, ongoing engagement that Anna says is implied by the word "engagement"? And I'm not sure, but I think grants that target that kind of engagement specifically, to where if you're not doing neuroethics engagement you needn't apply, making that kind of money available, making those kinds of grants available, I think that's one step in the right direction. - There's, if I may jump in, there's a few comments in the chat here about the funding of engagement, and I agree it needs to be done adequately.
But I would also say that a well-thought-through plan doesn't have to be expensive. It doesn't have to be big glamorous events in order to do it. A very good newspaper campaign, a very good social media campaign, done within your university press office departments, or something like that, can be relatively inexpensive. But to go back to previous points, I certainly agree with your point about dissemination.
I review grants looking at them from the standpoint of the clarity of the lay summary. And I often see this word: "Yes, we will disseminate." And first of all, to go to Karen's point, dissemination normally means speaking at conferences and publishing papers. And then I always add something about public engagement when I'm commenting, and certainly dissemination is different to public engagement. So I think it would be very good to actually include a plan in there as to what you intend to do and how you intend to do it.
And it would be a great way to bring in students, to introduce the students who are involved in the project, and maybe students from other disciplines as well, maybe law students, for example; a great way to bring them into neuroethics engagement and get them thinking and working in a cross-disciplinary way. So it would help the applicants for funding work in a more cross-disciplinary way, by involving other people in other disciplines. I know that requires a big extra box on an application form, but again, it doesn't have to be complicated. It doesn't have to be time-consuming, but a plan needs to be there. - Yeah, just to add onto that: in helping prepare and helping review certain applications that have commercialization requirements.
So collaborations between academia and industry, some of the things that I've seen have been things like deep engagement with people with disabilities, to disseminate a device of some sort. And they've mentioned things, like you've mentioned, Elaine, things like sustained social media campaigns, public talks for specific advocacy groups, and so on and so forth. And it was really interesting for me to see that, once we think about it in those terms, and this goes back to the language point raised by our group, not only our group, but several of the breakout groups, underneath the language that we think is different, there are just different goals also, right? So people in industry, if they're thinking about sustained engagement, they might not think of it in terms of neuroethics engagement, but they might think of it as necessary for the commercial interest of a product, right? So, like, as a marketing strategy, of course you would reach out to people with disabilities. I've got this thing that's gonna help people with spinal cord implants, I mean, with spinal cord injuries. Of course I'm gonna reach out to those communities and bring them into the design process.
So part of it, I think, is identifying those shared goals that are hidden behind different language. Part of it is also, you know, for those of us who are more theoretical, thinking of it in terms of the practical. I don't think a lot of the philosophers I know think about things in terms of, like, design or commercialization.
But eventually like, yeah, this is a capitalist country, and in a capitalist world, for the most part, people are gonna buy products, and they're gonna go through the medical enterprise. And so a lot of the ethical issues are gonna come out of that process, a translation process. So maybe we need to think of neuroethics as part of the translational process, as well as an upstream process of thinking, well, is it okay to even embark on this kind of project, or a downstream, let's clean up the mess of this project. Thinking of it as a structural, systemic, ongoing thing, right, like as a deeply engaged thing, is important. - You know, part of the challenge is, I think, there's an assumption that there is neuroethics engagement, and that means that the neuroethicists and scientists are engaging the general public and having a conversation with them. But scientists are a general public when it comes to neuroethics.
They don't really know neuroethics. And why would they? And a lot of ethicists don't know a lot about science, actually (laughs). So the first step, you know, one of the things we realized, and Khara was involved in this too, in our several-year effort to try to figure out what neuroethics engagement was or could be, was that the first engagement project of bidirectional dialogue has to be between the neuroethicists and the scientists.
And to even agree on what the goals for communicating neuroethics topics, whichever ones would be relevant, would be. That step hasn't been met, and a precursor for that is actually something that is quite expensive: it's time. And so in our surveys, talking with scientists and engagement specialists, it's clear that there are very few funding agencies that are actually gonna fund the time to bring scientists together to build trust. And to build trust, you don't want a formal mechanism either. Really, what works best is having a community around, so you can have those casual, spontaneous conversations where you can say, "You know, I'm thinking about this." And to facilitate something like that is expensive.
You have to pay for faculty. You have to buy out someone's time, more than the 0.5 months that people buy out ethicists for, you know (laughs), like you just have to do.
And I think until the value proposition is there... There's something that Michelle Jones-London said in her paper when she was talking about DE&I efforts. And it was that there is a hidden commitment that keeps people from prioritizing considerations like diversity, equity, and inclusion. There is a hidden commitment that keeps people from prioritizing ethics. And people need to be honest, including myself, you know? What are the things that are keeping us from prioritizing these? And sometimes it's easy to say that it's funding, and certainly we need incentive structures. But there's something else in there that's culturally missing, that's worth exploring, and maybe the wrong incentive structures drive us.
Like that's why I keep talking about papers, you know? So anyways, I think there are other levels of things to think about, rather than just a Band-Aid approach. We probably need that too, but we need something else as well.
- I'm sure NIH will figure that out for us. No, I'm just kidding, well (laughs). (Tim laughs) Oh, I was about to, so I see, Tim, did you want to bring up that question that you placed in the chat as well, or- - Oh, oh, I've been, you know, blathering for the last 30 minutes, so I just wanted to give the floor to someone else, but I just, to underscore Karen's point, the point that Karen just raised, this is something I brought up in the group as a mismatch between people working in the brain sciences and people working in the humanities, who might want to be in neuroethics or might want to, whose research might intersect with the brain sciences somehow.
The goals are just so different between us, and the obligations are so different between us, just at the level of, you know, just basics, right? Like Karen just said that, you know, the most expensive commodity is time. And so thinking about the different schedules between someone in a philosophy department, where they have a nine-month appointment, and the STEM folks, they all have 12-month appointments. And that often means that we have to get grants that give us that extra three months. It sorta constrains our time in a certain way. We have teaching obligations.
We have to have our time bought back from the teaching obligations, but also the, like for junior faculty, and this is less of a problem for me because now I'm in a bioethics department, thank goodness. But if I were in a philosophy department that would be a question of like what kinds of publicat