Ethical, Safety & Equity Considerations for Mobile Technologies Research | 2024 MeTRIC Symposium

Thank you so much, Cathy, for the warm introduction, and thank you, everyone, for joining us. I'm really excited to be here at the second MeTRIC Symposium. Today I'm going to be talking about safety considerations in mobile health technology research, and I feel like Pedja, during his talk, teed up a lot of the things I'll be talking about as well. This is only five minutes, and this is a huge topic to cover, so my goal is that the things I discuss will prompt the conversations we have after the three panelists talk, and that we can maybe formalize some of the safety considerations and safety experiences people have had during their studies, to formalize that knowledge and make it more broadly accessible to the MeTRIC community.

Okay, so a few working definitions to get us started. By safety, I'm talking about freedom from unacceptable risk, where risk is defined as a combination of the probability of occurrence of harm and the severity of that harm. Harm is defined as injury or damage to the physical or psychological health of people, and a hazard is a potential source of harm. When we're thinking about safety considerations in mobile health research, there are a lot of different things we might consider that certainly can't be covered in five minutes, but some of them include novel symptoms, data breaches, privacy, and symptom exacerbation or deterioration.

So what do we know about digital tech safety? A recent systematic review of digital mental health interventions, which included studies addressing any form of safety risks, negative effects, or harms, showed that 57%, so just over half, of the studies collected adverse event data; that there was little consensus across these studies on what counts as a safety event; and that the main method across the included studies for mitigating risk was excluding high-risk groups, which tees up the next talk on equity.

The authors note the need for more thoughtful consideration as we move forward in this research domain: for consistency in adverse event classification across digital health technology studies, and for care in how, when, and how often we measure and report on adverse events. Here I want to dive a little deeper and talk about whether there are also ways to mitigate adverse events as part of an intervention package. One specific example I'll focus on now is self-monitoring and feedback, which are common behavior change techniques used in many mobile health interventions. They also have safety relevance, in that these intensively collected data can be used for safety monitoring, and feedback can be provided about adverse events when they're detected.

Here's one example, the Pocket PATH study for lung transplant patients: when a patient's blood pressure went above a certain threshold, they got a message indicating that their blood pressure was high, asking them to measure it again in five minutes and then to contact their healthcare provider if it remained high (a rough sketch of this kind of threshold logic follows below). That's just one example. But before we get too comfortable here, we also need to consider that these particular intervention components may have their own safety concerns. When we're doing self-monitoring and feedback, one of the things we're focusing on is self-awareness, in order to build self-reflection among participants, which is the process of observing, interpreting, and reaching insights based on data collected through self-monitoring. Prior research has shown that increases in self-reflection tend to co-occur with increases in self-knowledge, personal growth, and behavior and symptom change.
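
To make that kind of feedback loop concrete, here is a minimal sketch of threshold-based safety feedback in Python. It is an illustration only, not the actual Pocket PATH implementation: the 140 mmHg threshold, the messages, and all names are hypothetical.

```python
"""Minimal sketch of Pocket PATH-style threshold feedback (hypothetical)."""
from dataclasses import dataclass
from typing import Optional

SYSTOLIC_ALERT_THRESHOLD = 140  # mmHg; illustrative value, not a clinical guideline
RECHECK_DELAY_MINUTES = 5


@dataclass
class Reading:
    systolic: int             # mmHg
    is_recheck: bool = False  # True if this is the repeat measurement


def feedback_message(reading: Reading) -> Optional[str]:
    """Return a participant-facing message, or None if no action is needed."""
    if reading.systolic <= SYSTOLIC_ALERT_THRESHOLD:
        return None
    if not reading.is_recheck:
        # First elevated reading: ask for a repeat measurement.
        return (f"Your blood pressure reading is high. Please measure again "
                f"in {RECHECK_DELAY_MINUTES} minutes.")
    # Still elevated on the recheck: escalate to the care team.
    return "Your blood pressure is still high. Please contact your healthcare provider."


if __name__ == "__main__":
    print(feedback_message(Reading(systolic=152)))                   # first alert
    print(feedback_message(Reading(systolic=149, is_recheck=True)))  # escalation
    print(feedback_message(Reading(systolic=118)))                   # None: no action
```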

However, there's another component to self-awareness, which is rumination: repetitive and passive cycles of negative thoughts and emotions, which tend to relate to poorer outcomes in terms of self-knowledge and behavior or symptom change. Rumination can be triggered by self-reflection, engaging in self-reflection might turn into rumination, or it can occur on its own. And there are several types of adverse events linked to rumination, including symptom exacerbation or deterioration; novel symptoms, perhaps for people who didn't ruminate before but started ruminating during the course of the intervention; and also adherence and response.

People who ruminate more tend to put down these mobile technologies or stop participating in interventions. Some of you who have older models of the Apple Watch may remember that in those versions you couldn't actually pause your rings: you had streaks for meeting your activity rings, but there wasn't a pause feature. Apple has recently implemented a pause rings feature, which is something Pedja was also talking about, for times when you don't want to engage in self-monitoring or receive feedback about your behaviors.

Similarly, the Oura Ring has a rest mode. There are other devices, however, that don't include these features and don't offer a pause or rest mode. When I take off my WHOOP, it actually sends me a message like, you're going to miss out if you don't put this back on right away. So thinking about these features and how they might enhance or detract from participants' experience is an important safety consideration. Overall, safety considerations in mobile technology research are complicated and require thoughtful attention throughout the research process to ensure participants' safety and well-being. And to end on a positive note, there are some initial recommendations from recent research in this domain, specifically for digital health interventions, as well as some tools for measuring adverse events in mobile health research. Thank you.

All right, so you've probably figured this out: each of us is going to do a little primer on a topic, and then we're going to open it up to questions. As you're hearing these different talks, I'd encourage you to think about whether there are situations, with work you've done or work you've heard about, that bring up topic areas like these, because we're going to have a good amount of time to chat about different situations as we go. I'm talking about equity today, and before I get started I want to acknowledge that I'm definitely not a health equity expert. Rather, I strive to promote health equity in the work that I do and to ground my work in that kind of framework.

So again, to start with a definition: there are several ways we talk about health equity. The one I'm providing here is from the Centers for Medicare & Medicaid Services, and it is that health equity is the attainment of the highest level of health for all people, where everybody has a fair and just opportunity to obtain it. If we think about that in the context of mHealth, health equity is equal access to the benefits of digital health tools, and ensuring that these tools help reduce rather than widen health disparities. At a very basic level, it's key to ensure that everyone has equal access to digital solutions, right? And that access can include access to the technology, connectivity to use that technology, and also the digital fluency to be able to use it.

We can think about some situations. One: if we're providing an app, someone needs to have a smartphone. For those who don't have a smartphone, how are we going to grapple with that? Are they simply not eligible, or, in some cases, are we able to provide one? The same goes for cellular access, and also for people who have different types of data plans. Can we provide digital solutions that don't use as much bandwidth and don't drain the battery as quickly? Can we build in offline functionality? We should consider these upfront, because oftentimes these are decisions we need to make while we're actually developing digital solutions (a rough sketch of one offline-friendly pattern follows below).
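
As one illustration of the kind of upfront design decision this implies, here is a minimal sketch of an offline-first pattern: buffer data on the device and upload in batches only when Wi-Fi is available, to spare limited data plans and battery. The function names, the 50-record batch size, and the in-memory buffer are all hypothetical; a real app would persist the buffer to disk and use the platform's connectivity APIs.

```python
"""Minimal sketch of offline-friendly data collection (hypothetical)."""
import json
from collections import deque

_buffer = deque()  # in-memory stand-in for persistent on-device storage


def record_event(event: dict) -> None:
    """Always store locally first, so the app works fully offline."""
    _buffer.append(event)


def flush(upload, on_wifi: bool, batch_size: int = 50) -> int:
    """Upload buffered events in batches, but only on Wi-Fi; returns count sent."""
    sent = 0
    if not on_wifi:
        return sent  # defer uploads to protect limited cellular data plans
    while _buffer:
        batch = [_buffer.popleft() for _ in range(min(batch_size, len(_buffer)))]
        upload(json.dumps(batch))  # one request per batch reduces radio wake-ups
        sent += len(batch)
    return sent


if __name__ == "__main__":
    record_event({"type": "ema_response", "value": 3})
    record_event({"type": "step_count", "value": 4200})
    n = flush(upload=lambda payload: None, on_wifi=True)  # stub uploader
    print(f"uploaded {n} events")
```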

I also want to think about it from the digital fluency side. Just because someone has access to a device and to connectivity doesn't mean they prefer, or know how, to use the device in the way we're asking them to. Sometimes our digital solutions involve a bunch of EMAs or have people use several different passive or active devices, and just because someone has the capability to do all of that doesn't mean it fits. Actually working with the target population to understand what fits and what doesn't helps us see where we might be setting ourselves up to create health disparities. So to address equity around accessibility, we can prioritize engaging and collaborating with the communities and people the digital solution is in service of. The goal of this kind of participatory approach to the design process is to ensure that the digital solution will be accessible and will meet the needs of the people we aim to serve, and this approach can also help promote adoption and trust within that target population. This may include conversations early in the conception of a project to identify areas where services really are needed and to elicit ideas on acceptable and appealing solutions.

And then we continue to engage with these populations from conception through completion of the project. Examples of equity-focused considerations for inclusive design include using simple, readable language so the solution is accessible to individuals with varying levels of health knowledge and health beliefs. You can also consider translation and interpretation, thinking about digital solutions for non-native speakers, or ensuring that your digital solution is available in the languages spoken by the people it is intended to serve.

That means culturally appropriate translations, not just word-for-word text conversions. Another consideration for inclusive design is the use of multimodal formats, including audio, visual, and text-based formats, to accommodate people who may have reading difficulties or simply prefer non-text interactions. Another way to promote health equity in mHealth platforms is to acknowledge, and potentially try to address, some of the social determinants of health that might be adversely impacting people's ability to interact effectively with the digital solution. I work in addictions, and oftentimes things like housing instability, unemployment, and access to care are major barriers people are facing, barriers our digital solutions can at least attempt to help with, even if we often fall short of going all the way. For example, in a platform we're developing right now called Incentives to Quit, we're working with the target population, in this case perinatal Medicaid beneficiaries, to develop resource guides within the app platform that describe resources available in their local communities and provide easy access to them, to try to mitigate adverse determinants of health that may not be central to the actual target of the intervention, in this case smoking cessation, but that certainly impact people's ability to engage with that intervention.

Finally, I just want to highlight a tension with inclusivity: while inclusivity definitely promotes health equity in general, we should not sacrifice fit and cultural centeredness in the name of inclusion. In fact, sometimes when we do, when that pendulum swings quite far, we can actually promote health disparities. That is to say, no app or digital solution is for everyone, nor necessarily should it be.

User engagement and retention is another space where we can take a health equity perspective. This includes incorporating strategies to engage and retain diverse users over time, including personalized content and user-centered feedback loops that adapt the app or digital solution to the person's or the community's evolving needs. Also, for apps that use AI, which of course is incredibly popular right now, consider how the training of the algorithm might impact health equity: even if the algorithm is trained on a diverse and representative dataset, don't just trust that all is good; evaluate whether there are biases in outcomes or AI-based recommendations that disproportionately benefit or potentially harm specific users or groups of users (a rough sketch of one such check follows below). Finally, I want to note the need to continuously assess the appropriateness, usability, and effectiveness of any digital solution across the target population with a health equity framework in mind. This can include using quantitative data to inform iterative improvements that ensure the app serves the intended populations equitably. It can also include interviews and other forms of qualitative data collection to gain a nuanced understanding of the ways in which the design and use of the app may promote or reduce health equity.
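
As a minimal sketch of what that kind of check might look like, the snippet below compares a simple benefit rate across demographic groups and flags large gaps rather than trusting aggregate performance. The field names, the example data, and the 5-percentage-point gap threshold are hypothetical; a real fairness audit would use validated metrics and statistical tests.

```python
"""Minimal sketch of a subgroup bias check for AI recommendations (hypothetical)."""
from collections import defaultdict


def rate_by_group(records: list, group_key: str, outcome_key: str) -> dict:
    """Return the mean outcome (e.g., 'did the recommendation help?') per group."""
    totals = defaultdict(lambda: [0, 0])  # group -> [successes, count]
    for r in records:
        bucket = totals[r[group_key]]
        bucket[0] += r[outcome_key]
        bucket[1] += 1
    return {g: s / n for g, (s, n) in totals.items()}


def flag_disparities(rates: dict, max_gap: float = 0.05) -> list:
    """Flag any pair of groups whose outcome rates differ by more than max_gap."""
    groups = list(rates)
    return [(a, b, abs(rates[a] - rates[b]))
            for i, a in enumerate(groups) for b in groups[i + 1:]
            if abs(rates[a] - rates[b]) > max_gap]


if __name__ == "__main__":
    data = [
        {"group": "A", "benefited": 1}, {"group": "A", "benefited": 1},
        {"group": "B", "benefited": 0}, {"group": "B", "benefited": 1},
    ]
    rates = rate_by_group(data, "group", "benefited")
    print(rates)                    # {'A': 1.0, 'B': 0.5}
    print(flag_disparities(rates))  # [('A', 'B', 0.5)] -> investigate this gap
```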

Returning to that continuous assessment: for example, identifying who is or is not using the app, who continues to use it, who benefits from your digital solution, and why that might be the case. Getting curious about those things, and getting creative, slowing down and problem-solving, is really key to making sure we're developing digital solutions that serve the populations we intend to serve, and serve them in an equitable manner. So I'm going to wrap with that and pass the baton. Thank you. All right, am I ready? I've got my timer, because otherwise I'll just keep talking, because I love this stuff. Hi, my name is Rima.

And I am here to talk about ethical considerations in mHealth, which of course includes safety and inclusion and justice and everything else. So it's like, Rima, you're kind of repetitive, and that's why I'm not going to be repetitive. But I want to stress that this is not just the blah, blah, blah, the IRB, looking at Monica over here, will tell us what to do, I've done all my stupid trainings, Rima, I'm going to be doing ethical research. I want to challenge everyone here: we can do better.

If you're not familiar with the concept of a soapbox, it is a platform to stand on on a street corner and just go off on a topic. So, warning. The NIH publishes seven main principles of ethical research.

I'm not going to talk about all seven, because I remember I only have five minutes, and I have 4 minutes and 12 seconds left. I'm going to talk about three of them, the ones that have a star here. So, the definition here: we're talking about sharing and collecting data for the purpose of generating useful knowledge.

We love our technology. We love playing with it, we love wearing it. I am naked right now because my Fitbit is over on Gabriel's table, so that you all can see what it's like to use a Fitbit Inspire 3.

But again, I love my technology, but we are in service of knowledge that is useful, with both social and clinical value. So let's talk about that clinical piece. Where are your clinical partners? They need to be with you at day negative one as you're putting together your grants and your proposals. These are the folks who are ideally going to help you move this technology into a clinically relevant space.

You need to be publishing even when it's negative. I tell the people I work with all the time that my negative trials show us all the ways we shouldn't do something, and that's just as valuable. We love to pilot for three months, but as Pedja's work truly demonstrates, we need to be doing our work over 12-plus months. We can get anyone excited about a device for three months. Can you do it for 12? And as Lara mentioned, bring-your-own-device relies on privilege.

Acknowledge it and figure out how you're going to mitigate it. And do not collect data you shouldn't. Don't collect data you don't need, and recognize that the data you're collecting could be used in truly horrific ways. That's not just a political-climate question; that's just in general. Recognize that what you're collecting, if it gets out of your hands, or even while it's in your hands, could be used in ways you don't necessarily realize. Informed consent: back over here again to Monica.

Don't use jargon in your informed consent forms that your participants can't understand, and your informed consent and your process should lead to understanding, not a signature or a checkbox for a digital form. And I just want to note that in one of the studies we're working on, I have two minutes left, we did qualitative interviews about our tremendously typical dropout rates. And we asked people who dropped:

Talk to us. Why? Why did you drop? What are some of the aspects of the study that may have led to that? One of the number one reasons they said they dropped: they didn't understand the consent form. That kills me, because it wasn't even a U of M consent form.

It was a much smaller one from another study at another institution. Describe the risks in the consent form, and they're not the scary ones.

Like, for example, that the Fitbit may cause irritation to your wrist. How silly is that? But let's talk about it. Or the fact that you may, at some point in the study, lose pairing on your Bluetooth device, and it's going to frustrate you. Let's talk about how being part of this kind of study carries the risk that you may, at some point, become frustrated, but the study team is here to help you.

Use the consent process that makes sense, and informed consent must be pervasive: every time you interact with people, are you checking in to make sure they know what they're doing for you and what you are collecting from them? And respect: respect for our participants and enrolled subjects. I loved, again from Pedja's presentation, this idea of, hey, is this working for you right now? Do you want to pause? Do you want to turn off this entire component of the intervention because it's just not relevant to you? Yes, people put all kinds of scary things on Facebook, but that's their personal life. We need to recognize that what they share with us, we need to protect.

Passive versus active withdrawal: in a lot of these studies with sensors, the sensors are just going to keep sending us data. The participant may not even remember it's still sending us data, and we rely on them to keep sending it, but should we, in fact, reach out and say, would you like to withdraw? And then, as my timer is going off: when we use vendors whose terms of service change, are we doing our due diligence to make sure participants know that? And one of the biggest things we need to do, especially in the context of marginalized populations we're trying so hard to be inclusive and equitable with: are we communicating our research back to them? Are we building trust, in our research teams, in our institutions, and in the field as a whole? So why are we doing this? Because it's the right thing to do. These are humans. We love their data, we love our devices, but these are humans, and we need to take care of them. And ethical research is actually good science.

It's not just because we have to. And I'm sorry, I'm wearing the T-shirt: we're leaders. We need to lead. Thanks.

Yeah, I can kick it off. Okay. So, is there anything at either baseline or during a study run-in period that would let us know that people are going to be more vulnerable to rumination and have an untoward effect from this self-recognition? Is this on? Okay. That's a great question. And yes, this goes back to what I said about that systematic review: a lot of the studies were mitigating risk by not including vulnerable or high-risk groups. So I think that's exactly the point you're making, and also what Lara talked about: who we're designing these interventions for matters a lot.

So the population you're trying to reach is a baseline characteristic, right? That's one. You can certainly collect baseline measures on rumination, and maybe you check in more frequently. But then also, as Rima was saying, you can check in during the course of the study to ask if people want to pause. And I think you mentioned this too during Pedja's talk, about maybe prompting that, so being actively involved as you're collecting data is another opportunity. Yeah. Okay, I'll just add, because I think that's so true. The thing we have to grapple with when we're developing some sort of mHealth tool that's going to go out is thinking about: is this made for people we can identify as highly susceptible to rumination? Because if so, then we should be piloting the tool and looking at it early on to understand whether it's appropriate in this population.

If it's not, if it really is a tool meant for people who aren't particularly susceptible to rumination, in this example, then that is okay, as long as we have a plan so that when we identify people who are susceptible, we have a referral process and we're connecting them to appropriate care. Our tool doesn't have to be the tool for everyone across that spectrum of risk, but we should really know that upfront, not just wing it. I think one other consideration here is how we're actually designing our self-reflection intervention content. There's some research on how you can frame feedback in ways that evoke self-compassionate self-reflection versus self-reflection that might easily lead to rumination.

And so I think it's both a person-level-characteristic perspective and also your intervention content. And I cannot stress enough that that is a perfect plug for one of the posters, for one of the projects I'm affiliated with, with Michelle Segar. It's all about the power of the message during a reflection and planning process, and how you help people frame things positively rather than, oh, shoot, I didn't make it, I didn't hit my goal, I didn't hit my eating target plan. So it's over there.

I have a comment. We talk about health equity and equity, but by the very nature of digital tools, it is inequitable, right? So, I mean, we can only reach an already biased population. Maybe you want to add to that or say something about that.

So, I mean, absolutely. Being in the space of digitally mediated health behavior change, one of the things I've been tracking for years are the Pew studies on how many people have cell phones and how many have smartphones, and the numbers keep going up. We're starting to stabilize a little bit. It's not going to be completely 100% universal, but just about everybody is going to have a smartphone. And obviously, how much it's integrated into their lives will vary. But it's not going away, either.

Not everyone is ever going to be all in on digital, but a whole lot of people are. So for the people who are, let's do our best to make that an inclusive space. We're not going to be able to completely get away from the pamphlet-type approach in a doctor's office. We're not going to be able to get away from telephone-based interventions, because some people will not have a smartphone or wear a smartwatch. It's true. It's true.

If I can just add a tiny bit, because I agree. The digital gap is real, which I think is what you're referencing, and something I always reflect on is that just because we build it doesn't mean they will come. Even if 100% of people had smartphones and smartwatches and smart rings, which I wasn't even super familiar with until today, that doesn't mean everyone wants to use them to receive care. I think sometimes people have concerns that digital health or mHealth is going to get in the way of our more standard brick-and-mortar models of care, and I don't see that future. People like human-to-human interaction, and I think we need to understand where digital health fits within this landscape, rather than digital health being its own separate continent. And in understanding that integration, which I know a lot of people here are doing great work on, for different folks, different populations, different cultures, there's going to be a different fit in the degree to which digital health platforms serve to fill gaps as opposed to create disparities. So I think it's a great point. I'll just push back on that a little, while acknowledging yes to all of those things. I think it depends on what kind of device we're using, right? Something like a smart ring may be a luxury item that some of our low-socioeconomic-status individuals may never own.

But smartphone usage is very common. And there are some people who don't want to interact with face-to-face healthcare systems. So the type of device we choose to build our interventions on matters, and may let us reach people that traditional healthcare or face-to-face settings wouldn't be able to reach. Outstanding, yeah. Totally.

Yeah. So I have two, I guess they're more points than questions, but I'd be curious for the panel to comment on them. They go back to Libby's talk about safety concerns.

The first is, I do all of this work in people with bipolar disorder, who are at risk for rumination regardless of mood state. And one of the things we've found across our different studies is that some people say self-monitoring increases daily negative affect, whereas it doesn't in our control samples. But what we found is, again, that how we ask the questions, but more importantly how often and when we ask them, has a really big impact.

So that's something for people to think about. In qualitative follow-up, people said it's particularly the questionnaire at the end of the day, right before I go to sleep, that induces the most rumination for me, because it makes me reflect on my day and what I did well and not well. So it was really this nighttime effect that we were seeing. The second piece is, in our PRIORI study, where we're doing passive speech analysis and EMA, I would say one of the things we've learned most about from the adverse events that come up is people's baseline digital literacy before they enroll in the study. People want to be in this study.

You can ask the coordinator, Victoria, later at her poster, but we have no problem recruiting. Not everybody, though, really has the digital literacy to engage, and then it becomes constant contact of, well, this isn't working, or that isn't working, or I don't understand this, I don't understand that. So I think we as a field need to do a better job of both assessing that prior to consent and also building in systems to help improve digital literacy before we put people into these mobile interventions or even just observational studies.

I have to make a note about digital literacy and thinking about how we assess it and how we help people through it, from the tech side, as a staff member. I can't tell you how many phone calls I've been on about how to sync your Fitbit or how to get the blood pressure cuff's Bluetooth to work. And there's the number of times staff doing recruitment calls have heard someone say, I'd love to be part of your study, tell me more about it. Oh, it uses a smartphone? Yeah, I have one of those.

My nephew set it up for me. All right, yeah, that's great, but that's not going to be a good fit for us.

But at the same time, your research assistant, who is hearing from you as the PI that they need to be enrolling seven people a week, is going to try to figure out how to get Grandpa into the study. So just a reminder to everybody: as much as we say we don't want people without digital literacy, our protocols, our recruitment scripts, and our processes often usher them in because we're trying to hit our numbers. True. Very true. I think you also make a really important point, Sarah, that it's not just whether we have pre-post change in our interventions; it's what happens in the middle, how we can better understand the processes participants are going through during our interventions, and how we can use that to refine our interventions for the next round of science. I think that's super key.

And also not making assumptions about when things like rumination are going to happen. I remember we had a research event a couple of weeks ago at the DTC, and one of the investigators, an expert in suicide research and interventions for suicide prevention, was saying that some people have the conception that you shouldn't frequently ask people who are prone to suicidal thoughts and ideation about them, but that didn't actually turn out in the literature to be related to increased suicidal thoughts and ideation. So I think it's a really important point that we need to be assessing not just pre-post change, but also these intensive data and what they can tell us about what's happening to participants. Hi. I think the point, Rima, that you made about only collecting the information that you need is very important. I'm struck, though, as someone who has worked on studies of just-in-time adaptive interventions that were not very adaptive, and so were not very effective.

I think the data you need to collect depends on who you ask. And I know from our experience, and from Pedja's talk, about the types of strategies you use to make these interventions truly adaptive: for the people designing these, for the programmers, if you want your algorithm to really learn, I've heard suggestions of integrating people's calendars, so that when people go back to school the prompts adjust, or syncing with the weather app, just as examples, so when it's cold or raining you won't get prompts to go outside and do activities. So I'm curious what your thoughts are on how you balance the need to be adaptive in these interventions against what we collect. I love it, because that's exactly it. In this age of AI, we're trying to dump as much information into these models as possible, because, as you said, what are the factors? What are the things that influence change? If we don't collect this data, we won't know whether it has an impact, and I get it. I get it. But there are still going to be whole classes of data that are probably not necessary for what you're doing.

For those who are not sensitive to this, the Dobbs ruling has made anything around women's healthcare related to menstruation and pregnancy a very, very challenging environment. There are literally clinical partners of a research team I'm affiliated with who basically shut off a future collaboration because they were terrified that the online community forums where women were talking about miscarriages or pregnancy loss could lead to someone finding them. And, again, does your app need to know whether someone is menstruating? Does your app need to know whether someone is pregnant? There was a wonderful, very happy, friendly, warm-and-fuzzy article about someone wearing a smartwatch whose sensors basically triggered, hey, are you pregnant? And she was so happy, because yes, she had in fact been trying to get pregnant and hadn't known yet, and everybody was all smiles and everything else. What if that wasn't a smiles situation, right? So be aware, again, of what your sensors could be collecting. Be aware of your study measures, the surveys you're asking, the interview questions. I'm not saying you can't collect what is meaningful, or that you shouldn't collect what you think might be meaningful or tertiarily useful.

But if there are things you really don't need, then don't collect them. Just be aware of what you're doing and the safeguards you can put in place to really protect that data. I love that, because I think what you're saying is so important: both the investigators and the participant should be aware of all the data being collected. Oftentimes there are just mounds of data coming in, so making sure that both sides are informed and there's real transparency.

Yes, thank you. Yep. I'm a pediatric cardiologist and we're doing interventions to improve exercise and physical activity in adolescents and young adults with complex heart disease. We're using the sensors to inform conversations with exercise physiologists in a digital format.

But very commonly through the intervention, these participants ask for a virtual space to meet other participants, because they've never met somebody with their heart disease before. Right. I would love to hear your thoughts, because I'm terrified. I see the potential benefit, but I'm really nervous about the bad, you know, the worst-case scenario. I'd love to hear your thoughts.

And if anybody else has had success or failure in this realm, I'd love to hear about that, too. So, we have interventions we've run that are physical-activity-based, but not in a pediatric population, so I can't speak to that specifically. But the online community forum is a feature we feel pretty strongly about. It relies on a critical mass of folks, where the vast majority are lurkers and a small number of people are engagers.

But we see the value, even to the lurkers, of having this community, because of that aspect of, oh look, there are other folks like me, right? I would say that you're going to want moderators, and whether that moderation is human or AI or a mix, and it should probably be a mix of both if there's AI involved, I would say don't flinch from it, because of that feeling of, I have this really awful, rare situation. That is one of the magic pieces of digital interventions: you don't have to be in Ann Arbor, Michigan to connect with someone just like you. You can be anywhere in the world, asynchronously. And although, again, you have to be in our digital world, once you are, the barriers to finding, communicating, and connecting with someone are removed from time and space. I think you're also going to want to educate your users, so have materials for those teens and tweens about what is and isn't appropriate to share.

Have your ways of filtering things, your moderation. But I think you should give it a go. This will be our last question.

All right. I just wanted to get your thoughts on how you make sure you're not excluding populations based on language, for non-native speakers, and neurodivergent populations. I love data, so I looked at a lot of these studies going on at UM, and about 55% of them have an exclusion for English language proficiency. It doesn't say who decides whether you're proficient enough or not. Same thing with neurodivergence.

I think it was about 40 to 45% of studies that exclude people with ADHD, autism, or Asperger's. Yes. Yeah, language is such a big thing from an IRB or consent form perspective. Obviously, I can't take an English consent form and English materials and put them in front of someone who cannot read anything in English and expect to get proper informed consent and have them understand the study. And of course, it's completely unethical to even consider that as an option, right? But also, it costs money to get good materials in all these various languages.

But you need to do it if that is the population that is going to be served, the population that needs to benefit from this, because they are a marginalized population that is not being reached in our largely English-speaking Western world. So that is part of inclusion, and there you go. Thank you. And I just agree: this is an area where we need to get much more sophisticated, but it takes resources, for example, to offer platforms in multiple languages.

While you can certainly do translations for nothing nowadays online, they do not maintain the heart and soul of many of our interventions, especially message-heavy interventions like the ones Pedja was talking about, where they might be founded on a motivational interviewing basis or something similar. If you just do word-for-word translations, you lose the entire heart of it, if that makes sense. So we should actually budget into grants for meaningful translation, where you're working within the community, with people who are not just fluent speakers but who have lived experience of what you're targeting, giving feedback on how you're adapting and re-centering your intervention. It's just something we need to be intentional about. Again, it's this idea of thinking about these topics, safety, equity, and ethics, across the arc of a project, starting at conception, at that first idea, I want to develop a digital tool for X, and then bringing in the partners you're going to need and thinking about ways to minimize exclusion. That doesn't mean we won't still exclude certain populations.

It may be that you develop something for native English speakers first and then adapt, but we need to actually think about this, if that's what we're going to do, and about the pragmatics of it, the resources that go into doing anything well. And one last thing, from a safety-consideration perspective: as researchers, we need to keep the scope of our projects manageable, right? That requires some trade-offs in who we can include or what languages we can offer; that's just part of the practicality of some research. But I think there are opportunities, for intervention or technology packages that do go to market, to think through the messaging or language around who these technologies or interventions were developed for, and to think through possible safety concerns for the people who weren't included in the development process. It's also an opportunity.

Are we excused? Yeah. One comment on that: one thing I've seen missing, you know, when you get over-the-counter medications, they have a Drug Facts label. I think we're going to need that with mobile interventions, because of the specificity and the risks. Yeah. Fantastic. Thank you for panel one.
