Who or What Do You Trust: The Boundary Between Interpersonal Trust and Technology Trust

JADE LEE: At the end of the presentation, I'll be happy to take questions. Let's get started. This-- oh, oh. AUDIENCE: [INAUDIBLE] JADE LEE: This is my Google Mini when I'm all cozied up at night in bed.

I say, OK Google, good night. From this one command, this little device can turn off all of my lights, my smart lights actually, set my alarm to a preset time, and even turn on ASMR for me to fall asleep to. So as we see, technology is now more of an integration than an isolation. I no longer just sit in front of a computer accumulating seat time; instead, technology integration is becoming more interactive and ongoing. How many people use an Apple Watch? Apple has things like that, right? Could any of you share what you use your Apple Watch for? AUDIENCE: Exercise, activity hours. JADE LEE: Yeah, thanks for sharing.

Exactly, but this is your exercise routine, not anybody else's. This is your heart rate, your stress level, et cetera. As exemplified in these responses, the past decade has seen a huge influx of personalized technologies in the commercial market that try to create this really customized data for you. These technologies passively and actively collect and make use of your personal data, and they ask you to trust these devices in newer and deeper ways than traditional computers once did. So the role of trust is especially prevalent for technologies that offer embodied experiences, such as wearable tech or GPS-enabled devices, because users physically involve themselves in the data.

In fact, you are the data. So keeping that important role of trust in mind, this presentation uses the term embodied technology as an umbrella term that encompasses various types of technology, such as wearable tech, fitness tech, quantified-self tech, GPS-enabled mobile games, VR headsets, anything that involves your bodily movement. So, why study this in the first place? Because technology tells us things about ourselves. We're entering an era where technology knows more about us than we know ourselves.

About how productive we have been. About our health. Suggesting what you might like before you even like it, by looking at your patterns. Given that information, we're supposed to make decisions accordingly. Even if you feel like you were productive that day, if your screen time report says you used your phone for more than 10 hours, you might change up your schedule or alter your behavior.

But in order to do that, you have to trust that the technology gives you correct information about your own life. What happens when the ostensibly objective data, such as your screen time report, differs from your subjective experience? What if you were answering a lot of emails on your phone that day because you were really busy, but your report says otherwise? What do you trust more in that case? So it really makes sense to think about what goes into that trust, why people trust, and how people determine what they're going to trust in which circumstances. We've seen a huge influx of these technologies, and the literature has started to define trust as it applies to technology.

Thus comes about this phenomenon called technology trust. Technology trust is a belief that technology may offer users assistance with awareness and decision-making in uncertain circumstances. However, the current literature lacks consensus in two areas. One is the conceptualization of technology trust.

So what factors really make up this phenomenon, and which ones are most important? The second area lacking consensus is its relation to interpersonal trust, which I'll often refer to as IPT. This is trust between humans. Because trust originates from trust between humans, before technology was invented, how is that now being translated into the field of technology? And as you saw, because technology is becoming more interactive, like my Google Mini, and more personal, like your Apple Watch, trust doesn't just relate to the technology itself; it's happening between people and technology, and it's beginning to mediate all of those relationships together. The theoretical frameworks I'm using fall into two main buckets. The first bucket is trust theories.

It really depends on which lens or perspective you're looking at trust from. If you look at interpersonal trust through psychological development theories, they define trust with factors like competence, integrity, and benevolence between humans. Researchers have since expanded the IPT construct into technology.

Thus, technology trust uses analogous factors: functionality, reliability, and helpfulness. Between these two types of trust, the literature is quite divided and holds a lot of tension. Some studies say that IPT resembles TT, yet other studies say they're fundamentally different and should be treated independently. I'm also using trust development stages to look at trust at different time points. Initial trust is your first inclination to trust, even before you interact with the technology, whereas knowledge-based trust happens once you've already had that relationship and you're able to predict the technology's tendencies in certain situations. And the literature suggests that trust doesn't just follow a gradual development over time.

Rather, IPT is formed and developed at different time points and based on various factors. Thus, drawing on this early understanding of IPT, this presentation applies the initial and knowledge-based trust time points as notable stages of technology trust formation and asks what factors might be most prominent at each. To look at what is most prominent, I'm also using prominence-interpretation theory. This theory comes from the human-computer interaction field, and it defines trust calibration through two main elements: prominence and interpretation.

Prominence is your likelihood of noticing certain elements, whereas interpretation is how you assign meaning and value to those elements. For example, when you're playing a game on your phone and you notice an obnoxious advertisement that comes up, noticing that element is the prominence part. If that advertisement keeps coming up every 30 seconds, you might be really annoyed by it. Judging it and assigning meaning to it is the interpretation part. So this theory helps me understand how users notice different prominent elements and how they interpret and assign value and meaning to those elements, eventually resulting in their assessment of their trust in the technology. Finally, this brings me to my research aims.

My first research aim is to examine which technological factors were prominent at each of the trust stages, by proposing components outside of the traditional ones. Research aim two is to investigate the integration of technology trust and interpersonal trust. For data collection, the study takes place in a first-year seminar course in the fall of 2021, where I asked 15 freshman undergraduate students to choose an embodied technology and engage with it over 10 weeks. As seen in the table on the right, students were organized into four thematic groups depending on what kind of embodied technology they chose.

Based on their selected technology type, four students used the Oculus Rift, which was available to rent outside of class time. Two students were in the mental health group, using mobile apps like Calm, Breathe, or a sleep tracker. Among the five students in the fitness group, students used various devices, such as the Hydrow rowing machine, a running app called Strava, the Apple Watch, and the Samsung watch. And lastly, four students were in the recreational group, using GPS-enabled mobile games such as Pokémon Go or the Nintendo Labo. For the specific types of data collected, I think it makes more sense to look at the timeline to situate these data.

From September until December, those 10 weeks, students engaged with the technology. During September, students had to turn in one declaration of embodied technology. That assignment asked students to choose a technology and explain why they chose it. The six weekly journals happened from September to December; those capture their weekly interactions with the technology.

One product review happened in December; that was each student's individual report on their technology. One summative report came from each thematic group, which gave a presentation at the end based on their similar technology types. And in January, after the course was over, I did post-course interviews; six students agreed to participate. As for how I analyzed these data, they were compiled in MAXQDA using a grounded theory approach, mainly qualitative analysis.

For the written reflections, which happened during the course, I identified broad usage points of students' interaction with the technology: before they used it, as they used it, and after they used it. I used an open coding and axial coding approach to form the initial categories. For the post-course interviews, the constant-comparative method and selective coding were used to build more solid themes as core categories emerged. And I'll show you what those look like. So this organizes what my data look like, and I'll break it down for you.

The time points I just mentioned come from the written reflections. Before use runs from weeks 1 through 4, because in week 4 students were supposed to turn in that declaration of embodied technology: what they chose and why. So this is before they interact with it. The usage point refers to weeks 5 through 10, when they were actively engaging with it. And after use refers to the interview data. The overall codes go back to the literature. As you recall from my theoretical frameworks, the trust development stages, initial trust versus knowledge-based trust, are organized into this table, and this is really the meat of my data.

For initial trust, before students actually got to engage with their technology, factors like public credibility and privacy were most prominent. For public credibility, students used others' opinions to validate their initial approach. In their assignments they wrote, for example, that a student who chose a sleep tracker decided to download and use it because of app ratings, download counts, and other people's opinions, even before he engaged with it.

A lot of students held privacy concerns before they engaged with the technology because it asked for access to so much. For example, Pokémon Go asked for access to the microphone, GPS, and camera, and because it links to health apps, it also asked for heart rate, age, weight, and calories burned, which have nothing to do with the gameplay. So in the beginning they were a little cautious and suspicious about why it needed so much access. As students engaged with their technology at the usage point, factors like accuracy and feedback were most prominent. For accuracy, a lot of students were disappointed that their technology was inaccurate.

For example, a student using a running app called Strava noticed that the recorded distance seemed a little suspicious, so he put it to the test by comparing it against other GPS apps like Google Maps or On The Go Map, and he found that Strava actually rounded the distance because it only records straight lines between points, whereas he was taking shortcuts and little streets that the maps didn't recognize. That really impacted his trust. For feedback, students complained about the constant feedback, and especially about the lack of contextual awareness, which really hindered their experience. For example, a student using a smartwatch was sitting in a two-hour lecture and kept getting notifications that they had been sitting too long, that they were living a sedentary life and it was time to move. Even though the technology offers and advertises this personal experience, it doesn't really take your context into consideration.
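[A minimal sketch of the mechanism the student discovered, not from the talk itself: an app that sums straight-line segments between GPS fixes under-measures a winding route, and the sparser the sampling, the more corners it cuts. The coordinates, sampling rate, and helper names below are invented for illustration.]

    import math

    def haversine_m(p, q):
        # Great-circle distance in meters between two (lat, lon) points.
        R = 6371000.0  # mean Earth radius in meters
        lat1, lon1, lat2, lon2 = map(math.radians, (*p, *q))
        a = (math.sin((lat2 - lat1) / 2) ** 2
             + math.cos(lat1) * math.cos(lat2) * math.sin((lon2 - lon1) / 2) ** 2)
        return 2 * R * math.asin(math.sqrt(a))

    def route_length_m(points):
        # Sum the straight-line segments between consecutive GPS fixes.
        return sum(haversine_m(a, b) for a, b in zip(points, points[1:]))

    # A hypothetical zigzag run through side streets, one fix per step.
    dense = [(40.0 + 0.0005 * i, -105.0 + (0.0005 if i % 2 else 0.0))
             for i in range(40)]
    sparse = dense[::8]  # the same run, sampled far less often

    print(route_length_m(dense))   # longer: the sum follows every turn
    print(route_length_m(sparse))  # shorter: straight segments cut the corners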

Lastly, after use: this was coded to see whether they continued using the technology or not. Here the factor of commitment was most prominent, and students encountered various forms of it. For mental health apps: are you willing to use that app during finals week, when you have so many things going on? What are you going to prioritize? For the Oculus Rift students, because they were renting it through the class: are you willing to purchase one afterwards? Or if they wanted a more advanced feature: are you willing to pay for those subscriptions or that premium access? So various forms of commitment or access afterwards. So, what does that really say? It answers the first research aim: which technology trust factors were most prominent at each of the stages? The findings indicate that trust at various time points may invoke different factors, as users form first impressions, engage with the technology, and ultimately decide their future usage tendencies. The more interesting question is how TT is integrated with IPT, as these trust characteristics interact to influence an individual's trust. We see that technology trust is not just an enclosed phenomenon.

It doesn't just happen between you and the technology; the relationship isn't solely that. Instead, the boundaries of TT and IPT may overlap. As students encountered their devices, it was evident that TT is formed based on other people's opinions, like peers and online reviews. So the layers of TT and IPT are muddled together.

They're not as separate as the literature suggests. A really interesting and unexpected part of the study was the mention of trust within oneself. When you see a discrepancy between the objective data and your subjective experience, which one do you choose? Even though I didn't ask about it, students described an internal dialogue: what happens when I see a difference between the two, and do I trust myself or the technology more?

So it's not just human to human, but even within oneself. And this is really relevant for future research, because I wonder how this might play out in a higher-stakes setting, such as online proctoring, when you're unsure whether you made a mistake or whether the technology is flagging your behavior as suspicious. That discrepancy continues to happen even outside of recreational or informal settings.

Coming to my conclusions: this research sheds light on how ubiquitous technology is and emphasizes the importance of understanding these complex relationships. This is especially important in educational settings, because schools are increasingly using embodied technology tools that draw on personal data to offer more immersive learning. For example, a lot of students use fitness trackers nowadays to get real-time feedback during PE class; I have some references for these examples if you would like them. Virtual reality headsets are used to teach complex concepts, and smartwatches are being used to detect movement, to track students with ADHD or autism, and to generate alerts when predetermined boundaries are crossed, to create a safer school environment.

However, not all students respond to these technologies the same way. Some students might find these technologies engaging and helpful in the classroom, while others might feel uncomfortable or irritated. Without fully understanding how students experience them in classrooms, educators risk blindly inserting these technologies into those settings.

So it's really critical that this study examines how these features affect students' experience, how trust is involved, and what information they decide to trust later on. Universities and schools can use the study's findings to formulate tips and suggestions so that students can explore educational technologies to their fullest potential. That is the end of my presentation.

And thank you so much for listening. I will take your questions if you have any. AUDIENCE: So, can you tell us why you did this? Is this part of a qualifying exam? Where does this fit into your larger research? JADE LEE: This is part of my qualifying exam, the empirical study.

Yeah, and quickly, I mentioned online proctoring as future work; I'm doing that for my dissertation, to see how emotion and trust might play out in higher-stakes situations. AUDIENCE: I can see why the trust construct is really important. I wonder if trust in these embodied technologies is different than trust in technology for other purposes. What are your thoughts about that? JADE LEE: Right, so I have gotten some questions about the variety of technologies that students used. But I want to mention the importance of having students choose their own technology.

That was really important for this study. First of all, we needed to learn what happens after the course, once they were no longer required to use it, whether they wanted to adopt it or not; that was really important for us. And it's also really important that students have a choice in which technologies they use. A big critique in my study is that we should stop inserting these technologies into classrooms without really knowing them in authentic, natural settings. And your question about how trust differs from one technology to another is completely valid. It's going to be different for everybody, depending on which technology they choose and on the user as well.

So that trust is quite different, yet I feel there is something fundamental within trust, and these time points are really showing those patterns. AUDIENCE: When you were doing the study, how were the technologies introduced to the students? Because I wonder if the language that's used adds a layer of trust. Even just the word smartwatch: you assume it's smarter than you.

So I'm going to trust it. Or you think about artificial intelligence as well, just learning and bringing in information. You're like, oh, that's definitely going to be something I can trust if it's associated with that. So I'm just wondering how the students were introduced to these technologies and whether that also impacted how they perceived and experienced them.

JADE LEE: Right, to answer your original question, how were they presented? We gave them the option to choose whatever they wanted. We didn't give them a preset list of existing technologies. We just said, you can go ahead and choose any games, any technologies, as long as they're embodied technologies.

So I don't think we predetermined their choices in any way. However, you point out a really interesting question, because the connotations behind those words already establish a certain respect. That's something I also mention in my paper: with health apps, now that we're entering an era of telehealth and technology-mediated health, the respect for doctors already transfers into these technologies. So what you said about words shaping trust in technology is really interesting, and it's similar to what I was noticing in those health apps. AUDIENCE: So, how does the trust idea relate to the students? Presumably we're hoping the students can learn something from using the technology. How does the trust piece play into their learning? JADE LEE: I believe trust is fundamental before learning can happen, because even if the learning software or device is really great, if students don't like it, if they dislike certain features, as I mentioned, or if certain concerns that make up that trust aren't met, they can't reach the learning potential these devices offer.

So trust sets up really strong ground before students even engage, and as they engage, if we learn how to establish that relationship a little more strongly, I feel they can really grasp the potential that educational technology holds. AUDIENCE: So, should we trust technology? [LAUGHTER] JADE LEE: I feel it's really important to set up a healthy boundary. Instead of just asking students to use a certain technology, maybe ask them, what do you prefer? And be really open and flexible about what counts as educational technology; I feel that boundary is really permeable, even for video games. I did a study prior to this that looked at how technology helps with mental health, and you wouldn't normally connect video games with mental health, but we found that Pokémon Go really helps freshman students learn the campus as they walk around. So there are connections that I feel educators are missing, maybe because we hold predetermined stereotypes.

I feel trust really helps in strengthening the connection between students, teachers, and designers too. It's something very abstract. We don't really think about it, and we don't see a direct connection to learning.

However, it's something we really need to pay attention to. Anybody online have any questions? Yes, Dr. Hart. DR. HART: So, this might be a little off-field from what you were actually doing, but I'm just curious to hear a little more about the ways that schools are using these technologies.

And I'm not even sure these would necessarily be embodied technologies. My daughter's middle school just implemented an electronic hall pass system. They're supposed to be able to track who's using the restroom and whether people are constantly using it at the same time and meeting up, things like that. I guess I'm curious what your thoughts are: what types of technologies do you see schools increasingly using, and how do they tie into the implications of your study around trust, the balance between having a potentially useful function and possibly undermining, or expressing distrust of, students? JADE LEE: Right, I think that's a really interesting question, and it opens up a huge door to a lot of studies too.

It really depends on which technologies you're talking about. As you said, schools are now implementing a lot of new technologies in these settings without really knowing what they are. The bathroom pass, I've actually never heard of it, but it's really interesting to go back to why they're using it in the first place. Why do we need to track students using the bathroom? What's the purpose of that technology and how are we using it? So I'm going back to the original question of what these technologies are being used for. Are you trusting whether the student is, in fact, going to the bathroom, or how many times they need to go? Maybe that prompts another question for me, actually: why do we need to track students this much, and is technology in those cases really necessary? Even though technology has both good and bad aspects, maybe it shouldn't just be implemented everywhere.

Before it gets implemented, maybe we need to see how it really affects students. AUDIENCE: What's interesting is, I feel like this example, I absolutely need to talk to these people, relates to the same questions that you're looking at: what happens if there's a discrepancy between the subjective and the objective record, right? A student's digital hall pass says you were gone for too long, and you didn't take the shortest route between your classroom and the bathroom, so, therefore, there must be something nefarious going on here, right? And maybe you had to go to your locker first, because you're a girl and you're in middle school, and certain unpredictable things might happen before you have to use the bathroom, right? So that's your subjective experience. But the objective data makes it look like you did something you weren't supposed to do. So how does the school then reconcile those things, right? And whose record is actually, quote unquote, trusted?

Right? I think it fits in with this same notion of, my watch doesn't reflect all the steps I took today because I forgot to charge it last night and it died. Or, I forgot to turn off the email client during my proctored exam, so at a certain point I clicked out of the testing window, but that doesn't mean I was cheating, right? The whole question of whether the technology itself can produce a false report, or give us noisy data that doesn't align with somebody's subjective experience.

And so, what's the tension there? I don't know. I think it fits in with the framework you're describing [INAUDIBLE] stuff. JADE LEE: Thank you so much for that comment. I think when consequences are involved, emotions and tension also get involved.

So, going back to the question: when the technology is used for surveillance, students may feel surveilled, and then they may develop distrust in the technology. Also, like you said, it might just be tracking the time, or it might actually be GPS-tracking your route; we don't know that for sure. Yeah, my little cousins have just learned about that-- DR. HART: Oh, wow. JADE LEE: This bathroom pass thing.

So they added that students scan in with their ID cards, and now they have a hall pass, and they can see how many people are out in the bathroom. But they're like, what if I really have to go and there are already five people out? How are you able to control that? DR. HART: I mean, the thing they're trying to stop, because it's a really chronic issue at the high school and the middle schools, is drug use. People dealing drugs, and/or obtaining drugs, and/or leaving class to go to the bathroom or someplace else on campus. So they're trying really hard to find ways of making that not happen without putting a security guard in every hallway and having the bathrooms camera-monitored, because that's a whole other aspect of surveillance and privacy, and gross, right? So I think it's a legitimate problem they're trying to solve.

But it introduces all these other noisy-data sorts of issues. JADE LEE: And, Dr. Hart, one more thing just came to mind from researching online proctoring: I have also run into the question of what type of messaging that sends to students. Even just having a bathroom pass conveys a certain type of relationship between students and teachers. It's not just about whether you trust that the bathroom pass will be accurately recorded.

It's really mediating the relationship between students and instructors as well. So thank you so much for that example; I will definitely check that literature out. Yes, Siera.

SIERA: You touched on this a little already, but I'm thinking about the relationship between costs and choice, especially with things like proctoring technology and also things like turnitin.com that try to detect plagiarism. I'm wondering how that may affect students, both around the messaging, what it conveys about whether students are trusted, and also how it may affect student behavior if they're trying to adjust their behavior so that nothing gets flagged as cheating on a proctored test or flagged as plagiarism. Do they, I don't know, adjust the way they write with the plagiarism detection software in mind? I'm wondering about your thoughts on the relationship between student choice, when they don't have a choice to opt in, and how that may affect their behavior and their learning.

JADE LEE: Yeah, I think it's really complex, because even if the setting says you have a choice, some students might not really have one. For this study, in this recreational, informal setting, it was really important to offer that choice of technology type. But you're referencing those higher-stakes, more tense technologies, such as online proctoring. Even if we say you can choose whatever type, when you want to take it, how you want to take it, because the technology offers that, from interviewing those students, that's really not the case. Students don't have stable Wi-Fi; they don't have a quiet space to take the test.

So even though it looks like students have choices, the choices might not all be equal or fair, which is really important. I think it's really important to look at how you frame these studies, so as not to blame the students and say, I gave you a choice, you decided to do such and such, so you deserve certain consequences. It's not as linear as that. AUDIENCE: [INAUDIBLE] Even in the freshman seminar with the embodied technologies, there were students who didn't necessarily have the information they needed to make an informed choice, right? The students who tried the Oculus really wanted to play it and discovered they couldn't play it in their dorm room, because they live in a triple and you have to be able to clear a certain amount of space around you for it to work properly, right? Body movement in the visual space.

And they just couldn't do it. Or the student who really wanted to use the sleep tracker, but he also lived in a triple and discovered it was recording his roommate's snoring and not his. [LAUGHTER] Yeah, so I think the whole-- yes, you have a choice, but how much does the commercial documentation of a given technology let you figure out how well it's going to align with the constraints of your actual life, right? JADE LEE: That's why it's so important to look at students in their authentic, natural settings, in reality, rather than just putting them in a laboratory and doing a testing or intervention model, because that is really different from their actual experience. AUDIENCE: So did you teach that class, or will you teach that class again? JADE LEE: We'll try it again. [LAUGHTER] If nobody online or in person has any more questions, I'd be happy to stick around after the presentation to talk.

And I will stay online as well, and sit in here, so let me know. Thank you so much for coming to this presentation. AUDIENCE: Congratulations! [APPLAUSE]
