Using Sensing Technologies in Classrooms to Better Understand Children’s Experiences & Development


-: Okay. So I'm here to talk to you all today about some pilot work that I've done on the early childhood playground using speech and location wearable sensors. The funding for this work came from the University of Kentucky Early Childhood Research and Development Initiative and the Kansas Intellectual and Developmental Disabilities Research Center.

The main collaborator on this work was Beth Rous; Ying Luo, Justin Lane, Joanne Rojas, Haley Bergstrom, and Christopher Cushing also assisted in the planning and implementation of this project. Today I'll be discussing some of our preliminary results from this work. Similar to the early childhood classroom, time spent on the playground in early childhood settings provides opportunities for young children to interact with adults and peers, and these interactions can result in improvement of language, social, emotional, and motor development, as well as physical activity. Some children with special needs may experience limited opportunities for age-appropriate interactions with peers on the playground.

And these missed opportunities could slow the amelioration of developmental challenges. We also know that there are contextual factors that influence peer interactions on the playground. One study found that peer verbalizations were higher in a cozy dome, which is an igloo-shaped climbing dome, compared to a roller slide. So basically what we're seeing here is that both child and contextual factors influence the interactions children have on the playground. These interactions are typically assessed with direct observational tools; this is the most common approach.

One tool is the OSRAC-P, which captures physical activity levels, social groupings, and the types of activities that children are in. Another tool is the POPE, which measures peer engagement in outdoor natural environments: child initiations, parallel play, engagement, and games. Some of the drawbacks to these direct observational approaches are that they require intensive training and reliability checks, and they are typically focused on one child or adult at a time. Another difficulty arises when trying to get to the places on the playground where these interactions are happening.

For example, getting under the slide to see what's going on between children without interrupting them. There have been advances in sensing tools over maybe the past six or seven years, and what these tools do is capture real-time behaviors during typical activities and contexts without the need for an observer. Some of the work that I've done has used speech and location sensors to capture classroom adult talk, and to look at interactions between playmates as a way to better understand friendship development. Some of my recent work has also looked at wandering in the classroom, and other panelists on this presentation have done this work as well. To my knowledge, this work on the playground is the first of its kind using the tools that I'll describe right now.

The tools that I use on the playground are LENA and Ubisense. LENA is a speech recorder and processing tool that captures adult word count, child vocalizations, peer vocalizations, as well as adult-child turns. Ubisense is a real-time location system that can capture the movement and location of multiple children and adults in a space multiple times per second. It's an objective approach with no human observer, and the estimates that come from the tag can be within a foot of the tag's true position.

So it's pretty accurate. Our two research questions are: How much time do children spend in different areas of the playground and on or near fixed equipment? And how much talk took place in these different areas of the playground, as well as on or near fixed equipment? As for the setting and participants, there were 11 toddlers; eight were of typical development and three had special needs. Of the three children with special needs, two had speech delays and one had significant respiratory and abdominal organ issues. Children spent approximately 42 minutes on the playground.

There were three areas of the playground. The first was a canopy area, which measured 29 feet by 35 feet. The grassy area contained a sandbox and measured 56 feet wide by 110 feet long.

The edge of the canopy area was defined so that we could more precisely locate children within versus outside of the canopy and grassy areas, because some children tended to hang out at the edge, and there is a little bit of error that comes along with Ubisense. We needed to do this to be able to place the children in the correct area. It's not something we really run into in the classroom, because there we don't have children moving in and out of the space.

This is a picture of the canopy area, just to give you a sense of what went into setting up Ubisense. There is a sensor over here on this pole; we set up the four sensors using zip ties. These were connected with CAT5 cable to a power-over-ethernet switch and a laptop. We calibrated the tool to ensure the sensors were capturing the tag positions correctly.

We used portable batteries to keep the system and the laptop powered. We placed individual location tags around some of the larger pieces of equipment you're seeing in this picture, in order to then look at when children were inside these areas and to examine the talk that took place.

Behind this structure with the different colored walls, the red, yellow, and green one, we had a table set up with our laptop and our PoE switch. Educators were told to go about their typical day. The children wore the LENA and Ubisense tags in a shirt; you can see a child here demonstrating where those tags went. In terms of data analysis, we developed a location clustering program in MATLAB to determine when children were on or near individual pieces of equipment, as well as in the canopy and grassy areas, and we then used SPSS cross tabs to determine when children were on equipment or in these different areas talking or receiving talk from others. As for our results, on average children spent 30 minutes in the canopy area, 10 minutes in the grassy area, and two minutes at the edge of the canopy.
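The location-clustering step described above can be sketched in a few lines. This is a minimal illustration of the idea, not the actual MATLAB program; the zone rectangles and the one-second tag readings below are hypothetical stand-ins for the real playground coordinates.

```python
# Sketch of assigning per-second location estimates to playground zones.
# Zone boundaries and tag readings are hypothetical illustrations,
# not the study's actual coordinates.

def classify_position(x, y, zones):
    """Return the first named zone whose rectangle contains (x, y)."""
    for name, (x_min, y_min, x_max, y_max) in zones.items():
        if x_min <= x <= x_max and y_min <= y <= y_max:
            return name
    return "grassy"  # everything outside a named rectangle

# Rectangles in feet: (x_min, y_min, x_max, y_max)
zones = {
    "canopy": (0.0, 0.0, 29.0, 35.0),
    "canopy_edge": (29.0, 0.0, 32.0, 35.0),  # buffer for Ubisense error
}

# One-second tag readings: (second, x, y)
readings = [(0, 10.0, 12.0), (1, 30.5, 20.0), (2, 45.0, 60.0)]
track = [classify_position(x, y, zones) for _, x, y in readings]
print(track)  # ['canopy', 'canopy_edge', 'grassy']
```

The per-second zone labels produced this way could then be cross-tabulated against the speech estimates, which is the role SPSS cross tabs played in the study.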

Children with special needs spent more time in the grassy area, outside of the canopy, than their peers of typical development. Under the canopy, children spent the majority of their time on or near the Hippo, and the least amount of time on or near the Oval. Our approach yielded over 28,000 one-second speech and location estimates; the speech estimates are a rate per minute. Encouragingly, children with special needs received more talk from peers and adults in the areas where they spent the most time, the canopy and grassy areas, compared to kids with typical development. Children with special needs also received more talk from adults at more pieces of fixed equipment than children with typical development.

For children with special needs, the most talk took place at the Whimsical Tree and the Hippo, and for children with typical development, it was the Oval and the Whimsical Tree. Similar to our past studies using LENA and Ubisense in early childhood classrooms, the talk, or lack thereof, was directly linked to where children were spending the majority of their time. There is some research suggesting that children with special needs are more likely to interact with adults on the playground compared to peers, and this may be because adults are more likely to respond positively or reliably to bids for attention. There is also some evidence that children with special needs have fewer interactions with peers relative to kids with typical development. But as I just mentioned, our results are encouraging in that we found that children with special needs received more talk from peers and adults than children with typical development.

As for limitations, this is a small sample size and these are just preliminary results: we have only 11 children, and this took place on one day. I should also mention it took place during summer. And the tools that we're using don't pick up on gesture.

That is something that we hope to look at in the future, perhaps using cameras. We also had some overlapping speech and noise, which is not uncommon with these speech devices. Overlapping speech was 49% and noise took up about 3%, so those are the things that limit the amount of meaningful data we have to work with. As for future directions, we know that physical activity is related to children's social engagement, whether they're alone or with peers, and this is something that we hope to look at in the future on the playground using LENA, Ubisense, and some type of accelerometer.

We are also currently doing some work around keyword spotting in the classroom, looking at WH words in science and book activities. In the future, we hope to look at some of these words and other types of words that are common on the playground and that support interactions and physical activity. Our hope is also to look at some of the peer-mediated interventions that are used on the playground to support interactions with children with special needs, using this tool to provide ongoing data on speech and location. So the combination of LENA and Ubisense provides a method for capturing the talk of children with special needs, same-age peers with typical development, and adults on the playground, all at the same time, without the need for an observer.

And our novel approach provides a means to capture information that could potentially positively impact children's long-term outcomes, as well as inform the design of playground contexts in order to support more interactions between children and their peers and adults. Thanks. I'm going to hand it over to Daniel. -: Welcome everybody to our symposium, Using Sensing Technologies in the Classroom to Better Understand Children's Experiences and Development. I'll be talking about interactive behavior in schools.

My name is Daniel Messinger from the University of Miami. So interactive behavior in schools is IBIS. Here's our little IBIS. Our IBIS is wearing some of that tech that Dwight was talking about.

These folks are our co-investigators, and I'll be introducing them over the course of the talk. One of them is me; you can guess which one. Dwight has given a great introduction to Ubisense and LENA. We use these vests, and what you're seeing here is kids moving around the classroom. That's what I want to talk to you about:

How we can harness these objective measures to understand how kids are interacting and how they're developing. Mostly in inclusion classrooms. So we're in the community.

We're really trying to do objective data acquisition and measurement using these devices. Here's a little guy wearing the vest. We use the vest rather than tee shirts. In the vest we have the LENA recorder and two of the Ubisense tags, and we'll use those to get orientation: where the kid is and which way the kid is facing.

Okay. So I was minding my own business. This is the story of how it all happened. I was minding my own business and Udo Rudolph over there on the left, says, "You should check out these cool devices. Here's how they work."

I've already measured a classroom full of kids, typically developing kindergartners, and you can see them bouncing around here. The red circles are girls and the blue ones are boys. I guess I said that kind of funny.

And it's kind of evocative: the red circles are hanging out more, maybe, but there's some mixing over there on the top left, where there are three blue boys and one girl. We needed help figuring out what to make of it, and we called on Chaoming Song, over there on the right of the slide. Chaoming is a Professor of Physics at the University of Miami. And he said, "Yeah, I have ways to analyze those kinds of data to better understand when children are actually in social contact."

Maybe these guys are close enough, given where everybody might be over the whole course of the observation, that we say they're in social contact, but these guys I'm not so sure about. I'll talk more about that. We applied that in three observations and made little social networks. And again, here are the girls and here are the boys.

So we kind of validated the objective measures, that tech, by showing these kinds of homophily with respect to gender, to some degree, in the classroom. We then moved on. We have an ongoing collaboration with the Debbie School, the Debbie Institute, at the University of Miami; that was one of the pictures you saw, I think in the third slide. One of their programs is a program for kids with hearing loss.

These kids with hearing loss, you can see them here in the legend, have a cochlear implant or a hearing aid, so they have some access to the speech signal. Oh, they stopped, but you can see them moving around, and remember that they're wearing two of those sensors. That's why each child is shown as a triangle: the triangle is telling us which way the child is facing.

So this guy is facing this way and this guy is facing this way, and these are kids with hearing loss. When they're filled in, it means they're vocalizing to one another. So what we're doing is synchronizing the location from Ubisense with the audio from LENA. And then here is what we call g(r), the Radial Distribution Function.

Remember, the kids are all moving around, so you could imagine all the possible distances between kids that would occur by chance. But if two kids are really close, let's say between zero and one and a half meters, more often than you'd expect by chance, that tells us something.

So you'll see these graphs; they always mean we're trying to figure out whether or not the children are really in social contact. We also said, well, they have to be in social contact distance-wise, radius-wise, but they also have to be facing each other, within 45 degrees of one another, more or less. We did this in this hearing loss classroom, and these methods inform the next two slides, which cover what we did with the data.
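The social-contact test just described can be sketched directly: two children count as in contact when they are within roughly 1.5 meters and each is facing toward the other within roughly 45 degrees. This is a minimal sketch under those assumptions; the positions and headings below are illustrative values, not real Ubisense data.

```python
import math

# Sketch of the social-contact test: close enough AND mutually facing.
# Thresholds (1.5 m, 45 degrees) follow the talk; all data here is
# hypothetical.

def in_social_contact(p1, heading1, p2, heading2,
                      max_dist=1.5, max_angle=45.0):
    """p1, p2: (x, y) in meters; headings in degrees (0 = +x axis)."""
    dx, dy = p2[0] - p1[0], p2[1] - p1[1]
    if math.hypot(dx, dy) > max_dist:
        return False
    bearing_1_to_2 = math.degrees(math.atan2(dy, dx))

    def angular_diff(a, b):
        # Smallest absolute difference between two angles, in degrees.
        return abs((a - b + 180) % 360 - 180)

    # Each child must be oriented toward the other within max_angle.
    facing1 = angular_diff(heading1, bearing_1_to_2) <= max_angle
    facing2 = angular_diff(heading2, bearing_1_to_2 + 180) <= max_angle
    return facing1 and facing2

print(in_social_contact((0, 0), 0, (1, 0), 180))  # True: close, face to face
print(in_social_contact((0, 0), 0, (1, 0), 0))    # False: facing same way
print(in_social_contact((0, 0), 0, (3, 0), 180))  # False: too far apart
```

Applied frame by frame to the synchronized Ubisense stream, a test like this yields the contact episodes that the g(r) analysis evaluates against chance.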

So first Dr. Lynn Perry, my colleague, our co-director of the IBIS project says, "Well, that social contact that you found, that's great. Maybe vocalizations are particularly salient for kids within that social contact. Let's look at all the times, one child talks to one of their classmates in one observation."

So the first time we did this, I think there were seven or 10 observations, at roughly weekly intervals, sometimes longer. It turns out that if child A speaks to child B this much in one observation, child B speaks to child A, because they're both in social contact, this much in the next observation; that's the surmise here. And you can see you get a pretty nice line there. This is a log scale, so it goes up by powers of 10.

We also find, with hearing loss in blue and typical hearing in red, no difference between them in the frequency of their vocalizations, but there's this cool pattern again, where how much I talk to my peer predicts how much my peer talks to me. It's a kind of dyadic process running through the classroom. Moreover, we divided the observations up into, I like to say semesters, but it's really the first half of observations and then the second. How much A speaks to B in the first half of the observations, let's say several months, predicts how much B speaks to A in the second half of the observations, which in turn predicts the expressive language ability of the kids. We used the PLS-5 at the end of the year.

So this is kind of cool: interactive peer speech is predicting language abilities. Yes, it is.

Next, Sam Mitsven, my graduate student in the IBIS project, asks: is there more we can do with that speech signal? She wasn't interested in social contact, so we'll forget social contact until the next project. For her, the key was phonemic complexity. My take is this: it's just a way to tell us something about the complexity of the child's speech, the number of unique English speech sounds present in each vocalization, on average. What she finds is that the adults', which really here means the teachers', mean number of unique phonemes is associated with, within the same observation, the complexity of the child's speech.
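That phonemic-diversity measure, the mean number of unique phonemes per vocalization, can be sketched as follows. The ARPAbet-style transcriptions here are hypothetical; in practice the phoneme labels would come from a speech processing pipeline rather than being hand-entered.

```python
# Sketch of the phonemic-diversity measure: mean count of unique
# phonemes per vocalization. Transcriptions below are hypothetical.

def mean_unique_phonemes(vocalizations):
    """Each vocalization is a list of phoneme labels."""
    if not vocalizations:
        return 0.0
    return sum(len(set(v)) for v in vocalizations) / len(vocalizations)

child_vocs = [
    ["B", "AA", "B", "AA"],   # 2 unique phonemes
    ["D", "AO", "G", "IY"],   # 4 unique phonemes
]
print(mean_unique_phonemes(child_vocs))  # 3.0
```

Computing this score for both the teacher's and the child's vocalizations within the same observation gives the two quantities whose association the talk describes.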

So you get this nice little association. One thing to note here is that typical hearing kids have more phonemically complex speech than do the kids with hearing loss, so we're seeing the difference in phonemic complexity there. Again, we ran a mediation: teacher phonemic diversity predicts child phonemic diversity, which predicts receptive language ability; it also predicted expressive, with exactly the same association. That association looks like that: we're taking the mean phonemic complexity of the vocalizations the child produced over the course of the year and using it to predict the assessed language at the end. So what we've learned is that both the frequency of children's talk with their peers and the complexity of their talk with teachers predict language ability. We then moved to autism inclusion classrooms; all the classes so far have been inclusion classrooms, except for that first typically developing kindergarten.

These are preschoolers between about three and five, with modal and mean age probably four. So now we're in ASD, autism spectrum disorder, classrooms; of course they also include other children, they're inclusion classrooms. We're looking at social contact again. What Chitra Banarjee, an undergraduate at the time working on the project, finds is real evidence for homophily, that is, similar kids tend to hang out in social contact. Here we're looking just at the movement data, just at the social contact. So you can see the TD kids hang out more with TD kids.

DD kids who didn't have ASD hang out with other DD kids who didn't have ASD, and ASD kids with other ASD kids. And they do that more than do the various combinations of what we call discordant dyads. So there's a lot of evidence for homophily there, in fact more evidence for homophily than for straight-up disability effects, which in some ways is, I think, encouraging news, although the interaction effect is such that these bars, the DD and the ASD, even in the homophilic, concordant dyads, were lower than the TD-TD. The other cool thing is she looked at social approach. I'm really taken by this movement data, as you can tell.

So we're asking: what kinds of kids approach each other? Here you see it with respect to ASD and TD. The green markers are just adults hanging out; they might be a teacher, or an observer making an activity log of the class. We can look at the top part of this graph, and the issue is that again we see homophily. Typically developing kids, down at the bottom, are more likely to approach other typically developing children at higher velocities, DD kids approach other DD kids at higher velocities, and children with ASD approach other children with ASD at higher velocities.

So we're seeing these robust effects for homophily in the preschool classroom by eligibility type; these come from the children's eligibility categories. Perhaps more so than we'd like to see, but that's why we use the objective measurements: to see what in fact is going on. Pulling in the vocalizations is Regina Fasano, one of Lynn Perry's students on the IBIS project.

What she does is make social networks out of when the kids are in social contact, you remember, while they're speaking to each other. It could be A to B or B to A; both of those count toward the frequency of vocalizations in social contact. And networks are cool because you can use them to characterize the entire classroom. That's really the idea, I think, of this objective measurement: you can see everything, or at least that's the hope. Maybe you don't see that well, that's for you to judge, but you see everything. You're not just focusing on one kid; you see all the kids.

So this is a social network. There are five classrooms, but I couldn't fit them here, so I scattered them around the slide. What we find is lower within-group modularity for the kids with ASD. That is, if you look at the blue kids in those networks, they speak less to other kids with ASD than you'd expect given the way speech is distributed throughout the classroom. This really plays on an observation by Laura Justice and her colleagues, in which they're saying the kids with ASD are excluded, or excluding themselves, from classroom language resources. You can really see that here. Now we're looking at the number of individual connections a child has, which comprises our measure of degree centrality. The key thing here is that it's still vocalizations in social contact to and from one's peers: that level of reciprocal vocalization again predicts the children's end-of-year assessed language on the PLS.
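The degree-centrality measure can be sketched as counting each child's distinct vocalization partners, assuming an edge exists whenever either child vocalizes to the other while in social contact. The child IDs and vocalization events below are hypothetical.

```python
# Sketch of degree centrality over a vocalization network: an edge
# between two children whenever either vocalizes to the other in
# social contact; degree = number of distinct partners.
# Events below are hypothetical.

from collections import defaultdict

voc_events = [  # (speaker, listener) vocalizations in social contact
    ("A", "B"), ("B", "A"), ("A", "C"), ("D", "A"), ("B", "C"),
]

partners = defaultdict(set)
for speaker, listener in voc_events:
    # Treat edges as undirected: A->B and B->A link the same pair.
    partners[speaker].add(listener)
    partners[listener].add(speaker)

degree = {child: len(p) for child, p in sorted(partners.items())}
print(degree)  # {'A': 3, 'B': 2, 'C': 2, 'D': 1}
```

Weighting each edge by vocalization counts instead of just partner counts would give the reciprocal-vocalization quantity described as predicting end-of-year language.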

Taking this forward, we're working on some future directions, some areas that I think will be important, and one of them is machine learning. Here we make no assumptions. We've got this cool video, which displays all the kids moving around. Let's take a look at this kid first, of course.

We always take a look at every kid, but let's take a look at this kid first. This is where this kid is. This is where all this kid's peers are.

This is how much this kid has vocalized in the last several minutes, and how much their peers have vocalized in the last several minutes; that's coming through the LENA vocalizations. Let's use that, with deep neural networks, to predict how much the child is going to vocalize in the next minute. And here we see that we do a pretty good job of predicting how much a child will vocalize in the next minute. These observations come from the hearing loss classrooms, but of course we could implement them in any classroom. The key thing here is we're not saying social contact versus not social contact. We're saying, well, speech may happen in different ways, maybe for kids with different diagnoses; let's learn from the data how we can best predict how much a child will speak, because we know that the amount of a child's speech is a potent predictor of their language abilities, and maybe also their language development.
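The prediction setup can be sketched as a windowing problem: features are the child's and the peers' vocalization counts over the last few minutes, and the target is the child's count in the next minute. A trivial moving-average baseline stands in here for the deep neural network, and the per-minute counts are hypothetical; the point is only the shape of the features and target.

```python
# Sketch of the next-minute vocalization prediction setup.
# Counts below are hypothetical per-minute vocalization tallies;
# a simple moving average stands in for the deep neural network.

def make_windows(child_counts, peer_counts, window=3):
    """Build (features, target) pairs: last `window` minutes -> next minute."""
    examples = []
    for t in range(window, len(child_counts)):
        features = child_counts[t - window:t] + peer_counts[t - window:t]
        examples.append((features, child_counts[t]))
    return examples

def baseline_predict(features, window=3):
    # Predict the mean of the child's own last `window` minutes.
    return sum(features[:window]) / window

child = [2, 4, 3, 5, 1, 6]
peers = [10, 8, 12, 9, 11, 7]
examples = make_windows(child, peers)
preds = [baseline_predict(f) for f, _ in examples]
print(preds)  # [3.0, 4.0, 3.0]
```

A learned model would replace `baseline_predict`, consuming the full feature vector (child and peer histories) rather than the child's history alone.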

So what is the IBIS? Now you can see that the IBIS is wearing their little vest, and it's got a little pocket for their tech. So the IBIS goes to school. What we're beginning to find is that this kind of objective measurement is scalable in inclusion classrooms, where we're currently in at least one classroom a day, four or five days a week.

We can use it to look at the classroom structure, to create those networks of how the kids are interacting, maybe throwing the teachers in there. It gives us a window into these dyadic relationships in which how much A speaks to B predicts how much B speaks to A, which of course, you guessed it, then predicts again how much A speaks to B. We're doing this in weekly or monthly observations, currently more monthly.

And fundamentally, we're trying to use this tech to get a sense of the interactive contributions to language development: how children's interactions in classrooms with peers and with teachers facilitate their growth, both for kids on more typical developmental trajectories and kids on more atypical developmental trajectories. So with that, lots of thanks; it takes a village. In this case, it takes two lab groups.

This is my lab group, and this is Lynn's lab group. It takes IES and other funders, including the Spencer Foundation; Tiffany will talk about that project next. It also takes resources from the Autism Science Foundation and the National Science Foundation, and currently we're being funded by NIDCD to look at classrooms with children with hearing loss. I'd love to talk to you about this; you saw my call for your collaboration and also your questions.

I can't wait. Let me now turn it over to Tiffany to talk about the Spencer Foundation project. -: Thank you, Daniel. My name is Tiffany Foster, and I'm from The Ohio State University. Today I'll be discussing an ongoing project that focuses on Using Sensing Technologies to Understand Longitudinal Peer Social Networks and Early Language Development. To begin, we know that the preschool years are a time of rapid language development, driven by the talk children are exposed to during interactions with others.

Child-directed talk has been found to be particularly important for the acquisition of language skills. With an increasing number of children spending large portions of their early childhood in the preschool classroom, more attention is being placed on understanding how the interactions children experience there contribute to their language development. Importantly, during the preschool years, children spend large portions of their time interacting with peers; by some estimates, children spend about half of their day in free play with peers. Due to the large quantities of time young children spend interacting with peers, researchers have started to consider the role these interactions may play in children's language development.

In the peer effects literature, studies tend to find that children who are in classrooms with peers with higher language skills, on average, tend to show greater language growth. A recent study also found that peer effects operate independently of teacher effects, and that peers may in fact have a greater influence on children's language growth than teachers during the preschool years. A limitation of this literature is that the mechanisms through which peers influence language development have not been thoroughly explored. One study by Chen and colleagues that aimed to address this limitation found that the language skills of the peers with whom children interacted most frequently had the greatest influence on language development. In other words, who a child interacts with matters. The social network approach can help researchers understand the relation between who children interact with and language development.

In a social network analysis, the links or ties among children in the classroom can be examined. The figure on the right of the slide shows an example visualization of a peer social network: the dots represent children and the links represent their social connections. The child indicated by the solid green dot would be considered to have a large social network, comprised of the peers within the circle, and the child indicated by the arrow would be considered an isolate, with a very small peer social network.

A child's position within the network is thought to have a direct influence on the child's experiences, through constraints and opportunities created by the network, which in turn can influence development. Concerning language development specifically, individual differences in peer interaction patterns likely shape language development, and a peer language network can be used to understand how language skill flows among the children in a classroom. Children who interact more frequently with peers with stronger language skills likely have more accelerated language development than children who interact with peers with lower language skills or children who are isolated from their peers. As I mentioned, some children are isolated from the classroom social network, meaning that they have access to fewer peer language resources than the more well-connected children.

As research highlights the importance of child-directed talk for supporting language development, these children may display slower language development as compared to their peers. Understanding who these children are, and how their position within the classroom network shapes their language development, has important implications for the educational practices teachers may adopt in the classroom to help peer language resources flow more successfully. Now I'll briefly discuss some methods that can be used to study social networks in preschool classrooms. Traditionally, researchers have relied on direct observations as the gold standard for understanding children's individual experiences in the classroom, and this approach may involve collecting and coding video or audio recordings or conducting in-person observations.

However, these observations have multiple limitations. They're often impractical and costly in terms of time and money. Observations can also be biased by an observer's personal perspectives and beliefs leading to potential difficulties accurately representing the social experiences of diverse groups of children.

Observations can also have difficulty accurately representing children's complex social interactions. Language development is often influenced by dynamic and fleeting interactions that provide input to children's developing language systems, which aren't always captured by observations. Furthermore, during in-person observations, typically only one child can be observed at a time, for short portions of the school day, leading to a loss of understanding of the interactions that are simultaneously occurring within the classroom. As we've been discussing, an emerging alternative to traditional observations is sensing technologies.

And these are, again, tools that can be embedded in classrooms to collect and analyze data about multiple students simultaneously. As we've seen during the session, two common sensing technologies are location tracking devices and audio recorders used in combination with speech processing algorithms that can code recordings for information related to the quality and quantity of speech that a speaker produces and is exposed to. Some of the benefits of sensing technologies include the ability to capture continuous and potentially real-time information about individual and dynamic classroom experiences and interactions. Sensing technologies can also help researchers collect more objective data about the experiences of diverse groups of students, which is necessary to more accurately understand how peers may differently influence the language development of different children. Recent studies employing sensing technologies support the importance of reaching a more accurate understanding of children's individualized and dynamic classroom experiences.

Using information on multiple children's location and orientation towards one another, researchers have created estimates of social contact among peers and generated a classroom social network. In another study, 680 hours of recorded audio files in one classroom were used to illustrate that the quantity of children's talk with peers over time related to growth in language skills. Other research has focused on refining technologies for use in the classroom. For example, algorithms have been developed to separate speech from background noise, which is important in noisy classroom environments. Software trained on child speech is also improving the accuracy of the speech quality and quantity information that can be extracted from recordings.

Now I will discuss an ongoing study of peer language networks in preschool classrooms that's employing the use of sensing technologies. The study focuses on the objective and continuous measurement of children's social and linguistic interactions with peers in the preschool classroom, at multiple time points over the course of the year, in order to reflect the changing dynamics of interactions within the classroom network. The overarching goal of the study is to precisely represent individual differences in preschoolers' linguistic experiences and identify salient predictors of these individual experiences. And more specifically, we plan to focus on identifying the social interactions that are most influential to preschoolers' language development. We aim to test the hypothesis that talk among children drives language development, and that there are significant individual differences in the quality and quantity of talk that children are exposed to.

Our goal is to have 30 classrooms, with a sample of about 500 children, and these classrooms will be recruited at three different sites: The Ohio State University, the University of Miami, and the University of Kansas. In each classroom, we aim to have active or passive consent for 90% or more of the children, in order to accurately represent the peer language network. Children's language skills will be measured at three time points, in the Fall, Winter, and Spring of the preschool year.

Teachers will rate children's functional communication skills on a four-point scale ranging from never to always, using the Descriptive Pragmatics Profile, or DPP. The DPP includes three subscales: nonverbal skills, conversational skills, and skills involving asking for, giving, and responding to information. Children's expressive communication skills will also be directly assessed using the Preschool Language Scale, Fifth Edition, or PLS-5. The sensing system we'll be using is called the Interaction Detection in Early Childhood Settings, or IDEAS, system. Children and teachers will be wearing small, non-invasive pouches, as shown in the top picture on this slide.

And the pouch will contain a small Bluetooth beacon and a voice recorder. The Bluetooth beacon will work in conjunction with Bluetooth antennas, as shown in the image on the bottom right. The antennas will be placed around the perimeter of the classroom to capture children's locations relative to the locations of their teachers and peers. And each beacon will be uniquely identifiable, so we'll be able to determine which peers and teachers a child spends the most time near. The voice recorders will collect the speech that individual children produce and are exposed to in the classroom.
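As a rough sketch of how beacon-derived location estimates might be turned into "time spent near each peer" measures, consider the toy function below. The data format, sampling interval, distance threshold, and function name are all hypothetical illustrations, not the actual IDEAS pipeline:

```python
from collections import defaultdict
from math import hypot

NEAR_METERS = 1.5   # hypothetical "near" threshold
SAMPLE_SEC = 5      # hypothetical interval between location estimates

def time_near_peers(positions):
    """positions: {timestamp: {child_id: (x, y)}} estimated from beacon data.
    Returns the seconds each pair of children spends within NEAR_METERS."""
    near = defaultdict(float)
    for t in sorted(positions):
        locs = positions[t]
        ids = sorted(locs)
        for i, a in enumerate(ids):
            for b in ids[i + 1:]:
                dist = hypot(locs[a][0] - locs[b][0], locs[a][1] - locs[b][1])
                if dist <= NEAR_METERS:
                    near[(a, b)] += SAMPLE_SEC  # credit one sampling interval
    return dict(near)

positions = {
    0: {"c1": (0.0, 0.0), "c2": (1.0, 0.0), "c3": (5.0, 5.0)},
    5: {"c1": (0.0, 0.0), "c2": (0.5, 0.0), "c3": (5.0, 5.0)},
}
print(time_near_peers(positions))  # {('c1', 'c2'): 10.0}
```

In practice, a real system would also have to smooth noisy position estimates and decide how to treat gaps in the beacon signal, which this sketch ignores.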

Used in combination, the location and talk data will allow us to estimate who children talk to and how often, as well as the quality and quantity of that talk, including the total number of words, the total number of utterances, and the lexical diversity, or the total number of different words, to determine what is said and when. Open-source and pay-as-you-go automatic speech recognition software will be used to segment and automatically transcribe the recordings, and to identify the speaker of each utterance, clustering algorithms will be used to label each utterance. So, for example, we'll be able to tell if it's a child or a teacher. The sensing systems can also capture information about interactions among dyads, as well as larger groups of children and teachers who may be interacting simultaneously, based on the location data. Interactions are considered to be occurring if a child is near and oriented toward a peer or a group of peers. And the data from the audio recorders can then be used to determine whether verbal interactions are occurring within the group of peers.
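A minimal sketch of the quantity measures just described, assuming the transcripts arrive as plain utterance strings per speaker. The function name and the simple whitespace tokenization are illustrative assumptions only, not the study's actual processing pipeline:

```python
def talk_metrics(utterances):
    """utterances: list of transcribed utterance strings for one speaker.
    Returns total utterances, total words, and lexical diversity
    measured as the number of different words."""
    words = [w.lower().strip(".,!?") for u in utterances for w in u.split()]
    return {
        "utterances": len(utterances),
        "total_words": len(words),
        "different_words": len(set(words)),  # lexical diversity
    }

m = talk_metrics(["Look at the big dog", "The dog runs"])
print(m)  # {'utterances': 2, 'total_words': 8, 'different_words': 6}
```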

Traditional in-person observations will also be conducted, using the Social Network Observation coding scheme, or SNO. A child will be randomly selected from each classroom to be the focal child of the observation. The child will be observed for 30 minutes during free play. The SNO captures information about whether a child is communicating with peers, the type of interaction that's occurring, such as cooperation or rough-and-tumble play, and whether the interactions being experienced are positive or negative. These observations will help us understand more about the validity of the IDEAS system when it's used at scale. Finally, teachers will rate children's peer interactions on the Teacher-Rated Peer Interactions measure, or TRPI, using a five-point scale ranging from never to always.

And the image on the slide illustrates how each pair of children will receive a rating, and both play and conflict interactions will be rated for each pair. A few additional measures will be collected throughout the study, including demographic information about the children, their caregivers, and teachers. Teachers will report information about their classrooms, such as the number of boys and girls, and teachers will also complete the Teacher-Child Rating Scale, which focuses on positive and negative aspects of a child's social-emotional school adjustment, and the Student-Teacher Relationship Scale, which focuses on teacher perceptions of their relationships with individual children.

This slide provides an overview of a simplified study timeline. In the fall of 2021 we're taking baseline measures of language skills and teachers will also rate children's peer interactions. Starting in the winter, sensing technology observations will be occurring for two days per week at four to eight different time points and observations will be conducted for the full school day. The in-person observations will occur on six randomly selected technology observation days for 30 minutes. And then in the spring of 2022, outcome data on the language measures will be collected as well as another teacher rating of the peer interactions.

Using the data from the IDEAS system, we will be able to create a peer language network, with the direct talk to or from individual children as the ties linking children within the network together; teacher talk will also be accounted for. Approaches for longitudinal social network modeling and evaluating changes in networks over time will allow us to represent the social interaction groups that children are part of and how these groups influence language development, taking into account the evolution of the network over the course of the year. Considering our future analyses, at the classroom level the social network indices we can consider include network density, which represents the average language input associated with each child in a peer language network. Network centralization, which represents the degree of unequal distribution of language input among children in a peer language network.

And network assortativity, which represents children's tendency to associate with similar others. At the individual level, we can represent each child's weighted peer language resources. Differences based on child characteristics, such as gender, baseline language skills, and social skills, will also be examined. And to conclude with a few implications of this work: one of the long-term implications is the development of technology-mediated instructional tools that will allow teachers to monitor and better understand children's interactions in preschool settings. With sensing technologies embedded in the classroom, teachers could have access to real-time information about children's daily peer experiences. For example, the technology may show that a child is being excluded from the classroom social network and has access to limited peer language resources, signaling the need for the teacher to intervene, and such intervention could potentially improve the child's classroom experiences and developmental outcomes.
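Returning to the classroom-level indices described a moment ago, a toy sketch of weighted degree, density, and a simple degree-based centralization over a talk network might look like the following. These are simplified, textbook-style definitions on a hypothetical edge-weight format, not necessarily the exact formulas the study will use:

```python
def degree(net, node):
    """Weighted degree: total talk tied to one child."""
    return sum(w for (a, b), w in net.items() if node in (a, b))

def density(net, children):
    """Average language input per child: each tie's weight counts for
    both children it touches, divided by the number of children."""
    return sum(net.values()) * 2 / len(children)

def centralization(net, children):
    """Degree-based centralization: total gap between the most-connected
    child and every other child (0 means input is perfectly equal)."""
    degs = [degree(net, c) for c in children]
    return sum(max(degs) - d for d in degs)

children = ["c1", "c2", "c3"]
net = {("c1", "c2"): 40, ("c1", "c3"): 10}  # hypothetical word counts per pair
print(density(net, children))        # average input per child
print(centralization(net, children))
```

In this toy network, c1 receives far more talk than c3, so centralization is well above zero, flagging an unequal distribution of language input.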

Sensing technologies also have the potential to help researchers collect more objective data, to reach a better understanding of the language development of diverse groups of children whose social experiences in the classroom can vary widely. We view children's spontaneous language within the classroom context to be a strong, positive input to other children's development. Spontaneous language captured using sensing technologies can reveal strengths in children that are not evident on more de-contextualized tasks, which may be biased against children from minoritized backgrounds.

We also acknowledge the importance of not relying exclusively on the use of sensing technologies to reduce bias in our research, which is something we will consider as we move forward with our work. So that concludes my presentation. Thank you for your attention, and thank you to the investigators on this project. If you have questions after this session, you can contact me, and now we will move on to the live Q and A portion of the session. Thank you for joining us for the question and answer part of this panel. I'm Laura Justice, and I am going to serve as the moderator.

I'm encouraging you to jump in and ask questions about the work shared with you, and we can do it via the chat. So we have a question from Sarah; the question is about language environments and dual language learners in early education settings. And so I'm curious — Dwight, Tiffany, Daniel — does any of your work look at the dual language learner in the classroom environment, and their interactions with others and language exposure? So, you could go.

One of my dissertation papers looked at the child's DLL status as a moderator and found some evidence that, in some instances, DLL children appear to benefit more from exposure to peers with higher language skills, in terms of their own English language development specifically — and this was looking at the peers' English language skill. And I think you were going to jump in. So, that sounds like an interesting result, Tiffany.

Yeah, so we're in Miami, so we have plenty of dual language learners, and it is an interesting tapestry of languages. And I guess we're quite interested in investigating these effects; we don't have answers yet, and I'm not even sure how to best phrase the questions yet. It would be nice if we could distinguish languages from children's individual recorders automatically, so we knew whether they were hearing, for example, Spanish or English, and speaking Spanish or English. And we're at work on that problem. Right. It's interesting. I'm wondering — this is such an interesting thread, thinking about young kids who are bilingual in an early education setting. The other children — are some of them maybe modeling well in Spanish, modeling well in English, or very emergent?

Ideally. I'm curious about the technology itself and how far the technology is in terms of contending with a multilingual environment. So for instance, Daniel, Dwight, when you're using the LENA recording devices or the new IDEAS system, is it language-general? Can it capture, say, exposure to complex language and differentiate across languages? — So the recorders, be they LENA recorders or, for example, the Sony recorders we're using in the new sensor project, are just recorders. And then we submit them to whatever software seems, you know, most up to the task. LENA should be language-agnostic; it shouldn't really care which language. There were questions raised about that by the folks who introduced the new technology that goes by the name of ALICE.

But I think that was more of a kind of Scandinavian-language question, more about adult speech. And then there is technology, for example from Google, to distinguish what language is being spoken. But I don't know. I don't know its reliability for, you know, the noisy classroom environments in which we work. Sorry.

Oh no, no, I think it's a really interesting question. I don't know of folks who have done that work. And, yeah, I would have to talk to my speech processing colleagues, I think, to ask about whether that is possible. I mean, I know they've looked at multiple languages to see accuracy in terms of the algorithms to process the speech, but whether it distinguishes a language — it seems possible. But I think I read that ALICE paper you're referring to, Daniel, and if anybody wants to shoot me an email, I think I have it handy; the ALICE technology is processing language, outputting the metrics of language that we all discussed in the presentation, and I do recall that it is language-independent. So there was a question that I wanted to put out there from Katie Zimmerman, and she asked about expanding on the technological and system requirements. And I think, Tiffany, we have a new collaboration across these three research groups — us at OSU, Dwight at Kansas, and Daniel in Florida.

And we have the new system Tiffany talked about, called IDEAS, and we work with wonderful engineers. I'll get us started, and then any one of you can jump in and talk a little bit more about the technology. We're using microphones, right, Daniel, that kids are wearing — I don't think they're terribly expensive. And then we have beacons set up in the room to track movement, and the recorders capture kids' own voices as well as incoming talk from others, and we have a pipeline that processes that talk, both one's own talk as well as incoming talk — a language processing system. And then you get the output of such things as how much a child is spoken to, things like the lexical diversity of that incoming talk and even outgoing talk, and there's no manual work whatsoever. The only manual work — I know what you're getting at. I mean, the way we've done this work, as we said: we videotape, we transcribe, it's a nightmare, it takes 92 years. And so what we're doing now is that that whole manual element is completely gone — it's totally automatized — and the only manual work that we do is to calibrate and validate the system.

Daniel, Dwight, Tiffany, do you want to add to that anything more specific? — So, the goal is definitely to use kind of off-the-shelf sensors. You have different choices to make in terms of the sensors. So one that we've been talking about is whether to use a LENA recorder, which is like two or 300 bucks, or what is in fact a technically superior, for example, Sony recorder.

I think those are about 100 bucks. So, in future projects, and in the IDEAS project, we're going for the Sonys. You know, the Sonys weren't built for the LENA algorithm, so that's what occasioned, I think, the choices to use LENA in the past. And then you have choices with respect to how you're going to track children's movements. So one choice is Ubisense ultra-wideband radio frequency identification. That's very cool, in that it tells you where everybody is all the time in the whole space, but it costs a lot.

A more economical solution is using Bluetooth-based badges, which tell you when two people are in contact, but not necessarily where they are all the time. So there are kind of trade-offs along the way. But I put this in the chat, and I think the view of all the investigators — and I think all of us — is that we want something that's pretty turnkey, right, so you can more or less kind of buy it and, with the right personnel, implement it and get automated data collection.
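The Bluetooth-badge approach described here yields streams of detection pings rather than continuous positions, so a common first processing step is merging pings into contact episodes. The sketch below is a hypothetical illustration of that idea (the ping format and gap threshold are made up, not any badge vendor's actual API):

```python
def contact_episodes(pings, gap=10):
    """pings: sorted timestamps (seconds) at which one badge detected another.
    Merges pings separated by <= gap seconds into (start, end) episodes."""
    episodes = []
    for t in pings:
        if episodes and t - episodes[-1][1] <= gap:
            episodes[-1][1] = t          # extend the current episode
        else:
            episodes.append([t, t])      # start a new episode
    return [tuple(e) for e in episodes]

print(contact_episodes([0, 5, 8, 40, 45]))  # [(0, 8), (40, 45)]
```

The `gap` parameter absorbs brief dropouts in the radio signal; choosing it too large merges separate encounters, too small splits one encounter into many.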

I love this question from Mike. Dwight, I'm really interested in you perhaps expanding on the work you've done on playgrounds here. So the question is: can you speak to potential instructional activities and behaviors that can support peer interactions, or the potential benefit of peer-mediated interventions? Daniel and Dwight, you both have done some really interesting work looking at context, sort of cross-sectionally, with some of the language environment work we've been doing. Can either of you — and Tiffany, of course, you can always jump in on this too, especially since we're testing a peer-mediated intervention here at Ohio State. So, all of you. — Yeah, I think one of the exciting things about this sensing equipment is that it does allow us to look at peer-mediated interventions, on the playground and in the classroom, in ways that we haven't been able to previously — to see how kids are interacting continuously, across multiple days, across multiple peers, in a simultaneous manner. I haven't looked at any kind of evaluation of those interventions; up to this point it's really just been descriptive, in terms of trying to get at kids' interactions within the space and the specific features of the space, whether that's pretend play or, you know, out on the playground, on the slide, these kinds of things. But I think the potential is there for that.

And it does seem, particularly with some of the equipment we're currently using, that it's scalable — that we could start to think about setting up sensors in and outside of the classroom, and be able to provide teachers with feedback on the level of interactions that the children are having. So during a peer-mediated intervention, we could basically see how effective it is as it's ongoing, and afterwards see how sustainable it is and whether those interactions continue when it's done. Also, with these sensing technologies providing us with so much individual-level data, going forward, as these technologies become more affordable and accessible, I think it'll be interesting to be able to address questions of how these types of interventions might work differently for individual kids — again, the question of why some interventions appear to work really well for some kids and not so great for others, questions along those lines. And even, as this data is available in real time, whether adjustments can be made to improve an intervention for a specific kid — I guess that would be more at a practical level, like for teachers, being able to use this technology in real time and make adjustments. But I think it just opens up so many new questions for us.

I mean, I've done a little bit of work — Daniel's done this too — looking at children's friendships. We can start to look at the time that children are spending together and the talk that they have, and how that develops — you know, how it moves them from being playmates to friends. And this same idea could be used, I think, when we're thinking about peer-mediated interventions and how those interventions are supporting, I guess, more meaningful, stronger relationships among kids. There's a good question here from Adrian. — Can I jump in? I didn't ask, I'm just doing it, no permission needed. So, from Adrian — I hope I'm pronouncing that right — has anybody found anything about an upper limit on the number of participants? I think so far we're well within the upper limit. There are some technical limitations — here's a funny little technical limitation: because of where the Ubisense sensors are placed, we're better at knowing where you are in the classroom, horizontally, than knowing how up and down you are — your vertical location, your height. But that's not what you're asking.

Yeah, it seems like we're fine in terms of number of students. And I think there are different approaches to this issue of consent. At Miami, if you don't consent, you don't wear a vest, but we might hear your vocalizations. That's true. And we don't know where you are. So if a child doesn't want to wear their vest — for example, just saying, on Monday before breakfast — then they don't wear the vest until they want to wear the vest.

Yeah, I would say I totally agree with Daniel in terms of the number limits. It's dependent — I mean, how many sensors you need to set up is dependent on the space. So, like, on the playground, if we want to cover larger spaces — we have bigger playgrounds, and they all tend to vary, which makes it a little difficult; it requires more sensors. But in terms of the number of individuals that can wear the tags, I don't think there are limits. I mean, at least the Ubisense system that I've worked with in the past is pretty robust, and it's developed for industry, so they're using it to track, like, planes and cars in storage yards, and the movement of people in factories. And so it's able to handle a lot of individuals wearing the tags.

And I haven't heard of any constraints with the real-time location system that we're working with now. — So, some really good questions about the logistics of doing this. I will start with the question about kids who are wearing the vest, not wearing the vest. Tiffany, have you been out in the field collecting data right now? — We've been piloting the sensing system, but not yet. It's really unobtrusive. You know, we use t-shirts, we use vests; you can even put these on necklaces or whatever, so it's pretty unobtrusive, and otherwise I don't get the sense it's pretty obvious who's wearing the technology. Dwight, has that been your case at Kansas? — I'm sorry, say it again? I was looking at the chat, I was reading the question, and I shouldn't have been doing that.

So, with the work that we're doing right now, what are the kids wearing — a vest or a t-shirt? I know we went back and forth. — I think we just have, it's like a strap, kind of — it's a holder, and it goes around their shoulder, and that's what we're thinking we'll be using. We have used the vests and shirts in the past, and that's been totally fine. — Yeah. I mean, it was just a couple years ago that we were strapping GoPros on kids' hats, and we wanted to get away from that. You know, kids wearing GoPros and kids who aren't probably did affect lots of things in the classroom, but with the technologies we're working on, we really have spent a lot of time on feasibility, for lots of reasons.

I mean, it's a pretty big ask of teachers to say we want to come in and put all these sensors in classrooms, so we've really spent a lot of time on this social validity element. I did want to comment on the consent issue, because we've been very successful at Ohio State. We've been doing sensor-supported social network work for a couple years now, and for social network work in a classroom, you really need to represent the whole classroom, if at all possible. So if you've got 12 kids in the classroom, you really want to have participation of all 12 kids, or you're somehow biasing, you know, the representation of that social network. And so we have had a lot of luck with passive consent in the last couple years. And so what that means is, we'll get active consent, maybe, for, you know, 10 of the 15 kids in the classroom, and our human subjects panel will let us use passive consent, where we will say to the parents: if you don't return this consent form, we're still going to collect certain data on your child passively. And I think KU, you're following in our footsteps around the passive consent for the social network data.

That's right, yeah. Um, related to that — the biggest problem I've had with the shirts in the past is that I don't have all the sizes for kids. So we'll get the classroom outfitted, but then we don't have one, or it's dirty, or it's needed in another classroom. — So, someone asked a question about bandwidth requirements, and do we ever have to supplement connectivity? I will say that all of our work in Ohio with the technology has largely been in central Ohio, more urban settings. Daniel, Dwight, have you had any issues with connectivity or bandwidth? — With Ubisense, it's not connected — we're not connected to the internet, so that's not an issue. I mean, it's connected to a laptop, and that's connected to the Power over Ethernet switch that then links up to the sensors, and the data just comes into the laptop, and then we go and pick it up.

I haven't seen any, or heard about any, bandwidth issues. So I think it seems to be working well in that regard — but, you know, that's based on our testing. — Yeah, so the Ubisense is like its own little LAN, but it has cables attaching the four sensors, or however many sensors are in the classroom; if it's a bigger classroom, or a classroom with kind of two rooms, we have some of those.

So the cool thing is you don't need the internet. The bad thing is, you do have to connect those cables correctly. And then — this is funny — we have had bandwidth issues. So the system itself doesn't require it; it's not talking to the outside world while collecting data. But we take a lot of time to synchronize our devices, and to do that we log into, like, time.gov or something.

And sometimes the team can't log into time.gov, because at some of these schools, like, the internet is not what it should be. — Yeah, the clocks are really important when working with multiple devices and trying to sync them up.
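The synchronization step described here boils down to measuring each device's clock offset against a reference (like a time.gov check) and shifting its timestamps onto a common timeline. A minimal sketch — the numbers and function names are made up for illustration, not the project's actual procedure:

```python
def clock_offset(device_time, reference_time):
    """Offset of a device clock relative to a reference reading;
    positive means the device clock runs ahead."""
    return device_time - reference_time

def align(timestamps, offset):
    """Map device timestamps onto the reference timeline."""
    return [t - offset for t in timestamps]

# Hypothetical: a recorder reads 1000.5 when the reference reads 1000.0
off = clock_offset(1000.5, 1000.0)
print(align([1010.5, 1020.5], off))  # [1010.0, 1020.0]
```

A constant offset like this ignores clock drift over a school day; a fuller approach would check the offset at both the start and end of a session and interpolate.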

Are there other questions?

2022-03-11 02:08
