Empowering individuals through AI tools, technologies, and custom apps | Chris Engelhardt @ Gen Re



Welcome back to the Data Science Hangout, everybody. I'm Rachel Dempsey. I lead customer marketing here at Posit, and I'm one of the cohosts of the hangout. I would love to have Libby introduce herself as well.

Hello, everybody. I'm Libby. I am a cohost here with Rachel, and I'm also a Posit Academy mentor for R and Python. We're so happy to have you joining us today. If this is your first time joining us, the Hangout is our open space to hear what's going on in the world of data across different industries, chat a bit about data science leadership, and connect with others who are facing similar things as you. And so we get together here every Thursday at the same time, same place.

So if you're watching this as a recording in the future and you want to join us live, there will be details to add it to your own calendar below. We're all dedicated to keeping this the friendly and welcoming space that you all have made it. So thank you for that. And we love hearing from you no matter your years of experience, titles, industry, or languages that you work in.

And I like to add this now because I know people really enjoy connecting with other Hangout attendees in the chat. So I wanna encourage you to briefly introduce yourself, say hi, maybe include your role, where you're based, something you do for fun, or even share your LinkedIn in the chat to connect with others. And you can use that chat to also share open roles if your team's hiring. I know I have a few that I'll have to go grab to put in the chat in a bit. And there are three ways to jump in and ask questions today or to just share your own perspective or experience with a topic that we're talking about. You can raise your hand on Zoom.

You can always click that little button. We can call on you to jump in live and ask your question. You can put your question in the Zoom chat. And if you can't unmute, maybe you're in a loud coffee shop or something, just put a little asterisk next to your question, and Rachel or I will read it for you. And we also have a Slido link that is gonna be in the chat.

Isabella just put it in there. If you wanna ask a question anonymously, feel free to do that. And we are so excited to be joined by our other cohost for the day, Chris Engelhardt, who is the senior data and AI operations manager at Gen Re. And, Chris, I'd love to have you get us started by telling us a little bit about your role, a little bit about Gen Re, but also something you like to do for fun. Sure.

Well, thank you, Rachel, and thanks, Libby, for hosting and for inviting me to speak today. I could not be more excited to be here at the Posit Data Science Hangout. As Rachel mentioned, I'm Chris Engelhardt. I'm a senior data and AI operations manager here at Gen Re. I work in global IT, which is the first time I've ever worked in kind of an IT environment. I've always been more on the business side, but now I have the opportunity to focus more in an IT area where I lead a team of engineers, data scientists, full stack developers, architects, and analysts to deliver on a variety of data science and AI projects and initiatives.

And, thankfully, I also have the opportunity to oversee and govern the Posit professional stack. So I oversee Posit Workbench, Posit Connect, and Posit Package Manager as part of our global deployment pattern. To give a little bit of a sense of my background as well, I have a PhD in social and personality psychology where I studied the correlates and consequences of violent game exposure with my good buddy who's on the call here today as well, Joe Hilgard. And one of the coolest topics that I studied was whether adults with autism spectrum disorder are differentially affected by violent game content compared with typically developing individuals. And the take-home message from that work is that they're not, nor are they affected by violent game content compared with nonviolent game content. Following academia, I worked as a lead data scientist at CARFAX where I managed and governed, once again, the Posit products as well as the infrastructure to deliver on one of their flagship products, History-Based Value.

So you may have seen some of those commercials on CARFAX where the Car Fox, like, boots a bumper off a car and says, why should you expect to pay more for this vehicle when you can see it has a rich accident history? So I was involved in the infrastructure and support of that product as well. I've also worked at Farmers Insurance where I wrote the R at Farmers book and got R formally recognized and supported by security, which, if you've ever had the fortunate opportunity to do that at your company, you know can be quite a tall task, because it's easier for security to say no than it is for security to say yes. And I also managed their telematics program there where we looked at predicting losses as a function of actual driving behaviors like distracted driving, for example, whether or not people are texting or swiping while they're driving.

And, Rachel, for fun, I enjoy spending time with my family. And I enjoy hanging out with our two boys, one of whom is ten, the other is four. And they are very active in baseball and soccer.

And I have the amazing opportunity to be their dad and to help them be quality people in this world who are empathetic and care about people and, hopefully, will make some positive contributions in the world one day. So that's a little bit about me. And, again, I'm really excited to meet with you all today and talk about data science and AI or anything else that might be on your mind. I love that. Thank you so much, Chris, for taking the time to join us today, but also thank you for all that you've done for the community and all the different organizations you've worked with that you've brought Posit to as well.

I was thinking, to get us started and to learn a little bit more about your role today, it might be helpful if you could tell us a little bit about how you're developing a vision for AI initiatives at Gen Re and also how you're starting to get people engaged in that. Mhmm. Yeah. That's a good question, Rachel. So we've had a lot of focus on vision and how we think about deploying and managing AI applications across our organization. And one of the ways that we think about that is to incorporate two facets within that.

One is obviously a deep and concerted focus on safety to really make sure that, for example, we're using AI in a way that doesn't lead to unintended consequences, and we also ensure that there is always human agency in the loop. So while we might look to use AI to facilitate a decision point, what we're not here to do is to use AI to make a decision. Excuse me. And so that's a little bit about the vision and how we think about AI more generally here at Gen Re.

And can you repeat the second question, Rachel, please? Was it around how we do the meetups and startups? Yeah. Well, it was also a little bit about how you've gotten people engaged in the new vision. Yeah. So what we do, through the role in global IT, is we also have a deep focus on enablement and self-service data science and AI capabilities.

So within our team, we do a lot of work around standing up custom AI services, in particular in the Microsoft Azure cloud. And what we do is we make those services available through a variety of means, including through Posit Workbench. So we have two packages that are internal to Gen Re, written in R and Python, to enable access to a variety of those AI services. We're really looking to enable the business with modern AI tools and technologies to help them get started on how we start to think about reimagining business workflows with AI at the center of that. And it's obviously a journey.

And the way that we've started to think about that is with some very tractable challenges that we think, and have found, generative AI seems to deliver on. Thanks, Chris. So I know in the Hangout there's been a lot of questions around AI, but I think it's rare that we get to actually hear about an actual use case and a few examples of how it's being used today.

Would you be able to share a specific example with us? Sure. Absolutely. So the one that we have is the one that we started with. It's now running in production, and I'm very happy to talk about that today. So, again, we started with a very tractable problem that really looked to use generative AI tools and technologies where they excel, which is mostly around generating content, but also on their understanding of semantics and language. And the use case that we worked on and the challenge that we were looking to solve was that our security compliance team receives a lot of questionnaires from external clients, mostly around our security posture, so things around encryption, etcetera.

And what they do is they receive those incoming questions, and then they'll respond to them by hand each time, or historically, that's what they have done. And so kind of over time, they've kind of accrued, you know, the set of question and answer pairs. And so what we did was we took that history of legally approved question and answer pairs, and we put them in what's called a knowledge base. It's really just kind of a list of questions and answers matched to one another.

And what we did from there was we took each of those question and answer pairs, and we embedded them. So we used a large language model to essentially convert that text to a vector of floating point numbers. So we're essentially just representing text with numbers. We did that for all of the question and answer pairs. And so then when the security compliance team gets a new question that comes in the door, what we do is we embed that question as well with the same embedding model, and then we see which vector or vectors that one is most similar to. And so what we do is we see, okay.

You know, we see that this new question is most similar to these ten other question and answer pairs that we've seen in the past. And what we do from there is we send that information off to a GPT model along with some prompting and have the GPT model reason about which one it thinks is the best answer. And so this is a very tractable problem and I think a very good one to get started with.
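For readers who want to see the shape of that retrieval step in code, here is a minimal sketch in R. It is not Gen Re's implementation: embed_text() and ask_gpt() are hypothetical stand-ins for whatever wrapper your organization uses around an embedding model and a chat model, and the sample questions are invented.

```r
# Hypothetical sketch of the retrieval-augmented workflow described above.
# embed_text() and ask_gpt() are placeholder helpers, not real package functions.
library(purrr)

# Knowledge base of previously approved question/answer pairs.
knowledge_base <- data.frame(
  question = c("Do you encrypt data at rest?",
               "How often do you run penetration tests?"),
  answer   = c("Yes, all storage tiers are encrypted at rest.",
               "At least annually, by an external firm.")
)

# Embed every historical question once (each becomes a numeric vector).
kb_embeddings <- map(knowledge_base$question, embed_text)

cosine_sim <- function(a, b) sum(a * b) / (sqrt(sum(a^2)) * sqrt(sum(b^2)))

suggest_answer <- function(new_question, top_k = 10) {
  # Embed the incoming question with the same model, then rank by similarity.
  q_vec <- embed_text(new_question)
  sims  <- map_dbl(kb_embeddings, ~ cosine_sim(.x, q_vec))
  top   <- head(order(sims, decreasing = TRUE), top_k)

  # Hand the most similar pairs to a GPT model and let it reason about which
  # answer fits best; a human still reviews whatever comes back.
  context <- paste0("Q: ", knowledge_base$question[top],
                    "\nA: ", knowledge_base$answer[top], collapse = "\n\n")
  ask_gpt(paste("Given these approved Q&A pairs:\n\n", context,
                "\n\nSuggest the best answer to this new question:\n", new_question))
}
```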

It was relatively simple from a technology perspective, and we were also quite confident that we could deliver on it given our understanding of what some of these AI tools and technologies could do. And I think it's great to get started in that way in part because you want to show early success. And that's what we were able to do. Since then, we've had really good customer feedback on that product delivery, and it has also opened up the doors for us with other teams across the firm. So all of this information is publicly available, so I'm able to talk about that particular use case today.

But, yeah, that was kind of the first foot in the door, Rachel, for our AI development efforts here at Gen Re. Awesome. Thank you. Well, speaking of teams across Gen Re, we have a question from Nikita. Nikita, would you like to ask it? I have a question about how you manage international teams, especially working on data science programs. It's a little bit hard to hear you, so I'll repeat it as well. There you go. Try again.

Yeah. Yeah. I have a mic. Oh, a nice microphone. Okay. How do you manage international teams working on data science projects? That's a great question, Nikita. And it's challenging, right? Because we have people spanning multiple countries, multiple time zones.

We have one person who is in San Diego, California, and we have other people who are located in the UK, the Eastern US, Hungary, and India. We are geospatially distributed, which does force us to have a very concerted effort around how we effectively work together asynchronously. And so one of the ways that I do that is I try to stack my morning meetings with folks who are in India and Europe, as an example, and try to save the afternoons for folks who may be more US based.

But it is definitely a challenge to do that, because we try to be respectful of everyone's time and where they're physically located and try not to put too many demands on them from that perspective, because everybody needs to rest and sleep. Right? And I think we've been pretty effective at that because we're very intentional with when we have meetings, when we get all the developers together at a reasonable time that is a good fit for all parties involved. Perhaps not an ideal fit, but it's making accommodations from time to time to facilitate that global workforce. Thank you. I see Neil has a question in the chat, and there's a little asterisk next to it, so I'll read it.

But it's: if there was one AI-related skill you would recommend analysts learn, what would it be? One AI-related skill analysts should learn. I guess it would be to learn more about the capabilities and limitations, because I think what often happens is, in business, people see new shiny tools and technologies and they immediately pivot to, this is exactly what we need. We need AI here yesterday. And I think in some cases companies may not have data in a position where it is ideal to send off to a GPT model, for example.

There might be other preprocessing steps that need to happen. But I think in general it's learning more about the limitations and also where, say, a generative AI model excels. So it's mostly around generating content, and it can help you facilitate having a conversation with a body of knowledge, which is another way that I tend to think about generative AI models. And seeing what kind of problem spaces these models excel in. And I think it's really just around practicing with them and getting an understanding of what they can do and what they can't do to help inform, okay.

Now that I have an understanding, what business challenge can I maybe apply this to? That's how I think about it. Thank you. I see Russ in the chat. You have a question that builds on Neil's question if you wanna go next.

Yeah. Thanks. I think Neil's question is really good. My interest is on the other side. What would be a really good people skill or communication skill that you would recommend people learn? Empathy, I think, is a hallmark of a well-functioning team. And I think being able to put yourself in the shoes of somebody else and understand how they might be thinking, how they might be feeling, or why they're behaving in certain ways, and to not make certain attributions about surface-level behaviors. For example, if somebody's running late for a meeting, there could be dozens of explanations for that.

Right? We could make some internal attribution about that person. Well, this person is habitually late. I can't depend on them. They're unreliable.

They're not conscientious. Whereas maybe something external happened that isn't something internal about that person. And so I think it's empathy, I would say, and I think that is a critical skill for teamwork in general. And I think that is something that AI may not ever be able to replace, and there's something uniquely human about that. And so in terms of the human experience, and to pick one communication dimension to focus on, for me it'd be empathy. Sounds like Russ agrees.

He said he's a professional counselor in a prior career, and I agree too. I have a question for you, Chris. Sure. Mhmm.

How do you build a vision for your company around these AI initiatives, and how do you get buy-in from both the leadership and from the technical teams that will be tackling these? Mhmm. Yeah. Good question. So I think that what we try to do is to show early success with them. And that helps get buy-in. Maybe we're gonna start with standing up some initial tools and technologies that are just out there for people to use. Right? So, for example, we have Azure OpenAI here, which we make available through AI Studio, Azure AI Studio.

And what people are able to do with that is to go in and experiment and see what the capabilities are. And, again, learn about, how do I think about prompting? How do I think about system prompts? How do I think about customizing some of the parameters that can control the outputs from some of these models? So I think it's largely about showing initial success and then taking that on the road. But I think critically important to that is having executive sponsorship for it, because it has to come, in my mind, in large part from senior leadership.

So someone who recognizes the value of what generative AI models can do and then also discusses more broadly across the organization some of the use cases that they perceive they have where AI could be relevant. And in some cases they are relevant, but in some cases they're not. And what we try to do is to peel off the ones that we think would have an impact on revenue or a large impact on efficiency and think about prioritization of those AI projects in that way. Thank you. Kylie, I see you have a question in the chat. Do you wanna jump in here? Sure. Yeah.

So, of course, I'm sort of after picking your brain a little bit, because I've recognized that AI is being increasingly used. But I struggle to understand exactly what AI is and also to think about how it might be implemented in my field and others' fields. So, I work in infectious disease modeling. Mhmm. And so we're all familiar with everything that happened during COVID.

Mhmm. And I just kind of think about the future and the next pandemic and sort of wanna brainstorm a little bit about how we could use AI to make the responses better or just improve our modeling, improve our responses. But it's a bit overwhelming to figure out how to start because there's all these different models out there. There's ChatGPT.

There's other versions that are better for certain things. And so I guess my question to you is just where's a good place to start, and what are important pitfalls to avoid? Yeah. Good question, Kylie. I think that definitions of AI vary widely. Some people would even go so far as to say that linear regression is a form of AI. I don't know how I feel about that quite yet, but I wonder, Kylie, if we're kind of in the realm of more traditional statistical approaches, because a lot of the GPT models, again, what they specialize in is generating content.

So you think about things like drafting an email or drafting a paper. That's not to say that they can't help in other ways too, because we're also using them to pluck out certain pieces of information from a body of text. So there are various domains where they can be applied. I think that, at least in my experience, I've seen those models applied less to the kind of challenges that you're talking about with COVID response, but to the extent that it could apply to generating content, for people to understand, okay, how do I draft this message for particular audiences? Or, given these variables or characteristics that I wanna highlight in a message, I can include that as part of my drafting of content through a GPT model, let's say.

So I think there's different use cases depending on the type of AI that we're talking about here. The ones that we tend to focus on most here at Gen Re are the large language models and generative AI. Okay. And then just a quick follow-up.

So if they're really good at generating content, do you think they could then be applied to analyzing large bodies of text such as tweets, skeets, whatever, to kind of get an idea of what people's perception is? So could it sort of flip around? And rather than generating the content to share, could it take the content and then say, okay, this is how people are feeling about x, y, or z? Mhmm.

Yeah. I think that would be a reasonable use case for tools like this. And so not only can it expand, right, in terms of generating content, but it can also summarize content as well, and it's very, very good at that, even with arbitrary content. And I think where I would maybe get started with that is seeing what set of instructions I can pass along to the GPT model, which people sometimes refer to as prompt engineering.

But it's really just a set of instructions that you pair with the content that you're sending to those models to achieve the result that you're looking to achieve, which in this case may be some type of sentiment analysis, which it sounds like you're hinting at here, or it can be summarizing that content to give you a sense of whatever you're curious about. Cool. Great. Thanks. Mhmm. Sure. Good questions. Chris, I know I got to hear a bunch of little snippets of different use cases on our prep call ahead of the Hangout.
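As a small illustration of that "instructions paired with content" idea, a rough sketch in R might look like the following. ask_gpt() is again a hypothetical wrapper around a chat-completions style model, and the posts are invented; the point is only how the reusable instructions and the content travel together.

```r
# Sketch of pairing a reusable set of instructions (a system prompt) with
# arbitrary content so the model summarizes sentiment instead of generating new text.
# ask_gpt() is a placeholder helper, not a real package function.

posts <- c(
  "The new guidance finally makes sense. Great job.",
  "Still waiting on answers. This rollout has been a mess."
)

instructions <- paste(
  "You are summarizing public sentiment.",
  "For the posts provided, report the overall sentiment (positive, negative, or mixed)",
  "and the two or three themes people mention most, in plain language."
)

ask_gpt(
  system = instructions,                       # the instructions
  prompt = paste(posts, collapse = "\n---\n")  # the content being analyzed
)
```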

And one thing that we talked about was a lot of the work the team has done lately integrating Posit and Databricks. And I was wondering if you could share a little bit about that work with us. Sure. Absolutely, Rachel. So today, what we have are Posit products and workloads that are running on single virtual machines.

So just a single instance, a single machine, a single server. And these are running all over the world. And by and large, that's where the bulk of our work gets done, around these single virtual machine servers. However, sometimes people do need to work with data that are stored in other locations. For example, we have a lot of data stored in our data lakes, again, that are geospatially distributed, and we support working with those data in the data lake through two different options. The first is that people can directly read from the data lake and write to the data lake directly from Posit Workbench, as an example.

But they can also pull data from the data lake, irrespective of whether the data are aggregated, which of course gives them the option to push compute to big data tools and technologies. I don't wanna get into the semantics of what big data is, so I just refer to them loosely as big data tools and technologies. One of the more recent developments, of course, that I'm particularly excited about is Posit's integration with Databricks' Unity Catalog. If you haven't heard of Unity Catalog, some of you on the call may not have, it's effectively a data governance layer for granular control and managing of data assets that exist within a Databricks environment. And a few of our teams have now proved out that capability, where we've actually used Posit Workbench to connect to Unity Catalog and be able to query and work with data and views that sit on top of our data lake.

And, Rachel, we're doing a lot more on operational reporting today with this. And so if the data are kind of smallish, then what we do is we have Workbench connect to Unity Catalog directly. We just specify a catalog, a schema, and a table, and then from that perspective we have the opportunity to write dplyr code against that, and that dplyr code can be converted to Spark SQL or other flavors of SQL. And we can pull those data back into Posit so we can do compute locally on the server, and then in some cases what we're doing today is writing that output to pins. I know there might be some pins aficionados on the call today. That in turn serves some of the Power BI reporting that we do. If the data are larger, however, sometimes what we'll do is lean almost entirely on that Databricks-specific workflow.

So we'll have Posit Workbench kick off jobs that run on the Databricks clusters. So a lot of what we're doing on the Posit and Databricks front is around self-service reporting. And I really look forward to the possibilities that stem from that, perhaps offloading arbitrary R workloads to Databricks. So the way that we're starting to think about this is, how do we enable the business, using premier data science tools and technologies, with data that we always, as a matter of habit, store in our data lake or central data repositories.
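As a rough sketch of what that "smallish data" reporting pattern can look like from Workbench, assuming a Databricks SQL warehouse reachable over ODBC and a pins board on Posit Connect. The catalog, schema, table, column, and pin names below are placeholders, not Gen Re's real objects.

```r
# Sketch only: query Unity Catalog with dplyr, collect a small result locally,
# and publish it as a pin for downstream reporting. All names are placeholders.
library(DBI)
library(dplyr)
library(dbplyr)
library(pins)

# Connect from Posit Workbench to a Databricks SQL warehouse;
# credentials are expected to come from the environment.
con <- dbConnect(odbc::databricks(),
                 httpPath = "/sql/1.0/warehouses/<warehouse-id>")

# dplyr verbs are translated to Spark SQL and executed in Databricks;
# only the small aggregated result is pulled back to the Workbench server.
monthly_summary <- tbl(con, in_catalog("main", "reporting", "questionnaires")) |>
  group_by(region, month) |>
  summarise(n_questions = n()) |>
  collect()

# Publish the result to a pins board on Posit Connect so reporting layers
# can read a stable, versioned object.
board <- board_connect()
pin_write(board, monthly_summary, name = "ai-ops/questionnaire-monthly-summary")

dbDisconnect(con)
```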

Thanks, Chris. Something that has come up for me in talking with teams is, regardless of what the tools are, but I guess in this case Databricks and Posit, sometimes they're managed by different teams, and it's not both sides working together. And I was wondering if you have any recommendations for some of us who may be trying to do that. So when you say bring the teams together, Rachel, do you mean business teams and IT teams? I guess in this specific example, if there's one team who is managing the Databricks implementation, but a different team is managing the Posit stack. Yeah. I think we're in maybe a fortuitous position, because the team who manages our data lake and the team who manages the data science tools and technologies, the Posit stack in particular, all exist within the same IT tower.

So we all roll up to the same manager, which is great. So we have routine touch points. We meet once a week to discuss anything that may exist at the intersection of our management activities, and in fact, just this last week we had a discussion on Posit Workbench and the data lake. And so from there, we met offline, we clarified some approaches, and it was very easy to get alignment. I think it may not be that way at every company, though, which could be a challenge. But I think at least in terms of how IT is structured here at Gen Re, it is amenable to facilitating that communication.

Thank you. Libby, I'll pass it over to you for the next one. Yeah. I see Michelle had a great question, and it had asterisks.

So I'm gonna go ahead and read it. And it's: have you had tools that were deployed into production and ended up providing inaccurate or bad results? And if so, how did you regain trust in the tool with your stakeholders? Inaccurate or bad results? I mean, sure. Absolutely. Thinking back to my days at CARFAX, right? We would try to predict the price of a used vehicle given its vehicle attributes.

And we never were able to perfectly predict what that car would ultimately go for. And so it's really just kind of a projection. So there's an inherent amount of error within that process, although we give our best guesses.

So I think the way to do that is to put some metrics around how much error there is and why that's expected, and to try to manage that effectively. So to try to help people manage expectations around, okay, here's why we're maybe seeing some of these errors in some of our predictions. Here at Gen Re, I think that in some cases we do see errors as well, because we're not always going to suggest the right answer to a net new incoming question for the security compliance team.

But they're also able to see that although it's not one hundred percent accurate, we are doing pretty well. We're doing, you know, ninety percent accuracy, which in the grand scheme of things, if you can populate ninety percent of responses accurately, is gonna save them a lot of time from having to go through and identify those answers on their own. And, of course, they also have to vet the veracity of those responses, so there's human agency within that loop. So, again, although we're automatically suggesting options and facilitating a decision point, a human still has the ultimate oversight on what goes out the door, so to speak, in an email. So although we're going to see errors from time to time, or even outright fabrications, I think it's mostly around end-user education and helping people manage expectations and showing them how the positives outweigh the negatives in terms of output.

Thanks, Chris. I saw there was a question on Slido, and I realized I accidentally went to last week's Slido. But for this week, the question was, how do you protect your own and your team's well-being at work? That is a great question. So let me think about how I want to respond to that. I'm a social psychologist, and I've long been interested in why people do what they do when they're around other people.

That's what I studied. And I try to take an evidence-based approach to that to inform what I think and what I do. And even as recently as this week, we had an all-hands team meeting where I called out a lot of topics related to what's referred to as psychological safety. And a lot of my previous management ethos was guided by self-determination theory. This was some early motivational theory back in the nineties, with a deep focus on facilitating basic human needs that we all share: a need for autonomy, or having a sense of control; a need for competency, a need to feel like you are able to execute on something effectively; and a need for relatedness, the ability to connect with one another in meaningful ways.

And we had a lot of deliberate implementations around that. Right? So I try to give people high-level challenges and be a collaborator rather than dictate to them what they should do or how to solve a problem, because ultimately they're super smart and very good at what they do as well. I give positive feedback on their skills and abilities, and we also have dedicated coffee chats where once a month we all get together for a half hour, and my only rule is we don't talk about work. We can talk about anything else but work.

And so it's really with an eye toward helping people feel comfortable with one another. And, again, more recently I've had a deeper focus on these ideas of psychological safety in the team. And the general thrust behind this is that there's a shared belief held by members of a team that the team is safe for interpersonal risk-taking. So in that context, when I'm talking about interpersonal risks, what I'm referring to is the ability to speak up, to offer ideas, to admit mistakes, ask for help, provide candid feedback, share worries and concerns without there being any fear of consequence for that.

And this is something that is supported by decades of research, actually. So if you go back to even the mid sixties, there was some early pioneering work on psychological safety at an individual level. But more recently, in the late nineties, Amy Edmondson had a series of papers on psychological safety at a team level. And as it turns out, it's correlated with a lot of things that you probably would care about. Right? It's basically correlated with getting stuff done. So it's very highly correlated with engagement and task performance, information sharing, creativity, the desire for people to learn, how happy they are showing up to work.

And being in the position that I'm in, I take it as a very serious responsibility to protect my team in that way and to make sure they have that space, where they can be themselves, where we can celebrate individual differences and show up to work and know it's okay to not be okay. These are things that I care very deeply about, and it permeates a lot of the messaging that I have within the team. So I think that's a great question, and that's some of the ways that I try to approach it. And we're excited for this journey. I just presented on some of these ideas to the team, and so we're gonna now start to think it through for our team specifically.

You know, how do we implement it? I think we've already started to do that, but how do we implement it? And then more importantly, how do we sustain it? Fantastic. My next question was going to be, how does your background in psychology affect the way that you lead? But I think you fully answered that question with your answer. So I would love to actually throw to another one of my favorite social science people, Alan, to ask your question.

Thanks, Libby. Hi, Chris. Hey, Alan. Related to the team development and team interaction angle, from the people's-roles perspective. Mhmm. I'm curious, as you've gotten into more of the large language model, the gen AI stuff, has that resulted in distinct changes in people's roles? How has the balance of pre-LLM work changed, if at all? Are you doing a lot of that same stuff, or has that drawn down as LLM-related stuff has come up? And overall, what kind of impact does that have on the way you design the team or describe your roles or let people specialize, that sort of thing? Mhmm. Yeah. I would say, Alan, more similar than dissimilar.

You know, I think the ideas of a solution architect, full stack engineer, and data engineer have been around for a while, as has MLOps, for example, as a practice. And it's kind of a natural extension in some ways of some of those roles, but applied to this domain-specific area of generative AI. And there's still a deep focus, at least on our team, on how do we get all of these modular, independent pieces to communicate with one another? How are we gonna stitch together this solution? How are we gonna think intelligently about, how do I implement the right chunking strategy for splitting up my documents? How do I put in the right tags? How do I add the right metadata to make searching more efficient? So these are things that we think about quite a bit, and we do a lot of experimentation around them. And I think it's a natural extension in some ways of what a more traditional data scientist might do, but applied to this other domain, which is ripe with uncertainty and ambiguity, which makes it both fun and nerve-wracking at times, Alan, because we don't know exactly what's going to happen.

But it's pretty cool to operate in that space, to be able to lead a modernization within the company generally, and that's quite an exciting opportunity that we have. That's great. Thank you. I appreciate hearing that. Sure. I wanna go back to the topic of psychological safety, because I see, Darren, you shared a link in the chat that's getting a lot of love, and I just wanna make sure that's part of the recording as well. Darren, do you wanna just talk about that briefly? Yeah. So I'm glad to see that there's a lot of appreciation for this topic.

And, Chris, thanks for bringing it up. So this was on my mind a few months ago, and I brought it up with my team as it relates to pull requests, because everyone feels a little bit differently about how their code is critiqued and the feedback they receive. And it's not just how it's being received, but it's also how it's being given. Mhmm.

So understanding how people receive feedback and how people appreciate it in different ways is really important. And I'll have Rachel or Olivia reshare that link if it gets buried in the chat, but there's like a triangle of psychological safety when it comes to anxiety with code review, how you respond to it, and how you behave as a result of feedback. Mhmm. And that presents itself in different ways for different people depending on how they get that feedback. And so creating resources for people, or providing resources for people to understand how to work through that in a positive way, I think is really, really important for building a really functional team.

Yeah. Agreed, Darren. And I think that's a great point. I think there's an inherent trade-off between what is best for the individual and what is best for the team in some cases. Not always, but in some cases. Right? So, like, if I'm out there constantly admitting mistakes, I'm out there constantly saying, hey, I screwed that up. And I actually did this in front of my team when I talked about this the other day.

I went through, like, three mistakes I've made very recently. Right? And talked about the consequences of those. And so that's hard to do, Darren. Right? Like, it's hard for people to admit mistakes because the interpersonal risk there, right, is that people start to view you as incompetent or they view you as less capable in some ways. But at the same time, I think it establishes trust too, because we all know that we make mistakes. Maybe we don't celebrate them as publicly all the time, but we're all fundamentally human and we all fundamentally make mistakes.

You know? And I don't think that anyone truly shows up with the intent to sabotage a work team necessarily. But it's kind of this idea, Darren, of getting increasingly comfortable with being uncomfortable in those moments. Again, I'm not saying it's easy to do or easy to sustain. I think it's a real challenge, which is why I'm excited to take this on with our team. But I think that, coming from a leadership perspective, someone can help set that context and can reinforce certain behaviors of what they want to see and have broader team discussions around it. Like, hey, how are we gonna deal with, or how are we gonna think about, how someone might react to feedback on a pull request? How can we be more empathetic to how someone might receive this message given whatever context or history there might be in the team? So I think those are all important considerations and certainly something to be thoughtful about, because there are definitely a lot of contextual factors at play there.

Yeah. And I would just add, you know, the environment that you're in with your team, Chris. Like, you wanna make sure, before you even have this discussion of anxiety related to pull requests, that the team feels comfortable talking about those things. And to do that, you have to talk about things outside of work. And so we do something similar where we have a weekly catch-up meeting with my team, and the first fifteen minutes is just, like, an open question.

Like, here's the question of the week. And last week, it was, like, what TV show are you watching recently? Yeah. Or, like, what's your favorite holiday? You know, that kind of thing. Just to kind of set the tone of having some personal relationships built as a foundation before you can have discussions related to anxiety, which is a very personal thing. Mhmm. It is. It is. And, you know, let's face it. We spend a tremendous amount of time with our teammates, more time than we probably spend with anybody else.

And so, to the extent that I can help control how people feel coming into work, and them finding work meaningful, showing up with this expectation that I can be me. Right? I can show up with my concerns, my worries, my insecurities, my anxieties. I can show up with all of that, and I know that I'll still be accepted on the team. So that's what I'm working toward. And it's a hard challenge, Darren, I would say. Because, again, I think there's an inherent conflict there between maybe what's best for us individually and what ultimately leads to better team outcomes.

Thank you. Mauro, I see you have a follow-up question here. I would love to have you jump in here then. Thank you. Thanks, Rachel.

Yeah. Yeah. I'm excited about this topic. I think I'm a struggler, in a way, and I've seen other strugglers. So I can tell when I see a struggler in this area. It looks like a great idea. I would love to read more. But I've seen people, including myself, wanting to promote that psychological safety, but particularly across cultures, it's very hard to give feedback and predict how the person is gonna take what you say.

Mhmm. So I wonder what advice you could give that is general enough that, in an international team, someone who wants to promote this can get there. Yeah. No. That's a good call. And, again, I think it's really important to be knowledgeable and insightful about respecting individual differences, because we're all from different backgrounds, but that's what's unique about us and makes everyone so special, and everyone can bring these unique contributions to the team. I know that Hofstede had a lot of work around individual differences in culture. Since then, I haven't really kept up, maybe as much as I should, with some of the cross-cultural psychological implications there.

But I think it's more around just being knowledgeable about individual differences in the team, what their values are, how they think about things, what maybe promotes their anxieties or makes them feel uncomfortable. And we have that on our team as well. We have people who are from India originally, and, again, someone's in Hungary, in Budapest, and we're all over the world, and we all come from different cultures and backgrounds and schools of thought. It's certainly something to be mindful about, I would say. How do we take that into account when thinking through this at a team level? It's not one-size-fits-all, Mauro. I don't intend to paint that picture here. But it's also interesting, because there was this study at Google. You know, Amy Edmondson's early work was back in the late nineties.

That was on kind of an office furniture manufacturer, where she did her initial study and saw the merits of psychological safety. But Google has also poured millions of dollars into understanding what makes teams effective. And in two thousand twelve, they had a study called Project Aristotle where they looked at over one hundred and fifty teams. They looked at a multitude of factors to see what could create the perfect team. And the conventional wisdom was all around individual differences.

Right? So, does it matter whether I pair people who are extroverted, or people who have the same academic pedigree, or people who enjoy hanging out on the weekends? Does it matter how long they've been at the company? So all these individual differences. And what they found through that is that who they are does not matter in terms of those individual differences. What they found was that the single best predictor of team performance, believe it or not, Mauro, was psychological safety. That was the most important factor for team outcomes, and it just blows me away, because it's kind of an interesting idea: in some small way, it reminds me of just being validated for who you are and having that lead to better team outcomes. It's kind of the freedom to be you.

And it's not that way everywhere. Right? But yeah. So I try to paint the picture of, here's why I think it's important, and then I support it with evidence. And it spans both nontechnical teams and technical teams. There's a lot of research out there on agile teams where these ideas have been applied. And so psychological safety, or perceptions of it, actually predicts team-level outcomes like speaking up and software quality initiatives above and beyond, statistically, all of the other variables that you probably think would matter, like history with agile, years of experience, all of these.

So it's a construct that I'm becoming increasingly more interested in and speaking more about, especially at the intersection of software development and AI and AI safety. Thanks a lot, Chris. I think one aspect that is super important to realize is the value that organizations like Google have put into it, because sometimes I feel that, particularly in a fast-moving environment of, say, a consultancy where you need to accomplish something within an hour, people really stress about filling everything into that one hour without realizing that maybe someone is not catching up to the speed that things are moving, and that doesn't induce the psychological safety that would actually create the most profitable outcome out of that hour. Anyway, thank you very, very much for your thoughts.

Yeah. Thank you, Mauro. Good question. Love to see the discussion that we run into with the Hangouts. We had no idea we were gonna go that way. This has been fun. Actually, since we have about twelve minutes left, I kinda wanna take a technical turn.

Jordan, if you're still here, I saw that you had a great question, and I have the same question. So would you like to ask that? Yeah. I'm gonna take a guess.

You want me to tell you? Using pins and Databricks? Yes. Yeah. Yes. Yeah. Because my company uses Databricks.

I use pins a lot, but I don't use it with Databricks at all. So I'm kinda curious how you're using pins either within Databricks or how you're using it alongside it, Mhmm, before I'd spend, like, a week trying to build something new.

Yeah. Yeah. Good question, Jordan. Hopefully, I can save you a little time. So what we do, at least in some situations, is this: our data exists once again in our Azure data lake, and we can connect to that, so to speak, via Posit Workbench and some of the packages that Posit develops like sparklyr, pysparklyr, etcetera. And at least in the use case that I'm aware of, they would connect to it and then just straight up read those data back to Workbench, because these data are trivially small. So from there, all the compute on the server would happen locally. And from there, they would then publish out some end result to pins. And then once those data are in pins, we have kind of a reporting structure that would sit on top of that.

So there's not kind of a direct connection between Databricks and pins, although that could be on the roadmap for Posit. But that's my understanding of what's potentially doable today and what we're doing today. Yeah. Because, I mean, it sounds like then it's kind of using the same source storage for any of the other data, but just using it in the sort of pins method. Yeah.
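On the consuming side, the reporting layer just reads the pin back. A tiny hypothetical example, assuming a Posit Connect board and the same placeholder pin name used in the earlier sketch:

```r
# Hypothetical consumer side: a scheduled report on Posit Connect reads the
# pinned result back. The pin name is a placeholder, as in the earlier sketch.
library(pins)

board  <- board_connect()
latest <- pin_read(board, "ai-ops/questionnaire-monthly-summary")
head(latest)
```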

Because I know that there's lots of stuff to play around with in Databricks, including, like, file store and volumes and stuff that are still writable, but probably more embedded in Databricks right now. Yep. But Yeah.

My guess is that they're just pinning it back to the data lake. That would be my intuition here, Jordan, but that's what I would think would be reasonable for them to do. Cool. Well, I mean, that's very exciting.

They're two really great tools that, when they work well together, they work well together. Yeah. Thanks, Jordan. Thanks. Hopefully, it saves you a week. Maybe. If anyone listening in has specific questions about how the tools work together, I just didn't wanna miss the opportunity to put this link in the chat in case it's something that we can't cover in an hour-long hangout.

Feel free to use the calendar there to book some more time to chat with our team. Okay. I can't believe how quickly our time goes here. But a question I always wanna make sure I have the chance to ask is, Chris, is there a piece of career advice that you've either given someone on your team or that you've received over the course of your career that you'd like to share with us? That's a good question. I think one of the things, and I think this gets back to the psychological safety and basic social psychological needs that we all share, is to control what we can control. Because there's a lot of things in this world that we can't control. Right? We can't control getting COVID.

We can't control, you know, the weather outside. But what I can control is my effort, and I can control the choices that I make, and I control my behaviors, and I can also control showing up for my team every day. And these are all things that mean a lot to me.

And so, I guess, among the best career advice, and this goes back to when I played baseball in college, was: control what you can control, and you can always control effort. You can't control what direction a ball bounces or the call an umpire makes, no matter how much you might argue or wish the ground was lying differently. But you can always control effort. And having a focus on things where you have autonomy, I think, leads to better outcomes. I know you asked for one, Rachel.

I guess the other one that I would say is to not undervalue the meaning of connectedness in relationships, because you never know who you're gonna cross paths with or who somebody knows. Maybe they can help you in the future, maybe you can help them in the future. And one of the ways this has come to fruition for me is with my current role.

So I was friends with Nick Rorba, who's at Posit. We had been LinkedIn buddies for a while, and he shared my manager Matthew Montero's post for the current position. And I read the position, and I was like, oh my gosh. This sounds like me. Like, this is a perfect fit for me and what I want to do.

This is a chance to return to doing more with R and doing more with the Posit stack and getting back more into data science, and I could not have been more excited for it. So I reached out to him on LinkedIn. I said, hey, Matthew. This is, like, a role I think I'd be a good fit for, and here's three reasons why.

You know, can we hop on a fifteen-minute call? And I think a lot of it started there. My role here and growing within the team and working more in the AI space, it all kind of started with a connection that I formed many years ago. And I think that having a focus on that aspect as well is also something that I would recommend. Like, establish those personal connections and reach out to people and support one another, because that's, to me, fundamentally human.

And, I guess, again, controlling what you can control and connecting with people. Thank you so much, Chris. I was going to try and grab Matthew's Hangout recording to put into the chat as well. Yeah. Totally. You had a good one too. So I'm glad I got to ask the career question. I see there's a few different Databricks questions that I think are popping up in the chat now as well.

So we can go back to some of those. But, Travis, I think I saw some questions you were asking, and I wanna make sure we cover that. I was only dovetailing most recently on another question. Okay. The case was Alan's question then. Oh, Alan's. Thank you. Yeah. Yeah.

It was back to the pins, plus or minus Unity Catalog, and or just regular file store in Databricks. Like, I'm struggling to identify the use case. It seems cool, but do you lose the user-level governance and access that you get from Databricks out of the box by doing that with pins? Yeah.

Not necessarily on the pins side a whole lot, Travis, but it's really on the data access part of it. So when we're connecting from Posit Workbench, for example, we connect directly with Unity Catalog, and the data engineering team manages the row-level and column-level management and governance of those environments, or the views that people have access to. So when you connect, you immediately see in your connections pane only what's available to you.

And so that's the way that they've managed and governed that process. The pins part, to the best of my knowledge, is independent of that, unless they were to further engage with the engineering team and have that formally recognized within the data lake with a view or some other means. Well, thank you.

And I see Tarif just helpfully said he took a note, so sounds like stay tuned, everyone. No. No. I'm not promising anything. Okay. It is something that I've been intrigued by, the question of, what is the intersection of pins with really large data stores, how do you think about that, and what kind of pass-through makes sense? When we were talking about building pins in the first place, one of the big concerns people had was, hey.

You can't replace a database with pins. Right? And that's correct. Right? So is there a middle ground there that makes sense, or isn't there? I don't know. But I think it's worthy of at least having a conversation about it.

Nice. Thanks. Yeah. Thanks, Therese. Okay.

While we're talking about this, I can't not do my own advertising for every month. So the last Wednesday of every month, we do a different workflow demo. And this month's workflow demo is gonna have the Connect team joining us to talk about using Posit Connect and Databricks Unity Catalog together, so that you can inherit data-level permissions. So if you have an app where only one set of users should see specific data based on what's in your Unity Catalog, you can do that now. So I just wanted to make sure I call that out.

I shared the link in the chat too. Hopefully, I got that quick summary right too, Tariq. Alan, I think you had one other question on it as well, and I know we have three minutes here. So do you wanna go next? I don't think I had an independent one.

Mine was the one that Travis and I worked through. We're good on that. So I think I'm good. Yeah. Okay.

Perfect. It's been determined it needs more follow-up, I think. Okay. Thanks. Libby, is there a question that you see from earlier you wanna bring in? I was just wondering if we had enough time. We have two minutes left. I think a good one would be from Slido.

How do you encourage learning within your teams, and are there initiatives that you can share about? I think just by virtue of the nature of the work that we're doing, there is a lot of required learning that has to happen. So I think just, you know, by the nature of the work itself, you k
