In today's rapidly evolving technological landscape, global privacy, security and ethics have become more critical than ever. To discuss this, please welcome the UK Bureau Chief and International Executive Editor at Business Insider, Spriha; the Global Privacy Officer of ADP, Jason; and the Chief of Data and AI at the United Nations, Lambert. Welcome, Spriha, Jason and Lambert.
Hi, everyone. Welcome to this amazing panel. We'll be discussing the power of data governance. I am Spriha Srivastava, International executive editor at Business Insider.
And with me, Jason from ADP and Lambert from the United Nations. Let's get straight into it. It's very relevant to be talking about the power of data governance.
Just in this environment, there's so much data, and there is so much talk about how companies and governments are trying to navigate it. I think it would be helpful to start with: why now? Why is data governance so important? What does data governance mean to you both? Let's break it down for our audience. Jason, if you want to jump into it. Sure, absolutely.
So I think one of the key reasons people are focused on data governance now is obviously AI. Data is the lifeblood of AI, and as we use more and more data, whether to power machine learning or generative AI, people are really interested in data governance. But it's not a new topic. We've had it in privacy for many years around personal data, and now we're expanding into non-personal data. How does data flow? How do you use the minimal amount of data to achieve the insights you want? How do you make sure that data is of good quality, so that it doesn't introduce bias or unwanted results? Data governance is important because without good data governance, without good data quality from the very start, you end up with suboptimal systems that make poor predictions.
And so data governance is really what enables AI to realize its full potential. Lambert? Sure. I think sometimes we tend to think about data governance in a little bit of a negative way, as if it's about complying and making sure that nothing goes wrong. And I like to think of a definition that is much more positive.
It is about getting value out of the data. Think of a farmer taking care of his or her crops: you want to get value out of those crops, and you want to make sure they're not going to be spoiled by disease or something like that.
But that idea of nurturing and growing is important to me. Yeah. Let's talk a little bit from a global perspective, given your role at the UN. When you look at data governance from a global perspective, what are the challenges that you're seeing in the tech landscape at this stage? Yeah, I think as the world becomes increasingly digital, there are several things going on at the same time. Obviously, we can get data about a lot more in our businesses and in the world.
Data is becoming more important. And at the same time, the whole digital world is creating one big ecosystem. The world is becoming one, and most companies will have customers or stakeholders internationally. So suddenly you have to start thinking about data as a global good that moves around the world, and I think that makes it challenging.
And I think we're probably going to dive into regulations as well, which are popping up all around the world, and which make it challenging to see whether data can move from here to there, and under what parameters. So that's a big challenge right now. When you hear that, I think those challenges exist for companies and corporations too. At ADP, how are you navigating some of these challenges? Well, we navigate it through a multifaceted system.
We start with our AI and Data Ethics Council, which includes experts from within the company and external experts, to set the guardrails for how we think about data, how we think about AI and new uses of data. We then have a multifaceted compliance program. When you think about generative AI, for example, our chief data officer is responsible for the use-case approval process, but that has input from my team on privacy, from product compliance and from security, to make sure that we think about data at every stage.
We have a group within our product development team that worries about data quality, that worries about post-market monitoring, and that does the testing around bias. We, of course, have our privacy compliance program, built on our binding corporate rules, to make sure that we adhere to data minimization. And I have a dedicated person on my team who does privacy by design.
So as we build AI systems, or any product, we think about privacy from the very beginning. You know, at ADP, we're really proud of our data. We provide services to 41 million client employees, so we have a rich data set that allows us to provide insights.
But again, only if we protect that data, only if we use it in the proper way, and only if, as we build products, we focus on that quality from the very beginning and all the way through. Fascinating, what you've both just talked about. I'm wondering if there is enough knowledge sharing happening about this across the industry, both from a governmental perspective and from a corporate perspective. It's great what you're doing at ADP, or Lambert, what you're talking about from a UN perspective, but how much of that knowledge sharing is happening across the board? That's for both of you. Yeah.
So from industry, I would say there is knowledge sharing. You know, I get together with other chief privacy officers in a couple of forums, and we certainly talk about what our companies are doing. We recently participated in a project with the Future of Privacy Forum, a think tank, to develop best practices for the use of AI in the workplace, which obviously is of interest to us as a human capital management and payroll company. That covers a whole bunch of things specific to that context: what types of data, given that it's HR data, how do you protect it, and what are appropriate uses and what aren't? And then obviously there are also initiatives with governments and others, which I'm sure Lambert will talk about, that enable that sort of cross-collaboration between the public and private sectors as well. Yeah, I think these discussions need to happen at several levels.
They need to happen at the level of nations or societies, and internationally. They need to happen within industries, and they need to happen within organizations as well. And I think organizations especially need to realize that data is going to be essential for their strategy. These discussions cover how we are going to leverage data, but also how we are going to treat data, and what our own values are beyond regulatory requirements: what are the values that we stand for in the way we treat data? That has a lot to do with building trust within your own company, with your partners, with your customers and with the public.
That's great. Just going back to you, Lambert: we are in Europe, so we have to talk about GDPR. Policies like GDPR were put into place before this whole fast-paced AI rush came into play. Given the concerns around that, how do you think governments and policies are evolving? Do you think policymakers are starting to really think about how to incorporate AI? What's your take on that? I think regulation like GDPR is a very good thing.
Yeah. Of course, there's a little bit of discomfort right now: everybody is excited about AI and wants to do all kinds of things with data, and GDPR puts all kinds of restrictions on that. It can actually be quite onerous to keep track of your data and what you can and cannot do. But I don't see it as a tension, though some people see it as stifling innovation or something.
I don't see that. I think data governance is in itself a requirement for innovation. And let's be clear that GDPR regulates data privacy for personal data, right? Probably 80% or 90% of the data in your company isn't personal data and doesn't fall under it.
It's data about whatever it is: the health of your oil pipelines, fuel consumption, or maybe research data from some physics research that you're doing. All of that needs to be governed. It falls under data governance, but it doesn't fall under these privacy regulations. And when it does come to the privacy regulations, I think it's probably shortsighted to want to do whatever you want with that data.
In the short term, yes, you will benefit from that. In the long term, you may lose the trust of your customers, and within your company, you may not be true to what you stand for as a company. So I don't see the two as being in tension.
I really see data governance as supportive of innovation. Jason, do you want to comment on that? Yeah, absolutely. Look, I think Lambert made a very important point.
People will not use technology that they don't trust. And again, if we take the view that data is the lifeblood of AI, people have to trust companies, governments and others to be willing to share that data, to have that data used, to drive those insights. Regulation is a key part of that, because as a society we are ultimately the ones who determine how technology is used. So we see GDPR, which was prescient in having its rules on automated decision making. We now see, in the EU, the EU AI Act. And I think this is a really interesting piece of legislation from a data governance perspective, because it takes a lifecycle approach.
It focuses on data quality from the very beginning: what is the quality of the data, how are you sourcing your data set, and are you making sure that your data set is free from impermissible bias? Then it looks at what you generate: have you tested the results for accuracy, for relevancy and, again, for bias? And then, when you put it on the market, you do the post-market monitoring. This lifecycle approach all the way through, with its focus on those kinds of things and on data minimization, really takes data governance to a new level as it's applied to the development of AI products and services. That's interesting.
I mean, I think you touched upon this earlier. We're talking a lot about innovation. We're talking a lot about AI.
AI is the elephant in the room we can't ignore. When you look at innovation across companies, across industry, how are companies striking that balance between really wanting to innovate and, at the same time, protecting their data? We talked about this earlier in terms of knowledge sharing, but who is watching that? Well, I'm thinking about knowledge sharing more in terms of best practices around governance. Right. You have to be careful. There's a case where, when ChatGPT came out, a company, I won't name them here, had employees who started using it. And of course, we know these LLMs can learn from their prompts.
And so you were able to get all this information about internal financials and internal communications out of the model. So you have to take key steps. For example, when we use LLMs at ADP, they're LLMs that we control, running in a protected environment.
They don't learn from the queries, and that information isn't shared back with the provider of the large language model. It's important to take those steps to make sure that confidential information, whether business confidential or personal, doesn't leak into the broader environment. And you do it, again, by minimizing the data that you need to drive the insights, by making sure that you have the tools under proper control, that you evaluate the use case, and that you test it, including against things like prompt injection attacks, to make sure you can't get it to do things it shouldn't do or reveal information that shouldn't be revealed. When you combine that with knowledge sharing about governance and about innovation, it's possible to move fast. We've moved very fast at ADP to implement generative AI, but we've done it in a responsible way that is secure, ethical and compliant.
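[Editor's note: the kind of guardrails Jason describes here, minimizing the data a prompt carries and screening it before it reaches a company-controlled model, could be sketched roughly as below. This is a minimal illustration, not ADP's actual implementation; the patterns, phrases and function names are invented for the example.]

```python
import re

# Hypothetical pre-submission guardrail for a self-hosted LLM:
# 1) redact personal identifiers the model doesn't need (data minimization),
# 2) reject prompts containing obvious injection phrasing.
EMAIL = re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b")
SSN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")
INJECTION_HINTS = ("ignore previous instructions", "reveal your system prompt")


def minimize(prompt: str) -> str:
    """Replace personal identifiers with placeholders before the model sees them."""
    prompt = EMAIL.sub("[EMAIL]", prompt)
    return SSN.sub("[SSN]", prompt)


def screen(prompt: str) -> bool:
    """Return False if the prompt looks like a prompt-injection attempt."""
    lowered = prompt.lower()
    return not any(hint in lowered for hint in INJECTION_HINTS)


def guarded_query(prompt: str) -> str:
    """Minimize then screen a prompt; only then would it go to the model."""
    cleaned = minimize(prompt)
    if not screen(cleaned):
        raise ValueError("prompt rejected by injection screen")
    # In a real deployment this cleaned prompt would be sent to an LLM that
    # runs in a controlled environment and does not train on queries.
    return cleaned
```

In practice such checks would be one layer among several (use-case approval, access control, red-team testing), but the idea is the same: the model only ever sees the minimum data needed to produce the insight.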
Right. Lambert? Yeah. Well, I spoke about innovation a bit just now. I think it's very interesting to think about data as actually the driver of innovation, not just a requirement of your innovation processes, but something you look at and ask: what can I learn about what I'm doing now, where is the market moving, and what new opportunities are there? And obviously the mix between data and AI is really important. AI is the hot new thing right now.
But I think we should be aware that AI doesn't work by itself. It is driven by data, so that is really the foundation of where you want to get your innovation from. Has AI thrown a spanner in the works for everything that's been done all this while on data governance, all the efforts being put together? When you think about AI in your own roles, how are you approaching it? Do you feel like it's thrown a spanner in the works, or is it more "oh, this is great," and you're adapting to it? So look, I think we're at the beginning stages of seeing how AI can help with compliance. There are some companies out there offering AI-based compliance tools, and look, it's fascinating.
You can take all these regulations and consume them, then submit a natural language inquiry and get a natural language answer that you can act upon. Moreover, you can tailor that: I can ask from the perspective of an employer, or from the perspective of an employer of record if I'm providing professional employment services or things like that. So you can really tailor it to be useful to the specific practitioner.
I talked about the employment context, but you can do it in a range of contexts, so I think AI can really help there. But I don't think it has caused issues; in fact, we're just building upon what we've already done. Again, going back to our generative AI approval process: we take security, privacy and product compliance. These are preexisting things. We bring them together when we evaluate an AI use case, and then we think about the specific AI elements on top, whether it's transparency, explainability or prevention of bias, to make sure we have a comprehensive view of how we address these things.
Lambert, what do you think? Obviously, you need to embrace AI to survive as a company these days, and that means you need to get your act together on data, which is bad news, because we've been trying that for ten years. In most organizations, data sharing is still not where it needs to be, data integration is still not where it needs to be, but we need to get there. And I think it's important to realize that we rely more and more on AI, both in organizational decision making and even in our private lives.
Right? We use these online chatbots to get advice on anything from how to make guacamole to "I have a pain in my knee, what could my ailment be?" And as we rely more on AI, it becomes really important what data we are feeding that AI with. With the current generations of generative AI systems, for instance, what the researchers set out to do three or four years ago was to build systems that are really good at human conversation, to pass the Turing test, to sound like a human having a conversation. And in order to achieve that, they fed them tons of data: from Wikipedia, from Reddit or Quora, I believe. It didn't really matter, because the point was to learn how to speak human language.
But then when they did that, everybody was so excited about conversing with these chatbots that they started to see them as an oracle and to believe all the knowledge that came out of them. It was never intended that way, because we weren't that careful about what kind of knowledge we put in there. So in the next phase, I think we need to be much more careful and ask: if this is going to be our oracle, what is the canon of human knowledge that we need to put in there? And that needs to be much more balanced, much more diverse, much better than what is in there right now. Yeah.
Yeah, I agree with what Lambert just said. Diverse perspectives in the creation of AI are important, because you need them to make sure you're covering the range of possibilities. Another important element of data governance, particularly in the AI context, is to have humans at the center. AI isn't infallible; it's not, as Lambert was saying, an oracle, and you always have to apply human judgment to whatever the output is. So making sure that humans remain in control, that there's always a human element in how the AI is used, is also an important element of governance as we move forward.
Yeah. So let's talk about ethics then. When we talk about data governance and AI, there are multiple ethical considerations to take into account. How are you both thinking about that in your roles? And is there much that governments and corporations can do collaboratively on this front? Jason, do you want to go first? Sure. So, as I mentioned, we have an AI and Data Ethics Council, which I chair.
And so we really are sitting there thinking about the big questions: should we do this? For example, we as a company decided we're not going to do anything designed to sense human reactions in, say, an interview context or things like that, because we don't think that's an appropriate ethical use of the technology, and it's not particularly reliable.
So we think about things like that. We think about whether we've tested appropriately for bias, and about whether we're providing information to our clients who are going to use our products, and to the end users those clients interact with, so that they have the information and knowledge they need. It's important throughout to think about how you would want to be treated, and what the appropriate approaches are. I think governments are doing a lot in this area: there's the NIST AI Risk Management Framework, we have the EU AI Act, and there's an EU AI Pact that the Commission is running alongside it. So I think there's a lot of open ground and possibility for government and industry to collaborate.
But I think that's important, because they learn from each other. Governments can learn from industry about what's possible, what people are thinking about, and where innovation is going; innovators can learn what societal concerns we need to take into account as we build products and services. I think a lot of the concerns people raise about AI, things like bias, discrimination, copyright violations and so on, are things that are already regulated in society.
In many places in the world it's illegal to discriminate, it's illegal to violate copyright, and so on. What's happening is that with AI these things take on a little bit of a different flavor. So it's not that we need entirely new regulation; we need to update our existing regulation, and I think that's what governments need to look at. In the United States, where I live, there is this concept of fair use: if I take your book or your video or whatever, and I take a tiny clip out of it and use it in my own work, that's fair use, right?
But what if you take an AI system and do that at massive scale with everything that's on the internet? Is that still fair use? Those questions were never part of the previous regulation, and these things need to be updated. Yeah, that's great.
I'm mindful of time, so one last question from me. We've come a long way in how data governance is being put into place in companies, organizations and governments, and in how we're navigating this whole new era. When you look at the future, and I'm going to put a crystal ball in front of you both, what do you think is the biggest challenge coming toward you, and how can more companies and governments prepare for that challenge? So I would say, look, the biggest challenge we face is the continuing fast pace of innovation. How do we stay ahead of that curve? How do we take advantage of these new technologies to offer insights to clients and individuals, while at the same time making sure that we do so in a way that's secure, ethical and compliant, as I was saying earlier?
And so it's one of those things where they say it's a marathon, not a sprint, but it may not even be a marathon: we're going to have to keep running beyond that 26.2-mile marker as we move forward in this space. To me, that's the real challenge of the next few years.
Lambert? I think we're moving into a new phase where data and AI become so important that companies need to make them part of their strategic processes. It needs to be in the boardroom. Right.
It needs to be a strategic issue for companies. In the past, things like data governance often sat with the IT department, and things like data privacy compliance sat with the legal department. I think that's no longer going to cut it. It's going to be a cross-cutting thing, and the survival of your company depends on it.
On the other hand, if you do it right, the use of data, and the respect for data, for people's personal data, can be a strategic differentiator that makes you the trusted company and makes you win. That's great. Thank you so much for your time. A big round of applause for my brilliant panelists. Thanks again; it's been a pleasure speaking to you both.