Tackling Technology’s Trust Problem | Forrester Podcast
- [Keith] Hello, I'm Keith Johnston, your host for Forrester's podcast "What It Means," where we explore the latest market dynamics impacting executives and their customers. Today, we're joined by principal analyst Sarah Watson to discuss responsible and ethical technology strategy. Welcome, Sarah. - Hi. Thanks for having me. - [Keith] So, Sarah, you go bold out of the gate by saying technology has a trust problem.
So, as soon as you say that, we have to assume that it implies some level of risk and there are some stakeholders that need to really pay attention to this. Set the stage for us. What do you mean technology has a trust problem? - [Sarah] Yeah, that's been the core question, right? Are we talking about the tech industry itself? Are we talking about tech brands? Of course, big tech has been in the headlines. Or are we talking about the technologies themselves, right? Are we talking about AI? Are we talking about blockchain? Are we talking about the metaverse, or any of these more historical applications of technology? And I think the answer is all of the above, right? All of these things have been having trust issues in the last couple of years, and it's no longer just an academic question about what the impacts of technology are. Circa 2010 to 2016, I think these questions were being raised in the social sciences, but now we're starting to see them in the public consciousness.
Whether it's showing up in congressional hearings or headlines, these are the core questions we are running into. Of course, the pandemic has accelerated all this digital adoption, but we keep coming back to the question of how much we rely on technology, and therefore how much it is impacting our lives and everyone's well-being. - [Keith] So what are some of the things happening out in the market right now that should make consumers, or even employees, you mentioned them before too, pay attention to this? - [Sarah] Absolutely, yeah. I mean, we can look at the headlines, of course. There's all this discussion about misinformation and disinformation, algorithmic filter bubbles, and that's just on the social media side of things.
There are also all these discussions about bias inherent in AI, whether it's in the training data sets or the models themselves. Of course, we're running into all these questions about privacy concerns and the extractive business models coming out of data-driven business, right? And, as we shift towards first-party relationships, all these new companies are contemplating ways they can monetize their data as well. But, of course, that changes the relationship with the user in that first-party context. And then we have the whole bucket of emerging technology dystopia, right? Facial recognition, algorithmic decision making, self-driving cars, it's the whole range, right? So it's everything from where your cloud server is located and what kind of data subpoenas it might face in light of the Dobbs decision, to what the metaverse is going to look like in the future.
We've got survey data suggesting that 46% of U.S. adults said that if Meta or Facebook were running the metaverse, they wouldn't want to participate, and that's even more true for our category of skeptical protectionists: 57% of those people would not want to adopt the metaverse if Facebook were running it. So we come up against this question. Facebook changing its name to Meta is this moment of trying to disaggregate and distance itself from the brand issues it has accumulated, to be able to pivot and go off in a new direction, and that's really because the technology is synonymous with that brand. I think that's becoming true for a lot of different companies, right? We just saw John Deere getting hacked live on stage. That's a company you wouldn't think of as a technology company, right? But it is.
In this shift towards digitization, we are inextricably tying the technology to all of these digital experiences and, therefore, the brand itself has implications for how the technology is used and what its risks are. - [Keith] Yeah, that's interesting. With Facebook, really, this trust issue is almost synonymous with the Facebook brand, so they needed to change Facebook to Meta, right? - Exactly. - Yeah.
So, Sarah, it's so interesting. We're talking about brands synonymous with trust, trust being a challenge for businesses. But, again, I wanna link it back to the technology piece of this because you're explicitly saying that people have less trust in tech and, if that's true, it's everybody's problem because technology is everywhere. Can you say more about that? - Absolutely.
We are seeing drops in overall trust in the industry. There are lots of metrics tracking trust in the technology industry as a whole, and that trust has dropped over time, especially post-2016. We are also talking about trust in specific technologies, right? The very nature of blockchain is based on the idea that no trust is required: you have a distributed protocol, and no one has to trust that the system will keep working because it's decentralized. Of course, that's not actually how the market is developing, but, as a principle, that's the design of the system, to get out of these walled-garden structures that consolidate power, right? And so I think everyone from consumers to employees is thinking carefully about their relationship to technology. - [Keith] Yeah, so blockchain is really a technology that basically inherently removes bias or trust issues because it's an exchange of data. It's a transaction, and it kind of takes the parties out of the loop. But, when we talk about AI, which is probably the extreme technology when it comes to this, it's people putting bias into the AI, or people making decisions based on what the AI knows or doesn't know.
What are some examples that we need to pay attention to in the AI space? - Sure. I go back to the very famous example of Amazon trying to solve its hiring bias problem and then kind of reinstantiating that same bias, because they were modeling based on culture fit, and it turns out they were devaluing things like women's volleyball showing up on someone's resume. That's really based on the historical bias of the data set they were jumping off from, right? So you have this tool that you think is going to remove bias from the system by removing people from the decisions, but the historical data reintroduces those issues. That's a constant, rampant issue in AI: if all of these systems are based on training data that reflects historical fact, you're just perpetuating historical biases, right? You can't have systemic change without introducing a new thing that you are targeting or changing what the data set is. And this has really clarified for a lot of people that all technology has these bias issues, right? Humans are always involved in the structure and the design of technology, and that means that human choices are always involved. So the idea that data is objective, I think, has really been called into question. Data is a reflection of the things that you value, of the things that you want to track. It is a reflection of the assumptions that you make, right? All of those decisions are human decisions, and so I think AI and the privacy landscape have really helped articulate that these are, in fact, human decisions. We're extending that out to the rest of the ecosystem to say, "Oh, wait, cloud infrastructure is also embedded in human decisions and geopolitical issues, and so nothing is free of those choices." Ultimately, this leads to a discussion about whether technology is neutral or not, and I would argue that technology is not neutral, because of all these human decisions baked in.
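To make that mechanism concrete, here is a minimal, hypothetical sketch in Python, not Amazon's actual system, of how a screening model trained on historically biased hiring outcomes learns to penalize an irrelevant proxy term on a resume. The feature names, the synthetic data, and the use of scikit-learn are illustrative assumptions only.

```python
# Hypothetical sketch: a model trained on historically biased hiring decisions
# reproduces that bias. Not any vendor's real system; features are invented.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000

# Two features per candidate: years of experience, and a proxy term
# (e.g. "women's volleyball" on the resume) that should be irrelevant.
experience = rng.uniform(0, 10, n)
proxy_term = rng.integers(0, 2, n)

# Historical hiring labels: driven by experience, but past reviewers also
# penalized candidates with the proxy term, so the bias is baked into the labels.
logit = 0.8 * (experience - 5) - 2.0 * proxy_term
hired = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

# Train a screening model on those historical outcomes.
X = np.column_stack([experience, proxy_term])
model = LogisticRegression().fit(X, hired)

# The proxy term gets a strongly negative weight: the model has learned to
# downgrade those resumes even though the term says nothing about ability.
print("learned weights [experience, proxy_term]:", model.coef_[0])
strong_candidate = np.array([[9.0, 1.0]])  # highly experienced, term present
print("hire probability for that candidate:",
      model.predict_proba(strong_candidate)[0, 1])
```

The point of the sketch is the one made above: taking humans out of the individual decision does not remove bias if the labels the model learns from already encode it.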
- [Keith] Yeah, so interesting. So would you suggest that the technology needs to be making the decisions for the humans, or can the humans really get their arms around this and make good decisions with technology? - [Sarah] I always lean on the idea that technology doesn't exist in a vacuum. I go back to Lawrence Lessig's structure for regulating technology, which is not just law or policy. Code itself limits what you can do, cultural norms can limit what you do, and the market can also be a regulating force, right? Competitive advantage, right? You can say that privacy becomes a competitive advantage if values-driven consumers are actually making decisions based on it.
If we acknowledge that all four of those things shape our relationship to technology, how we govern it, and how we think about how it's applied, then we're acknowledging that these things don't happen in isolation. - [Keith] So it's so interesting. We're talking about a whole set of choices that humans need to make. One would assume that technology executives need to pay attention. But I wanna take it a click above, to the CEOs around the globe who are responsible for their employees and their consumers.
Why should they pay attention to this and what should they be doing? - [Sarah] Absolutely. Well, I would take it even one step above that: boards are starting to have to care about this, right? To me, it falls into the ESG push towards understanding the impacts on stakeholders, not just customers, but entire populations and governments and all of the other stakeholders who have an interest in the use of technology. And, because we're talking about brands being synonymous with technology, we're getting into territory beyond what CIOs and CTOs have historically focused on in a traditional sense, the back office, where the core stakeholder was the business itself: business users and employees. Future fit and modern companies have understood that technology is a driver of business, so they focus on the end customer and get closer and closer to the digital experience users are involved with. I would argue we're pushing into an age of stakeholder orientation, which means focusing not just on customers and shareholders but extending that to a wider and wider range of stakeholders. So, if you think about the application of an underwriting algorithm or an underwriting AI system, it is making a determination about who is and isn't a customer, right? Are you going to insure this person? Are you going to give this person a mortgage? Are you going to give them a credit card? As a company, with a business objective, you're making a decision about whether this is going to be a profitable customer, right? But there's a next-level implication: how that decision excludes a particular person from the credit market or an opportunity. So we really have to think about the larger implications of the applications of these tools, and I think that's where we're headed.
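As a concrete illustration of that point, here is a minimal, hypothetical sketch in Python of the kind of underwriting decision described above: a model score plus a business cutoff decides who gets offered credit. The scoring rule, the threshold, and the field names are invented for illustration; a real underwriting model would be trained on historical outcomes, which is exactly where the bias concerns discussed earlier come back in.

```python
# Hypothetical sketch of an underwriting decision: a score and a business cutoff.
# The scoring rule and threshold are invented; not any lender's real model.
from dataclasses import dataclass

@dataclass
class Applicant:
    income: float           # annual income
    debt_ratio: float       # existing debt as a fraction of income
    on_time_payments: int   # on-time payments in credit history

def risk_score(a: Applicant) -> float:
    """Toy scoring rule standing in for a trained underwriting model."""
    return (0.4 * min(a.income / 100_000, 1.0)
            + 0.4 * (1.0 - min(a.debt_ratio, 1.0))
            + 0.2 * min(a.on_time_payments / 24, 1.0))

APPROVAL_THRESHOLD = 0.6  # the "profitable customer" cutoff set by the business

def decide(a: Applicant) -> str:
    # The same line that protects the portfolio also excludes everyone below it
    # from the credit market, which is the wider stakeholder impact at issue.
    return "approve" if risk_score(a) >= APPROVAL_THRESHOLD else "decline"

print(decide(Applicant(income=85_000, debt_ratio=0.2, on_time_payments=30)))
print(decide(Applicant(income=30_000, debt_ratio=0.5, on_time_payments=6)))
```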
- [Keith] Thinking about the larger implications the way you explained it sounds a bit utopian because, right now, if we just look at the economic turbulence out there, we have a changing global economy. There's a lot going on, so this idea of an ethical tech strategy may be deprioritized. What's at risk if that happens? Why shouldn't they deprioritize it? - [Sarah] I love this question, because I think it's really easy to say the ESG efforts, the diversity and inclusion efforts, just go by the wayside when you are facing pressures and you refocus on your core value. But I think you can't talk about value without values, and so an ethical and responsible technology strategy is really about articulating those values and tying things like purpose back into the technology equation, so I think it's super important to keep that in mind. I would also say we understand technical debt as a huge problem for technology leaders, right? Technical debt gets in the way of innovation. So think about the ways we're moving fast and breaking things in an agile framework, if you are really, truly adopting all of these Silicon Valley frameworks for development and application architectures. How much are we running into the same problems that Facebook created, right? They were moving fast, breaking things, and never looking back, and that starts to accrue. We are really seeing the cost of that as a society, and the cost to Facebook itself as a company. So take that and apply it to your context, right? What are the costs of not paying attention to these questions, if you're pushing forward and putting technologies out there with certain assumptions about how they're going to be used, and you're not taking full account of how bad actors could use them, or how other populations that you're maybe not considering are going to use them as well? - [Keith] Cool. So make this super tangible for us.
What brands out there are taking this really seriously, and what are they doing? - [Sarah] Sure. Well, we're seeing the most mature activity happening in the high-tech and service provider context, right? Of course, those are the companies developing the technology, so they have the most robust approach to ethical development and responsible innovation principles. They're also starting to operationalize those principles in the form of review boards or offices of ethical and humane technology use, right? That's a role that exists at Salesforce, and it sits within the product team because that's where it can have the most impact. So we're seeing a lot of maturity in the tech space, of course, but we're also starting to see this show up in places you wouldn't expect. What does tech ethics have to do with chocolate, right? Nestlé is developing data ethics principles, in part not only to comply with GDPR but to anticipate how data management will progress in the future. They're explicitly saying this is part of their sustainability approach, and that how they deal with data is a future-oriented compliance structure, not just about current-day GDPR compliance. We're also seeing companies like Porsche talk about the brand value of protecting their luxury consumers' data. Porsche is talking about trust in the brand. They really believe that control over your data ties into their brand value of freedom, right? The brand is about people following their dreams and driving around.
So they're introducing not only this set of principles but also features that allow users of the luxury car to go into private mode and not send data back to the cloud about their performance. We don't necessarily think of a car as a technology, but of course all these interfaces and systems are talking back to the home base, right? So what is that relationship, and how does it change? And Porsche is in a position to say, "We will never sell your data for advertising purposes," right? That's an important position to be able to take. - [Keith] I love that example of Porsche 'cause it's slightly unexpected. Although driving a vehicle is a pretty intimate experience, and it's location data and all those things. So every brand will have its unique take on this, clearly, because it has to connect to their values and every product is different. I'm gonna assume that the seniormost tech exec in every company should be paying attention to this, which makes me a little skeptical that they're thinking about the brand.
But, with that in mind, whoever's responsible for this, what are the steps that need to happen to actually develop this ethical tech strategy? - [Sarah] Absolutely. Well, I think we need to take into account that it's not just gonna be a CIO or CTO on their own, for sure, right? Think about who understands risk in the organization and partner with that person. Who understands the legal considerations? You're gonna be partnering with the general counsel or the compliance officers as well, but it needs to be a holistic vision for the company's stance on technology. And I think there are a lot of early things tech execs can do to focus on this. The first is really just to tie company values to technology decisions. Every company probably has a statement about what its values or principles are, and chances are it was developed by an agency or the HR organization. But what if you took those principles and applied them to your technology use? How would that shape your decisions? How would that inform your choices and, frankly, the trade-offs that you're making? Because, to me, ethics is actually not a highfalutin, ivory-tower question.
It's really a question of how you make everyday decisions and trade-offs, right? So you want a values-informed decision-making process about technology, where you can say, "Well, because we value courage, we're gonna do this in a different way," or "Because we value equity as a company, this is how we're going to apply this AI system." I think that's the first step. There are other steps, too. One is gaining visibility into existing technology ethics efforts. All of this stuff is probably happening in small pockets in your organization: you might have folks working on security and privacy by design, and the data analytics folks are probably thinking about ethical data use or ethical AI in their realms of influence. So there's an opportunity for tech execs to figure out what is going on already and coordinate and collaborate on these efforts at a more strategic level, right? How much visibility do you have into those activities? And, lastly, inventory the risks. There are lots of upsides to focusing on this, and one of them is brand value, which I think is a squishier thing to understand; but the downside is really where the near-term driver sits for caring about this, so work with risk and security leaders to understand where the hotspots are.
How does your data science team conduct an impact assessment? Is there a partner whose business model could deplete the trust that you have? How are you deploying low-code tools, and what kind of misappropriation of data might happen as a result of distributing all of this computing power throughout the organization? So I think there's an opportunity to really take a clear look at where the risks are in your organization. - [Keith] Yeah. So I wanna talk more about the payoff. What's the ROI behind this whole thing? I understand what it means for the brand, but what's the real ROI of taking a strategy like this seriously? - [Sarah] Yeah, so I go back to this really interesting data point that we discovered: future fit firms are twice as likely as traditional IT firms to adopt a business code of ethics. Future fit firms are outperforming, and there is a clear correlation that they are also thinking about ethics.
To be fair, only 26% of them are thinking about ethics, so there is this gap: yes, this is an indicator of higher performance, and yet it's still a huge opportunity to focus attention there. That's the clearest-cut finding, an actual differentiation we can measure. Even the tech companies that are starting on their journeys and operationalizing some of these efforts are still not in a position to tie clear metrics to the work; they're in the process of developing what those real metrics are. How do you measure the brand value impact of this? How do you tie ethics work to upselling in sales, or to ethical deployment, given that you've got all these principles for how to use AI, right? Salesforce talked a lot about how they're starting to see that connection in product development. One of the things they talked about was that developers are motivated by writing new code.
They don't like to go back and fix bugs, so, if we can tie responsible and ethical tech processes to a reduction in bugs, then we have a clear indication for that stakeholder of why this is valuable. So I think we're very much in the stage of feeling our way around, trying to pin down how to monetize or how to evaluate the impact of these efforts. But there are lots of metrics we can start to think about. - [Keith] All right, fantastic, Sarah. So let's make the call here, because you basically called out technology and said, "Technology has a trust issue." What do tech execs or industry players need to do to regain trust? - [Sarah] I think that technology leaders and the technology industry, and even the technologies themselves, can't afford to ignore the ethical debt they've accrued. We are in this moment where we have to reevaluate what the impacts of technology are on a bunch of different stakeholders, and that is the only path forward to regaining trust.
- [Keith] Well, that sounds like we got a lot of work to do. - For sure. - Thank you for all this. This is a really interesting topic. I'm sure it's not the last time we'll talk about it. I appreciate it. - Thanks, Keith.
- [Announcer] If you like what you heard today, be sure to check out Sarah's session at the upcoming Technology and Innovation North America event happening live in Austin, Texas, and virtually September 29th and 30th. To learn more, visit forr.com/ti22. That's F-O-R-R.com/ti22. Thanks for listening.