The Intersection of Risk and Technology at Microsoft and Beyond
- [Matt] So like Ashish mentioned, my name's Matt Sekol. What a worldwide sustainability industry advocate really does is, it's a sales role. But I'm not here to sell you on anything, but I will be reading my notes because I am in sales and I'm not a practiced, like, academic speaker. I did come into the space from the perspective of financial services. I talk to a range of companies across industries about their environmental sustainability challenges, mostly, but I'm also really passionate about the whole breadth of ESG. I also am on the LP advisory committee of Morgan Stanley's Next Level Fund, representing Microsoft Treasury.
That fund invests in female- and diverse-led startups. So I'd like to think that I'm a little bit intersectional with this from a practitioner perspective as well. Now, like Carolyn said, ESG and CSR are not the same thing. I wish I had seen her slides before I got here, but this is how I like to frame the conversation. And it's very much taken from the perspective of MSCI, which is one of the world's largest ESG data aggregators.
ESG is really the world's impact on the company, representing the risks and opportunities presented by the pressures that companies are feeling across stakeholders. Whereas sustainability, DEI, and CSR, they're really becoming, as on Carolyn's slides, licenses to operate, but they're not quite the same thing. They're more the company's impact on the world.
So there are no global standards necessarily for what ESG means, but we're talking about material risks and opportunities. For example, a manufacturer has good environmental health and safety practices that keep their employees safe and keep fines and regulators off their backs. It's not necessarily philanthropic efforts, but those are obviously really great and important to do. And sometimes there can be an intersection between philanthropy and materiality. So for example, here in the front row is Theodore [Inaudible], and on her podcast for vision I heard somebody from Dell talk about a program they have to train high schoolers to be their IT help desk.
Now this not only gives somebody, maybe in a marginalized community, the potential to have a career skill, but it also gives Dell an advocate for when that person gets into the market. So there's a material intersection. Now, also to Carolyn's point, what we're seeing with sustainability, DEI, and CSR right now are those big focuses on things like scope three emissions, which come from things like your supply chain, and forced labor issues.
And it's really coming out of what I saw, what I think accelerated around COP26 last year: this intense focus on climate change and the social impact of it. And so we have two things here that are close but not quite the same. And at the intersection is the company. I find in talking with companies, to Carolyn's point again, that it doesn't much matter how they're labeled, and that each one can influence the other. As companies learn about CSR efforts, they understand their risks and opportunities better, and as they try to address a risk, they also might find that they can figure out how to lower their carbon footprint, let's say, better. But this is pretty complex.
Now let's make it even more complex by adding technology into ESG. And I hope you guys can see that; it's a little bit washed out with those colors. Across the three pillars of ESG, there's a mix of quantitative and qualitative data to assess, and it can get pretty complex for boards and even more so for investors, because they need to scale those insights across entire industries and portfolios. Now at the most basic level, ESG and CSR activities happen with data, which means it's largely technology driven. But data can do so much more than disclose, which is what I was telling you about this morning. And it presents opportunities to help solve these ESG issues.
For example, when the heat dome parked over California this past summer and the grid was about to go down, the government decided to send a text out to consumers to say, "Hey, shut things off, otherwise power's gonna go down and we can't do anything." And within five minutes of that text, a very low-tech technology by today's standards, the grid was relieved. Consumers got the message, and it reduced the risk of that environmental issue and potentially what would've been a social issue if hospitals lost power, things like that. So we have ways that technology can solve these ESG issues. But turning to that plus-T part, so going horizontally, in 2020, Dr. Andrea Bonime-Blanc released a great book
that I love, called Gloom to Boom. In the book she covers the three pillars of ESG, but she adds a fourth pillar for technology, with its own risks and opportunities. And she argues for technology in a section of her book like this: "The utterly disruptive and revolutionary impact that technology of all kinds is having and will continue to have on everything in the world today invites the question of why technology hasn't been properly included in ESG discussion before." Now, I am a longtime technologist. I've talked to Dr. Bonime-Blanc, and
the answer to me is kind of obvious. In 2004, when the UN Global Compact first coined the term ESG, I was in IT, and technology was really still getting its legs in the business. There was a point just 10 years earlier where companies were debating whether or not they needed domain names and email addresses. And just a few years after that, that website and email domain became a must and a no-brainer. It's hard to imagine in today's world. And keep in mind that 2004, when ESG was first coined, was three years before the first iPhone, when technology really became ubiquitous in our lives.
Now, Satya Nadella, who, if you don't know, is the CEO of Microsoft, said that every company is a software company, and he's right. Technology underpins modern business in the same way that sustainability, DEI, and CSR are table stakes. Now here's the thing. By 2020, 90% of the S&P 500's market value was in intangibles. This largely happens with technology. So of course it would represent a risk and an opportunity.
It's that fundamental to business. And like any ESG issue, what I find is it's best to consider it alongside the other pillars. All right, so, let's talk about how Microsoft does ESG as a big tech firm. Now what I like to say is Microsoft is the biggest and best company at ESG and we never talk about it, and I don't know why that is. So I built this slide. This is not a Microsoft slide. I built it in order to try to rationalize all the work we do, because what's weird about Microsoft is we're really good at these things, but we don't have a centralized CSR or ESG function. It all reports up to our president, Brad Smith.
But there is not one person in charge of all the reporting. We have an accessibility group, a DEI group, we have human resources, we have an environmental science group, and we have my team that's in sales for, like, the sustainability stuff. But let's take a look at some of the stuff that we're doing. So starting with the environment, the first four, or I don't know what we call those five areas there: we made big moonshot commitments in 2020 to be carbon negative, water positive, and zero waste by 2030, while protecting more land than we use by 2025, and building a planetary computer so that companies can have democratized access to large planetary data sets in order to make more informed decisions about their operations.
We also launched something called the Microsoft Sustainability Manager, which is a first-party tool to help companies measure and hopefully lower their carbon emissions, and we're adding in water and waste soon. Now this, this was a huge commitment, and I'll talk about it a little bit more on the next slide. And actually, I heard from our chief environmental officer that this was the talk of Davos in 2020. Apparently everybody was coming up to him like, "What did you guys just do?" Now social, when it comes to social, which are the next three, there's a lot of work we're doing, but let's start with human capital and talent.
We've committed to addressing racial injustice with a $150 million investment to strengthen inclusion and double the number of Black and African American, Hispanic, and Latinx people managers, senior individual contributors, and senior leaders by 2025. Materially, though, this diversity brings in diverse thought and better matches our customers. Now we're also focused on things like pay equity with racial and ethnic minorities and women, with these groups, in and outside the US, having slightly higher salaries than their white male counterparts.
And you can look that up in our diversity, equity, and inclusion report. Now, when it comes to inclusive growth and fundamental rights, we do a lot of philanthropic work here to enable communities all over the world, but we also have a material intersection. So over the past few years, Microsoft has done things like announce AI and cybersecurity skilling efforts to help people pivot their careers where we saw a talent gap that was material to us, but also give them an opportunity. And just three weeks ago now, our president Brad Smith also announced a similar skills initiative focused on sustainability. To Carolyn's point, that's a huge talent gap at the moment.
But we also lead with empathy and inclusion and materiality through accessibility in our software and hardware development, including things like real-time translation and accessibility checkers in Office tools like PowerPoint. I don't know if you saw the Xbox Adaptive Controller and the new Surface accessories for accessibility on Surface devices and things like that. We're also helping close the digital divide with something under fundamental rights called Airband, which uses TV white spaces to deliver internet to rural, and now urban, areas where they can't afford the internet. But to be honest, getting those people online and closing the digital divide is a material issue for a cloud company. And so we kind of take what we know best, which is technology, and apply it in a material but also, in some cases, a little bit philanthropic way. Now regarding governance, this one's a little bit difficult.
Rather than look at things like board transparency or executive comp, which, I've heard we tie our executive comp to ESG, but I can't find exactly how we do it yet, so there's not a lot of transparency there, I'd rather talk about how we talk about it to customers. That means addressing material risks through something called digital acceleration, which is really a modernization effort around the core systems that run your business, so that people at the board level and senior leaders can get the data they need to make more informed decisions. Our robust cloud and our AI capabilities very much play a role. Now, we also lead with the consideration of technology as an ESG risk. A couple years ago our president, Brad Smith, wrote a book called "Tools and Weapons." He does not mention ESG in this book.
As I started thinking about it though, I was like, "He's talking about ESG." It's tools and weapons, risks and opportunities, basically. And all through the book, it's kind of a history of Microsoft addressing these issues with this [Inaudible], and that very much permeates our culture. And lastly, Microsoft is making a $20 billion investment in our own cybersecurity capabilities, monitoring attacks from a wide range of bad actors.
You may see in the news, we've been talking about Ukraine the last couple months, but we try to publish that information to be as transparent as possible and help governments and corporates react. Now, we often get asked, "How the heck does this work at Microsoft? 'Cause it's like 160,000 people." But if you took any of us off the street and showed them that last slide, even though I had to make it up, they should at least be able to tell you what we're doing.
We have accessibility training, we have ethics training, we have diversity and different employee resource groups. They should understand what we're doing from an ESG perspective, again, even though we don't talk about it, but very much it comes by setting the tone from the top. When we made our 2020 environmental moonshot commitment announcement, for example, it was not only our chief environmental officer, but Satya, our president, and our CFO, Amy Hood, all on stage with him making that announcement and that commitment. Now we also make sure that our ambitions are grounded in science. There's a science team under our chief environmental officer, who I keep mentioning, separate from my team, which is sales. We have great environmental science research happening across Microsoft Research.
And all the targets that we set are science-based targets, aligned with the Science Based Targets initiative. We also make sure that our strategy scales to achieve those ambitions by using the whole of the business. And this means that we're trying to integrate these things into our products. So for example, we talk about how, when you move to Azure, it's a sustainable cloud, not only because of things like our renewable energy contracts, but because of the way that we built it operationally in order to gain as much energy efficiency out of that component equipment as possible, working with the manufacturers and things like that.
But we also do it across social issues. For example, when you use Office tools, there are signals created that can help your leaders understand how employee resource groups can work better together. We make this commitment relevant to our business groups. There's an economic incentive for business groups to deliver on our environmental sustainability commitments.
We actually have an internal carbon fee that's assessed at the business group level and is reviewed, I think, biannually with each business group. And we hold everybody accountable with the governance structure, again, visiting them biannually with an [Inaudible] structure in order to figure that out. Now, about accountability, though: in our last sustainability report, which, if you read it, and I don't encourage you to since it's 119 pages, we actually had a problem this past year. Because of COVID, our revenue went up, which isn't a problem. But the problem is, because our revenue went up, our carbon emissions went up. More Xboxes, more Surfaces, more cloud usage means that all of our emissions went up.
And, you know, we just made these commitments in 2020. It was the first time really that we saw that there's not a linear progression towards those 2030 goals. And instead of kind of fudging it or something, we said in the report, "Look, this isn't gonna be a linear journey, and this is gonna be really hard, and so we have to buckle down and figure out how we're gonna do this."
And so they said that they've added scope three emissions to the carbon fee for business units. They've also somehow tied the carbon emissions to executive comp. Now, we also need technologies that don't exist today, in the top right there. And so, also in 2020, we announced something called the Climate Innovation Fund, which is a $1 billion fund meant to invest in startups that are largely trying to close the carbon removal gap.
There's a little bit of water and waste support that goes on there as well. We're just about halfway through that fund; $571 million has been allocated already. But we can't meet our goals, we've realized, unless those technologies come to market. 'Cause we have to start removing carbon if we're gonna get to net zero, especially if we keep growing and emitting more carbon. Now, let's get into some examples of technology as an ESG risk.
Now, I have to stress this: these are not examples that Microsoft handed me, but they're examples that are public. You can look them up. I encourage you to read about 'em.
I think they're pretty interesting. Now, per a 2022 Edelman survey, businesses are trusted over governments in 23 out of the 28 markets surveyed, and 81% of respondents say CEOs should be personally visible when discussing public policy with external stakeholders or in explaining the work that their company has done to benefit society. And Carolyn brought this up earlier; it was on one of her slides. Companies basically are at the top of the trust pyramid right now. And employees and other stakeholders are looking to the private sector to say, "Hey, you need to step up and do something."
And some are doing it as a competitive advantage. Now take that and then layer on an impactful technology like artificial intelligence. Capgemini had a report on why addressing ethical questions in AI will benefit organizations, and it shows how ignoring ethical issues can break trust. From the report we can see that companies who approach AI ethically have a 44-point Net Promoter Score advantage over those that don't. And the chart on the right reflects the negative consumer sentiment when consumers feel like their trust was broken because an AI model wasn't treating them fairly.
And it ranges from things like complaining to the company, at the top, all the way to litigation, at the bottom. Now complicating this is the explainability of the algorithms, which is one of the key factors holding businesses back from implementing AI. In fact, fear of breaking stakeholder trust is at the core of slow AI adoption and a risk in itself.
If AI systems are opaque, it can't be explained how they work or how their results are produced. This lack of transparency undermines trust with very important stakeholders. So let's take a look at two examples here. First, an algorithm widely used in US hospitals to allocate healthcare to patients was systematically discriminating against black people.
The study, published in Science in 2019, concluded that the algorithm was less likely to refer black people than equally sick white people to personalized care programs that help improve care for patients with complex medical conditions. Now, hospitals and insurers use this program to manage care for about 200 million people in the United States every year. So what happened: the study ran routine statistical checks against data from a large hospital and showed that people who self-identified as black were generally assigned lower risk scores than equally sick white people. The algorithm assigned the risk scores to patients based on the total care costs accrued in one year, which seems like a reasonable assumption. You would expect that somebody who is more sick would have higher healthcare costs than somebody who's less sick.
But here's what happened. The average black person in the data set that the scientists used had similar overall healthcare costs to the average white person. But researchers found that even with similar costs, the average black person was substantially sicker than the average white person, with a greater prevalence of conditions like diabetes, anemia, kidney failure, and high blood pressure. The bias came in because it was a lack of access to healthcare that made the costs the same. And remember, the algorithm assigned people to high-risk categories based on the costs.
And so the resulting bias was that black people had to be sicker than their white counterparts before being referred for additional help. Only 17.7% of patients that the algorithm assigned to receive extra care were black. And the researchers found that it would be up to 46.5%
if that algorithm was unbiased. Now, AI's opportunity here was to identify a way to serve patients better with more personalized care, but it ultimately ended up as a risk because it augmented the bias that was already there. Now, since this case was from 2019, and I keep forgetting to switch both my screens, I decided to look up the company that did this, which is Optum. When I checked last year, Optum had a website for the responsible use of artificial intelligence and analytic algorithms. They had a video and an outline of the steps they took to reduce the bias, which included things like establishing a culture of responsible use through purpose, stating, "We will use AI to advance our mission to help people live healthier lives and make the healthcare system work better for everyone.
We affirm our commitment to be thoughtful, transparent, and accountable in our development and use of AI models." They were working to embed fairness testing in the model development process, developing a diverse workforce, which I can't stress enough in a situation like this, and researching the root causes of health inequities.
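As a rough sketch of what embedding fairness testing in model development could look like, here's a minimal check in the spirit of the Science study: compare how sick each group is at the same risk score. The data and column names are hypothetical, not Optum's actual process.

```python
import pandas as pd

def fairness_check(df: pd.DataFrame) -> pd.DataFrame:
    """Average health burden per group within each risk-score band.

    If one group is consistently sicker at the same score, the score is
    a biased proxy for health need, as cost turned out to be here.
    """
    bands = pd.qcut(df["risk_score"], q=4, duplicates="drop")
    return (
        df.assign(score_band=bands)
          .groupby(["score_band", "group"], observed=True)["chronic_conditions"]
          .mean()
          .unstack()
    )

# Hypothetical data: at the same risk score, group B carries more conditions.
patients = pd.DataFrame({
    "risk_score": [0.1, 0.1, 0.4, 0.4, 0.7, 0.7, 0.9, 0.9] * 5,
    "group": ["A", "B"] * 20,
    "chronic_conditions": [1, 2, 2, 3, 3, 5, 4, 7] * 5,
})
print(fairness_check(patients))
```

A table like that, rerun on every candidate model, would have surfaced the cost-proxy problem before deployment rather than after.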
And then the site went on to make a very surprising admission, which is, "No solution will eliminate the risk of healthcare AI bias. However, we can use human experience and insights to minimize bias and be prepared to respond to what happens." Now, I checked this as I was [Inaudible] again, because I checked this like last year or the year before, and the site is gone, which is wildly disappointing 'cause it had a lot of really great information on it, and I don't know why that is.
Now they have kind of a very generic AI ethics page. I kind of think they made a mistake. I really wish they would've kept it up, and I'm hoping that they learned their lesson with that incident.
Now here's another example that you might be a little bit more familiar with, coincidentally also from 2019. So 2019 was a bad year for AI, I guess. Somebody named David Heinemeier Hansson vented on Twitter that even though his spouse had a better credit score and other financial factors in her favor, her application for a credit line increase on a new Apple Card was declined. And this led others to chime in on Twitter, as people do on Twitter, that their experience was the same. As it turns out, an algorithm was to blame. The algorithm apparently discriminated against women, and it was quickly corrected, but it was still a black eye on what should have been an amazing digital payments launch. Here, AI was used, as it often is, to assess something like creditworthiness at scale.
But it turned into a reputational risk over social media really, really quickly. Now, what ended up happening was there was an investigation by the New York State Department of Financial Services. They didn't find any fair lending violation, but I really want you to listen closely to what the report found.
I think this one small paragraph is just so indicative of, I mean, move fast and break things. "In the rush to roll out Apple Card, the bank seemed unprepared for the possibility that its complex underwriting model, which allowed applicants to begin using their Apple Cards right away, would produce outcomes that might surprise applicants. These issues were compounded by internal deadlines and pressure on the bank to roll out the Apple Card by a particular date. In the end, a more nimble policy for credit term review at the time of the Apple Card introduction would've provided consumers with a greater sense of their treatment." They went on to further explain in the report that the bank used a black box algorithm that produced unexplainable outcomes, which is called [Inaudible].
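To make "explainable" concrete, here's a hedged sketch of the kind of reason-code approach often favored for credit decisions: a simple linear model whose per-feature contributions answer "why was I declined?" directly. The features, data, and model are made up for illustration; this is not the bank's actual underwriting model.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical features for a credit-line decision (illustrative only).
features = ["credit_score", "income", "utilization", "recent_inquiries"]
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4))
# Toy approvals: helped by score and income, hurt by utilization.
y = (X[:, 0] + 0.5 * X[:, 1] - 0.8 * X[:, 2]
     + rng.normal(scale=0.5, size=500)) > 0

model = LogisticRegression().fit(X, y)

def reason_codes(applicant, top=2):
    """Per-feature contributions to this applicant's decision.

    With a linear model, the features whose contributions pulled the
    score down the most become the stated reasons for a decline.
    """
    contributions = model.coef_[0] * applicant
    worst_first = np.argsort(contributions)  # most negative first
    return [(features[i], round(contributions[i], 2)) for i in worst_first[:top]]

# A hypothetical declined applicant: low score, high utilization.
print(reason_codes(np.array([-1.2, 0.3, 1.5, 0.1])))
```

A black box can still be wrapped with post-hoc explanations, but a model that is explainable by construction makes surprising outcomes much easier to catch and justify.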
So clearly, taking a well-governed approach to rolling out such a powerful technology can help even with brand reputation and customer satisfaction. So again, technology is really emerging as a risk. And one area, again, I keep forgetting, is cybersecurity. According to IBM's 2021 data breach report, there's an average cost of $4.24 million per breach.
It behooves organizations to spend the money on prevention rather than on recovery from a breach. And the costs from a breach are high because organizations seldom spend the money on prevention. And this is happening all over the place.
Earlier, I repeated what Satya said: every company is a software company. That means that you have to protect those assets, and cybersecurity is where that happens. My background is largely in IT.
Most of my time was spent in IT. And I can tell you that cybersecurity was largely looked at as an insurance policy. Executives would not invest in it. They would rely on the cybersecurity team's expertise more to react to events than to actually be proactive and protect their employees and their IP.
And the IP especially is wildly important. This really started to shift in 2015 with the Anthem healthcare data hack, which affected nearly 80 million people, including my son, who was six at the time and who got lifelong identity protection from Anthem. Anthem ended up settling civil lawsuits about that breach for $115 million. I mean, that is a material issue, and it's still a mixed bag as to whether or not cybersecurity is given the attention it deserves.
But it launched this whole thing. You've probably heard the expression, "Nobody wants to end up on the front page of the Wall Street Journal." That is a cybersecurity expression, and it's so true. So here's our next [Inaudible]. After receiving a ransom note, Colonial Pipeline shut down the entirety of its gasoline pipeline for the first time in its 57-year history.
The hack was done on an account over VPN with no multifactor authentication, which, I'm assuming you know what that is: it's a second form factor that you have on all your accounts, I'm hoping, because if there's one thing you take away, I hope it's turning on multifactor authentication. Colonial ended up paying the hackers, a Russia-linked cybercrime group known as DarkSide, over $4.4 million, pretty darn close to that IBM number.
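For a sense of what that second factor is mechanically, here's a minimal sketch of a time-based one-time password (TOTP) check using the pyotp library; the account and issuer names are hypothetical.

```python
import pyotp

# Enrollment: the server generates a shared secret and the user saves it
# in an authenticator app, usually by scanning a QR code built from this URI.
secret = pyotp.random_base32()
uri = pyotp.TOTP(secret).provisioning_uri(name="user@example.com",
                                          issuer_name="ExampleCorp")
print(uri)

# Login: after the password check passes, require the current 6-digit code.
totp = pyotp.TOTP(secret)
code = totp.now()          # what the user's authenticator app shows right now
print(totp.verify(code))   # True: the second factor passes
```

A stolen VPN password alone fails that second check, which is exactly the gap the Colonial attackers walked through.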
The hackers stole about a hundred gigabytes of data from Colonial Pipeline and threatened to leak it if the ransom wasn't paid. This was just last year, I think. The hack impacted prices and the availability of gas for millions of consumers in the Northeast US. But of note, Colonial Pipeline is actually privately held. Something that we often see is that public companies are under a lot more scrutiny to address things like ESG issues and also technology risks.
But Colonial Pipeline wasn't really in the spotlight with any stakeholders. It has a range of private equity investors and private investors. I got the impression in investigating this that they weren't really under a lot of stakeholder pressure. Now, since then, they've of course done the reactive thing and engaged experts to secure their systems against future attacks.
Now, just days after the attack, politicians were clamoring for new regulatory oversight of utilities and pipelines. And by the end of May 2021, the Biden administration asked pipeline providers to take several actions or face a $7,000-a-day fine until they had. $7,000 isn't a lot of money, but it was still something. And here's what they had to do.
They had to report confirmed and potential cybersecurity incidents to the Cybersecurity and Infrastructure Security Agency, or CISA. They had to designate a cybersecurity coordinator to be available 24 hours a day, seven days a week. If you are a student, do not go into cybersecurity. They had to review their current practices, and they had to identify any gaps and related remediation measures to address cybersecurity-related risks and report the results not only to CISA but also to the TSA within 30 days.
So because of a lack of regulations telling utilities and pipeline managers to have good cybersecurity practices, one pipeline ended up influencing an entire industry. So not only can risk come from within your own company, but it can come from other players within your industry as well. All right, now let's talk about mobilizing for action, 'cause those were really just some examples, and I talked about Microsoft's earlier, but there's actually a lot of pretty interesting stuff going on out there right now.
So one of the things that Carolyn mentioned this morning, if you saw it, was that there are a lot of data ratings agencies emerging. Largely, they're evaluating public companies for their ESG efforts. But something that I've started seeing lately, in the last two years mostly, has been the emergence of companies looking at technology ratings. So I highlighted three here to be aware of. The first two, Bitsight and SecurityScorecard, measure companies' cybersecurity footprint.
Now, Bitsight collects 250 billion points in signals from externally observable events, meaning largely the internet; they do not look at internal company practices. So everything is outside the company. Now, they claim to have proven outside validation of their ratings, demonstrating a correlation between financial performance and data breach risk and cybersecurity incidents. That claim is backed up by a series of indices created by a group called Solactive that's backtesting Bitsight scores against others and finding a slight outperformance over time. You can look that up on the Bitsight website.
It's pretty heavily promoted there. SecurityScorecard is a similar ratings agency, pulling, similarly, those external signals, but also layering on other data sets that they can get access to, like, for example, which technology vendors a company uses, because that can also have something to do with cybersecurity.
I came across SecurityScorecard because I was talking to a customer about a first-party sustainability tool and they said, "What is your SecurityScorecard score?" And I said, "What is that?" I had no idea what that even was. I know what it is now, but I don't know what our score is, 'cause you have to pay, you know. But what I wanted to impress upon you is that cybersecurity is entering the conversation. I'm starting to see it more when we get RFPs especially. I mean, I was talking about a sustainability solution, not a financial solution.
And they were still asking about cybersecurity. I'm seeing it pop up on RFPs and all kinds of things. Now lastly, on the bottom there is a company called Ethics, whose CEO over in the UK I'm lucky enough to be connected with. They started out by measuring responsible AI practices and talking to companies. They would call 'em up and say, "Hey,
what are you doing from an AI perspective?" And they've since evolved a little bit into something called corporate digital responsibility, which they define as a set of best practices and standards which guide companies to use emerging technologies and user data in ethical ways. And then they go further on their website to articulate how ignoring this can present risks, reputational damage, and all things ESG. Now they're collecting a lot of data; they're no longer just doing interviews. It starts with publicly available data from websites, from news sources, AI audits, annual reports, press releases. From there, they have a survey that companies can complete and an internal assessment on systems, people, and processes.
And then they analyze it against the pillars in a proprietary algorithm. They're probably the closest on this slide to what I would say most investors see from an ESG perspective, like what you'd find from an S&P Global or an MSCI. Now here's, oh, here's a couple of others. I gotta figure out a better way to [Inaudible]. Now, Microsoft does a lot of work in this space, like I said, and we explore risks largely through educating customers and publicly through our skilling initiatives.
Here are some other things that I wanted to call out. The same month as the Colonial Pipeline attack, President Biden signed an executive order on improving the nation's cybersecurity. I am not an expert in this regulation, but it sounds like it covers, for cybersecurity, a lot of what happened post-9/11: information being shared across federal agencies, as well as enhancing supply chain security here in the US. And then for companies, earlier in the year, there were two SEC regulations, oh, I'm sorry, there's one SEC regulation on cybersecurity risk management.
So this comes right on the heels of the SEC's climate risk proposals, as well as the climate fund proposals that they have. So they're clearly kind of thinking, I mean, it has the word risk in it. They're kind of thinking about it as ESG.
I haven't seen an actual proposal yet; just the draft is out there now. And then lastly, on the bottom there, the Green Software Foundation. You know, a lot of the issues that I've been talking about are intersections of the T with either the E, S, or G.
But running artificial intelligence workloads takes a lot of compute and a lot of data. You have to train the models on large data sets, and that training takes compute time.
And so there's an environmental aspect to this as well. And any kind of software that's developed at scale may have that environmental impact. And so Microsoft and the Linux Foundation, with a whole bunch of partners, co-founded something called the Green Software Foundation, which has published a spec for developers called the Software Carbon Intensity spec. And what it does is it helps developers look at their code and understand how efficiently it will run so that they can hopefully lower their energy use and carbon emissions.
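For reference, the published SCI formula is SCI = ((E * I) + M) per R: energy consumed, grid carbon intensity, embodied hardware emissions, and a functional unit of the software's work. Here's a minimal sketch of that calculation; the function name and the numbers are illustrative, not from the spec.

```python
def software_carbon_intensity(e_kwh: float, i_gco2e_per_kwh: float,
                              m_gco2e: float, r: float) -> float:
    """SCI = ((E * I) + M) per R, per the Green Software Foundation spec.

    E: energy consumed by the software (kWh)
    I: carbon intensity of the grid powering it (gCO2e/kWh)
    M: embodied hardware emissions amortized to this workload (gCO2e)
    R: the functional unit, e.g. API calls served
    """
    return (e_kwh * i_gco2e_per_kwh + m_gco2e) / r

# Illustrative numbers only: 2 kWh on a 400 gCO2e/kWh grid, plus 50 g of
# embodied emissions, serving 10,000 requests -> gCO2e per request.
print(software_carbon_intensity(2.0, 400.0, 50.0, 10_000))  # 0.085
```

The point of dividing by R is that the score stays honest as you scale: serving twice the requests on the same energy halves the per-request carbon.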
And then just along the right there, I came across this other group called, gosh, Responsible Innovation Labs. Now, they're very much focused on running infrastructure, so very much traditional IT, in an efficient way. And they just published what I think is their first report. Of the 1,500 people they surveyed, 80% of respondents think that practices like responsible innovation should represent the future of the tech industry. So ethics and responsible innovation are really starting to make their way into even the most traditional kind of IT staff.
Now, one group that I didn't put up on the slide, but realized I should have after I had submitted it, was the Montreal AI Ethics Institute, or MAIEI. They compile these massive reports on AI ethics, and if you read them with an eye for ESG, you will see ESG all over those reports. All right, and lastly, I was lucky enough in the spring to work on a day-long session with UC Berkeley on digital harm.
It was the Berkeley Center for Long-Term Cybersecurity. And one of the insights out of that day, which was published [Inaudible], was that leading firms recognize that digital initiatives provide opportunities for positive social impact and reputation building, even as digital harm may add reputational risk. So, Dr. Jordan Famularo is running a new study, if you're interested or if you know somebody in the cybersecurity industry. It's an interview study that's anonymous, they're currently recruiting people, and you can feel free to email her at that address. The results of the study, again, will be anonymous, but they will be shared with business, civil society, and academic communities, as well as the broader public.
So, I'm gonna leave this slide up so that you can have her contact information. But I hope I kind of helped pivot, maybe change your mind a little bit about something that I see as often overlooked, which is: we take technology so for granted, it's so ubiquitous in our lives, but it absolutely has risks and opportunities across the E, S, and G. Whether we're talking about the E and the energy intensity of cloud and AI workloads, whether it's the S and AI bias and ethics considerations, or trying to help connect communities maybe, or whether it's the G and you have kind of an out-of-control board who wants to get a product to market really quickly and is pressuring somebody to make hard decisions, there's definitely an intersection with E, S, and G. I kind of feel like it's fundamental enough that it can be its own letter, but every industry tends to do that with its own letter at the end. So at this point, I think that's all I really had, but I'm happy to take questions.
- [Man 1] I agree with you. We're thinking along the same lines, that T, or technology, or something like that, [Inaudible] is important enough that it should be included as its own letter in ESG, or ESGT. - [Matt] Right. - [Man 1] The theory is that would incorporate it as a separate measurement.
So how you measure it is a good question. Scientists always want to ask: what do you measure, and how would you measure it? Any suggestions on how you would measure the T? - [Matt] So, you wanna measure the T here. The question would be how to measure the T without measuring it from an ESG perspective, and I don't know that you could. Because if I think about it from a carbon perspective, you know, as a cloud provider, Microsoft surfaces to customers the carbon emissions from what they use. So if you run on Azure, we show you what your carbon was for the month so you can understand it better. From an AI model training perspective, there's a lot of work that our researchers are doing.
There's a gentleman by the name of William Cannon, who works at Microsoft, who's trying to figure out how to sacrifice a little AI accuracy while saving tons of carbon. In other words, is it worth getting, like, 96% accuracy in order to save 50% carbon? Probably, but I know that measuring the T is tough 'cause it's so embedded in everything else, which is why I think it's not its own measure, to be honest. (laughs) But, yeah, I don't know. It's a great question. Oh, one of the things that I would say is I have talked to investors who have considered things like data breaches and cybersecurity incidents. Not all companies publish that, though, because it's not regulated. So for example, Microsoft, we do tend to be more transparent, but it takes us a little bit of time.
And this particular investor was complaining to me that she won't invest in Microsoft. So data breaches and cybersecurity incidents might be one way. It could also be a little bit more qualitative. Like, one of the other things that we do is we don't invest in facial recognition technologies because of the risk. It's a little bit less number-y, and to an investor who might be like, "well, I don't know who you might be selling to," there's all sorts of bias. But again, that's the S, so yeah, it's hard. It's a good question.
You have a question? - [Man 2] Yeah. So I think when you showed that sort of rainbow of Microsoft actions, for example, one of the issues, I mean, one of the great things about ESG is that it's very broad, but that's one of the problems as well. So it seems that your chart there was more about making sense of what you're doing and trying to categorize [Inaudible]. Is there any direction coming from the corporate level on priorities? Because management is about priorities, right? It's about choices of one versus another. So what is Microsoft's stance on all those elements that you brought up? - [Matt] You know what, Microsoft is a really interesting place, and I know what you mean. I mean, Carolyn, you said it earlier, that some of these programs are being pulled back as the economy goes down and inflation goes up. I haven't seen that at Microsoft yet. Somehow we figure out all those things and keep them running.
What I'd say is that we're very much an engineering-led company, and so we tend to engineer to a degree that, I'm a liberal arts major, it makes me absolutely bananas to see that level of engineering. But those programs are built in such a way that they don't have to stop when things go bad. Not every company, though, is built that way. And not every company can run those programs at a time like this.
I really wish there was a secret recipe for it, but, to be honest, I don't know. It might be a mix of our engineering legacy, but also Satya's leadership. And I joined after Satya, but Microsoft was always in my career. His leading with empathy has made a huge difference in where we prioritize. And when you bring that together with the engineering aspect, I think that's where that "it doesn't stop" comes from.
So having strong leadership, I'd say. Yeah. - [Woman 1] So, I was fascinated by your AI examples and whether there's oversight in companies on the logic for AI. So, like, the cost of healthcare as a proxy for how sick somebody is. Like, I'm not that kind of analyst, and, how did they come up with that as a proxy? - [Matt] Oh gosh. Yeah.
- [Woman 1] So, like, do you know, it just goes back to how it's developed, and do most companies have any kind of oversight that looks at, you know, the basic logic that goes behind any of this stuff? - [Matt] Not typically. So in our AI business school, which is publicly available for everybody to take, there is a responsible AI module. But think about it like this. If you're running any program, and it doesn't have to be technology based, you would probably take a thoughtful approach and design it and think about the ESG factors, 'cause you're a practitioner. But to your earlier point, there's not a lot of this expertise in companies.
And so what's happening is they see the opportunity to capture a market with something like AI, or solve a problem that's been out there forever, or augment what they've been doing with AI. And that's really what it's about. That's where Optum came in, I think.
If you try to build on something that you have, there are probably biases in there that we haven't seen that AI's just gonna surface once you implement it. - [Woman 1] Right. - [Matt] So we really need oversight boards. They need to slowly figure out all the considerations, take an ESG lens to it, and be more confident. But yeah, I don't see it a ton, probably.
- [Man 1] Yeah. Perfect. - [Matt] I'm on LinkedIn, Matthew Sekol, LinkedIn slash [Inaudible]. I have a link here, I'll find it as fast as I can, but I'm always happy to connect. I'm always happy to wax philosophical about ESG, which is [Inaudible] sustainability, my day job, technology is ESG, any of this stuff. So feel free to reach out.