Top Cybersecurity Threats: 2022 RSA Panelists Share Their Thoughts | Intel Technology


- Hi, and welcome to "Cyber Security Inside." I'm Camille Morhardt. Today on the podcast, we're gonna do something a little different, featuring conversations with three panelists from the 2022 RSA Conference. They spoke on a panel called "Hands on Deck: A Whole-of-Society Approach for Cybersecurity."

You're gonna recognize one of the panelists. It's my co-host Tom Garrison. And he was joined by Aanchal Gupta of Microsoft and Dr. Diane Janosek with the National Security Agency.

We'll hear first from Aanchal. - Our guest today is Aanchal Gupta. She's corporate vice president at Microsoft, and she heads Microsoft's Security Response Center. So welcome today, Aanchal.

- Thank you, Tom. - I thought it would be good maybe to start off with just your view on what the top threats are today that require joint collaboration to solve. - There's a litany of threats I could go on about, but I'll start with the one that is top of mind, which is the software supply chain. That's one of the biggest security risks right now. It is not a new risk, but our reliance on third-party and open-source software is increasing exponentially, and it is only a matter of time before we see more supply chain issues.

Log4j and Nobelium are just the tip of the iceberg. And there are two primary reasons, Tom, why supply chain attacks are on this continuous rise. The first one is that our dependence on third-party software is growing, and it is becoming very attractive for threat actors to find the soft spots.

They could easily convince an insider to modify some code in the software supply chain, or they could inject a malicious payload into it. And the second reason is that the usage of certain software is literally like salt in our pantry. When you look at different food items in your pantry and start to read the ingredient lists, you will most likely find salt in there. And if someone were to tell you, hey, the salt is contaminated, you need to do something about the food items in your pantry, it would be immensely difficult. In the same way, for certain software it is really difficult to fix these issues because they are pervasive. That's what made Log4j such a big challenge for the entire community.
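
To make the "salt in the pantry" point concrete, here is a minimal sketch, not Microsoft's or anyone's actual tooling, of the kind of filesystem sweep many teams ran during the Log4j response: walk a directory tree and flag older log4j-core JAR files. The filename pattern and version cutoff are illustrative assumptions.

```python
# A minimal, hypothetical sweep for older log4j-core JARs on a filesystem.
# The "fixed" version threshold below is an assumption for illustration.
import os
import re

VULN_PATTERN = re.compile(r"log4j-core-(\d+)\.(\d+)\.(\d+)\.jar$")
ASSUMED_FIXED = (2, 17, 1)  # treat anything older as needing review

def scan_for_log4j(root):
    """Yield paths of log4j-core JARs older than the assumed fixed version."""
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            match = VULN_PATTERN.search(name)
            if match and tuple(map(int, match.groups())) < ASSUMED_FIXED:
                yield os.path.join(dirpath, name)

if __name__ == "__main__":
    for path in scan_for_log4j("/opt"):  # example root; adjust as needed
        print("Review:", path)
```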

So that's why I feel like supply chain is top of my mind. - So what are some of the joint goals that we can establish to work toward this whole-of-society approach for cybersecurity? - We cannot think about security in a silo. We have to make our ecosystem safer. It cannot be like, what can I do for myself? It has to be a close partnership with our security research community, with the security industry, with government agencies, like all of us coming together.

One example that is, again, top of mind is what we saw with the war in Ukraine, which we published some research around. It is a hybrid attack approach that Russia is using. There are nation-state threat actors conducting intrusions on the software side, and then there is kinetic military action going on on land, air, and sea.

And they're doing this in tandem. With these going on in parallel, Microsoft worked very closely with Ukraine's cybersecurity agencies and their local enterprises, the private sector, to look at the TTPs, the tactics, techniques, and procedures used by these threat actors, and to build a timeline map of how these threats were unfolding. It was really valuable to see that timeline map, and it was also really helpful to have this partnership, because without it we wouldn't have gotten the end-to-end view that we were able to get.

And I think we have to continue to evolve this partnership globally, because that is the only way we can defend against these threats. When it comes to addressing the threats we all face, I believe transparency and partnerships are the key. We have long said that when it comes to cyber attacks, the question is not if but when. At Microsoft, we work with an assume-breach mentality, and we encourage others to do the same. Let's also not penalize people for disclosing a breach of their systems. We need to shift the culture from blame to community support.

When we support organizations in being forthcoming with their experience, we get better insights. We are able to help identify supply chain risks sooner. One example that comes to mind for us is the Nobelium attacks.

When we were seeing this in our network, we were sharing intel and IOCs from our own experience through numerous blogs. And it was very helpful for the larger industry, because they could use this information to see if they were being attacked in their networks. So I believe sharing this information and being forthcoming is very helpful. - Thanks, Aanchal. I think there's so much that we could be talking about today, but these are two really important aspects of what we need to talk about, in terms of supply chain and in terms of how we all come together in the industry to promote more secure platforms.
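
As a rough illustration of how defenders can consume shared intel like this, the sketch below loads a hypothetical CSV of indicators of compromise (hashes, IP addresses) and checks local logs for them. The file names and formats are assumptions, not any vendor's actual feed.

```python
# A minimal sketch: load shared IOCs and report log lines that contain them.
# "shared_iocs.csv" and "proxy.log" are hypothetical file names.
import csv

def load_iocs(path):
    """Load a CSV of indicators with columns: type, value (e.g. ip, sha256)."""
    iocs = set()
    with open(path, newline="") as fh:
        for row in csv.DictReader(fh):
            iocs.add(row["value"].strip().lower())
    return iocs

def scan_log(log_path, iocs):
    """Return (line number, line) pairs that contain any shared indicator."""
    hits = []
    with open(log_path) as fh:
        for lineno, line in enumerate(fh, start=1):
            lowered = line.lower()
            if any(ioc in lowered for ioc in iocs):
                hits.append((lineno, line.rstrip()))
    return hits

if __name__ == "__main__":
    indicators = load_iocs("shared_iocs.csv")
    for lineno, line in scan_log("proxy.log", indicators):
        print(f"line {lineno}: {line}")
```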

So thank you. - Thank you, Tom and Camille. (upbeat music) - Our guest today is Tom Garrison. He is my co-host on the "Cyber Security Inside" podcast. He's also vice president of Client Security Strategy and Initiatives at Intel.

As a part of his role there, he's in charge of the cybersecurity innovation roadmap, and he's also in charge of industry-wide initiatives and security research. I'm fortunate to be able to interview him with Abhilasha Bhargav-Spantzel, who's a partner security architect with Microsoft. Tom, the first question for you: you actually are doing security, but from within a product division at Intel. So I'm curious about your vision, your sense of the world, in terms of what's actually happening with respect to cybersecurity. - From my perspective, what I see is a sort of perfect storm happening around cybersecurity.

One is that the technology of our devices is getting more and more complex. The analogy I like to use with people is to think about, let's say, a 1980s automobile. You could have your buddy down the street, who's a garage mechanic, work on your car, and he could fix a carburetor, whatever needed to happen. That worked well in the 1980s. But if you have a 2022 car today and you open up the hood on one of those things, good luck trying to find anything; the technology is so, so much more complicated.

And the same is true for our platforms, whether it be a client platform, a server platform, and the like. Couple that with the fact that we have devices now being used in ways that have never been envisioned before. Workers are outside the four walls of the company, out with customers and the like, and so they are subject to whole different kinds of attacks. And then the third is that the state of the art in terms of security research and security attacks is significantly higher than it has ever been. The combination of all three of those things means security continues to evolve.

The nature of these attacks tends to be more and more complicated, more and more nuanced, and that, coupled with the way the devices are being used and the complexity of the technology itself, all conspires to make the environment a very challenging one from a security standpoint. - You talked about the complexity and the attack surface that is associated with it. What do we do about trusting a car that has all these bells and whistles, and what steps can one take to build the trust and reliability that we need from these products? - That's a fantastic question. I think it first starts with a realization that we can't have third parties acting on behalf of the device makers. So in the case of, say, Intel silicon, Intel CPUs, or wireless components or whatnot, we shouldn't be satisfied with third parties trying to interface in to say whether a device is safe or not. As an industry, we need to become more confident that you can talk directly with Intel to ask, is this technology working as expected? And the way that Intel and others should do that is through interfaces: we provide interfaces that are secure and trustworthy, which vendors can utilize through APIs or otherwise, to attest whether or not the devices can be trusted.

And that hasn't really existed in a robust way in the past. I think that's the real opportunity: if you're trying to figure out whether a device is safe and the only person you're talking with is somebody trying to determine it themselves, as opposed to deeply partnering with the technology providers, you probably aren't as safe as you could be. You're better off working with somebody who has partnerships that co-developed these solutions to attest whether the device is really trustworthy.
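
As a rough sketch of the kind of challenge-response attestation flow described here, the Python below shows a verifier issuing a nonce and checking signed evidence from a device. Every name and structure is hypothetical; this is not Intel's actual attestation API.

```python
# A hypothetical attestation check: the verifier sends a fresh nonce, the
# device returns signed evidence, and the verifier validates it against an
# expected firmware measurement and the vendor's public key.
import os
import hashlib
from dataclasses import dataclass

@dataclass
class Evidence:
    nonce: bytes            # echoes the verifier's challenge
    firmware_digest: bytes  # measurement reported by the device
    signature: bytes        # signature over (nonce + firmware_digest)

def make_nonce() -> bytes:
    """Fresh randomness so old evidence cannot be replayed."""
    return os.urandom(32)

def verify_evidence(evidence: Evidence, nonce: bytes,
                    expected_digest: bytes, verify_sig) -> bool:
    """verify_sig is a callable backed by the vendor's public key (assumed)."""
    if evidence.nonce != nonce:
        return False  # replayed or stale evidence
    if evidence.firmware_digest != expected_digest:
        return False  # unknown or outdated firmware measurement
    signed_blob = hashlib.sha256(evidence.nonce + evidence.firmware_digest).digest()
    return verify_sig(signed_blob, evidence.signature)
```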

- What are the things that we are doing with respect to innovative solutions that allow us to build that trust across that entire set of partners, or the supply chain? - What we are trying to do at Intel is to take the first step, and that first step is around transparency. We want to peel back the near secrecy that has existed around what components are used to build your device, whether it's a PC, a server, or an IoT device. And we think with that transparency comes a level of intelligence you can have: do I know what state those devices are in? Do I know what firmware versions they're running? Do I know that all the appropriate patches have been applied for those devices? And furthermore, you can make other choices, like whether you really want devices to come from parts of the world you're concerned about.

And so all of that comes together and empowers customers to make intelligent decisions around their platform: the state of the device, is it trustworthy or not? How do I manage it over time? Am I smart about updates? Do I have a process for updating these machines on a regular basis? Those are all things that come with transparency, and we think that's a great, healthy first step. - What can users do to make themselves safer as we continue to enjoy the innovation and the various solutions that are out there? - The most important thing a customer or an end user can do is ask themselves the following question: do I know whether my machine has been fully updated against all the known vulnerabilities for my platform? If you know the answer to that question, then that is a very healthy first step.

Now, by itself, it's not the end-all, be-all answer to everything, because as we all know in the industry, this is a very, very complicated problem. But while it doesn't fix everything, it is a very, very good first step to protect yourself. Can you know for sure that your machine has been updated properly for all the known vulnerabilities that exist for that platform? I would suggest that if you don't know the answer, then the answer is probably no, you're probably not safe, and you have a very solid next step to embark on.
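
As a minimal sketch of that question expressed in code, the snippet below compares installed component versions against an advisory minimum. Both version tables are hypothetical placeholders, not real advisory data or any real inventory tool.

```python
# A hypothetical "am I fully updated?" check: compare what the machine
# reports against the minimum versions called out in published advisories.
installed = {                  # what the machine reports (hypothetical)
    "bios": (1, 14, 0),
    "wifi_driver": (22, 80, 1),
}

minimum_fixed = {              # per published advisories (hypothetical)
    "bios": (1, 15, 2),
    "wifi_driver": (22, 80, 1),
}

def outdated_components(installed, minimum_fixed):
    """Return components running versions below the advisory minimum."""
    return [name for name, version in installed.items()
            if version < minimum_fixed.get(name, version)]

if __name__ == "__main__":
    stale = outdated_components(installed, minimum_fixed)
    if stale:
        print("Not fully updated; review:", ", ".join(stale))
    else:
        print("All tracked components meet the advisory minimums.")
```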

- So what are some of the things that you do, besides checking that you're accounting for known vulnerabilities when incorporating any kind of components or software, or your own learnings? What other kinds of things should world-class product divisions be doing? - Lots of steps, as you can imagine. The first would be how much you are investing yourself, as a product division, in security research: not just relying on external researchers to find vulnerabilities in your platform, but, with a deep understanding of your own product, your own architecture, and the trade-offs you've made, are you investing in security research and staying on the cutting edge of the latest attacks? What you want to be able to do is stay in front of even external researchers, even ethical hackers. You want to be in front of them with your own development teams, and you'll only do that if you're investing significantly in your own security research. That would be step one. Step two is, do you have a process to take those findings, whether they are internally or externally discovered, a robust process to take the key learnings and build them into your future products, not just fixing your historical ones but ensuring that your future products are also safe.

Those are two concrete steps that world-class providers of technology should be taking. - How does one share these best known methods across the industry? - At Intel, we have a couple of different strategies that we employ there. One is, we simply have relationships with a lot of people. Intel has obviously been in the industry for a very, very long time, and we share lots of information back and forth. Intel also participates in coordinated vulnerability disclosure, which means we have a very rigorous process for how we maintain confidentiality around vulnerabilities. We don't talk about them until we need to, either with a partner as we're developing a shared solution that requires both parties to do something, or as part of a public disclosure, and we don't disclose publicly until we already have a mitigation. Within Intel, we have a PSIRT team, a Product Security Incident Response Team, and that PSIRT team engages with our partner PSIRT organizations at Microsoft and with all the OEMs as well.

And so there's a very robust relationship that exists there so that we can do that timely information sharing when it's appropriate. - So, Tom, thank you so much for joining us today. It was really fun actually interviewing you with Abhilasha who's also been a guest on our podcast in the past. Thanks again for your time.

- It was a pleasure. (gentle upbeat music) - Tom Garrison is gonna hop back into the host seat as we wrap up this episode of "Cyber Security Inside." Again, featuring panelists from the 2022 RSA Conference.

Our final conversation is with Dr. Diane Janosek. She's deputy director of compliance for the National Security Agency, and she has experience spanning legal and policy roles as well as executive management in the United States government. She has received numerous security awards, including the 2022 Women in Cybersecurity Leader award. - And I thought maybe we'd start off with something we've sort of mentioned in the past a little bit, but most people haven't really thought about it.

And that's the use of artificial intelligence, and the use of that to attack somebody. I wonder if you could just share some thoughts about how real this threat is and what companies can do to help alleviate it. - Our adversaries have one intention in mind, maybe two: to make as much money as they can off of you, or to cause as much disruption as they can, or both together. So they're working very hard to accomplish that goal. And they're using adversarial AI, where they'll come together and understand where the sweet spots are to affect us and to cause the most damage, harm, or financial loss.

So from an adversarial AI perspective, how do we respond to that? We have to recognize that our AI and machine learning, which are phenomenal, use data points, use lots of different data sets, and depend upon the integrity of that data for the algorithms to actually work. If our adversaries are altering that data, recognizing that we're using certain models, our models will be incorrect. We won't even realize that our models are actually directing us to the wrong place. So it's a double-edged sword. We have to recognize what they're doing and how they're doing it, and be able to respond.
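
As a toy illustration of that data-integrity point, the sketch below trains a simple scikit-learn classifier on clean data and on data where a fraction of the labels have been flipped, to show how poisoned training data can quietly degrade a model. The dataset, model choice, and flip rate are arbitrary choices for illustration; real adversarial-ML attacks and defenses are far more sophisticated.

```python
# A toy data-poisoning demo: flip a fraction of training labels and compare
# test accuracy against a model trained on the clean labels.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

clean_model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Poison the training set: flip 30% of the labels at random.
rng = np.random.default_rng(0)
flip = rng.random(len(y_train)) < 0.30
y_poisoned = np.where(flip, 1 - y_train, y_train)

poisoned_model = LogisticRegression(max_iter=1000).fit(X_train, y_poisoned)

print("clean accuracy:   ", clean_model.score(X_test, y_test))
print("poisoned accuracy:", poisoned_model.score(X_test, y_test))
```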

- From what we know so far, and it is still relatively early days for artificial intelligence, or what you call adversarial artificial intelligence, is the nature of an attack different if it is an AI-based attack versus the more, I guess, traditional, old-school, human-generated attack? - Oh, absolutely. The velocity and the complexity, or the sophistication, of those attacks across multiple domains and multiple platforms can usually only occur with some sort of assistance. And so we have to recognize what we're looking at, where this is coming from, and who the threat actors are; really, it's usually a plethora of them coming together.

In the cyber crime area, cyber criminals don't care where you sit, where you stand, or what country you're in; they'll work across any domain to achieve their end goal. And at the National Security Agency, one of our missions is to help protect and defend the United States Department of Defense. As a member of the intelligence community, what we need to ensure is that if cyber criminals are coming into your home and disrupting your daily life, the way a physical intruder would be handled on the law enforcement side, but doing it through the digital network, you can be assured that there is somebody watching that threat actor, stopping that threat actor, and stopping any type of malicious activity that could come your way. And that's what you hear from the Department of Homeland Security and the CISA organization when they talk about Shields Up; they're really trying to put a strong perimeter, a digital perimeter, around the United States.

- Could I just interrupt and ask, does that include critical infrastructure that's not necessarily government owned? - The way that we look at the critical infrastructure sectors is that there are currently 16 of them: sectors that our country is so dependent upon that if they were somehow degraded, we would all be impacted, right? Like the energy sector and the transportation sector. Currently, 80% of the United States' critical infrastructure, and the telecommunications that underpins it, is run by the private sector. So even if I get an A on my 20%, which is the defense side and maybe some of the telecommunications side, it's not good enough for the country, right? I mean, 20% is not even a D; it's an F. So even if I get an A, the country doesn't get an A.

So what do you have to do to raise the bar? It's giving out the tools and the information: sharing what we know about vulnerabilities, sharing what we know about threat actors, sharing what we know about adversarial attacks and the emerging threats coming down the pike. If we can share that with the other 80%, with the healthcare sector, the financial sector, the energy sector, all 16 sectors, even the water sector, right, just the supply chain for water, if we can share what we know, Americans as a whole can go to sleep knowing that their country is better protected. And I can only do that by sharing the data I know from the information I've gathered. What that means is that when we perform our signals intelligence mission, understanding what our adversaries are saying and doing and maybe plotting against the United States, or when we're doing cybersecurity, defending the networks we run sensitive data on.

When I do that, I'm gonna have access to a lot of data. I have to make sure that I protect that data, that I ensure it's constitutional before I accept that data into the systems that we have, that we treat it properly, and that we don't take data that's on US persons and Americans. So we need to be able to do the signals intelligence mission and the cybersecurity mission while protecting the Constitution.

And that's a very fine line in a world that is completely data-intensive. - So Diane, you're at the NSA. I'm wondering, what should Americans be asking of our government, and actually of our industry as well, especially with this explosion of data collection around artificial intelligence, when it comes to privacy and security? - As Americans, what I would expect of big tech and your government is to work together. If I, at the National Security Agency, have information that can protect Americans, I should be sharing it.

If the private sector has information about what's going on, what the particular vulnerabilities are, where they see particular ransomware trends, they should share that information. Don't be afraid of really partnering, because with partnerships between state and local government, the universities, industry, the nonprofits, and the government sector, if we can share that data, that's how we really have our shields up. That's how we come together and are fortified together. I would expect as an American that your government is transparent about what it's doing, that you know it's taking every effort it possibly can to secure you in your home and your daily way of life, and that we are partnering with the best and the brightest across the private sector to make that happen. - What kind of demands should we have from the privacy perspective if we're talking about sharing data across government and industry? - I would say that we should ask our government and the private sector to be transparent about what they're using the data for, and to truly collaborate. Don't just say the words "public-private partnership."

Actually understand what that means. See how you can integrate your systems to share the data in an appropriate, lawful, legal way, so that at the end of the day we are all stronger and safer as a nation. - So Diane, who really is responsible for worrying about cybersecurity in this country? - It takes everybody.

It takes people patching their systems, doing the updates on their iPhone, making sure that they have a password on their home network. You wanna make sure that government's doing the right thing, that they are really locking up the supply chain, that they really are securing the water supply plants. The planes are safe. The hospitals are safe.

At the end of the day, cyber's personal to me. My mother, my brother, my neighbor, my husband, my children, cyber affects all of us. And so whatever we can do to really secure it in a way that makes sense and that's transparent and defensible, we should be doing. - I know that Camille and I are both just dying to ask you so many more questions, but we don't have time today, but I do want to thank you so much for joining us.

And we look forward to having a future conversation. - Thanks for joining us today for conversations focused on a whole-of-society approach to cybersecurity. Aanchal Gupta of Microsoft, Tom Garrison, again, "Cyber Security Inside" co-host and VP of security at Intel, and the NSA's own Dr. Diane Janosek spoke on this topic as panelists at RSA 2022.

Thanks so much for listening to the conversation on "Cyber Security Inside." (gentle upbeat music)
