Implementing Emerging Technologies: Agile SDLC Still Works

This is ISACA's Page to Podcast. Thanks, everyone, for joining us today. I'm Kevin Keh, ISACA's IT Professional Practices Lead. Joining me today to talk about her recently released article "IS Audit in Practice: Implementing Emerging Technologies: Agile SDLC Still Works" is the Director of What's the Risk, LLC, Cindy Baxter.

Cindy, thank you so much for being here today. It's probably our third time doing this, right? So how are you doing? Great. How are you, Kevin? It's great to be back. Yeah, awesome. So let's get right into it. I like this question right off the bat.

You mentioned The Matrix and Free Guy in your introduction. How much do you think media like these films influence the public's understanding of and feelings surrounding AI? Well, Kevin, I think it's really interesting to see how the media has changed over time. I mean, it's hard to believe that 1999, when The Matrix came out, was really that long ago, and even seeing films aimed at children as well as adults that include AI themes speaks to the fact that it's a new world; it's an accepted type of technology. And certainly there are a lot of undercurrents in the movies I cited in the article, Ron's Gone Wrong and Free Guy. They are listed as comedies, but if you look at some of the reviews from others, not from me, they talk about social media undertones, that maybe it is not that great. But the reality is that these films take artificial intelligence and say it's part of everyday life. And of course, we know it is in fact part of everyday life.

So attitudes have changed, and it seems to be very much throughout society, and, based on the movies and how many viewings they've had, they are certainly popular. Yeah, absolutely. I haven't seen Free Guy yet.

I love The Matrix, but Free Guy will be one of the movies I take a look at. So could you elaborate on some of the risk factors and some of the potential benefits that come with the implementation of AI? Yeah, absolutely. And the movies are kind of interesting because they speak to what we think the risk factors are, or at least what the media thinks the risk factors are.

So certainly undue influence is one of the things that came up in Ron's Gone Wrong, which is a story about a young, kind of awkward child who has a bot for his best friend, but everybody else has bots for their best friends too. And the message might be that social media is not a true friend; you need to have connections with other people. So influence, I think, is one of the major risk factors. What is accurate data? To really put it into risk management terms, influence is a great social term, but the reality is: what is true? Is the data we're looking at really accurate? Are we giving people facts? Are they able to make clear, conscious decisions of their own choice, versus overtones that influence behavior based on inaccurate information? So I think that's really a major consideration from a risk management perspective.

I think reputational risk is also key. Certainly, putting a movie out is an easy thing to do, because you can title it anything you want. You can title it a documentary when you consider it to be totally factual, but you can also title it, as these two movies are titled, comedy science fiction, and kind of get away with almost anything. That's not true for businesses. When you look at the risk in how you're going to use AI, or quite frankly any emerging technology, you want to make sure that you're really focused on what the intended outcome is. So the reputational risk, the client risk, really depends on looking at that outcome carefully and making sure that that's where your objectives are, and that the risk you are going to assume with either the service or the product is what you want your audience to see. I think regulatory risk is certainly a factor, but I would say it's an underlying theme. We should always consider regulatory risk as a baseline. You don't want to do something that you then have to take back; it simply is a lot of rework and lost effort.

In this particular case, however, with emerging technology like AI, it really, in my view, is all about reputational risk, and it's about data accuracy, and maintaining data accuracy throughout, not just at the beginning or at the update cycle of the project. Absolutely. So you mentioned influences, and I'd like to expand a little bit on that with social and economic justice. How can AI be properly evaluated with regard to social and economic justice? You know, I was thinking about these things because it's such a prevalent theme today, all right? It's a prevalent theme in local communities, in community government and elections, and absolutely in the media as well.

But what about for us? I think there's a lot of due diligence that the audit community and risk management community have to consider with social and economic justice, and there's a huge opportunity there. I feel that the best way to take AI, or any emerging technology, and keep it true to what a company or an organization is trying to serve is to make sure that the risk management and auditing techniques used are monitoring techniques. Absolutely, there's a lot that needs to be done at the onset, when a project starts, when updates occur, when a technology is considered. But when you're considering predictive learning and machine intelligence, we know that that changes with time. Having meaningful checkpoints, and having a robust group of risk managers and auditors that includes the business, is critical.

Making sure that the intended outcomes today, six months later, two years later, as the technology progresses, are still spot on with what the organization intended. I think the other key is, and this is something I've always felt strongly about throughout my career, when you have people involved of different backgrounds, whether it's the business, the auditors, the developers, your clients, the diversity of our population around the world, or gender, you always get a better outcome, because the ideas are so rich. And whether it's cultural influence that provides a mosaic or tapestry that's much more beautiful and effective, or just the different ways that we learn and solve problems, a more robust group looking at the risks, the audit work, and the outcomes, and participating in the monitoring, keeps social and economic justice where it needs to be.

There's so much unconscious as well as regular bias that goes on. You can much more readily keep it in check when you've got a diverse group of people who can help call out those situations and bring things to a better outcome that is fair for everyone. Yeah, those are all really great points. To switch gears a little bit, how important is it to have collaboration between developers, operations management, and the users themselves in the business when it comes to reviewing new technology? And where do IS auditors and compliance teams come in? That's really a great point, Kevin, because in almost every new project that I've been involved with, there's always been some kind of a disconnect between one group or another working on that common goal. So I often say "lost in translation," right? It's a fact that everybody speaks a different language, and we don't always hear what is intended.

And I've been in many situations, and I know many of the folks in our audience can nod their heads, where the developers say, I implemented this system and then the business was really upset with me. I don't get it; I did what they told me to do. When in fact, the business, on the other hand, is sitting there saying, this isn't what I was looking for; I don't understand what's going on. And I do remember one particular situation where I was so thrilled that I had finally gotten funding for my compliance group so that we could have an enabling app that we really needed. And when the development team took it and worked with me, as part of the compliance team at the time, the outcome was absolutely not what I expected, because they read things one way, and I, and some of the compliance leads who worked for me, were viewing it a different way.

We just simply didn't know each other's world. I think collaboration allows you to put that aside, and it's not just verbal collaboration, though I think that's very important, especially in a new-normal, post-pandemic world. We need to talk to each other more, but there are also all kinds of tools, DevOps tools and systems, that help ensure that those checkpoints of collaboration have been met, where you can see that a user was involved, where somebody has to sign off at a checkpoint.

But I still feel that nothing replaces speech: somebody talking to somebody, somebody looking at somebody and saying, is this what you wanted? Is this the system you expected? Or, if I deliver this because I can't get this other feature accomplished, is it still valuable for you? Those interactions are critical parts of collaboration that avoid a ton of rework and make everybody much happier with the outcome. So we're talking about IS auditors and compliance teams, but to further the point, when should risk and audit professionals get involved in the development of technology projects such as AI? I firmly believe the earlier the better. Now, not everybody does that, and I'm sure we've all been in project meetings, especially on the hot projects where everybody wants to be on the project team. So I get it. We cannot all be on the project team.

You cannot effectively have a project with a stadium full of people who want to be part of it. But when it comes to risk management especially, risk is best evaluated at the beginning, and having risk managers in with the business, in with systems and application developers, from the get-go is critical, because that group, the risk management group, I see as being responsible for the what-ifs. What if this occurred? Did you check this? Who is the audience you're directing this application to? How much is this really going to cost, time-wise, if there's a delay? Those kinds of questions, around reputational impact, financial impact, regulatory compliance, and what the client reception is going to be, are all critical things that the risk management community adds, and adds best at project inception, when the business case is being made. Now, that doesn't mean auditors are not important at the beginning of the project. But if your organization uses the concept of the three lines of defense, then collaborating with those who are on the front lines, on the first line of defense, is probably good enough, making sure that you're up to date as opposed to being on the floor and interacting. Because audit, and that separation of duties, does help give that objective post-project review that's not at the business case inception. So that doesn't mean that audit should only be involved at the end.

Once again, to be effective, having control points that can be checked and audited throughout the lifecycle of a project, and, when it's a DevOps or an Agile project, a project cycle that includes the opportunity for auditors to sit in, is really critical, so that they can look, evaluate, and keep things moving along with a minimum of rework. Yeah, to expand on risk management, when does risk management begin for a project, and, better yet, does it ever end? So, I know it's no fun to say the sky is falling, the sky is falling. It's not falling.

And back to the question you asked at the beginning, Kevin, about the movies: if you watch the news all the time, you get depressed really fast. It's just really bad. So you can't worry all the time.

That said, you want a risk management community that is intimately engaged with the business, that has that business hat on to understand the benefits, especially when we're talking AI. We are living in a world of driverless cars, improved health care, financial speed. There are so many examples; there's really almost nothing I can think of that hasn't been touched, or won't be touched, by the innovation of AI or ML or any number of new technologies.

It is the way the world keeps up. So in order to keep up, it's just really important to consider what the risks are and what the things are that mitigate them. Now, there's no reason to think you would ever live in a risk-free world. None of us ever does. There's a risk in doing nothing.

Understanding what those risks are, and having risk management involved all the time, that never-ending concept you're talking about, Kevin, lets us understand: is this a consideration? Should we mitigate it, or is it okay? Is it a low-risk item that we really don't need to spend time on, so that we can focus on this list of higher risks? That's what has to be evaluated all the time. And there's a big role for risk management there. And it's fun: getting your feet wet in every little project is kind of like stepping into somebody else's world without leaving your job. So it's like having a new job every day. Very fun. Absolutely. Yeah. When you were talking about the TV shows and movies, a TV show that really had a lot of impact on me with technology was Severance on Apple TV. I don't know if you've had the chance to watch it. I will now! Yes, I highly recommend it.

So what are some of the key checkpoints practitioners should look for in the development and testing phase? Well, the development and testing phase is really a crucial phase, as is the post-go-live phase. I think those are really the two key areas. And the development and testing phase is one where the stories, or the test plan, have to be clearly laid out in an organized fashion, meaning you can have a number of stories to keep your development going, but each has to have an outcome that is meaningful and fits in with the rest of the puzzle. So checkpoints should be at that level of completed work, and they should always involve a knowledgeable user, and that knowledgeable user, if you're thinking of RACI, is responsible for going back to the rest of the user community and making sure the work meets objectives.

Because it's critical for any project, especially AI, especially any emerging technology, that when somebody is at the leading edge, even if it's only the leading edge within your own company and maybe not the leading edge for other companies, that's going to be dramatic for everybody. And drama is reduced when people have clear expectations. Clear expectations occur with checkpoints at the story level. The story has to have a meaningful outcome that people can understand. There should be another checkpoint to make sure that there was sign-off from that user representative, and there should be at least some kind of inspection that expectations have been shared. If there is a change in timelines, if there is a change in milestones, if funding gets reduced and the milestones are perhaps scaled back in terms of deliverables, that's another important checkpoint.

Anything that changes against the original approved business case and project is an automatic checkpoint during the test phase, to make sure not only that the tests are done successfully and approved by the user, but also, when something doesn't work and is removed from the version being released, that everybody is properly informed, that it still makes sense against the intended outcome if omissions are necessary, and, if changes are made, that a reevaluation is made in terms of user acceptance, once again, not an overhaul of the project. But that succession of intervention and information shared with those who've really purchased the project from IT will make it much more successful. And those checkpoints should be done objectively, by an audit group or a first-line-of-defense group that can maintain objectivity. Great. So how does the reexamination of key risk factors like reputation, customer, and regulatory impacts speed up the adoption of new technology? Well, I actually talked with a colleague last night, and it was one of those things where it's been, I want to say, a year or two since I've been in a major system implementation, and it always feels difficult.

And I thought, well, sometimes that's all emotional, and people want to hold on to what they've got. So that's probably what it is. My colleague had just gone through a brand new health care system update, a new patient health care system update, and they're still in the throes of it; it's been less than three weeks. So I asked her, well, what do people think of it? And sure enough, nothing's changed: oh, we're surviving. We'll be okay. There's nothing we can do.

IT and the business sponsors have to take that attitude away. So how do you do that, and how did she say they were doing it in the hospital where she works? As it turns out, they have, I think she called them floaters: extra people from IT Operations who are out on the floor, and they're staying on the floor to ensure that users are comfortable with the system. So at first, adoption is all about somebody making sure I use it. That's more enforcement, but it does promote adoption. Then it becomes, hey, I kind of like this stuff.

I see some advantages to it, and that's often fostered by pitching it or selling it. That could be your roving IT guy who's there and says, hey, I see what you're using. I'm so glad you're using our system. Do you know that if you just key it in over here, it'll save you, like, an extra screen? Oh, I didn't know that. So facilitating adoption through sharing information on the spot is also a big deal.

And I think the other key is keeping tabs on what the outcome is and consciously deciding whether it's truly in production and part of what the organization is using. Oftentimes people step away after the first month or 90 days and consider it done. And then an auditor will come back at the six-month or annual mark and find out that 20% or 30% of the people are keeping notes on a scratch sheet because the system didn't work for them. Successful project management means constant review, constant intervention, and monitoring, not just enforcement, so that adoption happens, the product is assimilated, it really becomes part of the culture, and people want to use it.

Cindy, you've covered a lot in this episode and in the article itself, but is there anything else you want to mention or shout out? Kevin, it is really an interesting subject, and I think it's one of those things where I would encourage people to think about how they're involved. Don't be afraid to raise your hand and say, gee, I think if you include me in this phase of the project, I could do this, this, and this, and that will be helpful. I think that's not just a great value-add that ISACA members bring to the table, but it makes the job really fun. And it's just so interesting to see how we use emerging technologies.

My call to action would definitely be: go in, feel free to add value as early as you can, and then enjoy it. So, Cindy, you and I could probably talk about this stuff all day, but that's all the time we have left. Thanks again for taking the time to chat with me today. Kevin, it's been such a pleasure. Thank you for taking the time with me. If you want to read Cindy's full article, click on the link in the description below. I'm Kevin Keh, and thank you for tuning in.

Thank you for joining us today for this episode of Page to Podcast. We hope you enjoyed this episode.

2022-08-06 21:23
