Ignite AI & Emerging Technologies


Fantastic, thank you very much, and welcome to everyone. It's a pleasure to be meeting all of you, albeit virtually. I wanted to share some thoughts following on from the very interesting discussion you've been having around digital skills; my comments will be specifically on AI from a policy perspective. If you think about the discussion you just had around "the colour purple", for example, where the technology side and the business side are seeking to understand each other's language, the policy side is the enabling environment of regulation, of standards and of wider policy: the ecosystem within which that collaboration happens. It's the blank canvas on which the purple is being painted, if you like. And I wanted to anchor my comments, as was said earlier, in a paper we're doing jointly with EY, which is a response to the UK government white paper on AI that was recently released. With that in mind, I'll share my slides and take you through some of the thinking. Let me try to make this full screen, bear with me one second. Sophie, are you able to see it in full screen? Yes, we can see it, it's full screen. Fantastic. One other thing to say: I'm on full screen and won't be able to see any comments in the chat box, so we'll try to keep a little time at the end for questions, but otherwise please feel free to reach out to me separately if there's anything you want to discuss.

The UK government paper is called "A pro-innovation approach to AI regulation"; you can find it on the internet. And we are putting out a paper next week, jointly with EY, called "Building the foundations for trusted AI", so this is almost a pre-release discussion. What I wanted to cover is really three things. First, I don't want to go chapter and verse through what the white paper says, but I want to give you a couple of very quick points which might be most relevant from where we sit within the ecosystem. Then I want to reflect a little on what responsible AI means in the context of the profession. And then a few thoughts on looking ahead: where all of this leaves us and where we go from here.

First, the white paper itself. The white paper talks about wanting to set up the UK as a pro-innovation ecosystem. What they're really trying to do is ensure an environment where innovation can happen and where regulation doesn't hamper or stop that innovation. It sounds like a balance: regulation on one side potentially throttling new thinking, and innovators and entrepreneurs on the other side creating new ways of doing things. But it's worth saying that the real point here is that I don't think businesses have an issue with regulation per se; often the more damaging thing is regulatory uncertainty. Regulatory uncertainty stops businesses from investing, because they don't know what the burdens are going to look like or where the regulation is going to pivot.

In terms of approach, the UK government has moved away from what we would call a horizontal regulator, that is, a broad AI regulator covering all sectors and use cases in the economy, which is the approach being taken within the EU at the moment. What they've done instead is ask organisations to rely on their existing sector regulators, whether that's automotive or financial services, in the context of their use of AI. Then, after a period of analysis and, if you like, watching and learning,
they will decide where there are gaps, if there are gaps, and where they would need to introduce more bespoke regulation if needed. Obviously that's a very different approach, and there are pros and cons. On the challenge side, some may say it's not prescriptive enough, and our view would be that there are areas where business needs a little more certainty. Smaller businesses in particular, if they're going to invest in upskilling and in figuring out how to get prepared for this, are not going to make that effort on their own unless there's a really clear direction of travel. So, and this is something we've shared with government in our consultation response, there's potentially space for a little more prescription than is currently available. There are also big differences between sectors: in financial services the FCA is an 800-pound gorilla of a regulator, and not every sector is going to have that depth of regulatory expertise.

But there are positives too. The approach is well suited to the fast pace of AI development at the moment. I think the EU was slightly wrong-footed by the way ChatGPT, and generative AI more broadly, has exploded recently, and there's an element of unknown unknowns with AI. The way the government is setting this up keeps a little room and flex, so they retain the option to become more prescriptive over time, and that's potentially a very sensible way of doing it, so it should be called out. It also, from an accountancy profession point of view, aligns with the notion, certainly in the UK, of a principles-based approach rather than having very prescriptive rules that try to cover every possible scenario that may or may not happen.

The other thing about the white paper is the stress it puts on the international level: positioning the UK as outward-looking, with the intention of creating a coalition of like-minded allies who share similar values in their approach to individual rights, to data and to privacy, and of facilitating interoperability across jurisdictions where possible, in recognition of the fact that while jurisdictions are national and country-based, AI, and technology more generally, is cross-border. There's a lot of emphasis and space within the UK government's white paper approach for that, and I expect to see much more. We at ACCA have already said to government that we expect and hope to support that through ACCA's own very substantial international network, as all of you are aware.

On the domestic side, the white paper makes reference to ensuring this doesn't become just a London conversation; this is not just about London and the south-east. Particularly through the levelling-up agenda, we need to take into account the wealth of expertise, and the nature of the need, that exists outside London and the south-east as well. I was in Manchester a couple of months ago, for example, at an event with the FRC, or ARGA as it is soon to be called, looking at the regulator perspective outside London. There's a wealth of talent in different places, and it's a slightly different type of person that comes and a slightly different type of conversation that happens, and it's really important not to lose
sight of that. Small and medium-sized enterprises are also a really important part of the equation. There's a risk in the white paper approach if you don't have prescriptive enough rules where they're needed, because one of the functions of regulation is to create a level playing field and to look after those who don't have the same market power. What you don't want in a very light-touch approach is for very powerful players, big tech or dominant organisations, to set the terms for everyone else. We've got no problem with large organisations and big tech; they contribute a lot in terms of innovation. But it's important that small and medium-sized organisations, who contribute and who are a huge part of the economy, aren't essentially forced into certain ways of doing things, and at least have a voice to flag when there are issues. That's the other thing we're saying back to government in our response: we need really clear avenues for the voice of small and medium enterprises, and that needs to be recognised as the regulatory ecosystem and landscape is shaped and finalised.

Moving on from that broad basis of the white paper, I'd like to reflect on some of the things we picked out as particularly relevant to the accountancy profession in terms of creating a responsible AI approach and ecosystem. One is the UK Corporate Governance Code. As you may be aware, for the first time in five years, the last time was 2018 and now in 2023, the FRC has put out a consultation asking for views on updates to the Code, and we think AI will be part of that conversation. As you will know, the Code sets out principles of good practice for listed companies across a range of areas: board leadership, company purpose and so on. A lot of the previous session touched on the role of leadership, and I expect some areas of the Code will speak to that; there are also obvious read-across areas to audit, risk and internal control. One key thing to note is that the Code requires boards to present a fair, balanced and understandable assessment of a company's position and prospects. If your company is using AI, if you are trying to transform your business model, you need to be able to understand the impact it has on your company's position and prospects. You have a responsibility to understand it; it's not just a nice-to-have. The other area is the Code's requirement for boards to have an understanding of emerging risks and principal risks, and how they've been mitigated through risk management and controls; obviously there are links to whether the company is a going concern and will continue to operate for the next 12 months. Emerging risks are really important in the context of AI, because sometimes you have weak signals, or something which is not immediately obvious but which later really blows up. Or you might be dealing with a range of AI vendors and not really understand what contractual agreements you have with them and who has what rights: for example, if they are providing the AI model and you are providing the training data, is it really clear contractually where the rights and responsibilities sit in respect of legal liability, among other things? These things will be really important as we look ahead for accountants and the profession in terms of the impact of AI.

The Companies Act will also be important, because it speaks to directors' duties. Section 172, for example, talks about the duty of a director to promote the success of the company by considering wider
stakeholder impact on the community and the environment. A previous session talked about ESG, for example, and AI is not something which is just about the vendor who's selling it and the customer who's buying it. It involves data: with generative AI and large language models you no longer have a neatly bounded training data set; the scale of a foundation model is so huge that it draws on data from all over the internet. So there are really important questions around the public, the citizens: how their data is being used, and how much information they've been given about how it's being used. From a director's point of view that matters whether you're using an application directly that has that impact, or a foundation model such as an LLM sitting underneath the application. As I said earlier, there are issues around contracting and licensing, but also around intellectual property, which directors may need to familiarise themselves with: if you're using certain bits of data, do you actually have the rights and the authorisation to use that data for whatever analysis your AI model is doing? And the long-term impact on the business model again overlaps with ESG, because of the nature of reputational damage: you can build trust over decades and lose it with one major ethical incident. So it's really important to understand how the business model is being shaped for the long term, and what that is doing for the trust you're building with your key stakeholders. As I was saying earlier, there are invisible and weak signals, and also the risk of black swan events, where something really substantial happens which you just did not foresee in the nature of how your model is being used, whether in terms of bias or even cyber security, which could be linked to AI because you don't understand where the data is coming from.

I also wanted to reflect on ISO standards, which I think are a really important part of the equation here. All of this information, by the way, will be in the paper coming out next week. There's a new risk management standard released this year, ISO/IEC 23894, which focuses on risks connected to the development and use of AI, and I would highly recommend that those of you interested in this space bear it in mind as you think about risks in your own organisation. It builds on a previous standard, ISO 31000, which covers broader, general risk management principles and frameworks. It's also worth noting ISO/IEC 42001, which covers, among other things, the role of leadership, management, planning and documentation. So a lot of thinking has been done already; you don't need to reinvent the wheel. You can look at all of these and understand how your organisation can draw on the requirements and guidance that have already been produced. I'd also like to call out ISO 24368, from 2022, because it talks about ethical and societal concerns. When we talk about ESG we often talk a lot about the environment, but the social side is also really important, because of the impact AI has on stakeholders more broadly; that standard gives a sense of ethical frameworks, human rights practices and so forth. The COSO enterprise risk management framework is another voluntary framework that's really useful, and the collection and use of data, and the application of AI to decision-making, as has been noted by some, is a
really important aspect that touches on how COSO thinks about risk. The framework really stresses the leadership effect: the governance structure has to be led by a senior executive, and "I'm not a specialist, not my problem" is not an option on the table. It also stresses risk assessment of every AI model, including unintended bias, and a portfolio view of risks and opportunities for AI initiatives: understanding risk both at the model level and at the portfolio level, across the different AI use cases in your organisation. And board training is really important. There was a reference in the previous discussion to understanding at least the basics: what is it that makes AI AI? For example, learning bottom-up from the data, as opposed to a top-down rule set by a human programmer; as the nature of the data changes, the way the model treats the data may change, and that is the learning in machine learning. Some basic things like that.

Finally, I wanted to reflect on IESBA, the International Ethics Standards Board for Accountants. They've established a technology working group that has been exploring the impact of technology, and there are a few things they're already thinking about in terms of AI-motivated revisions to their Code. One is whether the Code needs to be clarified on whether firms or organisations may use client or customer data for internal purposes such as training AI models, and to be quite clear where they stand on that and what the parameters should be, for example prior informed consent. Explainability was mentioned earlier, and that's something IESBA has picked up as well, in terms of where further guidance should be developed: explainability, i.e. how the model reached the outcome it reached, and transparency, i.e. whether you're using AI or not, and where the professional accountant relies on those technologies; that needs to be made clear. There's a piece around the Code needing to be revised to address the ethics implications of a professional accountant's custody of financial or non-financial data belonging to clients, customers or third parties, and I think we can expect some pronouncements in that space. There's also a really important piece, and this speaks to the board training point I made earlier, around incorporating into the Code the concepts of transparency and accountability, taking into account the role of professional accountants in communicating meaningfully with boards about technology-related risks and exposures. Very much from the view of helping boards get the information they need, it's going to be really important to think about what constitutes effective communication.

And finally, looking ahead. The white paper is great; it's been a long time coming and we really welcome it, because other jurisdictions are setting out their approaches. But we also think it needs more work in six areas. The first is statutory footing. As I said earlier, it leaves a lot to sector regulators, and it says the government will not impose a statutory requirement at this stage to comply with the foundational principles they've identified, but might do so, quote unquote, in the future. Our concern is that that kind of regulatory uncertainty is not helpful; it stops organisations from investing, because they feel the rules might change. So we've really told them that is something they need to make up their mind about, one way or another, quickly. The second, as I said, is less mature sectors:
a concern if you're relying on existing sector regulators, because the FCA will be way ahead where another sector might not be, and there's a risk that the most dominant sectors end up defining how AI is regulated across the whole economy. How do they get that balance right? The third is the degree of the horizontal approach. As I said at the start, they've already decided against a single set of AI rules covering every sector, but is there an in-between point? Do we need a convening body with the ability to understand what's going on across different sectors, to bring them towards certain common baseline ways of doing things, or at least convene them in a comply-or-explain type approach where there are divergences? So there's a question around whether we need an in-between level. Fourth, legal liability will be an important conversation, around where liability sits in certain types of AI use cases. Fifth, they've talked about central functions, which are more administrative in nature but have a really interesting role in picking up insights from across how AI is being used in the economy; there's an education and awareness aspect to that as well, where organisations like ACCA hope to tap into the insights we get from members such as yourselves, so we really need to be able to feed into those aspects of the central function. And finally, environmental impact is completely absent from the white paper. Large language models are huge in terms of their energy consumption, and we just feel you can't have a conversation like this without talking about that in some detail; that's a conversation for the future.

Finally, in terms of our recommendations to policymakers, government and regulators more broadly: move fast on the detail. We really need to get on with it, because the US is doing a lot, the EU is doing a lot, and if you don't have defined regulation of your own, organisations that operate cross-border will tend to default to the most prescriptive regulation they face, so you might end up becoming a rule-taker in the UK without intending it. That's something the government needs to think about. Align internationally with like-minded partners and countries, but coordinate domestically, for example across sectors. Seek multi-stakeholder feedback and involvement, from regions outside London and from SMEs; it shouldn't be a very narrow pool of people deciding what happens. And finally, our message, as always, is that the accountancy profession is mature and skilled: leverage us as part of the solution, because we have a lot to bring to the table. The white paper talks a lot about the role of audit tools, and we've picked up on that and supported it in our response to government, saying let's continue this conversation and see how we can bring the best practices the accountancy profession has to offer, to support the government in creating a world-leading responsible AI ecosystem here in the UK.

So that really is a short whistle-stop tour, if you like, from a policy point of view. I think we've got a few minutes for questions, so I'll stop sharing my screen now. Any questions from anyone? You can pop them in the chat or raise your hand with the little icon at the top.

Someone has asked if we can share the slides; is that okay? Yes, of course, no problem. There's one in the chat from Sean: looking at the horizon, any thoughts on quantum computing? It's a good question, Sean. Computational power is increasing exponentially, and I think quantum computing will be part of that journey of increasing the computational power available to us. It creates some issues as well as some opportunities. Large language models are going to be ever more compute-hungry, and you may well get quantum-powered models as well. But on the other side, quantum computing could actually break the basis on which a lot of security platforms are built: most computing today is Boolean, where a bit is either a zero or a one, whereas a qubit can be in a superposition of both at the same time, and that breaks a lot of the existing cyber-security underpinnings. Having said that, it's not going to happen tomorrow; it's very much further out, and by then the security toolkit will also have evolved. So my overall view, in a nutshell, is that quantum is definitely earlier in the journey, and AI is much more of a here-and-now conversation than quantum.

Any more questions at all? Just a couple of people asked about the recording; yes, we will put these on the ACCA website and share a link once they're live. It doesn't look like we've got any other questions, thanks. Brilliant, okay. Thank you very much everyone, it's been a pleasure to connect with all of you, and do feel free to reach out to me if you'd like to discuss anything or have any clarifications. Thank you.

2023-08-05
