Canada's Leadership in AI Policy / Le leadership du Canada en matière de politiques relatives à l’IA
(upbeat music) - Hello, everyone, welcome back. I'm Rebecca Finlay, and I am very pleased to be moderating and leading this final panel of the day on AI policy. Bonjour à tous. One note: please remember you can join the conversation on social media by tagging CIFAR News on Twitter with the hashtag #AICan. And we are going to end the panel with your questions, so please submit your questions through the Zoom Q&A function at the bottom of the screen, and I will try to get to them at the end of the panel.
The focus of the session is on AI and policy. As Minister Champagne affirmed so clearly in his remarks this morning, the government of Canada has committed to developing and implementing policies that promote trust in a digital world and keep pace with the rapid technological development of AI. We know, for example, about the launch of the Digital Charter in May of 2019, and then the recent legislation proposed through Bill C-11, the Digital Charter Implementation Act, in November of 2020, alongside all sorts of international efforts, including the Global Partnership on AI. And, as the minister kindly mentioned this morning, CIFAR's AI and Society program.
Part of the Pan-Canadian AI Strategy, this program supports working groups on a range of topics related to the social, political and economic impacts of AI, and most recently, with the generous partnership of IDRC, a call for AI governance solutions in low- and middle-income countries. Canada is fortunate. We have a rich community of experts and advocates committed to charting a path to responsible AI governance. We have offices of information and privacy commissioners across the country, AI and social science researchers and legal experts, as well as human rights and civil society organizations taking up the cause.
And we are very lucky, because we have some of those leaders with us today on this panel. So I am going to introduce them. And as I do, please take a moment to share in the chat, the top three things that come to mind for you, about AI policy in Canada. So please join me in welcoming four remarkable individuals who are going to join today's discussion. First, we have Golnoosh Farnadi.
Golnoosh is a Canada CIFAR AI Chair, a member of Mila, and an assistant professor at HEC Montréal. She is a leader in algorithmic fairness in machine learning and decision-making, causality, and interpretability. Great to have you with us, Golnoosh.
Also Patricia Kosseim. Patricia is Ontario's Information and Privacy Commissioner, with a wealth of knowledge in privacy and access law and extensive experience in the public, private and health sectors. I, for one, think we are very lucky to have you in Ontario, Patricia. So thanks so much for joining us.
Then we have Deborah Raji. Deborah is a researcher and a Mozilla Fellow. In 2020 she was named one of MIT Technology Review's top innovators under 35 for her research on racial and gender bias in facial recognition systems. Deborah, great to have you with us.
Thank you so much. And finally, we have Jacques Rajotte. Jacques is a lawyer and a leader in international development, who is currently the Interim Executive Director of the International Centre of Expertise in Montréal for the Advancement of Artificial Intelligence (CEIMIA), for the Global Partnership on AI. Thanks so much, Jacques. So let's get underway.
I would really welcome hearing from the four of you on a very broad question, just to reflect briefly, to get us started, in three minutes or so. Perhaps respond to the question: what do you think Canada is doing well with respect to AI policy, and where do you think we can improve? And Commissioner Patricia, why don't we start with you? - Thank you, Rebecca, and thank you for the invitation to be here. I'm very honored to be joining this panel of wonderful colleagues, whom I've just had the pleasure of meeting. It's always wonderful to meet new faces and new people in this space. And so thank you for that opportunity.
So, in terms of what Canada is doing well in AI leadership, I'm going to speak about the privacy aspects of that leadership in particular. And I would say that, from my perspective, Canada is quite an influential actor on the international stage, punching way above its weight, relatively speaking, in terms of privacy leadership. And to be clear, I'm not talking about legal instruments, which are sorely lagging behind, but about concrete policy actions that Canada has spearheaded. There are several reasons for this. First, as you alluded to, Rebecca, we have a federal system here in Canada, which means we have 14 jurisdictions, each with its own commissioner or data protection regulator. And as we say in French, (speaks in foreign language) this certainly strengthens our capacity, our involvement and our collective voice on the international stage, and in particular our role in the Global Privacy Assembly, which just this past year adopted a resolution, co-sponsored by our office, on responsible accountability in AI.
Second, we've had a really strong and steady cadre of commissioners who, through their own individual personalities, interests and priorities, have taken a very proactive stance in the international arena. I'm thinking here, for instance, of the influential concept of privacy by design, which was coined by my predecessor here in Ontario and has become an integral component of the EU GDPR, among other modern privacy regimes around the world. The previous federal commissioner had a very influential role in updating the OECD guidelines on privacy and security, and really was instrumental in building the foundations for the international enforcement cooperation we see made possible today between data protection authorities around the globe. The current Federal Privacy Commissioner has made very significant behind-the-scenes contributions to strengthening the governance structure of the Global Privacy Assembly.
And one of our former commissioners in Canada, from BC, has gone on to take the leadership of the UK Information Commissioner's Office, which is probably the largest DPA, or data protection authority, in the world. Third, I'd say, given the absence of an overarching federal privacy law in the U.S., at least in the private sector, Canada has been able to take a highly credible, persuasive leadership role vis-à-vis our counterparts internationally, without being overshadowed by our neighbors to the south, for a change. And fourth, I think we've built a very strong privacy research capacity in Canada, thanks to some amazing research programs you mentioned, Rebecca. To name some of them: the large-scale social science projects dedicated to privacy funded by SSHRC, the consumer interest research program funded by Innovation, Science and Economic Development Canada, privacy research funded by the Office of the Privacy Commissioner, and not to mention the interdisciplinary research prompted by CIHR's ethics and knowledge translation programs.
Genome Canada's integrated GE3LS programs, and of course CIFAR's AI and Society program. So many of these opportunities have helped our Canadian privacy researchers gain visibility on the international stage and have some very significant influence. This is, of course, as I said, concrete policy action. And while our actions have been louder than our words, I think it's high time for our words to catch up. I'm talking specifically about the need to modernize Canada's privacy statutes, but my three minutes are up.
So I'll leave you in suspense. And I'm gonna talk a little bit more about that later on in the panel. - Great, thank you, Patricia. And I love to think about the words and the power of words and the need for that.
And when we think about what we're doing today in terms of bringing these communities together, so thanks so much. - Deborah, would you like to take a moment to reflect on that question for us? - Yeah, for sure. I'm probably going to take a little bit of a broader lens to things. To start off, something I think Canada is doing very well, and that is very impressive, is the agile policy development, especially with the algorithmic impact assessments and that effort at the Treasury Board Secretariat, I believe. It's incredibly impressive how quickly that group was able to iterate on that idea, and how that effort has really stood as an example for a lot of other countries. A lot of the work I do is in the U.S. and in the American context, and even in that context it's really a gold standard in terms of quickly iterating on documentation practice for policy purposes, and doing that at a pace that's not typical of policy development in general.
So, yeah, very impressed with the algorithmic impact assessment effort, how quickly it was able to be piloted and iterated on, and how it connected with different stakeholders in different ways to really make it effective. That's definitely something I think Canada is doing very well. Also, Patricia already mentioned this, but I think the privacy by design effort is also quite impressive. I do a lot of work in facial recognition, and that work has been incredibly influential in terms of the policy development in that space. A lot of the standards for data storage and data distribution are derived from ideas in that work.
So, I also agree that the privacy by design work in Canada has been incredibly influential and incredibly effective. In terms of opportunities, or things that Canada might benefit from paying more attention to, one that comes to mind immediately is the challenge of disclosure. This is not a Canada-specific thing, but at the municipal, provincial and federal levels it can be very difficult for anyone, a journalist, a researcher, a regulator, a member of the public, to answer two questions.
One: whether their data is being collected, and how it's being collected or used as part of some kind of algorithmic process. And two: what kind of algorithmic process is being used as part of a governmental bureaucratic process. And given that Canada, like other countries, has many public institutions that require transparency on these issues, it has become incredibly challenging even for researchers to get a sense of how different departments or institutions, even public institutions, are making use of algorithms to make very consequential decisions about people's lives.
As an example, the Toronto police were recently discovered to be using Clearview AI, a facial recognition tool that's very invasive. It was revealed by a journalist looking into the Clearview AI vendor; it wasn't disclosed by the Toronto police to the public, or even to any other accountability board. So there are a lot of challenges with disclosure, and with open and transparent communication around when an algorithm is being used, whether it affects you, whether it's using your data, and things like that. And I think that's a huge challenge that Canada has to face as well.
And then I will say the other potential challenge that Canadians will probably need to confront with respect to policymaking is really having open and honest conversations around anti-racism, and the societal dynamics that contribute to some of the injustice perpetuated by these algorithms. In the U.S., just due to the weight of its history, that conversation flows a little bit easier, and people are a little bit more informed about it. Whereas I find that sometimes in Canada it can be challenging to allow the conversation to get to that point with respect to how algorithms are designed and deployed.
So, yeah, those are maybe a couple of challenges to look forward to (chuckles), in addition to the things that Canada is doing very well at. - Thank you so much. Yes, much, much more work to be done on both of those fronts, so thank you so much, Deborah. Golnoosh, do you want to speak a little bit to what you think is working well, and where you think there could be some improvements? - Sure. First, I want to thank you again for inviting me to this panel.
It's a pleasure to be here and to get to know all these people. I also want to mention one thing that Deborah brought up, the algorithmic impact assessments that are happening in Canada. It's very impressive work that has been done.
But I don't want to repeat what she said, so I'm going to mention something which is kind of personal to me, that I'm very proud of. I consider myself an international researcher; I have worked in Europe, in China, and at different institutes in the U.S. And the reason I moved to Canada was the AI policy about bringing international talent to Canada.
There is so much funding, and I'm so proud of it, for international talent to come to Canada, and that makes this ecosystem very attractive for researchers, because they can come and work with other talented people as well. And specifically, the problem I am interested in is algorithmic discrimination. We actually need international (chuckles) people to come and think about the solutions.
So this is where I see so many advantages; there are real advantages for international people to come to Canada for this. And I think that's one of the reasons Canada can be, and actually already is, a leader, and can become one of the biggest leaders in AI technology. The change I want to see is also related to this, because fairness is by nature interdisciplinary. As you can see from this panel (chuckles), we come from different backgrounds, because the problem needs people from different backgrounds to think about the solutions.
One of the things about AI is that most of the research happens at conferences. We have big conferences in AI, like NeurIPS and ICLR, et cetera. And because Canada has one of the biggest AI ecosystems, we actually have these conferences happening in Canada.
NeurIPS, for example, has been held in Canada for many years. But one of the issues I see is that we should allow people to come to these conferences, because we want the solutions to come from researchers from around the world. We cannot bring all of them to come and work and live in Canada, but we can help them come to these conferences, and we can think about the solutions together. So one of the problems I see is the issue we have with visas. I think we can make some changes to ease the path to bring more people to Canada.
- Thank you, Golnoosh. Your comments, thinking about it from the perspective of attracting international researchers and also of conferences, are a perfect segue, Jacques, into your role at the International Centre of Expertise and the Global Partnership on AI. So why don't you join us now and give us your thoughts on the question before us. - Yes, thank you very much. Thank you also for the opportunity to join this panel and to introduce a very new initiative on a global basis, which is called GPAI, the Global Partnership on AI. I'm sure I'm not the youngest on the panel, but I'm the newest one in the AI sector, 'cause as you mentioned before, I come more from a legal and international development background, which is very much linked with the mission of GPAI.
I'll also add to the comments Golnoosh just made. But I think what's important to understand first, in terms of the mission of GPAI, is that it's not directly focused on developing AI policy, because we have the privilege of having the OECD as a partner in GPAI, and that's where the GPAI secretariat is situated. But we are actually at both ends of the policymaking.
First, the mission of GPAI is to move from theory to practice. So we are developing solutions, or experimenting with potential solutions, that will eventually inform policymakers. And once those policies have been adopted, we are also at the other end, trying to develop business models so that we will be able to implement them. In terms of the leadership of Canada, for those who are not aware, the Global Partnership on AI was initially an initiative of Canada and France, started through their G7 presidencies. And I think it's another great illustration of how Canada is taking a leadership position on the adoption of responsible AI.
I mean, everybody's aware of the Montréal Declaration, which resonates quite a lot on the international scene. So I think that Canada has a lot of credibility, and GPAI is one more example of our commitment to make a strong contribution to the development of responsible AI. In terms of the specific elements of the mission, I would like to link to the comments on the international side. The mission of GPAI is not only to ensure the adoption of responsible AI, but to make sure that it is done for the benefit of all, and more specifically in collaboration with, and taking into consideration the interests of, emerging and developing countries.
So part of the mission of GPAI will be to make sure that we are inclusive of the voice of developing countries. Maybe I'll spend just ten seconds explaining what the Centre of Expertise, whose abbreviation is CEIMIA, does. We are actually supporting some of the expert groups, one of them being the Responsible AI working group, through their work, to make sure they have all the back office and proper support to be able to deliver on their ambitions and deliver the impact of the Global Partnership. But part of what we do will also be to incorporate within our own team international researchers who come from developing countries. So we have been in discussion over the last few months with various government institutions in Canada to make sure we have the proper financial support to do that. But that will be one of the important contributions of CEIMIA to the positioning of GPAI.
So I think we'll continue that conversation. There's so much more to talk about regarding the Global Partnership and the role of CEIMIA within the Canadian AI ecosystem. It's an illustration of our leadership. It's linked to what comes before and after the policymaking, so it's very much part of a continuum of implementation. And I can answer more questions about the specific work we will be focusing on in the coming year.
- Thank you, Jacques, and I know we'll get some questions in the Q&A for more information about some of those areas. So thanks, we'll look into that later in the panel. Thank you all. I think that really helps to set the stage for some of the questions that, as a group, we would like to dive into a little bit. One of the things that I've found useful in thinking about AI and AI policy is to look at specific use cases or specific sectors, to really understand what regulatory regimes might already be in place, but also what the policy challenges might be.
So why don't we start with one of those. Let's dig into the question of facial recognition, or surveillance technologies, a little bit. It's clearly top of mind; we saw it last week in terms of the events in Washington. It's very controversial, and there are all sorts of initiatives underway from all sorts of different policy perspectives.
So I think the question I'd be interested in is: what do you think policymakers need to understand? What is it about this particular technology that is problematic? And what guidance, if any, would you give them as they start to think about how to approach this technology moving forward? Deborah, why don't we start with you? - Yeah, sure, my favorite topic (laughs loudly). I think there are a lot of really important things that policymakers need to understand about facial recognition. Something I start off with is that it is often grouped in with other biometric tools, such as fingerprinting and other forms of tracking and surveilling humans using their biometric information.
But there are a couple of interesting differences with facial recognition. For one thing, facial recognition requires almost no interaction. I don't upload my fingerprint onto social media, and I need to interact with a particular device in order for my fingerprint to be uploaded somewhere. Whereas with facial recognition, we're very casual with the way that we handle our face data.
We treat it differently than we do other biometrics; we're much less careful. You can find someone's face quite easily, and it can be included in any kind of database. And as I'm walking down the street, a facial recognition camera can capture me, and it doesn't necessarily have to notify me that I've been captured and my image processed. So there are a lot of challenges with respect to the lack of direct interaction with facial recognition tools, and that makes consent very, very challenging. The other thing about facial recognition is that, starting in about 2014 with the DeepFace model developed by Facebook, deep learning has been the primary method of training facial recognition models. And that means there's an enormous data requirement for modern facial recognition tools to work effectively.
What this means is that databases of millions of faces are required to train a really effective model to detect particular attributes, a database much larger than what you might need for other biometric tools using other types of methods. So as a technology, it is quite particular; there are characteristics of facial recognition that position it to be especially harmful, challenging and dangerous. With respect to the policy directions that I hope people take with facial recognition: one direction we saw, when I was a fellow at the AI Now Institute at New York University, was that a lot of the work in that space was around disclosure, like I mentioned before. Just understanding when facial recognition is being used, and enforcing a requirement that if my face is included in a database that's then analyzed, or if my face is captured in a way I'm not aware of, I get notified. If the NYPD, or another policing group such as, like I mentioned, the Toronto police caught using Clearview AI, if there's an institution whose actions affect me, how can I be included in the process of asserting my rights, and even be notified that my police department is using facial recognition?
What that ended up becoming was the POST Act, which was essentially an effort to force the NYPD to disclose what kind of surveillance technology they were using. So I think an undervalued policy strategy is simply to encourage transparency and communication with the general public about how an algorithm is being used and when it's being used. And then outside of that, something that's also not very well considered, or often ignored, is this idea of performance.
A lot of my work has been auditing these technologies for performance on different demographic groups. We've already seen in the U.S. at least two or three cases of individuals with darker skin tones being misidentified by facial recognition tools and having their lives really negatively impacted as a result. And then the very final thing I'll say, and this was the next wave of policy interventions we saw after the audit work became public, is a move toward restriction. There are particular institutions, like I mentioned, police departments are a good candidate, but also, there's been great work by The Citizen Lab at the University of Toronto into the use of facial recognition in immigration processes and things like that. So there are a lot of cases where perhaps facial recognition is a tool that should not be used, ever. Hence a lot of proposals for moratoriums and bans on particular individuals' or institutions' use of facial recognition in contexts deemed too high-stakes or too sensitive for that kind of technology.
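The demographic performance audit described here can be sketched very simply: group a system's decisions by demographic attribute and compare the accuracy rates. This is a minimal, hypothetical illustration of the idea, not the methodology of any specific audit; the group labels, the records, and the `audit_by_group` helper are all invented for the example.

```python
# A minimal, hypothetical sketch of a demographic performance audit:
# group predictions by demographic attribute and compare accuracy rates.
# Group labels and records are invented for illustration.

from collections import defaultdict

def audit_by_group(records):
    """Compute per-group accuracy from (group, predicted, actual) records."""
    correct = defaultdict(int)
    total = defaultdict(int)
    for group, predicted, actual in records:
        total[group] += 1
        if predicted == actual:
            correct[group] += 1
    return {g: correct[g] / total[g] for g in total}

# Hypothetical match/no-match decisions from a face recognition system.
records = [
    ("group_a", "match", "match"), ("group_a", "no_match", "no_match"),
    ("group_a", "match", "match"), ("group_a", "match", "match"),
    ("group_b", "match", "no_match"), ("group_b", "no_match", "no_match"),
    ("group_b", "match", "no_match"), ("group_b", "match", "match"),
]

rates = audit_by_group(records)
# group_a is correct 4/4 times, group_b only 2/4 -- that disparity,
# not the overall accuracy, is the audit finding.
```

The point of disaggregating like this is that an "overall accuracy" number (here 6/8) can hide the fact that errors fall almost entirely on one group.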
So yeah, those are three policy directions that I often speak about, and that I hope Canada definitely embraces moving forward. - Thank you so much. Others who'd like to comment? Patricia.
- Yeah, I just thought that was excellent. Thank you, Deborah. I wanted to add a little bit of a postscript to the story about Clearview AI, and comment a little bit on the specific use case that you presented, Rebecca. As some of you may know, and Deborah alluded to this, the assistance that Clearview AI was providing to police services across Canada came to light, and it led to a joint investigation being conducted by the Federal Commissioner along with the commissioners of Quebec, BC and Alberta. Interestingly, Clearview AI pulled back and ceased providing its services in Canada, but the investigation is ongoing because of the importance of producing a report, given the stakes for Canadians' privacy. So it will be very interesting to watch.
I just want to say it's not the first investigation on this specific use case. If you recall, there were the Vancouver riots of 2011 following the Stanley Cup loss. On the evening of that game there were riots, and many fans rioting in the streets of downtown Vancouver had taken photos and uploaded them to websites, Facebook pages, et cetera. And the police then pursued an investigation that included the collection of thousands of these images.
And the Insurance Corporation of British Columbia had developed an FRT system, originally built to reduce the fraudulent acquisition and use of driver's licenses as part of the insurance process. They offered to assist the police with the use of this technology, and they were under no compulsion to do so: there was no subpoena, no warrant, no court order. It led to an investigation by the Information and Privacy Commissioner of BC, and this was more than a decade ago.
And what's interesting there was that one of the key issues the commissioner raised was the shift in use, from a technology developed for detecting fraudulent use of driver's licenses to investigating criminal behavior post-riot, without any knowledge or consent of the individuals, let alone any adequate notice to citizens that this was being done. I just wanted to point that out, because that is an interesting use case here in Canada from more than a decade ago. And the commissioner had this to say about the use of this technology without judicial authorization: judicial oversight is necessary to ensure that any changing use of this magnitude is proportional to the public good served by the infringement on the privacy rights of citizens. So there's a lot of empathy and sympathy for the use of FRT in contexts like the riots at the Capitol, but I think there are also compelling reasons to make sure that the guardrails, and respect for human rights and civil rights, are protected in the process. - Thank you, Patricia. Maybe I'll move on to another question, because I think both Deborah's and Patricia's comments and advice with regard to facial recognition and surveillance technologies really speak to data.
And to the way in which AI systems like deep learning use data, and the requirement for governments and companies to understand how that data is used, and how it is used differently in AI systems. So I really would be interested, maybe beginning with you, Golnoosh, in digging into the question: what do you think policymakers need to know about how AI systems use data differently from other big data or data analytics programs, and what should they consider when they think about the use of AI in either government products and services to citizens, or private sector products and services to customers? - I want to also mention something about the applications we discussed, like face recognition. It's not the only application where we see discrimination happening. Currently, AI tools are becoming part of modern society; people are using them in different scenarios every day. Even face recognition is not harmful only when it's used in law enforcement and policing.
You can also use it for recognizing people's faces when they come to the door, and opening the door for them. Even a scenario as simple as that could be discriminatory, right? The issue is partially coming from the data, because we are living in the era of big data. We have access to large amounts of data.
We have data from social networks, from mobile devices, from sensors, et cetera. And we also have easy-to-use AI tools at the moment. The combination of these two means that we are creating different applications in different domains that people are using without really considering the consequences. And now we have seen many negative impacts from those cases. The issue is that the people or companies using these kinds of tools are not really aware of the data they are using.
They are not looking at the data and its breakdown with respect to different subgroups. So we have discrimination coming partially from the fact that we don't have representation of the whole population in the data, yet we are using these kinds of tools on the general population. We have one group that the model is trained on, and we are applying it to all the groups in society.
That's one of the issues we can see, but I want to mention that discrimination is not just because of the data. Models can learn those kinds of behaviors from the data, because of stereotypes and historical discrimination, and can actually amplify them: they are not just learning and reproducing the same thing, they can amplify it. But unfortunately, I should say that the way these kinds of models are designed can introduce biases as well. So it's not the case that if you clean the data, if you don't have any issue with the data, the model itself is going to be clean. We have to consider that discrimination can happen on both sides, and that's why we need solutions for both.
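The representation problem described here, a model trained mostly on one group and applied to everyone, can be made concrete with a small check: compare each subgroup's share of the training data against its share of the population the model will serve. This is a minimal, hypothetical sketch; the group names, numbers, and the `representation_gap` helper are invented for illustration.

```python
# A minimal, hypothetical sketch of a representation check: compare
# each subgroup's share of the training data with its share of the
# population the model will serve. Names and numbers are invented.

from collections import Counter

def representation_gap(train_groups, population_shares):
    """Training-data share minus population share, per group."""
    counts = Counter(train_groups)
    n = len(train_groups)
    return {g: counts[g] / n - share for g, share in population_shares.items()}

train_groups = ["a"] * 90 + ["b"] * 10      # 90% / 10% of the training set
population_shares = {"a": 0.6, "b": 0.4}    # 60% / 40% of the population

gaps = representation_gap(train_groups, population_shares)
# Group "b" is under-represented by roughly 30 percentage points, so a
# model trained on this data is tuned mostly to group "a".
```

A check like this only catches the data-side problem; as the speaker notes, a model can still amplify historical patterns or introduce bias of its own even when the training data is balanced.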
I also wanted to mention an issue with AI: AI is a kind of vague term. The problem is that fairness by definition is interdisciplinary and also subjective. In different domains, we have to actually define what it means to be discriminatory.
And we do have laws against discrimination, of course. But the issue is that the law is designed to be vague (laughs loudly). It's a law open to different interpretations. But when you come to the case of AI, as a researcher, as a computer scientist, you actually need a definition that is deterministic. And that's why this kind of translation from a legal document into an AI system is hard. So we know we have issues with data.
We know we have issues with AI tools with respect to discrimination, but coming up with a solution is going to be very hard, because it is very subjective. You have to define, for different disciplines and different stakeholders, what it means to be fair or not, and that's a hard problem to solve. But it doesn't necessarily mean that we cannot solve it (chuckles), because the issue we are facing is the issue of today. And if we don't find a solution, even a good solution rather than a perfect one, today, we will be facing systematic discrimination, which is far worse than the discrimination society had before. We are dealing with harm that is going to be systematic.
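One example of turning a vague legal standard into a deterministic check, the translation problem just described, is the US EEOC "four-fifths rule" for adverse impact, under which a selection-rate ratio below 0.8 between groups is treated as evidence of disparate impact. The sketch below is the editor's illustration; the function name and toy hiring data are assumptions, not the panel's.

```python
def disparate_impact_ratio(selected, group):
    """Ratio of the lowest group selection rate to the highest.
    The EEOC four-fifths rule treats a ratio below 0.8 as
    evidence of adverse impact."""
    rates = {}
    for g in set(group):
        idx = [i for i, gg in enumerate(group) if gg == g]
        rates[g] = sum(selected[i] for i in idx) / len(idx)
    return min(rates.values()) / max(rates.values())

# Toy outcomes: group "a" selected at rate 0.8, group "b" at rate 0.4.
group = ["a"] * 10 + ["b"] * 10
selected = [1] * 8 + [0] * 2 + [1] * 4 + [0] * 6
print(round(disparate_impact_ratio(selected, group), 2))
# 0.5, well below the 0.8 threshold, so this would be flagged.
```

This is only one of many competing fairness metrics, which is precisely the panelist's point: choosing which deterministic definition to encode is itself a subjective, domain-specific decision.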
And we have to actually deal with it today. - Thank you. Would others like to comment on the issues that Golnoosh has raised before we move into a Q&A with the audience? - Yeah, maybe just very briefly, if you allow me, Rebecca, just to connect to the international context, the one I'm living in through the GPAI and (murmurs). I think those biases that have been referred to also appear, especially when we talk about developing or emerging countries. When they implement or try to implement some of those technology processes, the risk of bias has increased because of a lack of input data in that process. So that's one element. The other element that could have an influence is the difference in the cultural environment in which some of those technologies are being implemented, and the implications they might have, or the perception of them, from a cultural point of view.
I think that when the GPAI was designed, the idea was, first, to have as many voices as possible represented at the table. It's multi-sectoral: the private sector, civil society, as well as scientists and academics are at the table, so that all points of view are represented. But the idea is also, as I mentioned before, to bring to the table the voices of developing and emerging countries, so that these tools are developed taking into account that they might need to be adapted, and it's already built into the design that they will be adaptable to those specific contexts, recognizing where those differences might be. - Thank you. Deb? - Yeah, I just had a quick addition about another dimension of data's relationship with AI, especially modern tools, which lean heavily on machine learning, and deep learning in particular: the resource allocation required. Datasets used in modern machine learning and AI systems are very, very large, and it is very resource-intensive to train and deploy a model. And as a result of that, there are very specific power dynamics that come out of it.
There's a lot of environmental risk, but it's also really hard to control very, very large datasets. And the culture we do have in machine learning of focusing only on collecting as much data as possible leads to a lot of the mistakes the others mentioned: not doing a good job adapting to specific domains, not doing a good job representing the population the system is going to be deployed on. One of the causes of this is definitely just the sheer volume of data usually required for these systems to work. - Good point, thank you.
Okay, so let's turn to one of the questions from the audience. I'm actually going to pull a couple together here, because we've been talking about this really interesting thing: the rapid advancement and innovation in the technology versus the pace of policy, and the difference in speed between them. So one question that has come in is how policy can be used to advance innovation, to actually support innovation and create a policy environment where there is responsible innovation, whether that touches on data, international efforts, or otherwise.
And I wonder, Patricia, can you talk a little bit about that, in terms of how you see the interplay between policy and innovation, and the role of the commissioner? - Sure. I think there's an important role for different forms of policy. Deborah talked about agile forms of policy, which I'll speak to in a moment. But at the very basic level, there is a foundation of statutory policy required to set a level playing field for all competitors, to make them play fair and responsibly, and to spur on innovation. You talked about, for instance, C-11. C-11 is a much-needed federal bill that's been tabled to modernize the federal act, PIPEDA, which is lagging behind in many respects in its capacity to deal with some of the challenges of artificial intelligence and data analytics that were never anticipated when it was adopted more than 20 years ago.
One of the big advantages of PIPEDA was that it was principles-based from the very outset, which allowed it to survive this long, but even those principles have been overtaken, and it is in need of modernization. The good news is, C-11 is principles-based as well. There are provisions in there that deal specifically with automated decision systems. So there are certainly some really important modernizations that will help spur on innovation and provide predictability for innovators in this space. That's a good thing. There are flexible mechanisms, for instance, permitting consent exemptions for responsible use and collection of data.
And along with that come enhanced transparency requirements, stronger accountability mechanisms, the right to an explanation, and the possibility of triggering stronger enforcement mechanisms. So all of that is part of a package. From a statutory perspective, I think that's important to form the base.
But along with that, what's also important is that the bill allows for other forms of policy: more agile responses such as codes of practice and certification programs, where industry itself, the innovators themselves, can develop sector-specific guidance with the approval of the commissioner. As long as they comply with that commissioner-approved code of practice or certification program, they are provided a certain level of immunity from downstream penalties that could otherwise be imposed. These are the kinds of complementary, industry-specific compliance mechanisms that I think are very important in an innovative space. Other self-regulatory mechanisms are privacy impact assessments, or PIAs, and, in the space of AI, algorithmic impact assessments, which are an incredibly important and indispensable part of a robust privacy management program. Again, innovators can take matters into their own hands to develop the baseline that will engender the trust needed to enable adoption of these technologies.
So I'll leave it at that. I could talk for hours, but I think we need both. We need statute-based law that sets the baseline principles, and we need these compliance mechanisms from the industry sectors themselves to complement those statutes. - Thank you so much.
I really do feel like we could keep talking for much longer. There are many more questions that we all have, and lots more information to share. Jacques, I want to give you one minute, because a number of questions have come in through the Q&A about how individuals can get involved with the Global Partnership on AI. Can you, in 30 seconds or less, just touch on where they should go and what they should do if they want to learn more? - Yes, sure. First, where they should go: two things. There's a website for the GPAI, gpai.ai, and I can share that in the chat afterwards, and there's also a webpage for the (murmurs), which will be improved over time. Those are the two easy ways to get more information.
In terms of how individuals can be involved: first, within the terms of reference of the GPAI, there will be a mechanism through which experts who would like to join one of the working groups can self-nominate. There will be a selection process, so not everybody will be selected, but the opportunity will be there. That process will be developed over the next couple of months, so please keep an eye on the GPAI website for more information. Likewise, as we support the working groups through their projects over the coming years, there will be, as there have been in the past five months, calls for proposals for individuals to come and bring their expertise to the group. So again, keep an eye on our website. We also use our partners' websites, like CIFAR's, to publicize that.
But that's another way where there will be opportunities. Plus, we'll have an intern program within the task forces, so there will be multiple opportunities for individuals in Canada to join forces and bring their expertise to the table. - Fabulous, thank you. To be continued with all of you.
I hope we have an opportunity soon to reconvene and to pick up on so many of the topics we started and those we will want to continue to talk about in this very important area of work. Thank you all, Merci. It's been wonderful to be with you this afternoon. That concludes our policy panel for today. (upbeat music)