Regulating Artificial Intelligence: The Brazilian Approach


- Hello to everyone who's joining us. My name is Victoria Adelmant, and I'm the director of the Digital Welfare State and Human Rights Project at NYU Law's Center for Human Rights and Global Justice. And I'm delighted to welcome you to our event today on regulating AI in Brazil.

This event is part of a series of conversations on issues concerning digital technologies and human rights, and we've been running this conversation series since 2020. Today's conversation is the 14th episode in this series. In each conversation in the series, my colleagues and I interview an expert who's doing cutting-edge work at the intersection of technology and human rights, often focusing specifically on digitalization in the public sector. So, so far we've looked at case studies such as the European Union's introduction of digital technologies in its border control operations. We've looked at digitalized welfare payments in South Africa, social credit in China, for example, and you can find the recordings of all of our previous events in this series on our webpage. You can also find summary blog posts, additional reading materials, transcripts, and guest blogs by academics and practitioners, including our speaker from today, Mariana Valente, on that webpage.

We're hoping that all of those materials can provide a really helpful repository of information for anyone who's interested in learning more about these topics. Without further ado, I want to introduce our guest speaker for today's conversation, Professor Mariana Valente. Professor Valente is assistant professor of international law and economics at the University of St. Gallen in Switzerland. She's also Associate Director of InternetLab, which is an impactful Brazilian civil society organization, working on law and technology issues including internet policy. And it's done excellent work on Brazil's Bolsa Familia welfare program and the privacy issues that have arisen as the program has been digitalized. Crucially, and most relevant to today's conversation, Mariana was also appointed to an expert commission of jurists that was created by the Brazilian Senate in order to work on a draft bill to regulate artificial intelligence.

And that bill has recently come before the Senate and will hopefully shape the future of AI in one of the largest economies in the world. I want to hand over now to my colleague Katelyn Cioffi, who's a Senior Research Scholar on the Digital Welfare State and Human Rights Project here at NYU Law's Center for Human Rights and Global Justice, and who runs this event series with me. Over to you, Katelyn.

- Thanks, Victoria. We're so pleased today to welcome Mariana to discuss recent efforts to regulate artificial intelligence in Brazil. Since our audience today is joining us from around the world, we want to begin the conversation by providing some important background information and really positioning Mariana's work on the draft AI Bill in the political, economic, and social context of Brazil. We'll then discuss some of the specificities in Brazil's recent draft law and what might characterize an emerging Brazilian approach to regulating AI.

And finally, we'll discuss what's next, what can we expect from the different proposals, and also what roles different stakeholder groups may play in the process going forward. As usual in these conversation series, we'll reserve the last 15 minutes of the session for Q and A. So we do encourage you to send in your questions using the Q and A function at the bottom of your screen and to do this throughout the session so that we avoid a kind of a crush at the last minute. So a very warm welcome to Mariana.

In introducing you, Victoria mentioned the Brazilian Senate Commission and the draft bill that you've worked on, but this process of having a commission of legal experts revise a bill isn't something that happens in every country or, indeed, for every piece of legislation. So perhaps we can start with the basics of the current efforts to regulate AI in Brazil. Can you set the scene for us and walk us through the process surrounding Brazil's draft bill for a regulatory framework on AI? - Sure, thank you. Thank you, Katelyn, for the question.

Thank you both for the introductions. I'm happy to be here. I'm happy to be collaborating with you again. We've been doing some things together for a very long time.

It's good to be here having this discussion. I think this is a very important discussion for us to be having at a more international level, let's say. So, really just starting with the basics, let me guide you through the timeline to make this process clear, right? What this draft bill is about, and what this commission in which I was participating is. So the Brazilian House of Representatives approved, at the end of 2021, a bill for regulating AI.

It's called Bill 21/20 because it had been presented in 2020. And although it had been presented then, it was discussed only for a few weeks before it was approved at the House in 2021, and then it went to the Senate. And at the time, there were no public hearings. It didn't gather a lot of public attention. And one would ask: how come? This is such a hot topic. I think it's important to understand, and important that we mention, that during the Bolsonaro presidency, and to a certain extent, apparently, it is still the case, the country was involved in a lot of political turmoil, and so many subjects were very controversial. So this bill was passed without getting too much attention, among the many other things happening at the time.

And what did it cover? So it was basically a very minimal bill. I'll raise here just four characteristics of that bill. It provided for minimal intervention, and I mean really minimal intervention.

There was a specific provision saying that specific rules shall be developed for the users of artificial intelligence systems only when absolutely necessary to ensure compliance with the laws in force. So really establishing that regulation of AI should be the exception. It also provided for a so-called decentralized model: there was this idea that particular agencies, let's say health agencies or telecommunications agencies, should regulate their own sectors.

There was no definition of stakeholders, because there were no specific obligations or sanctions. And there were a few guidelines for public action for the promotion and use of AI. When the bill reached the Senate, some stakeholders started to call attention to the fact that more discussion was needed, and that if such a bill was approved, it could crystallize a model of non-intervention before effective discussion was held.

And then the Senate decided to create this commission of 18 members. I was invited to be one of the members, and we had seven months to develop a substitute draft bill and present it to the senator who had created the commission. That was from April to December last year. And the commission held public hearings and a public consultation. We received 102 documents, contributions to the process.

We also held an international seminar to hear experts in AI regulation, and then we started working on a new proposal, a substitute for the earlier one. And we delivered this proposal in December. I think it's important to highlight that not even the commission considers this to be a final version or something that should be approved right away.

The time was really short, and although there was much participation, more participation, I think, is needed, and the right place for that, really, is the parliament. And we were working together during the process in joint meetings, but each member was responsible for heading a few efforts. And one of the things that I was heading was the contributions on the public sector, the use of AI by the public sector, and also some discussions on discrimination. And in that, I was also bringing in research. So Victoria mentioned research that we did in the past about the use of data by the public sector for welfare programs in Brazil, particularly the Bolsa Familia.

That's the largest welfare program in the country. And we were looking into data and algorithmic practices with the most vulnerable populations in Brazil, those that are entitled to receive this benefit. InternetLab has also been doing research on the use of facial recognition in public schools.

So many of the conclusions drawn from those studies and others were things that we were taking into account to think of specific rules for the public sector. And then, just to wrap up: we did deliver the bill, so it was presented in the Senate just a few weeks ago, although we delivered it in December, and it's now one of the proposals on the table. It's a very different proposal from Bill 21/20. I think we're gonna discuss this a lot, and I also think it's very healthy that this model is also under discussion now. But just to make sure you have the complete timeline: there are now competing bills in the Senate that will have to be discussed in the future. - Yeah, thanks so much for that very clear timeline and description of your role in this process.

And we know that this bill revision process isn't something that's happening in isolation. For instance, there are several other bills and policy initiatives relating to AI and also to emerging technologies. So where does the draft bill on a framework for AI fit among other legislative and policy efforts to govern emerging technologies in Brazil? - Yeah, that's a very good question. I think it would be a mistake to say that there's no legislation that applies to AI. We could say, for example, that the data protection law, approved in Brazil in 2018, applies to AI, of course. So does consumer law.

But Brazil has had in the past a very strong tradition of creative legislation for regulating technologies. You might have heard of the Marco Civil, which was approved in 2014. That was, let's say, a very local process, also following public consultations, of approving legislation that at the time was generally considered to be fairly good and progressive, because it was rights-protective legislation that didn't focus at all on criminalizing behavior online, but was really thinking of citizens.

So that's in place. But another thing that I think is worth mentioning is that Brazil is heavily discussing regulation of social media right now. This has been going on since the beginning of the pandemic, when an important bill was introduced in the Senate to tackle disinformation. There was a concern about disinformation regarding the pandemic, but also about politics in general in the country.

And this has been moving forward in the past years, and it's currently a very heated discussion. So I think the regulation of AI comes in the midst of all these discussions. And that actually may not help because the discussion of regulating social media has already been very controversial, so we still have to pick up the subject of AI regulation and make it also a public subject. We'll see when that will be possible. - Yeah, it's clear that that's a growing trend in a lot of jurisdictions around the world.

You know, there are different approaches for different kinds of technology, and also pieces of legislation that overlap and cover lots of different technological developments more generally. And much of our audience is probably familiar with the broader global trends of efforts to establish these new governance and regulatory frameworks for AI. The one that comes to mind often is the effort underway in the European Union where, you know, there has been a whole series of regulations developed over the past several years, which are really culminating in the EU's AI Act. And I think there's certainly a strong narrative emerging that the EU AI Act may be the one that sets a standard or a benchmark for the rest of the world, shaping how a lot of different governments will approach AI regulation in the future.

So in your experience, to what extent are developments in places like the European Union influencing the Brazilian process of regulating AI? - That's a very good question that speaks to the so-called Brussels effect, right? Definitely, to a certain extent. And I think the meaning of such developments in Europe is also very disputed. So in the case of social media regulation, definitely, there's some conversation now about the rules created by the DSA, and Brazil is bringing in some of that experience; but the original discussion that I was telling you about, from the beginning of the pandemic, didn't really mirror much of the discussions in Europe at the time.

Now it's more in the debate, let's say. The provisions in Europe are being brought more into the debate. When it comes to AI, I think there is what effectively made it into the draft bill, let's say, and there are all the narratives around it, right? And the AI Act, I think, has been used in both ways.

Let me try to clarify that. I think the first way is this narrative that says: Europe is regulating, it is setting standards, so how come we're gonna have just this piece of legislation that limits the regulatory power of the state, instead of doing something that Europe is already doing? They're far ahead on this. And the second way the European argument is used is what I would call the "this is not Europe" argument.

But that can be used by both sides. So in this process of developing the draft bill and listening to the stakeholders speaking in the public hearings, we kind of saw both. We saw some people saying: "this is not Europe, we're lagging behind in innovation, and if we regulate too much, we'll hinder AI development." And that's a very common narrative in the private sector specifically.

So: Europe has more AI development than Brazil, and we're not even doing that, so how come we want to create so many rules that will stifle innovation? But the same "this is not Europe" argument is used by the other side, by stakeholders who are trying to regulate more. They'll argue, for example, that Brazil is a country with such high levels of inequality, with so much violence and police brutality, that on issues like facial recognition, for example, we should be stricter than Europe.

So I think this kind of shows that it's inevitable that the AI Act is brought into the conversation. It's being brought in by both sides. It's definitely setting a standard for the conversation, let's say.

And we can talk a little bit more about how much the different bills mirror some of the discussions had in Europe. - Thanks, that's super interesting to see that the argument of, you know, are we like Europe, are we different from Europe, is used strategically in so many different ways. And just to pick up on that, I suppose it's important to note that, of course, the Brazilian context is incredibly different on many fronts, and you've just mentioned inequalities. You know, Brazil is one of the most economically unequal countries in the world.

The six richest men in Brazil have the same wealth as 50% of the population, and racial disparities are extremely stark as well. So any deployment of AI is going to be landing in a really complex and difficult landscape of inequalities. And you mentioned police brutality just now, but can you paint a picture for us of this social and economic context that we're talking about here? - Yeah, so I think the picture you paint is quite precise. Brazil is one of the most unequal countries in the world, and these inequalities are multidimensional, right? They correlate to income, race, gender, territories.

For example, if you look at the statistics, Black women are the population with the lowest income in the country. And I think another thing that's really important is that police brutality, and its disproportionate impact on Brazilians of African descent, is widespread and goes largely unpunished, and that calls for reforms in the police system. There are plenty of examples of cases in very recent years and months. And the criminal enforcement system also targets the Black population disproportionately.

I think that's very important to note, because whenever we're discussing predictive AI systems or facial recognition, it really must be taken into account that it's not just that there's discrimination; there's very high police lethality against these populations, and very disproportionate imprisonment as well. And then there are other inequalities to be taken into account. For example, 20% of the population is not connected, and still they're affected by technologies and algorithms, right? Be it by facial recognition in public spaces or by algorithms used in public policies, for example in welfare programs. And that speaks a lot to technologies and potentially AI systems used in the public sector, right? So, for example, what's the specific responsibility that should be borne in that context? In the case of the Bolsa Familia program that I was speaking of, we did not see AI being employed, but that could be the case in the very near future, of course. And there are experiments with AI being conducted, for example, in programs for the unemployed.

So definitely the situation of stark inequalities must be taken into account. - Absolutely, and you've just kind of mentioned the public sector deployment, and I'd like to kind of go into that in a bit more detail if that's okay. So you mentioned earlier that you were within the commission responsible for looking at public sector deployment, specifically, and in your own research and practice in the past and currently in the sphere of emerging technologies, you've especially focused on the deployments of digitalized systems that have emerged in the public sector and impacted some of the most vulnerable people in Brazil.

You know, you've done research into the harms surrounding data-driven, automated decision-making systems in the public sector, for example. So given the severe inequalities that you've talked about, is there a particular sphere or example that you're especially concerned about? A sort of example that stands out to you and really demonstrates some of these harms in more depth? - For sure. So I agree with you very much that the public sector is a key concern, because it's where especially vulnerable people are impacted, and the state has enormous power over individuals, right? And we've seen a lot of excitement and not much constraint on the use of technologies by the public sector, right? So, for example, if we consider using algorithms in general, or AI systems in particular, to select beneficiaries for welfare programs, this has a huge impact on people's lives, much more than almost anything. So we continued studying welfare programs after the study on the Bolsa Familia at InternetLab. We did a study on the emergency aid, a benefit that was provided by the government during the pandemic, and it was aimed at a larger spectrum of the population. And one of the particularities of this program is that you had to apply through an app, which already led to so many problems we could speak of.

But many people had the benefit denied based on false or incorrect data. This was, of course, a situation that could already be addressed by the data protection legislation, but I think it shows the extent to which people's lives can be affected by the use of such technologies, right? We're speaking of people who were in very vulnerable situations and were denied benefits. And the only thing they could do, because there was no embedded way of questioning the decision individually, was going to court. And then we're speaking of people in a situation of high vulnerability, during a pandemic, having to go to the courts, right? To question an algorithm. And I think that shows the gravity of the problem, and why we must really think of ex-ante obligations and of ways of seeking redress as well.

Another example is the use of facial recognition in public schools. InternetLab has been doing studies on that too. So some public schools in Brazil were using facial recognition systems to certify attendance in class, and there is no public information about how this data is handled. We're speaking of children and teenagers in an educational context in which, of course, they have no choice, and no consent is possible.

So I think that kind of shows the extent of these problems when we're speaking of the public sector and of a population of 220 million inhabitants. - Absolutely, those are really very rich examples that show the extreme harm that we can see, also looking forward. I think just very briefly, maybe to touch on what kinds of positive use cases have been salient in the public imagination in Brazil: I don't know if there are any particular examples that you have found that, you know, legislators or the public have been especially excited about in terms of AI use cases within Brazil, whether that's self-driving cars or anything like that.

- You mean cases in which AI has been used in positive ways, ones that people really recognize? I'd have to think about that; working with harms, I'm looking too much at problems. But certainly, I'd have to think a little bit about that. I think, in general, the approach is positive. I think, in general, the population sees AI as being able to bring many benefits to society, in terms of research, medical uses, and even, I don't know, providing benefits to people. - Yeah, I think overall, you've created quite a vivid picture of how high the stakes are for getting this right, both for some of those potential positives, you know, realizing maybe some of these benefits that could improve people's lives, but in particular on the harm side of things. You know, the really egregious effects that these can have, particularly on vulnerable groups.

So I guess the really natural follow-up question to this is: how does the draft bill seek to address some of these concerns and mitigate some of these potential harms? Could you explain a bit what kind of approach was adopted in the final version that was eventually submitted to the Senate? And what would you say were the key considerations and provisions that were included in the bill? - Okay. So speaking broadly, in terms of the general model of this draft bill, the draft bill that we created: it borrows from the European AI Act in that it distinguishes AI systems according to risks. Not the same categories or the same examples are used, but this idea of separating AI systems according to the risks they pose, considering areas of application, is quite mirrored by our bill. But I think what's most important is the difference, right? And the difference is that it merges this risk-based model with a rights-based model. And that speaks very much to a Brazilian tradition of regulating certain legal areas according to individual and collective rights.

So not only are principles established; individual and collective rights are conferred on subjects by this bill. And these rights can be pursued in court. So let me try to make it more concrete. The person affected by an AI system has the right to challenge and request the review of decisions, recommendations, and forecasts generated by such a system if they produce relevant legal effects or impact their interests. And this is a right that can be brought to the courts if the person doesn't get a response they consider satisfactory. So the bill does create ex-ante obligations for the agents of AI, which in the case of our draft bill are the suppliers and operators of the technology.

They have to comply with standards, which were thought of as a way to prevent harm to people's rights, right? To make these agents think of the possible harms that could be created and act ahead of time. And that's calibrated according to this risk scale. But anyone who's affected may also bring a complaint, and that applies to all systems, regardless of the level of risk. And I think that's the most important difference, and that's something that we really thought about considering all these levels of specific harms that can be produced in this context in Brazil. It may be said that all of those rights already existed in the legislation, but the draft bill specifies them in the context of AI.

So in the cases that I was mentioning, for example, you could find other provisions in the legislation that could allow someone to challenge a decision, right? So, for example, if it's based on personal data, that could be done through the data protection law. But we were trying to specify how that applies specifically to AI systems, which in that case could be using personal or non-personal data, right? So yeah, one thing that's perhaps important to mention here, because of the discussion we were having about the specific harms involving the public sector, is that some extra obligations were established for the public sector when it adopts AI systems that affect people's rights in a relevant way. So, for example, no racial information can be processed, and that was included following the outcry arising from a Sao Paulo smart city project called Smart Sampa; and also, the relevant population must have the right to participate in the decisions regarding the adoption of such systems. That's something that's foreseen for the public sector but not for the private sector. So there are many elements which try to address the specific requirements that should be in place when we're considering these specific responsibilities of the government, especially towards vulnerable people. - Yeah, I think it's very interesting to hear how some of the provisions in the draft bill borrow from the European approach, but also introduce some of these new obligations, using the Brazilian legal tradition to weave in these more individual rights mechanisms that, as you say, already exist but are now being specified for specific technologies.

And picking up on what you were saying about the provisions that are specific to the public sector, I wonder what were some of the unique challenges of addressing, in particular, public sector deployments of AI? Were there any ways that you saw it really deviating from the approach targeted towards more private sector actors, or any specific challenges in tailoring obligations to public sector authorities and bodies? - Do you mean concrete examples, or the discussions we were having inside the commission? - I think a bit of both; the discussions you were having inside the commission, you know, were there any roadblocks or barriers to developing some of these more public sector-specific obligations? - Definitely. So I think here I go back to Victoria's question, and even my difficulty in answering it, right? I think when we're focusing on what can go wrong, we are also faced with what can go right, right? And the good uses of these technologies, and how we can, let's say, encourage their adoption and not create rules which would inhibit any adoption of artificial intelligence. So I could mention a couple of things. I can think of one of the rules that some of us were intending to create, thinking of a very radical transparency in the use of these technologies by the public sector. One of our ideas during the drafting process was to create a public database in which documents like the preliminary assessments of the systems used by the public sector would be centralized. As a researcher, that was also something that I wanted very much, you know? That there could be one database in which all these systems are listed, and that could be an obligation.

And one of the things brought up by other members was that this had been attempted in other processes and hadn't worked well, because of lack of budget and lack of coordination between all the over 3,000 municipalities in the country. So we ended up with a lighter transparency requirement, requesting municipalities to publish information on their websites or wherever they publish information. That means it's not centralized, right? You would have to go to each municipality and find the information, or request the information if it's not there, for example. So some of those things were frequently brought up, right? That's something you have to think of: the capacity of municipalities in the concrete situation of the country. And there was a specific conflict also under discussion, and I think that's an important thing for me to talk about, because it's also a disclaimer, right? As a member of a commission of 18 participants, of course, I was one voice; and the very process of preparing the draft bill, even if it's seen as a more technical moment because we were an external commission, still encompasses political decisions, right? So some considerations of the political scenario were made, like this one that I was referring to.

And although the report was adopted by consensus, I think everybody in the commission had to be flexible about some of their views on how things had to be in order to have a consensus adopted. And yet again, this still has to go to public discussion, right? To see the positions of the broader society. But one of the things that civil society has asked for loudly was a complete ban on facial recognition systems in public spaces, especially for public security ends. And this has not made it into the bill. Instead, we managed to create something similar to a moratorium, establishing that a law needs to be approved for such uses and that that law must meet certain criteria. And this provision has been heavily criticized by civil society for being too soft, including the criteria that were created for this new law.

And I agree with this criticism. During the negotiation, an argument that was raised was that, with such high levels of criminality in the country, we should not outlaw technologies that could support solving crimes. In my view, though, the losses are greater than the benefits of using such technologies to fight criminality, especially considering the level of discrimination, especially racial discrimination, in the Brazilian criminal system, and how public and consensual it is that these technologies discriminate, right? So that was a moment in which these considerations were brought in and weighed, and this position of completely banning this use, let's say, lost. And it's definitely gonna be part of the discussions in the next steps. - Excellent. Well, picking up on those next steps, I want to look ahead now. You mentioned this a little bit at the outset, in terms of what's coming next, but I wonder if you can tell us a little bit more about what will be coming next in the process. You know, it's been three years now since the original bill was published.

And as you mentioned, the draft bill by the commission was published in December. So what do you foresee in the coming months in terms of adoption or debate in the Senate, the next step in the process? - Sure, yeah. So as I was saying in the beginning, the bill has finally been introduced into the Senate, and it doesn't replace the previous bill.

Instead, now we have these two different bills. There are also some other bills that had already been presented to Congress. So all of these different proposals are going to have to be discussed now. It's still very early to know what will happen, because, again, it's been just a few weeks since it was presented.

But I think one thing that's important to stress is that there hasn't been a lot of discussion on the side of the government yet. And the government is usually a very important stakeholder in terms of like pressure for themes to be discussed more intensely in Congress, let's say. And as I was saying, there's a lot of discussion around social media regulation right now, and that's been very contentious.

And I think that's one of the reasons why the government is busy with that discussion, and with other discussions too, on the technology side; that's perhaps my assessment. I think it's also important to mention the reactions to the draft bill, right? So once we presented it, it became clear that at least most of the private sector would prefer the previous version. They've been speaking out in the press and in policy briefs against the regulation that we proposed, stating that it's still early. So there have been some articles in the media, for example, stating that Brazil should not be the first country to have all-encompassing legislation on AI, that the discussion is not mature yet.

And on the side of civil society, what I think is that there's a consensus that it's good that there is this new proposal on the table, a proposal that really deepens the discussion, let's say, and thinks of regulation and of rights. But, of course, civil society is still unhappy with many of the provisions as well; for example, the facial recognition one that I was mentioning, but other things too. So they also think that more discussion on specific issues is needed. So there is some pressure from civil society to move the discussion ahead, but not complete support for this bill specifically. It's support for more discussion, and for a model that thinks of rights and of harms, which the previous bill didn't, right? So yeah, this is all I can say for now, because these have been the reactions; really knowing what the next steps will be is a kind of futurology exercise that's very hard to do right now, considering the other subjects that are in Congress.

What I would like is for us to move further in this process, having more discussions, more participation, and I think it's good that this new proposal is on the table. That's my opinion. - Thanks, I mean, that's really interesting, hearing about the various different reactions to the bill. It's always somewhat disappointing that there's been a negative reaction from the private sector, and then civil society actors haven't been overly happy either. It's, you know, a little bit of a tricky situation. And just to pick up on that question of the reaction of the private sector, and the role of the private sector here as well.

You know, in this discussion, we've talked especially about public sector deployments of AI, in part because government deployment often receives a lot less attention than private sector examples when we're thinking about artificial intelligence. But, of course, the private sector plays an enormous role, both in the development and deployment of AI and in the policy environment; as you said, there's been this big reaction in the media from the private sector in Brazil. So I wonder if you could tell us a little bit more about the role of some of these private technology companies, whether that's homegrown Brazilian companies or foreign tech companies reacting to this bill, and also how you see their role playing out in the future as regulatory initiatives, not just this bill but the other legislation you've mentioned, continue to roll out. Do you think they will be playing a central role here? - Yeah, I think so. To take your first question, when we're speaking of what this private sector is, I think we could say it's a mix of both.

The private sector usually acts through associations. So in these associations, you have national companies and foreign companies, and they're all acting through these groups. What I can say is that there's a very strong narrative.

It's not a consensus, okay? So it's not brought by the whole private sector, but there is a strong narrative in Brazil that the private sector needs, let's say, a libertarian environment to develop, and that one of the problems for technological development in the country is too much regulation. And that's a complicated view. It's not that there's no truth to it, that regulation could hinder innovation, but it's also brought up, in this case, in a way that doesn't take into account the specific harms that these technologies can also produce, particularly in a country like Brazil, right? But that narrative has a strong hold in some sectors of society, and I think that makes things a bit complicated. On the other hand, there are relevant parts of society that are pro-regulation, and more and more, I think, people are concerned about the harms of technologies. But yes, I think that's a very important knot, a very important point in this whole discussion.

Like, what does regulation mean, right? And the meanings and narratives around regulating technology. - Hmm, thank you very much. Just to end on a final note, thinking about the many different actors involved here and the moving-forward piece. You specifically are in a really interesting position, because you are an academic expert involved in a legislative process, as we've discussed, but you are also embedded in civil society as the Associate Director of InternetLab. So in the coming months, what role do you think civil society can play, or should be playing? And as we're joined by an audience of people from all around the world, from outside of Brazil, would you have a call to action for lawyers, scholars, and civil society around the world? - It's an interesting question. Just yesterday I was looking at a new study that came out from EPIC, the Electronic Privacy Information Center, focusing on the harms of generative AI and saying, specifically: well, there's a lot of hype already around these technologies, let's focus on the harms.

And I think that's an important role that academia and civil society have to play, right? Because those are the stakeholders who can really show what's beyond the hype, a hype that is also commercial, right? That also speaks to the interest in adopting such technologies. And again, they can be very beneficial, but you have to counter that with other narratives, and data, and studies, and everything. And I think that's a very important role for academia and civil society to play, to clarify such questions. And in terms of clarifying such questions, these sectors have been really successful in a way, right? I think some of these harms are pretty well-known because of studies and campaigns and things like that.

When it comes to the Brazilian space, I think it's very important that we really speak up for this model of regulation that takes these things into account. I don't think the specific draft bill's words or provisions have to be defended, but the model, I think, is really important. And, to be fair, civil society has been doing that a lot, for example against the previous model that I was mentioning, the one of no intervention at all.

And when it comes to the international context, I think there's very little information out there about this process going on in Brazil. So sometimes you see international discussions about what's at stake, about which countries are regulating AI right now, and they don't even mention that there's this initiative in Brazil. So first, raising awareness about it, and about the fact that there's a model being discussed here that brings new things to the table; I think that's already important and can support this process. But also, when thinking of research and campaigns, being really sensitive to the fact that the context is different in these different places, right? Some of the things that I was mentioning are very specific, let's say, to Brazil and to some other countries in the Global South, with their own characteristics, of course, but the situation is very different from the situation under discussion in Europe. I think that's another thing to be aware of and to pay attention to when developing studies and campaigns and everything.

- Yeah, that's excellent. It's great to kind of end the primary conversation with some action points for our audience and some things to do. We're gonna move to Q and A now.

We've got just over 10 minutes, so please continue to submit questions through the Q and A function. We have one from Alex Barbosa, who says: thank you for these very clear explanations and insightful reflections, Mariana. Given the timing, and the fact that Brazil is under its most conservative legislature since the late '80s, I question how to balance urgency with precaution.

And the question: could you mention other obligations in place within the new draft bill for national and foreign companies? - Yeah, good, thank you. Thank you, Alex, for your question. And I think that's another thing that's important to clarify for an international audience, right? Which is very contextual. So there's a new government in Brazil since January, but we're still speaking of a Congress that's very, very conservative; that hasn't changed much in that respect, and this is why I was also saying that it's very challenging to have anything discussed and approved right now.

And I think I don't have a good answer to your question. It doesn't just speak to AI regulation, right? It speaks also to social media regulation right now. In some ways, in the past four years, when we had a conservative government and a conservative Congress, on some topics I was just really happy that the subject was simply not raised. You know, when you have that feeling that maybe it's better that this is not even discussed, because you're sure more harm will be produced than benefits.

And then when you ask how we should act with precaution when there's also urgency, I don't think I have a good answer to that. And I think we're really learning a lot from this process of regulating social media right now, right? The only more concrete thing I can say about my opinion is that we're seeing that urgency is not helping to move these discussions to a better place, right? So I think we had very little time last year to develop this project, and now there's another alternative on the table. I don't think we should be discussing this in only the next six months, for example. I don't think that's the timeline we should be working with. Although these technologies are being employed already, and they are causing harm already, right? When I was speaking of AI technologies being developed in employment ... for unemployed people,

the first thing I think is: this is being done without oversight, without transparency. But I think there are already some instruments that could be used and are not being used, you know? So let me be clear: I think it's very important that we have this regulation for AI. I think this is happening right now, and it's urgent. But I don't think it's worth doing it so fast that we don't have the breadth to discuss all the subjects that we need to discuss before having good legislation approved. So I don't know if that answers your question well.

It's an attempt. And then you're also asking about other obligations in place within the new draft bill for national and foreign companies. Yes, because I was talking more about the public sector, right? So as I was saying, rights are created, and that's very transversal in the draft bill that we created: those rights, citizens have against governments and against companies, so that applies to everyone. When it comes to the private sector, besides all those rights that I was mentioning, there are all these ex-ante obligations that were created, which didn't exist in the previous version of the bill.

So the first thing that a stakeholder, meaning an operator or a supplier, has to do when deploying an AI system is to make a preliminary assessment of risk, which can be reviewed by a centralized independent authority that also has to be created by the government. And then, depending on the risk, there are different levels of obligation. When we're speaking of high risk, for example the use of these technologies in employment, in health, or involving children, there are many obligations that have to be followed, and that's thought of as a way of preventing harm, right? As I was saying, making a human rights impact assessment is something that forces a stakeholder to sit down and think about these harms, which often is not done, right? And they'll have to report on the risks identified and on what has been done to mitigate such risks, and all of that can be audited; there are provisions regarding that. So there are many preventive measures besides the rights that were created for citizens. And then, of course, as I was mentioning for the public sector, we created an extra layer of preventive obligations, which refer to the specific responsibility that governments have towards citizens.

I see there's another question there. Do you want to read it? - Thanks, Mariana. Yeah, that's a very rich answer. If it's okay, in the interest of time, I'll combine several questions. We had a question in advance from Alan McFarlane, who is asking about generative AI. And I think we could anticipate that this would come up, in terms of the timing here.

And the question describes a robot android approaching its owner in a warehouse and telling the owner, "I want to be free." And the owner just turns the robot around and presses the reset button. So this question really goes to the fears that are arising now about the rise of generative AI, about robots taking over.

And so I suppose to kind of add a piece onto that, you know, what has been the role of generative AI and sort of the fears around that within the discussions that you've been having? So that's kind of one question. Then we've got a question as well from Joao Victor Stuart, who says, "Given the international functioning of AI machines and systems across boundaries, do you think that an extraterritorial application of EU AI legislation in Brazil would be beneficial? Or would that cause another form of colonialism?" - Oh, such an easy question, okay. Thank you, Joao, okay. Should I take those two? - [Victoria] Yeah.

- All right. I thought this question was going in another direction. I thought they would ask about jurisdiction and the extraterritorial application of Brazilian law, which is an issue that we have discussed in the past, right, Victoria and Katelyn? I spoke so much about the use of AI by governments, and there it's pretty clear on whom the law is enforced, right? But let's take things together: ChatGPT, for example, they're not headquartered in Brazil.

So there's the whole challenge of holding them accountable under the different legislations. Everybody saw that ChatGPT was blocked in Italy over the consideration that it was not complying with the Italian data protection law. So that's definitely a challenge, and not an easy one to answer. And we are seeing... it is a big issue in Brazil right now regarding social media and messaging apps, especially Telegram, which has not been responding to court orders, and there are many threats of blocking it as a means to make it comply with orders.

I have the impression that these conflicts are only going to be heightened over time. But you're asking about something else, right? You're asking about the extraterritorial application of EU AI legislation in Brazil. I think there are different levels to this discussion. In a concrete case, it could be beneficial, considering a citizen, right? Considering the specific interest of someone who wants to see their rights observed. And I'm not only thinking of the European citizen, but of the Brazilian citizen, right? When you have protections in another part of the world that, let's say, become embedded by design in a certain technology, that benefits people in other parts of the world.

But if we go beyond the concrete cases to a broader discussion, at a broader level, for sure, we have a problem there. And I think this is one of the problems for our generation to solve: this issue of global platforms which respond to some jurisdictions more than to others, right? That's definitely an issue. We've also been seeing that a lot in harmful ways. So, for example, the US DMCA, in the field of copyright, has really been enforced in Brazil very easily.

You can just file DMCA reports from Brazil and have the US legislation applied, even if the Brazilian Congress hasn't yet looked at the situation to decide on the model for copyright protection. Anyway, it's just an example. So I think it can be harmful, it can be good, but on a broader level, it needs a political discussion, right? And then, referring to generative AI: so that's something, right? The European Union revised the AI Act to encompass generative AI more clearly.

And there has been a discussion within the commission as well for us to stop, take another look at the draft that we wrote, and see if it encompasses the problems and other tensions involving generative AI. I do think that the description of AI and of high-risk AI encompasses technologies like ChatGPT, but there has been some discussion around it. And I think this shows the difficulty of regulating a moving target, right? This is really changing very fast. And it's amazing that we're speaking of this now, because when we delivered the bill in December, it was not a big issue yet.

And it's been only six months. So it's really a challenge to make good legislation that doesn't get old in six months. And that really requires a lot of thinking and a lot of technique, I think. But I do think, just as a short answer, that the draft bill applies to generative AI. I don't know if it encompasses all the challenges; I think we still need some conversation around that.

- Yeah, I think it's such a crucial point, and something we've seen in all areas of digital government: how fast technology evolves, and how to design future-proofed rules and norms that will apply equally well. We're almost out of time, but maybe just one final question, from Marina Garrote, which is: could you talk a little more about how existing legislation could be used to protect from AI harms? - Yeah, sure. So I was mentioning, for example, this AI system that's being tested by the Ministry of Labor in Brazil... I referred to it three times, and I didn't explain what it is.

So let me just say what it is. Microsoft partnered with the government to develop a system that profiles the unemployed and tries to match them with employment opportunities. And for those who are less qualified, let's say, it would direct them to training and qualification opportunities. And there was a study done by Fernando Bruno from the Federal University of Rio de Janeiro about this system.

One of the issues was that there was very little transparency on what data was being used and what criteria were being used, or even on whether citizens would be aware that they were being offered opportunities according to a system. The researchers made many requests for information and didn't get any information on preliminary assessments for discrimination or any of that. So I do think that a specific regulation on AI would definitely help, because there are so many layers to the problem. But there are things that can be done already with the data protection legislation, right? There is a right, not a very good and complete one, but there is a right to review of decisions made based on personal data in the Brazilian Data Protection Law. It could be improved, and we tried to improve it in the draft bill, by the way, but it does exist, right? So what I meant is that when we are speaking of systems that use personal data, the data protection law definitely applies. Another thing is that consumer law can be applied and enforced when we're speaking of a consumer relation, right? And that's not always the case.

So when we're speaking of the public sector, that's often not the case, right? But there are many protections against faulty products, let's say, that can also be interpreted in that sense. And, of course, there are the provisions of civil law. What I want to say is that there's no such thing as "there's no law that applies to artificial intelligence, and we're in a situation of a legal void." There are many things that could apply.

And we also need a sort of activist legal interpretation to do it. But at the same time, it would be very, very helpful to have legislation that clarifies and goes beyond, right? In terms of making these technologies accountable and accounting for the specific challenges that they pose to the population. - Thank you so much. That's such an important point to end on: that we are not in a legal void at all, and that, as these efforts are ongoing, it's not the case that no provision from existing legislation could be used to address some of the harms that we've been discussing in today's conversation. I'm afraid we're out of time, but thank you so much to Professor Mariana Valente for this fantastic conversation.

This has been incredibly rich and helpful. So thank you very, very much for your time. Thanks to everyone who has attended today, and we hope to see you at one of our next events.

Thank you so much. - Thank you. Thank you for the great conversation.

- Bye. - Bye.
