[BGM plays] (Brandie Nonnecke) Welcome to this special episode of TecHype Live. I'm your host, Brandie Nonnecke. Today we are joined by a global leader in responsible AI governance, Minister Clara Chappaz. As the French Minister of AI and Digital Affairs, she is in charge of implementing France's responsible AI governance strategy.
As part of that mission, France will host the AI Action Summit in February 2025, bringing together world leaders across both the public and private sectors to identify ways to harness artificial intelligence to benefit society, to develop appropriate standards that ensure responsible AI, and to coordinate AI governance strategies around the globe. So, to kick off our discussion, I was hoping you could give us a summary of France's AI strategy. Where is it heading now, especially with the European Union passing the EU AI Act?
(Clara Chappaz) Great. Very happy to be here with you this morning.
So it's the first time, actually, that AI is part of the title of the Secretary of State for Digital Affairs, and I think it gives you a sense of how important we believe this technology is. Because the potential for fundamental progress is enormous, but it also raises so many questions, and you've mentioned a few of them. And that's basically a testimony to how we want to make sure not only that we make France one of the powerhouses for AI, but that we make it a powerhouse because we have a vision of how the technology should be used. And we want to make sure that we can bring countries together to support this vision.
And that's the AI Action Summit that we will be hosting February 10th and 11th, 2025. I'm sure we'll have plenty of time to talk about that. But to come back to your question, where do we stand, and how did we get here over the last few years? We didn't wait for ChatGPT to think that AI could be transformative.
Back in 2018, France put together its first national AI strategy, looking at the things we have in our country that could be an edge in building this technology. The very first and foremost is talent. We're in one of the most prestigious universities in the world; France is also home to some of the best education and training programs and research labs,
when it comes to mathematics in particular. And that's probably one of the reasons why, on this trip (we landed on Tuesday), I've seen French people all around the Valley working on AI. So back in 2018 we thought, okay, if we are going to be serious about this technology, we need to reinforce our research capabilities. And so we've been supporting the development of research and training programs focusing on AI. We have nine of what we call AI clusters, which are very interdisciplinary and are working on advancing the science behind the technology, because this is one of the strengths we can bring to the world when it comes to developing AI. Second is energy. I'm sure we'll get a chance to talk about it, because that's also part of the vision for the AI summit. There are a lot of great things that can happen with AI, but yesterday we were discussing sustainability and AI with 15 leaders, and energy is obviously the key question on this topic. France is lucky to have green energy thanks to nuclear. And so that has also been something we've been really pushing when we think about developing AI: how can we make sure we strengthen our edge when it comes to having clean energy, and strengthen the infrastructure behind it, so that we can have training and computing facilities?
We actually have a national supercomputer that is used by our researchers and that is green, because it runs on nuclear energy, so that we can advance the science on those types of issues as well. And then ultimately, the third big piece of the strategy was how we create an ecosystem where more and more scientists move toward launching their own companies, so we see this transfer happen from research science toward building LLMs. I think a lot of people, maybe in this room or at least during this trip, have told us about Mistral AI and all those labs that are advancing the science when it comes to LLMs. And they're part of a very vivid ecosystem of 1,000 AI companies, each smarter than the last, working on developing not only sovereign infrastructure and LLMs but also the applications, especially in health care, where we are, for instance, very well positioned. So that's been the focus. I think we're entering a phase, and this will probably make a link with the topics of research you're working on here, where we have great building blocks and great tools, but where we see a gap in adoption. And that's also something that was quite surprising for me when I arrived here.
If we are not going to see companies, administrations, and everyone in this room and beyond adopting this technology, we're not going to benefit from the progress it can bring. So phase two of our strategy has been really focused on helping companies, and administrations as well, understand better what the use cases are, and understand how they can really bring this technology into their daily operations to see what good can come out of it. And this is going to be one of the major focuses, I think, for the year to come.
(Brandie Nonnecke) Yeah. And of course, the summit is called the AI Action Summit. So it's really thinking about how do we spur this adoption.
Yeah. How do you balance advancements in innovation with ensuring that there is safety and security, and that the public who adopt it can actually benefit?
(Clara Chappaz) I think everything is completely linked, actually, because when you talk to people about AI, the truth is we are very excited about AI in this room, but the majority of people would not feel comfortable using this technology. Sorry for the people who have heard this anecdote quite a few times now, but the very first words I was told when I landed in the US on Tuesday were from an agent helping us and guiding us through security. The American agent, helping us navigate when we left the airport, told us: welcome to the US, Madam
Minister, and please make sure AI doesn't kill my job. Wow. So I think that is top of mind for them, this concern. And that gives you a real sense that if we don't create this trust that this technology is here to bring benefits for all, it's never going to be adopted. So I don't see trust and safety topics on one hand and innovation topics on the other, because they can only work together. And I think companies should understand that; they only have an interest in pushing safety topics.
Because if we don't answer those very normal questions (it's a new technology, it raises so many concerns, and the science to understand the full extent of both its potential and its risks is not yet developed), we're never going to get the full benefit from it.
(Brandie Nonnecke) Yeah. And that is fascinating to me, that they're expressing this concern about AI replacing their jobs. And one thing I'm concerned about is that they may have no choice.
They may not have the autonomy to push back, with companies just adopting this technology and rolling it out. What is the role of regulation in ensuring these protections?
(Clara Chappaz) Yeah, that's where our view is probably more optimistic. What we're trying to achieve, and we were also talking about this with the state of California earlier today, is seeing how this technology can empower a lot of workers to do their jobs better. So take a very concrete example that we were discussing this morning.
In all the states of the world, there's a lot of administrative work that goes into, I don't know, auditing a health facility, making sure that everything is checked and balanced. A lot of hours get spent just writing reports and comparing reports against what needs to be done from a regulatory perspective. If AI can help those workers fill out and analyze those reports instantly, it means the workers who were spending countless hours just filling out papers can now have time to think: what is a way I could improve safety in those health facilities, or what is a way I could improve processes? So the biggest challenge, I think, is not how jobs are going to be displaced, but how we are going to manage to train workers all around the world to use those tools for their great benefit.
And to that extent, the ministry of AI in France is linked to the Ministry of Higher Education and Research, because a big focus will need to be put on how we bring people toward gaining confidence in how they can use the technology, and how we collectively make sure that we adapt the way people work, but also the way they learn. Someone yesterday was telling me that even in some of the best universities, a lot of students are now wondering whether or not they should go into computer science. Because if AI can code for me, why do I still need to learn computing? And really, who saw that coming, when that was probably the most attractive job they could choose? Exactly.
So I'm not saying we have all the answers, but I'm saying the role of governments is obviously to create safety around the use of those models, but beyond that, also to understand what types of transformation society overall needs in order to take the benefit of this technology. And education is a big piece of it.
(Brandie Nonnecke) Absolutely.
And I've often heard it's not that AI will replace you; it's an individual who knows how to harness AI who will replace you.
(Clara Chappaz) Exactly. And to that point, that's exactly the vision we have for the AI Action Summit, if I may go in this direction. So we will host this summit February 10th and 11th, 2025. It's a follow-up to the summit that happened at Bletchley in the UK a bit more than a year ago now, where government heads came together, when AI was just starting to be widely used globally, and said: how do we regulate this thing? The vision we have as a country
is that it's such an important topic that the way we think about it globally cannot be focused only on regulation; it needs to be focused on exactly the question you raise: how do we make sure that, contrary to what has happened before with technology, we don't end up in a world where it's always the same people who can take the full benefit of the technology, whether it's the countries building these technologies or the people who are more tech savvy, for whom it will be easier to use it and get the benefits out of it.
So that's why, at our AI Action Summit, we want to make sure we first get everyone around the table, not just the governments that are building the technologies, but all countries, north and south, and try to see all together: how do we bring those government heads together, and how do we create discussion with scientists from all over the world, with companies, with experts, with non-profits? Because everything is moving so fast, we need to create those interdisciplinary discussions so that we as governments can stay ahead of the curve and really think through what direction we want to go in. And the reason we want a summit that is so inclusive is because we want to make sure that, out of the summit, we can create commitments toward using AI for the greater good. And how do we do that? We do that by trying to get commitments, for example, on the foundations that would help countries who don't have access to much AI infrastructure now get access to better infrastructure, better data and languages, so that we can get all those countries up to speed and get them the benefits of this technology. But this is not going to happen,
we believe, if we don't have a very proactive view and take action to make it happen.
(Brandie Nonnecke) I haven't heard of this plan before, of sharing data and infrastructure. Is there a plan already in place within the EU to share this data and these models?
(Clara Chappaz) This is what we want to pledge for during the AI Action Summit. So you'll have concrete commitments taken at that time. And we're hoping, too; we've been testing this idea with the companies and organizations we've met during this trip to see how people are responding to it.
And I think it's much better understood today, the extent to which we need to think a little more thoroughly about how this technology gets into people's hands, than it probably was ten or fifteen years ago. So we're hoping to get commitments in this direction.
(Brandie Nonnecke) Yeah. And in the United States, we have the National AI Research Resource, which is a public-private partnership with the large AI companies, where they're sharing compute power, data, and models.
So we have a bit of a model being developed here, and it's great to see this initiative happening also in the EU.
(Clara Chappaz) And what we want to do is really go beyond the EU and get all countries together during the AI Action Summit. And if we can be a little more concrete, the three things we want to get out of the summit are, first and foremost, governance. I think everyone agrees that we need a way to bring safety to the usage of those models.
But there have been so many initiatives around the world. We also need to come together, as we have with the GPAI, and the fact that this initiative is now linked to the OECD. But there's more we can do in this direction, and we want to push this effort. Second, I think there is a big question around the common good, which we've discussed: how do we make sure we get everyone the benefits of this technology? And, sorry, third, sustainability. We touched on it when I talked about nuclear energy, but yesterday we were discussing those topics with a group of professionals.
And what we're expecting to get out of the summit is standards: how do we improve the science behind measuring the impact of those models? Because if we don't measure it, we probably cannot make it advance in a way that is more sustainable. Also, small models. We've talked so much about large language models over the last 24 months, but there's a lot we can do with much smaller models that don't necessarily give worse performance. And guess what? They consume less, which is probably a good thing.
So trying to get the science to advance in this direction is something we're going to aim for. And reasoning around data consumption and clean energy is definitely something else we're going to push forward.
(Brandie Nonnecke) I could not agree with you more that there has been this hyper-focus over the last two years on these significantly advanced frontier models, but there are all sorts of forms of machine learning that can be used to address societal or business challenges. On the governance mechanism, though: the European Union, of course, passed the AI Act, while in the United States, at the federal level, we don't have a comprehensive AI law.
We have some bills that have been passed into law. For example, in the state of California, Governor Newsom signed 18 bills into law that address AI. And it's sort of creating a patchwork. What can we do moving forward to ensure that there is global alignment, international coordination, on a responsible AI strategy?
(Clara Chappaz) It's a very important question, because back home in France, or here in the US, I think all the research labs and companies we've talked with understand the need for safety if we want to drive adoption, but they're always raising this concern: how am I going to handle navigating the many different ways that different countries and states are handling this issue?
First, I think collaboration is very important. The reason we came this week, initially, is because it's a convening of the AI safety institutes. So the ten safety institutes are gathering. And because this technology is advancing so fast, it matters to have experts in the room, scientists in the room, all thinking together about the right way to define and categorize models, because today's answer is maybe not exactly the same as yesterday's.
The science behind AI is evolving. How do we evaluate models? Having people working on this from the company side and from the scientific side, getting everyone in the room, is very important to make sure that the way we're thinking of evaluating models stays aligned with the way the models themselves are evolving. So collaboration is extremely important. And in that respect, it was great to see the AI Safety Institutes start working as this network and really collaborating during this week, gathered here. And we want to do more of that during the AI Action Summit. But then, definitely, there will be a need to make sure we stay as aligned as possible, country to country.
That's why the GPAI initiative, working hand in hand with the OECD, is also very important, so that we can make sure that whatever gets written down and built in one area of the world spreads across, and we try to align on international standards. The EU is working on a code of conduct at the moment, and during this process there is definitely a piece about how we align and how we make sure we can push toward standards that are used internationally. But we also need to be realistic: this technology is not new, but the way it has gotten so involved in everyone's daily life is pretty new. And so I think that's going to be a big challenge, which we need to keep in our heads at all times when we're thinking about safety
and when we're thinking about regulation: how do we make sure that the way we go about regulating is not too static, so that we can adapt as our understanding of the technology evolves? Because there's still so much to understand, from the way we make sure we understand the risks of the technology to the way we make sure we have a means of regulating those risks.
(Brandie Nonnecke) Yeah, I often think about that. The greatest innovative component of large language models and large multimodal models was not necessarily their advancements in machine learning, but the user interface. It made it incredibly easy for anybody to use the tool, which then spurred adoption and awareness. Now, the European Union takes a risk-based approach to assessing AI systems, not necessarily focused on how significantly advanced a model is. Over the past couple of years,
there's been much more concern about frontier models, these large models. As you've pointed out, there are many forms of machine learning, and they can perform just as powerfully, in some ways, for certain use cases as these large frontier models. How do you see that evolving in the international discussions: are we going to continue to focus on technical capabilities, or think more about a risk-based approach?
(Clara Chappaz) Yeah, it's a good discussion
and a good question. Our approach at the European level, and the view our president, Emmanuel Macron, has pushed for, is to find the right balance between regulation and innovation, and in doing so to really make sure that what we focus on is regulating the risks, because the technology itself is not bad; it's the way you use it that could cause a problem. So we'll keep on pushing for this vision, and we're hoping to now pay very close attention to how we implement the EU AI Act, because the text is written, but over the next few years we also have to see concretely how it's going to get implemented across the union.
One of the things we need to pay attention to, going back a bit to the conversation we were having before, is how do we make sure that each EU member state doesn't implement it in a way that is wildly different from the others? I think you have the same thing here, with the states having their own regulations. So that coordination will be something we are very careful with.
(Brandie Nonnecke) And earlier, before today, I had the pleasure of meeting with the Minister, and we were talking about a very controversial California bill, SB 1047, introduced by Scott Wiener. This bill would have put in place pre-deployment risk assurance requirements for these significantly advanced frontier models. The bill was vetoed by Governor Newsom.
However, I wonder if it will be reintroduced in the new term. But you also say that it's quite famous in France. And I'm curious: how does Europe think about a bill like SB 1047 that is primarily focused on the risks of these large models?
(Clara Chappaz) Yeah, I think the vision we've had over the last two years is, again, this vision of collaboration, because everything is moving so fast. We've obviously been very keen to understand how the conversation was evolving here. And that's why, as well, it's important for us to keep having this big summit where we bring everyone into the room, because it's only by having collaborative, interdisciplinary discussions that we'll be able to evolve in the right direction around this issue, and not by having each government head in their own country trying to assess which direction to go. So that's why it was quite famous for us: it was good to see what approach was taken, but also what the reception was in the country, and to follow the work that has been done since the bill was vetoed.
What I understand from the conversations I've been having with some officials in the state of California earlier this week is that a lot of work has been done on getting pilots that use this technology for some very concrete use cases: the administrative work we talked about, but also transportation and language, helping people who live in the state access services in their own languages. So their approach, of trying to get the technology into the hands of the people and trying to understand how they use it, was also interesting to watch. Going back to your question, that's definitely why we want to make sure we can have a global conversation during the AI Action Summit, because every country is going a bit in its own direction on how to evaluate and regulate those models, but we all have to learn from each other.
(Brandie Nonnecke) So, another very controversial topic is open versus closed AI models. And I remember when the EU AI Act was being debated, France pushed back on some limitations on open access, saying, and I don't mean to put words in your mouth, but essentially that open models can support innovation. What is France's stance right now on open versus closed large models?
(Clara Chappaz) Yeah.
We believe open has a very great role to play when it comes to accelerating innovation, but also when it comes to all the discussion around safety and evaluation. There has been a lot of work done preparing the summit that will happen in February. Actually, I was at an event two days ago where they were discussing some of the latest developments around open. But similarly, what came out of this working group and all the discussion is also the need for common standards to help evaluate the models that come out of this field.
So, yeah, I mean, this is one of the fastest-moving areas we've seen: we have companies like Hugging Face, which maybe you're aware of, that have helped the entire community make so much progress on open. Mistral AI, which is creating some of the most famous LLMs now, is also very strongly in favor of open innovation. And from a philosophical perspective, we are definitely in support of this type of work, because it can help us get so many more people involved in the way we build the technology, and it can help us accelerate the understanding behind it.
(Brandie Nonnecke) Yeah, I think that is a really important component of open. But there are also some risks, right? It can enable certain scammers, schemers, and nefarious actors to exploit these models.
(Clara Chappaz) That's why standards, I think, are definitely one of the next big things we will want to take concrete steps toward, especially during the summit. How we create common standards to be able to evaluate this type of behavior is going to be very important.
(Brandie Nonnecke) Yeah, I completely agree with you. Standards are core to this. I believe standards bodies like ISO, the International Organization for Standardization, are helping to guide not only the technical standards for these systems but also socio-technical standards, which are incredibly important. So standards are the thread that brings everybody together, where they can actually get on the same page about implementing appropriate risk mitigation. One thing I'm very concerned about is that we have very high-level discussions around AI governance with different countries, and also with the private sector, and we're not necessarily speaking the same language, so to speak. How do we truly incentivize collaboration between the public and private sectors with the shared goal of truly achieving responsible AI governance?
(Clara Chappaz) That's exactly the thinking we've had when we've thought about how this technology is evolving and how we can build more responsible and sustainable AI, and it's what we collectively need to pay attention to. There is nothing that could be as powerful as if we manage to get everyone speaking in the same room and collaborating.
But I'm quite optimistic, because of what was happening here this week and the interest that came from all around the world: all the safety institutes gathered in the same room with people who hold very different perspectives, companies, scientists, nonprofits, really wanting to go in the same direction and help the conversation move forward. And when I see the interest in the AI Action Summit in February, we have people who really want to chip in and be part of the conversation from all the different angles, because I think everyone realizes, and this is exactly your point:
it's only if we manage to collaborate that we will talk the same language and make the technology progress. Because ultimately, I don't think anyone can be against a more responsible and ethical use of this technology. The question is, how do we do that in a way that doesn't hinder and doesn't slow down innovation? And there are plenty of ways we can take steps in that direction. That's really what we're aiming to do with the summit.
(Brandie Nonnecke) Yeah. And I think it's about showing the incentives. So for the private sector, you know, building trust through safe and secure systems enables not only individuals to adopt your technology, but also other companies.
(Clara Chappaz) Yeah. Yesterday, someone from Salesforce was telling us: our customers are asking for transparency. It's not just because we want to be a nice company. When we are selling those models, like with any other technology, you want to know what's going on behind it: what data is inputted to train the model, what are the different ways that data is used, what data you're going to give access to. And companies are asking for it.
So the incentives are aligned, and companies have understood that if they want, even from a pure commercial perspective, to get the most out of the technology, they have to build products that are responsible, transparent, and ethical, because those are the only products people are going to buy.
(Brandie Nonnecke) So yeah, meet that market demand for responsibility. I also think, you know, for those consumers, their ability to question this in the first place has been, to me, a significant achievement. I've been working in the AI governance space for quite some time now, over five years, maybe even closer to ten. And then ChatGPT came out, and everybody's excited.
Everybody's talking about large language models and large multimodal models; nobody's talking about simpler forms of machine learning, which sort of upset me. But now I think the discussion is going back.
People are realizing, through the introduction of these large models, that, oh, there are other forms of machine learning I should also be concerned about and question and ask for transparency on.
(Clara Chappaz) And I think that's also due to the fact that you have more and more product companies taking advantage of the technology. When it first came out, the way the product was built was, to the end user, probably a bit of a black box, because you would ask a question and you would get an answer, but you would not have a way to really understand the reasoning behind the answer given to you. That is a big difference compared to the way we accessed information before, because you had to do some of the work yourself: where you would click, what links you would consume. More and more work is being done by all those companies now to give you more information on how the technology behind it works. And I think that is helping too. Again, the incentives are aligned, because that helps bring more confidence, more understanding, more transparency into the information I'm retrieving, and
how it is actually getting put in front of me. And that can only increase adoption. So in a way, this is a bit of a natural cycle, where aligning the technology with the need for confidence will drive adoption again. And everyone can only push for that, because in the end, if we don't increase adoption, we are not going to get the most out of the technology. And look: we've talked a lot about conversational agents, but when you talk to scientists using the technology to advance research in medical drugs, curing some of the diseases we are not able to cure at the moment
on this planet; when you think about education; when you think about all those big topics, the last thing you want is people not feeling comfortable advancing in that direction. So everyone's incentives should be aligned. We all need to be aligned with the goal of trust in these systems and making sure they're more responsible.
(Brandie Nonnecke) A reminder to the audience: if you have questions, please raise your hand, and Sarina will hand you a note card. We'll collect those and start the Q&A in a few minutes. Now, I have a fun bonus question for you. Who in here has seen Emily in Paris? Who in here loves it as much as I do?
(Clara Chappaz) Okay, I have to say, I cannot say that in France, because in France we think it's so cheesy. But I like Emily in Paris. Please don't share this part.
(Brandie Nonnecke) How cheesy is it? Because then I can scoff at the screen. But Minister Chappaz is not only a leader in responsible AI governance but also a very successful businesswoman, having first launched Lullaby, a second-hand marketplace for children's clothing, and then led Vestiaire Collective, a booming pre-loved fashion platform that features prominently in the show. Now, I'm curious: in your work, you've always prioritized social and environmental sustainability and equity, these conscious business practices. How can we best institutionalize those values into AI?
(Clara Chappaz) It's such an important question, because we were talking about that yesterday during this roundtable on sustainability and AI.
But coming back to your point that we might not need such large models for every question: every time I use Le Chat, which is Mistral's chat assistant, or ChatGPT to ask a very simple question, I always think, okay, is the environmental impact created by this request really necessary, when I could basically find the answer in a way that doesn't need such a large model? But for that discussion to happen, again, we need to come to standards of measurement, because we need a much better sense of the impact of those models on energy consumption. And it's not just energy; it's also water.
Because if we want more efficient data centers, managing heat is a big part of the equation, and to cool those data centers you need more water. So this also has an impact on the environment.
So first, making sure we understand the impact much better; then making sure we can educate people in general that small is beautiful, and that there are a lot of ways small models can answer a lot of our needs. I think during this summit, what we'll do is a contest on small models: taking some of the pressing questions and creating a challenge around them, getting companies like Hugging Face very involved, and trying to build small models for questions that so far have been answered with much, much bigger models. So it's a beginning, but it gives me comfort to see that these questions have been discussed here as well during my trip. And again, we need science, because science will help us solve some of those big questions, for sure.
(Brandie Nonnecke) Yeah, exactly.
And I'll just mention that this has been an issue in the United States. We've been looking at the use of nuclear; I think Microsoft is now using a nuclear reactor to power some of its tools. And at the federal level, there was a proposal under the Biden administration around green data centers, looking at alternative energy.
But we'll see how that pans out in a second Trump administration, whether green energy will still be prioritized. I remain hopeful that it will.
(Clara Chappaz) We have great green energy in France, so we can host the training and inference of quite a few models, we hope.
(Brandie Nonnecke) I do have a question from the audience. So, we have been discussing AI safety quite extensively.
How do we define AI safety? What does it actually mean? And aligning toward what goals? And I'll add: whose goals?
(Clara Chappaz) Big question. The view we've taken in Europe is that safety needs to happen in the way the models are used.
So it's really about the risks associated with the models: we are trying to give users confidence that a model has been evaluated in a way that can provide that safety. And behind safety there is quite a lot going on: the ability to gain transparency into what goes into those models, the ability to understand the responses of those models, and the ability to create a framework in which different usages carry different requirements, so that we can make sure this technology is used in a way that is not going to harm humans, basically.
(Brandie Nonnecke) Yeah, and ensuring the security of these systems and data. I'm grabbing another question.
(Clara Chappaz) If I could maybe just add something on processes:
there are a lot of pieces that need to be looked after. Data, for example: we spent quite a bit of time last week with cybersecurity experts in France, because there's a lot of advanced science around cyber in the country. And the way models today are able to make sure they don't get data poisoned is, for example, one aspect of safety.
So it really goes all the way around.
(Brandie Nonnecke) The next question is quite interesting. There are several lawsuits in the United States touching on this issue: the role of training data, and especially the role training data plays in open versus closed models and access. In the United States, The New York Times is suing OpenAI for the use of its data, claiming that OpenAI has infringed its copyright and that ChatGPT produces articles that are nearly verbatim the same, you know, not transformative of the original work, and thus affect the value of The New York Times.
I'm curious: what do you think about the role of training data, and what do we do?
(Clara Chappaz) It's a big topic as well at the European level, and obviously in France, because France is a country where culture has been quite an important part of our history. So the way we're looking at this is that we now have a group of experts doing some work, from the transparency of the data that goes into those models to, indeed, what a fair arrangement could look like between the ones creating that data, so authors, journalists, etcetera, and the models themselves. It's a bit early to have conclusions; I think we will get the reports in a few months.
But it's a topic of attention where we need to continue building the thinking, because, yeah, it raises valid questions right now.
(Brandie Nonnecke) Yeah, we're all grappling with this, and we'll see how these high-profile court cases pan out in the United States,
and also what's happening in the EU. A question from the audience: do you think the EU might be acting too hastily in its adoption of regulation at this stage, or is the United States just too slow?
(Clara Chappaz) We're not laissez-faire, letting the market decide. It's just a vision we have of how technologies should be a tool for society's progress, and of how we create a way to ensure this vision. So, I often hear, and it drives me a little bit nuts,
I have to say: America innovates, China copies, and Europe regulates. But we don't regulate for the sake of regulation. I mean, regulation is not an end;
it's a means to an end. And the end is that we believe consumers need a certain framework: one in which they have more competition, which brings better and cheaper products, because there is a fairer market; and data privacy, because that's the way we think we can guarantee consumers a freedom, the one defined by their data privacy.
All of those elements that are part of the vision of how we want to create the framework around technologies are just the way we see the world, in a sense. And I know there has been a lot of discussion around freedom lately, but we believe this regulation helps bring together this vision of freedom. Then, what I think is right is that the EU AI Act has now been quite well written.
And our president has been very active in saying that it should not come at the expense of innovation. And we are pretty happy with the result; it's quite balanced. But what will be very important is how we implement it. One of the things we need to be very cautious about, and I'll be very cautious about this, a new European Commission was just nominated yesterday or the day before, so some of the work we will do with them is how we actually implement the EU AI Act so that we don't run into things that will slow down companies. Because, as we talked about, if every country has a different way of implementing it, even a company like Meta or Google is going to spend a very significant amount of its resources navigating all those regulations.
So can you imagine how you do it if you're a small company? And innovation also needs to come from small companies; we are quite convinced that they actually often have some of the best ideas on the planet. So the way we now implement these bills is going to be the most important thing we need to pay attention to, so that we don't fall into a situation where we would slow down innovation, and especially where we would make it harder for new entrants to enter the market.
(Brandie Nonnecke) And especially the role of these smaller companies. I have heard from a few who have said that they are waiting for the larger companies to essentially mess up, fall on the sword, get their wrist slapped by the EU.
But there's also that timing mechanism for the EU to do the enforcement, so that other companies can better understand what is allowed and what isn't: how do I maneuver this uncertain terrain?
(Clara Chappaz) And that's a point as well where work needs to happen: making sure that we give certainty to companies on what they have to do. And so the EU AI Office is working now on this code of conduct, which will answer a lot of those questions.
(Brandie Nonnecke) Yeah. And how do we best ensure that companies are able to grow while this area is quite uncertain? I'm thinking about the smaller companies trying to navigate this space, with the larger companies moving ahead first. How do we best support the smaller companies?
(Clara Chappaz) What's been exciting on this trip, but also back home, is that I've actually seen quite a few players that are newer than the older players, and I was impressed by how fast they've been able to put together products that have gained so much traction. For example, a company we visited yesterday is just two years old, I think, and the speed at which it is growing gives you a sense that we are still very early when it comes to bringing this technology into the hands of consumers. There's still so much that can happen, and so much market opportunity for newer players, simply because the product they build will be better. We were with a few VCs yesterday or the day before, some of them American VCs who invested in Mistral AI, which is building LLMs out of Paris. And they said the reason we invested in Mistral is simply because
there's one of the best teams on the planet there. So because this market is still pretty new, there are a lot of opportunities for new players to build products that will be widely adopted. And we're talking here about infrastructure products.
But if you look into applications like healthcare, new chemistry and materials, education, etc., similarly, the amount of opportunity you can even think about is huge. So there is no reason why people should not get in the race and try to bring benefits to more people thanks to the technology.
(Brandie Nonnecke) Yeah. Now is the time for action, to jump in, right? With the AI Action Summit occurring February 10th and 11th in Paris, France. Thank you so much for joining us today for this TecHype Live recording. Very grateful, Minister Chappaz.
(Clara Chappaz) Thank you. Thank you.
(Brandie Nonnecke) Thank you for joining us for this special episode of TecHype Live.
Want to learn more about the AI Action Summit? Or are you concerned about the role emerging technologies will play in your life, and what governments are doing to protect you? Check out our other episodes at TecHype.org.