
The Forum on Information and Democracy wants to create a voluntary certification mechanism for “public interest AI”. The idea: build a label similar to fair trade certification to raise consumer awareness and encourage the development of ethical AI systems. Katharina Zügel, Policy Manager at the organization, explains what this mechanism would consist of.
A label for ethical AI systems? This is what the Forum on Information and Democracy, an organization founded in 2019 by several NGOs including Reporters Without Borders, is proposing. Like the “Fair Trade” label, the idea is to create a “voluntary certification mechanism for public interest AI”, explains Katharina Zügel, Policy Manager at the organization. She is the author of a research paper seen as a first building block towards a future “public interest AI” label.
01net.com: The AI sector is full of draft regulations and codes of conduct: the AI Act, the EU’s AI Pact, SB 1047 in the United States… What does this certification project consist of, and how does it differ from all the current initiatives?
Katharina Zügel: “Today, a large number of AI tools are made available to the general public, but also to public administrations. These users choose systems somewhat at random, or based on whichever has the best features. We saw it with the launch of ChatGPT: everyone rushed to it without necessarily understanding what that entailed. How was this system trained? What happens to my data? What does this AI offer, and what underlying worldview does it carry? So our idea is to draw inspiration from what has happened in other sectors, where the creation of labels has had an impact on consumers, but also on the companies that make the products.”
What does public interest AI mean?
The majority of AI systems have been developed by large private companies with economic profit in mind. For us, public interest AI is AI whose main objective is to serve the public interest. That excludes AI systems developed without respecting labor rights, systems that have a huge environmental impact or do not even try to reduce it, and systems that disregard questions of diversity and representativeness.
The objective of this certification is therefore to encourage companies to create systems that are more positive for society and democracy, but also to create demand for this type of tool. We want to put in place a mark that is very easy for users to recognize, because today, of course, certification mechanisms focused on safety are being created, especially in the European context. But those remain regulatory in outlook; the idea here is to create something that is accessible to consumers and which goes beyond the scale of the European Union, the only region where legislation is very advanced. The future label would offer a solution on a more global scale.
Today, there are hundreds of AI systems such as ChatGPT, Gemini, Llama, Claude, Granite and Mistral: do any of them correspond to what you call “public interest AI”?
We have not yet been able to evaluate existing AI tools, but yes, there are public interest AIs today, albeit on a smaller scale. For example, we are very close to Reporters Without Borders, one of the organizations that created the Forum. They are working on an AI tool on climate dedicated to media and journalists, which will allow journalists to do much more targeted research on climate issues.
It is a tool created with a view to bringing something to our society, and which has agreements with all the institutions that provided the data used to train the AI, even if its underlying algorithm was created by one of the giants of the sector. But the future certification would not be limited to these particular AIs; the idea is also to push for large AI systems (the most popular ones, Editor’s note) to be developed in this way.
Concretely, you published a research paper on September 19: what are your next steps?
The idea of creating a certification mechanism for AI came from our last report, published in February. It was one of the researchers, Martha Cantero Gamito (professor of information technology law and researcher at the Florence School, Editor’s note), who proposed this idea of a public interest AI label. This research paper is only the very first step.
We are going to organize a series of workshops with different actors such as UNESCO, other associations and the Partnership countries, with the idea that this leads to something concrete and kicks off the creation of this mechanism in time for the AI Summit planned for February in Paris (the Forum on Information and Democracy works to implement the principles of the “Partnership for Information and Democracy”, a text endorsed by 52 States, Editor’s note).
Launching this label involves two steps. First, we must define the governance system and structure. Once that is done, we need to define its evaluation criteria and launch a standardization process (i.e., define what must be met to qualify for the label, Editor’s note), and it could easily take a year to really define those criteria in detail.
Precisely: what would the criteria for this label be, and who could steer and manage it?
We propose creating an independent institution that would be responsible for defining the standards and revising them. It could work with certification bodies that are already well established today, such as those that administer ISO standards.
Basically, there would be two types of criteria. The first focuses on the company that offers and deploys the AI tool, and on how it operates. Does it respect labor law? Does it provide for democratic governance? Does it protect whistleblowers? What is its environmental impact? Is it capable of creating a tool in the public interest?
The second revolves around the AI systems themselves and their databases: are copyright and privacy respected? Is there a certain level of transparency? Were other actors involved in creating the system and in defining which risks are acceptable and which are not? Today these important questions are decided behind closed doors, by the company alone.
Building and then implementing this certification will take time: won’t it arrive too late?
Indeed, it would have been better to have it five years ago, from the very start of the launch of AI systems. But for us, it is never too late. We see it today with social networks: legislation with stricter rules arrived well after (their launch and mass adoption, Editor’s note). Moreover, the development of AI has not stopped; these systems’ power and capabilities will keep increasing.
The idea of this label is also to move away from the approach adopted, for example, by European law (the AI Act), which is to “avoid risks” or “control risks”. On the contrary, it is about providing something complementary to the legislation currently being adopted: creating a mechanism that encourages something positive.