Generative artificial intelligence (AI) systems are increasingly at the heart of our economy and societies. They must be explicitly regulated in the European AI Act at a time when a reckless race to deploy them is underway. France and Europe can fully engage in the deployment and economy of AI, provided they capitalize on their strengths: the protection of fundamental rights, cutting-edge industry and trustworthy AI.
Generative AI systems, increasingly used for producing texts or images, have begun to flood our markets. They are developed in two main phases: first, the development of a model through learning from large amounts of data and significant computing capacities, known as a foundation model or generative model, and then, in a second phase, its implementation in a system. This can be a general-purpose system like ChatGPT, which produces responses to queries, or systems tailored to different industrial sectors, after fine-tuning with domain-specific data.
Foundation models are notoriously unreliable and not robust. Even their designers do not understand precisely how they work. However, they are powerful because they include a phenomenal amount of information and have thus become the basis on which a plethora of systems and applications are built.
Placing the regulatory burden almost exclusively on the providers of second-phase systems, who build and deploy solutions based on foundation models, is neither fair nor desirable. It would even be detrimental to French and European industrial innovation. Providers of these foundation models, regardless of their nationality, must also take their share of responsibility right from the design stage.
This is the current challenge of the trilogues, the negotiations between the three bodies of the European Union (the Council, the Commission and the Parliament) to reach the final version of the European AI Act. Mainly at stake is the position of France, Germany and Italy concerning articles voted in June 2023 by the European Parliament to lay the foundations for regulating the development and deployment of foundation models whose effects could threaten our democracies. Indeed, these models are capable of producing false information and performing unwanted actions, potentially generating a tsunami of disinformation, fraud and cybersecurity incidents in the coming years.
The first version of the AI Act, proposed in April 2021 by the Commission prior to the rise and spread of these types of models, did not mention them. It adopted a risk-based approach, providing for a gradation of legal constraints according to the level of risk presented by the intended use of a given system. To account for the rapid rise of generative AI systems, the Council, in its November 2022 amendments, very rightly introduced the notion of so-called general-purpose systems. The Parliament then proposed its own amendments in June 2023 to specify the obligations of the providers of foundation models themselves.