Generative AI: recent legislative effort in the EU


Generative artificial intelligence (GenAI, for short) is artificial intelligence capable of generating text, images or other media using generative models. As a rule, GenAI focuses on creating new and original content: chat responses, designs, synthetic data, etc. GenAIs are already transforming businesses and allowing humans to be more creative and productive. The AI Act, recently negotiated in the EU, is expected to become effective in 2024-25.

Famous GenAIs
Among the most famous GenAIs worldwide during the last decade are the following platforms:
– Lightricks, from 2013, used to power various tools for artistic expression;
– OpenAI, created in 2015;
– Hugging Face, from 2016, used for machine learning solutions;
– Synthesis AI, from 2019, which combines cutting-edge technologies with innovative generative AI models;
– Glean and Stability AI, both from 2020, providing AI models for image, language, code, music, video, life sciences and various other scientific areas;
– Anthropic, from 2021, working in the fields of machine learning, physics, policy and product;
– Jasper, from 2021; and the most recent, Inflection AI, from 2022, which aims to improve the general interaction between humans and computers.

  OpenAI – one of the first in the GenAI family – is regarded as one of the top companies in the generative AI sphere: it is well known for its innovative language models such as GPT-3, and it has expanded the whole spectrum of artificial intelligence capabilities. Presently, its models are able to produce writing that resembles human output, conduct conversations and even write poetry. OpenAI has received a lot of recognition and attention for its dedication to advancing AI while making it available to ordinary individuals.

  GenAIs are already transforming businesses and allowing humans to be more creative and productive; the mentioned top ten generative AI startups have already greatly expanded the initial functions and possibilities of AI, including software that generates lifelike graphics and music, streamlines design processes and boosts customer experience.
These GenAI startups often come up with innovative ways of using large language models to, for example, improve video production, image creation and copywriting.
It would be interesting to see the progress of generative language models…
Source: https://www.analyticsvidhya.com/blog/2023/10/top-generative-ai-startups-in-the-world/

GenAI perspectives
Some experts predict that generative AI may follow two trends:
= On the one hand, companies aim to leverage generative AI internally to reduce the costs of partnerships with third parties. According to McKinsey, 33% of organizations aim to use generative AI to reduce costs in their core business (which might reduce the diversification of partners and suppliers from other industries and countries), and 12% aim to create new businesses and/or sources of revenue.
= On the other hand, generative AI infrastructure will require cross-industry collaboration: even if a company aims to reduce its partnership costs with third countries, become more competitive and generate higher revenues, building the AI infrastructure still requires deals with other industries to make that goal feasible.
Source: The geopolitics of Generative AI: international implications and the role of the European Union, in: https://media.realinstitutoelcano.org/wp-content/uploads/2023/11/jorge-alvarez-geopolitics-of-generative-ai-international-implications-and-the-role-of-the-european-union.pdf

   Presently, the world’s favorite cloud marketplace for IT professionals to buy, sell and manage best-in-class technology solutions is Pax8. Pioneering the future of modern business, Pax8 has cloud-enabled more than 350,000 enterprises through its channel partners and processes one million monthly transactions. Pax8’s award-winning technology enables managed service providers (MSPs) to accelerate growth, increase efficiency and reduce risk so their businesses can thrive.
More in: https://www.pax8.com/en-eu/

European efforts
After long deliberations and complicated negotiations, the European Parliament and the EU Council (the two main legislative institutions) reached, in early December 2023, a provisional political agreement on the draft legislation on artificial intelligence, the so-called AI Act.
Thanks to the European Parliament’s resilience and to long and intense deliberations, the world’s first horizontal legislation on artificial intelligence will deliver on the EU-wide promise to ensure that rights and freedoms are safeguarded in the development of this revolutionary digital technology.
The “political decision” is regarded as a historic achievement and a huge milestone towards Europe’s digital future: the agreement effectively addresses a global challenge in a fast-evolving technological environment, in a key area for the future of European societies and economies, commented member state officials.
First, the European co-legislators cleared the “thorny issues of foundation models and general-purpose artificial intelligence systems”. On this point, although France, Germany and Italy had put pressure on the EU Council to replace the planned rules with “codes of conduct”, the European Parliament’s approach seems to have prevailed. The codes of conduct called for by Paris, Berlin and Rome will still see the light of day, but will complement the AI legislation by serving as a support for providers of systems and models presenting systemic risks, so that they can comply with the future rules.
AI systems (as well as the models on which they are based) that present systemic risks will be subject to strict rules, including model evaluation, systemic risk assessment and mitigation, and adversarial testing. Serious incidents will have to be reported to the Commission, together with the measures taken to address them and to ensure cybersecurity. In addition, reports on the energy efficiency of the models will be required.
Models presenting systemic risks are defined by the amount of computation used in their training, with the threshold set at 10^25 floating-point operations (FLOPs). Below that threshold, systems and models will only have to comply with lighter transparency requirements, such as keeping technical documentation up to date, complying with the provisions of the EU copyright directive and publishing detailed summaries of the content used to train them.
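To make the threshold more concrete, here is a rough back-of-the-envelope sketch (not part of the draft text) using the commonly cited 6 × N × D rule of thumb, i.e. roughly six floating-point operations per model parameter per training token; the model sizes and token counts are purely hypothetical examples, not official classifications.

    # Rough sketch: estimate training compute with the common 6*N*D heuristic
    # (about six floating-point operations per parameter per training token)
    # and compare it with the 10^25 FLOP threshold mentioned in the draft AI Act.
    # The model sizes and token counts below are hypothetical examples.

    SYSTEMIC_RISK_THRESHOLD_FLOPS = 1e25

    def estimate_training_flops(num_parameters: float, num_training_tokens: float) -> float:
        """Back-of-the-envelope training-compute estimate: 6 * N * D."""
        return 6.0 * num_parameters * num_training_tokens

    def presumed_systemic_risk(training_flops: float) -> bool:
        """Models at or above the threshold would fall under the stricter regime."""
        return training_flops >= SYSTEMIC_RISK_THRESHOLD_FLOPS

    examples = {
        "hypothetical 7B-parameter model, 2T tokens": estimate_training_flops(7e9, 2e12),
        "hypothetical 1T-parameter model, 20T tokens": estimate_training_flops(1e12, 2e13),
    }
    for name, flops in examples.items():
        regime = "systemic-risk rules" if presumed_systemic_risk(flops) else "transparency obligations only"
        print(f"{name}: ~{flops:.1e} FLOPs -> {regime}")

Under this approximation, only the very largest training runs approach the 10^25 mark; most models would remain in the lighter transparency regime.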
High-risk AI systems, i.e. those posing significant risks to health, safety, fundamental rights, the environment, democracy and the rule of law, will also have to comply with a set of strict rules, failing which the product concerned could be withdrawn from the European market.
An impact assessment on fundamental rights should be carried out, and users should be informed of the ‘high-risk’ nature of the systems they are using. Public entities using this type of system should register in the EU database. However, in areas deemed critical, exemptions may be granted if suppliers can prove that the system in question does not pose significant risks.
Open-source models will be exempt, unless they have been identified as presenting systemic risks or their providers market a system that could be considered high-risk.
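Taken together, the tiers described above can be summarized in a simplified, non-authoritative sketch; the category names and obligation lists below paraphrase this article, not the legal text itself.

    # Simplified, non-authoritative sketch of the tiered obligations described
    # above; names and wording paraphrase this article, not the legal text.
    OBLIGATIONS_BY_TIER = {
        "foundation models below the compute threshold": [
            "keep technical documentation up to date",
            "comply with the EU copyright directive",
            "publish detailed summaries of the training content",
        ],
        "models presenting systemic risks": [
            "model evaluation",
            "systemic risk assessment and mitigation",
            "adversarial testing",
            "report serious incidents and cybersecurity measures to the Commission",
            "report on energy efficiency",
        ],
        "high-risk AI systems": [
            "fundamental rights impact assessment",
            "inform users of the high-risk nature of the system",
            "registration of public entities in the EU database",
        ],
    }

    for tier, duties in OBLIGATIONS_BY_TIER.items():
        print(tier)
        for duty in duties:
            print(f"  - {duty}")
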

  The co-legislators also agreed on the future European AI Office, responsible for overseeing the implementation of the text, contributing to the development of standards and testing practices, and enforcing the common rules in all EU member states. A scientific panel of independent experts will guide the Office by developing standards and evaluation methods and by advising on the designation and emergence of high-impact foundation models. The future European AI Office will work with representatives of the member states, who will be involved in drawing up codes of practice for the foundation models.

Other issues in the AI Act
The draft AI Act also provides for penalties: fines could reach €35 million or 7% of a company’s annual worldwide turnover for breaches involving prohibited AI applications, €15 million or 3% of annual global turnover for breaches of other obligations under the legislation, and €7.5 million or 1.5% of turnover for supplying inaccurate information.
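As a purely numerical illustration of how these ceilings scale with company size (a sketch only: the text quoted above gives the two caps for each tier without a calculation method, and treating the ceiling as the higher of the two figures is an assumption based on how the agreement has been reported):

    # Illustrative sketch of the penalty ceilings quoted above.
    # Assumption: the applicable ceiling is the higher of the fixed amount and
    # the turnover-based percentage, as the agreement has commonly been reported.
    PENALTY_TIERS = {
        "prohibited AI applications": (35_000_000, 0.07),   # EUR cap, share of turnover
        "breach of other obligations": (15_000_000, 0.03),
        "inaccurate information": (7_500_000, 0.015),
    }

    def fine_ceiling(violation: str, annual_worldwide_turnover_eur: float) -> float:
        """Return the illustrative maximum fine for a violation tier."""
        fixed_cap, turnover_share = PENALTY_TIERS[violation]
        return max(fixed_cap, turnover_share * annual_worldwide_turnover_eur)

    # Hypothetical company with EUR 2 billion in annual worldwide turnover.
    for tier in PENALTY_TIERS:
        print(f"{tier}: up to EUR {fine_ceiling(tier, 2_000_000_000):,.0f}")

For the hypothetical €2 billion company, the turnover-based percentage exceeds the fixed amount in every tier, which is why both figures appear in the draft.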
Finally, the text also provides that individuals may lodge a complaint with the relevant market surveillance authority if the provisions of the AI Act are not complied with.
The text still has to be put to the vote in the European Parliament and formally approved by the member states. Before that, the ambassadors of the EU-27 are due to examine the document by the end of December 2023, with official approval expected in January 2024. A series of technical meetings will be held in the meantime to finalize the last aspects of the text.
The provisions relating to bans will come into force six months after the publication of the regulation in the Official Journal of the EU. Six months later, the rules on foundation models and conformity assessment bodies will take effect, while the text as a whole is expected to become fully applicable two years after its entry into force.

 
