Artificial intelligence (AI): the EU and world-wide regulatory efforts


The existing and potential benefits of AI are enormous for all walks of human life: for households, business and the economy. However, the accelerating implementation of AI brings new challenges. As the regulatory front-runner on AI, the EU is contributing to AI governance worldwide. By setting AI standards, the digital community can pave the way to a fundamental revision of global technology governance and secure member states' competitiveness in AI.

   More advanced machines, increasingly less dependent on human operators, have been introduced on the digital market during the last decades. Many of them incorporate AI; some, known as collaborative robots or cobots, work on defined tasks in structured environments, yet can learn to perform new actions in this context and become more autonomous. Further refinements to machines, already in place or expected, include real-time processing of information, problem solving, mobility, sensor systems, learning adaptability and the capability to operate in unstructured environments (e.g. construction sites). The emergence of new digital technologies, such as artificial intelligence, the Internet of Things and robotics, raises new challenges in terms of product safety and human security.

European approach
For years, the Commission has been facilitating and enhancing cooperation on AI across the EU member states to boost competitiveness and ensure trust based on EU values. Following publication of the European Strategy on AI in 2018 and after extensive stakeholder consultation, the High-Level Expert Group on Artificial Intelligence (HLEG) developed the Ethics Guidelines for Trustworthy AI in 2019 and the Assessment List for Trustworthy AI in 2020.
The first Coordinated Plan on AI was published in December 2018 as a joint commitment with the EU member states.
The Commission’s White Paper on AI, published in 2020, set out a clear vision for AI in the EU as an ecosystem of excellence and trust, setting the scene for today’s political agreement. The White Paper was accompanied by a report on the safety and liability implications of AI, the Internet of Things (IoT) and robotics.


   Recent assessments have concluded that the current digital product-safety legislation contains a number of gaps that need to be addressed; these were collected in the draft of the so-called “machinery directive”. The new risks connected to emerging digital technologies include:
- first, risks originating from direct human-robot collaboration, as collaborative robots (so-called co-bots) designed to work alongside human employees are increasing exponentially;
- second, risks originating from connected machinery devices on the digital market;
- third, the way software updates affect the ‘behavior’ of machinery after it is placed on the market;
- fourth, the need for manufacturers to perform a full risk assessment of machine-learning applications before the product is placed on the market;
- finally, autonomous machines and remote supervisory stations: current rules regard “drivers and/or operators” as responsible for machinery movement (the driver may be transported by the machinery, may accompany it, or may guide it by remote control), but set no requirements for autonomous machines.

More detail is set out in the Commission’s proposal for a new Regulation on machinery and digital products.

    In April 2021, the Commission proposed additional rules and actions aimed at turning the EU into the global hub for trustworthy AI. The combination of the first-ever legal framework on AI and a new EU-wide coordinated plan with the member states is intended to guarantee the safety and fundamental rights of people and businesses, while strengthening AI uptake, investment and innovation. In addition, the so-called new rules on “machinery” will complement this approach by adapting safety rules to increase users’ trust in the new, versatile generation of digital products.
Independent and evidence-based research by the Joint Research Centre has been fundamental in shaping EU-wide AI policies and ensuring their effective implementation. Through rigorous research and analysis, the Centre supported the development of the AI Act, clarifying AI terminology, risk classification and technical requirements, and contributing to the ongoing development of harmonized AI standards.
The political agreement on the AI Act reached by the European Parliament and the Council in December 2023 paves the way to its formal adoption; a transitional period will then follow before the Regulation becomes applicable in the EU. In the meantime, the Commission will convene AI developers in the EU and around the world who commit, on a voluntary basis, to implement key obligations of the AI Act ahead of the legal deadlines.
To promote rules on trustworthy AI at the international level, the EU will continue to work in numerous international organisations, e.g. the G7, the OECD, the Council of Europe, the G20 and the UN. Recently, the EU supported the agreement by G7 leaders under the Hiroshima AI Process on International Guiding Principles and a voluntary Code of Conduct for advanced AI systems.

Note. The G7 Hiroshima Artificial Intelligence Process was established at the G7 Summit in May 2023 to promote advanced AI systems on a global level. The initiative was part of a wider range of international discussions on AI, including at the OECD, the Global Partnership on Artificial Intelligence (GPAI), the EU-US Trade and Technology Council, etc.

    The AI Act introduces dedicated rules for general-purpose AI models that will ensure transparency along the value chain. For very powerful models that could pose systemic risks, there will be additional binding obligations related to managing risks and monitoring serious incidents, performing model evaluation and adversarial testing. These new obligations will be operationalised through codes of practice developed by industry, the scientific community, civil society and other stakeholders together with the European Commission.

Coordinated plan
In April 2021, the EU revealed a coordinated plan on artificial intelligence, after intensive cooperation between the Commission and the member states. The plan set out an EU-wide strategy aimed to: a) accelerate investment in AI technologies to drive resilient economic and social recovery, aided by the uptake of new digital solutions; b) act on AI strategies and programmes by implementing them fully and in a timely manner, so that member states fully benefit from first-mover advantages; and c) align AI policy to remove fragmentation and address global challenges.
The strategy’s goals are to be reached by the following means: a) setting enabling conditions for AI development and uptake in the EU member states; b) making the European Union the place in the world where excellence thrives from lab to market; c) ensuring that AI works for people and is a driving force for society’s wellbeing; and d) building strategic leadership in high-impact sectors.
The coordinated plan goes hand in hand with the proposal for a regulation on AI (rather than a directive), which dates to April 2021 and is aimed at addressing the risks of specific AI uses by categorizing them into four levels: unacceptable risk, high risk, limited risk and minimal risk.
In this way, the AI regulation will ensure that Europeans can trust the AI they use; the regulation is also key to building an EU-wide ecosystem of excellence in AI and strengthening the Union’s ability to compete globally.
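Purely for illustration, the four-tier risk scheme can be sketched as a simple classification in Python. The example use cases and one-line obligation summaries below are assumptions drawn from public summaries of the draft Act, not from the legal text itself, which defines the categories in its annexes:

```python
from enum import Enum

class RiskTier(Enum):
    """The four risk levels proposed in the draft AI Act."""
    UNACCEPTABLE = 4   # banned practices (e.g. social scoring by public authorities)
    HIGH = 3           # strict obligations before market placement
    LIMITED = 2        # transparency duties (e.g. chatbots must disclose they are AI)
    MINIMAL = 1        # no additional obligations (e.g. spam filters)

# Hypothetical mapping of example use cases to tiers, for illustration only;
# the actual classification is determined by the Act, not by this table.
EXAMPLE_USE_CASES = {
    "social scoring": RiskTier.UNACCEPTABLE,
    "cv screening for recruitment": RiskTier.HIGH,
    "customer service chatbot": RiskTier.LIMITED,
    "spam filter": RiskTier.MINIMAL,
}

def obligations(tier: RiskTier) -> str:
    """Return a one-line summary of the regulatory consequence per tier."""
    return {
        RiskTier.UNACCEPTABLE: "prohibited",
        RiskTier.HIGH: "conformity assessment and registration required",
        RiskTier.LIMITED: "transparency duties apply",
        RiskTier.MINIMAL: "no extra obligations",
    }[tier]
```

The ordering of the tiers reflects the regulation’s core design choice: obligations scale with the level of risk a use case poses, rather than applying uniformly to all AI systems.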

    In mid-November 2023, the European AI Alliance Assembly gathered for the fourth time, in Madrid, within the context of the Spanish Presidency of the Council of the EU. The European AI Alliance is the EU’s flagship initiative that, since 2018, has brought together policymakers and stakeholders to help shape Europe’s artificial intelligence policy. The 2023 Assembly marks another important milestone in the European AI strategy, with the AI Act heading towards adoption and the updated Coordinated Plan on AI in its second year of implementation.
Under the theme of “Leading trustworthy AI globally”, the event was an opportunity to exchange views about significant legislative developments and inform stakeholders about the next steps regarding the AI Pact.
The Assembly’s work included panels dedicated to generative AI and to cybersecurity and AI, as well as presentations of existing means to help AI innovators reach the digital market. The discussions featured the EU’s upcoming push to boost AI uptake by giving startups access to the EU’s high-performance supercomputers.

   More on European AI in the following press releases:
- New rules for Artificial Intelligence – Facts page;
- Hiroshima AI Process: Guiding Principles and a Code of Conduct on Artificial Intelligence;
- Coordinated Plan on Artificial Intelligence;
- Regulation on Machinery Products;
- Liability Rules for Artificial Intelligence; and
- European Centre for Algorithmic Transparency, Joint Research Centre.

Global approach
The global community is unanimous on the importance of adopting digital technology while taking account of the digital divide, especially in the regions most vulnerable to climate change. Integrating already existing AI devices and chatbots into these emerging technology tools is one solution that would help bridge the digital divide.
US officials acknowledge the need to manage risks while seizing the opportunities that AI promises. The US is committed to meeting this challenge, as President Biden’s recent Executive Order on AI demonstrates.
By working together, the global community can responsibly harness the power of this emerging technology to develop AI tools that would help mitigate climate change risks, make communities more sustainable and resilient, and build an equitable clean energy future for all.


   The UN Climate Change Technology Executive Committee (TEC), together with Enterprise Neurosystem, a non-profit open-source artificial intelligence community, launched the AI Innovation Grand Challenge in December 2023 to identify and support the development of AI-powered solutions for climate action in developing countries.
The launch was part of a COP28 high-level event organized by the UN Climate Change Technology Mechanism in collaboration with the COP28 Presidency.
“We are seeing increasing evidence that artificial intelligence can prove an invaluable instrument in tackling climate change. While we remain mindful of the associated challenges and risks of AI, the Innovation Grand Challenge is a promising step forward in harnessing the power of artificial intelligence and empowering innovators in developing countries,” said UN Climate Change Executive Secretary Simon Stiell.

