Artificial Intelligence’s safety summit: formulating prospective priorities

The global community is entering a new digital era in which computers begin to act in human-like and, hopefully, “intelligent” ways. As AI capabilities grow more sophisticated, so do the safety issues they raise. At the recent AI Safety Summit in the UK, the EU proposed four prospective priorities aimed at constituting an effective system of AI governance.

At the global AI Safety Summit in early November 2023, the European Commission President underlined the following important lessons concerning AI’s present evolution and priorities for 2024 and beyond:
First, a vital factor is the scientific community’s independence, forming a system of objective scientific checks and balances, i.e. through a “community of outstanding, independent scientists with access to resources to evaluate the risks of AI and free to call out those risks”.
Secondly, there is a need to establish AI safety standards that are accepted worldwide: the scientific community, together with experts in political economy, should create world-wide standard practice making it possible to identify publicly any AI incident or error, with public follow-up procedures. Such practice should not be seen as a system failure but as a responsible and appropriate means of exposing errors to scrutiny and investigation. Such an approach would show the value of globally accepted and shared standards and procedures.
Thirdly, states’ governance and scientists are better prepared against dangers when effective information sharing exists around the world: shared security alerts prevent a viral spread of dangerous solutions, since AI systems also evolve and learn. However, complex algorithms can never be exhaustively tested; hence, a “global AI governance system” must make sure that AI developers act swiftly when problems occur, both before and after their models are put on the market.
The Commission President expressed a wish that in five years’ time the world will have systems in place that implement those lessons and in this way “provide a key to unlock the huge benefits of AI”.

    Note: Since the last G7 meeting in Japan this spring, complex AI issues have attracted additional global attention: G7 leaders agreed on international “guiding principles on artificial intelligence (AI)” and a voluntary “code of conduct for AI developers”. Now, the so-called AI security summit in the UK takes a further step towards shared AI security.
The EU’s prospective AI plan
The European framework to “understand and mitigate” AI’s complex risks rests on four pillars, constituting an effective system of AI governance.
= First, the need for public funding to provide access to the best supercomputers: over the last five years, the EU has built the largest public network of supercomputers in the world. Besides, the EU is already giving start-ups and testers access to powerful computers in Finland and Italy.
= Second, the need to develop internationally accepted procedures and standards for testing AI safety.
= Third, establishing “standard procedures” for identifying every significant incident caused by errors or misuse of AI, with a reporting and follow-up system.
= Fourth, the need for an international system of alerts fed by trusted organisations and firms.

   Interestingly enough, the G7 “Hiroshima AI Process Comprehensive Policy Framework” also consists of four pillars: – analysis of priority risks, challenges and opportunities of generative AI; – international guiding principles for all actors in the AI system; – an international code of conduct for organisations developing advanced AI systems; and – cooperation in support of the development of responsible AI tools and best practices.
The culture of AI-responsibility
The Commission President also touched upon the urgently needed “culture of responsibility”, which for private actors includes one general principle: the greater the AI model’s capabilities and attendant risks, the greater the responsibility.
That also means solid and traceable corporate responsibility, embedded within the corporate and business model.
However, responsibility also extends to public authorities, which are ultimately responsible for the safety and security of citizens. Therefore, states must put in place binding “principled rules” for developing and controlling AI, and public authorities must have powers of intervention, as a complement and backstop to self-regulation.
These “responsibility guardrails” should not be strict barriers: like traffic rules, they would allow the traffic to keep to “the road and proceed safely”.

The EU-wide efforts to manage AI
The Commission President also described EU-wide initiatives concerning AI’s development, including the European AI Act, which formulates basic principles for the EU-27: supporting digital innovation, harnessing AI’s benefits, and focusing AI regulation on preventing high risks.
The Commission and the member states agreed to boost excellence in AI by joining forces on policy and investments. The 2021 review of the Coordinated Plan on AI outlines a vision to accelerate, act, and align priorities with the current European and global AI landscape and bring AI strategy into action. The plan sets four key policy objectives, supported by concrete actions and indicating possible funding mechanisms: – enabling conditions for AI development and uptake in the EU-27; – making Europe the place where excellence thrives from the lab to market; – ensuring that AI technologies work for people; and – building strategic leadership in high-impact sectors.
The EU regulatory proposal aims to provide AI developers and users with clear requirements and obligations regarding general and specific uses of AI. At the same time, the proposal seeks to reduce administrative and financial burdens for businesses, in particular SMEs. The proposal is part of a wider AI package, which also includes the updated Coordinated Plan on AI. Together, the regulatory framework and Coordinated Plan will guarantee the safety and fundamental rights of people and businesses when it comes to AI. Besides, they will strengthen uptake, investment and innovation in AI across all EU-27 member states.
    The prospective European AI Act is in the final stages of the legislative process, during which the EU institutions and the member states have discussed the foundation of a European AI Office. This Office could deal with the most advanced AI models, with responsibility for oversight, in the logic of the four-pillar framework outlined above. Besides, the European AI Office should work with the scientific community at large and contribute to fostering standards and testing practices for frontier AI systems, complementing the private sector in investigation and testing. The Office should be able to act on the basis of alerts and make sure that developers take responsibility. And finally, a European AI Office would enforce the EU-wide common rules for the most advanced AI models.
    Creation of the European AI Office would provide a safety net for wide AI use by governments and businesses in Europe. But the European AI Office should also have global importance: it should be open to cooperation with similar entities around the world, including the newly formed US AI Safety Institute. The EU and US are welcoming input on the 65 key AI terms essential to understanding risk-based approaches to AI, along with their interpretations and shared definitions, released as part of the Fourth TTC Ministerial in May 2023; comments on the completeness, relevance and correctness of the definitions are particularly encouraged.