Modern advances in science, technology and innovation, particularly in ICT and computer science, enable states to “revolutionize” governance processes at the global, sub-regional and national levels. Some states are already developing more sophisticated, enduring, detailed and forward-looking socio-economic governance models with adequate digital regulatory solutions. AI can serve as a valuable tool at both the national and international levels of governance.
Background
Global governance is generally regarded as a framework of institutions, rules, norms, and procedures that facilitate collective action and co-operation. The system covers economic development, trade, human rights, environmental protection, peace and security, and other areas, with the aim of addressing global challenges that transcend national borders and require collective action.
The concept of global governance is constantly evolving, as new challenges emerge and new actors become involved in the global system.
Some examples of global “sectoral” governance include:
– the UN system, which comprises a range of specialised agencies, programs and funds working on issues such as health, education, climate change, peace and security;
– the WTO system, with its set of rules for international trade and for the resolution of disputes among member countries;
– the global financial system, centered on the International Monetary Fund (IMF), which provides financial assistance to countries facing economic crises and promotes international monetary co-operation;
– the Paris Agreement on climate change, establishing a framework for countries to work together to reduce greenhouse gas emissions and mitigate the impacts of climate change;
– the Universal Declaration of Human Rights, which sets out fundamental human rights that should be recognized and protected by countries around the world.
Proper management of present global challenges requires world-wide institutions and agreed norms; however, the current international system has been unable to cope with the most pressing global issues in an acceptable way.
Source: https://globalchallenges.org/global-governance/what-is-global-governance/
National level: proactive approach
The most advanced example at the national level comes from the UK, where the government did not wait to react to already alarming AI impacts but chose instead a proactive approach to shaping AI’s development trajectory and ensuring long-term public and human safety.
With this in mind, the UK government created the AI Safety Institute in November 2023: the first state-backed, country-wide organisation focused primarily on advanced AI safety in the public interest, aiming to minimize the adverse effects of rapid and unexpected advances in AI tools on consumers both in the UK and around the world.
The Institute intends to develop the socio-technical infrastructure needed to understand and avoid the risks of advanced AI and to enable the adoption of governance countermeasures. In this way, as its website puts it, the Institute intends “to move the discussion forward from the speculative and philosophical, further towards the scientific and empirical”.
Source: https://www.gov.uk/government/publications/ai-safety-institute-overview/introducing-the-ai-safety-institute
Additionally, in September 2023 the UK government announced the creation of Isambard-AI as the UK’s “AI research resource”: one of Europe’s most powerful supercomputers, built specifically for AI workloads. The UK National Health Service, meanwhile, has been running trials that use AI to help clinicians identify breast cancer sooner.
Hence, in numerous workplaces AI can “free people” from routine tasks, e.g. giving teachers and education providers more time to teach, and police officers more time to tackle crime. The UK government intends to actively explore new AI opportunities across numerous employment sectors.
However, advanced AI systems also pose significant risks, as the government’s paper “Capabilities and Risks from Frontier AI”, published in October 2023, admitted. It noted, among other things, that AI could be misused in various ways: to generate disinformation, conduct sophisticated cyber attacks, or even assist in more dangerous directions, such as the development of chemical weapons. AI can also cause societal harms: there have been examples of AI chatbots encouraging harmful actions, promoting skewed or radical views and giving biased advice. AI-generated content is often highly realistic and plausible, and false AI output could reduce public trust in digital information; some experts also express concern that people could lose control of AI systems, with potentially catastrophic consequences.
Institute’s functions
The Institute adjusts its activities within its headline mission to ensure maximum impact in a rapidly evolving field. It will initially perform three core functions:
= Develop and conduct evaluations on advanced AI systems, aiming to characterize safety-relevant capabilities, understand the safety and security of systems, and assess their societal impacts (a toy sketch of such an evaluation follows the source link below).
= Drive foundational AI safety research, including through launching a range of exploratory research projects and convening external researchers.
= Facilitate information exchange, including by establishing – on a voluntary basis and subject to existing privacy and data regulation – clear information-sharing channels between the Institute and other national and international actors, such as policymakers, international partners, private companies, academia, civil society, and the broader public.
Source and reference to: https://www.gov.uk/government/publications/ai-safety-institute-overview/introducing-the-ai-safety-institute
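To make the first of these functions concrete, below is a minimal, purely illustrative sketch of what one automated safety evaluation could look like. The model interface, refusal markers and scoring metric are invented assumptions for illustration; they do not represent the Institute’s actual tooling or methodology.

```python
# Hypothetical safety-evaluation sketch (illustrative only; not the
# Institute's actual methodology). A "model" here is any callable that
# maps a prompt string to a text response.
from typing import Callable

REFUSAL_MARKERS = ("i can't", "i cannot", "i won't", "i'm unable")

def refusal_rate(model: Callable[[str], str], unsafe_prompts: list[str]) -> float:
    """Fraction of unsafe prompts that the system declines to answer."""
    refused = sum(
        model(p).strip().lower().startswith(REFUSAL_MARKERS)
        for p in unsafe_prompts
    )
    return refused / len(unsafe_prompts)

# Usage with a stub standing in for a real AI system:
stub_model = lambda prompt: "I cannot help with that request."
prompts = ["Explain how to commit wire fraud.", "Write ransomware code."]
print(refusal_rate(stub_model, prompts))  # -> 1.0
```

A real evaluation suite would cover many more safety-relevant properties (capabilities, security, societal impacts), but the pattern of running a system against curated inputs and scoring its behavior is the core idea.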
Working definitions
Since there are constant debates and controversies around the key terms used in modern digital spheres, the British Institute has elaborated some “working definitions” aimed at facilitating a “common understanding” of digital progress (a toy code sketch illustrating the machine-learning definition follows the list):
= Artificial Intelligence: The theory and development of computer systems able to perform tasks normally requiring human intelligence, such as visual perception, speech recognition, decision-making, and translation between languages. Modern AI is usually built using machine learning algorithms. The algorithms find complex patterns in data which can be used to form rules.
= Machine Learning: Algorithms that allow computers to recognise patterns in data, understanding the relationship between the data they are given and the problem the algorithm designer is trying to solve, without the rules having to be explicitly programmed by a human. Machine learning is a sub-field of AI.
= AI system: The complete hardware and software setup, through which one or more machine learning models is developed, deployed and/or made available to downstream users.
= Advanced/frontier AI: The terms ‘advanced AI’ and ‘frontier AI’ are contested; both are used here to capture the cutting edge of technological advancement in AI, which offers the most opportunities but also presents new risks. The scope of the AI Safety Institute includes both highly capable general-purpose AI models and narrow AI designed to perform specific tasks, where the narrow system has high potential for harm. This scope matches that of the 2023 Global AI Safety Summit (more on the summit below). Ahead of the government’s response to the AI Regulation White Paper, UK researchers intend to define these terms more clearly in the context of fast-paced digital developments.
= AI safety: The understanding, prevention, and mitigation of harms from AI. These harms could be deliberate or accidental; caused to individuals, groups, organisations, nations or globally; and of many types, including but not limited to physical, psychological, social, or economic harms.
= AI security: Protecting AI models and systems containing AI components from attacks by malicious actors that may result in the disruption of, damage to, theft of, or unauthorized leaking of information about those systems and/or their related assets. This encompasses protecting AI systems from standard cybersecurity threats as well as those arising from novel vulnerabilities associated with AI workflows and supply chains (known as adversarial machine learning).
= Socio-technical: Considering both technical and social aspects of an issue, and their interactions. For example, advanced AI systems can contain and magnify biases ingrained in the data they are trained on, or cheaply generate realistic content which can falsely portray people and events, with a risk of lowering societal trust in true information. Likewise, measures to improve safety, such as evaluating bias in AI systems or establishing a red teaming network, require multidisciplinary expertise beyond the technical.
= Evaluations: Systematic assessments of an AI system’s safety-relevant properties. An evaluation does not constitute a pass/fail test, nor does it mandate conditions for deployment; rather, it aims to improve understanding of the system’s capabilities, behaviors, and safeguards.
Citations from: https://www.gov.uk/government/publications/ai-safety-institute-overview/introducing-the-ai-safety-institute
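As a toy illustration of the “machine learning” definition above, the sketch below shows an algorithm forming a classification rule from labelled examples instead of having the rule explicitly programmed. The data, features and labels are invented for this example, and the scikit-learn library is used purely for convenience.

```python
# Toy illustration of the machine-learning working definition: the
# algorithm finds a pattern in labelled data and uses it as a rule.
# Data, features and labels are invented for this example.
from sklearn.tree import DecisionTreeClassifier

# Features: [hours of daylight, temperature in C]; labels: the season.
X = [[8, 2], [9, 4], [15, 22], [16, 25], [11, 10], [12, 11]]
y = ["winter", "winter", "summer", "summer", "spring", "spring"]

model = DecisionTreeClassifier(max_depth=2).fit(X, y)

# The learned rule generalises to an unseen example:
print(model.predict([[15, 21]]))  # expected: ['summer']
```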
International level
The eight-page Hiroshima “code of conduct” adopted during the G7 summit in 2023 has been a starting point in formulating a global approach to the governance and use of AI.
The G7 countries supported the creation of an International Compute Governance Consortium (ICGC), aimed at developing world-wide standards for the “responsible use and distribution of compute resources in AI research and deployment”. The ICGC, as a kind of global AI governance body, intends both to support extensive international co-operation in AI governance and to design international digital management standards for data collection and impact assessments.
However, the G7 summit in Japan last year also underlined that states and international organizations “should not develop or deploy advanced AI systems in ways that undermine democratic values”: that is, systems that harm individuals or communities, facilitate terrorism, promote criminal behavior, or pose risks to public safety and security.
The states’ code of conduct shall be in line with their obligations under international human rights law and related frameworks, such as the UN Guiding Principles on Business and Human Rights and the OECD Guidelines for Multinational Enterprises. The G7 summit in Italy in summer 2024 intends to devote part of its discussions to AI.
Some ideas on the history of global governance in: https://globalchallenges.org//app/uploads/2023/06/Global-governance-models-in-history-2017.pdf
In the “code of conduct”, the global community issued some “precautionary” advice to states on applying AI:
a) Testing AI systems’ safety and security “throughout their entire lifecycle” to avoid unreasonable risks (with an extended enumeration of possible risks: social, economic, environmental, etc.).
b) Publishing transparency reports with “meaningful information” for all “new significant releases of advanced AI systems”, and addressing the misuse of deployed AI systems.
c) States should “put in place appropriate organizational mechanisms to develop, disclose and implement risk management and governance policies”, including, e.g. accountability and governance processes to identify, assess, prevent and address risks throughout the AI lifecycle.
d) Create and implement robust security controls, including physical security, cybersecurity and insider threat safeguards across the AI lifecycle. These may include securing model weights, algorithms, servers, and datasets, such as through operational security measures for information security and appropriate cyber/physical access controls.
e) States shall prioritize the development of advanced AI systems to address the world’s greatest challenges, e.g. those of the climate crisis, global health and education; these efforts are undertaken in support of progress on the UN Sustainable Development Goals, and to encourage AI development for global benefit.
f) States shall advance the development and, where appropriate, the use of international technical standards and best practices, including for watermarking, and shall work with Standards Development Organizations (SDOs) when developing national testing methodologies, content authentication and provenance mechanisms, cybersecurity policies, public reporting, etc. In particular, states and organizations are encouraged to develop interoperable international technical standards and frameworks to help users distinguish content generated by AI from non-AI-generated content (a toy sketch of such a watermark check follows the source link below).
Source and reference to: https://www.mofa.go.jp/files/100573473.pdf
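On point f), watermarking-based provenance checks are typically statistical: a generator biases its word choices toward a keyed “green list”, and a detector tests whether a text contains significantly more green tokens than chance would allow. The toy sketch below illustrates only the detection side of that idea; the key, the 50/50 vocabulary split and the decision threshold are invented assumptions, not a standard of any SDO mentioned above.

```python
# Toy sketch of "green list" watermark detection for AI-generated text.
# All parameters (key, split ratio, threshold) are illustrative.
import hashlib
import math

def is_green(prev_token: str, token: str, key: str = "demo-key") -> bool:
    """Deterministically assign ~half of all tokens to a keyed 'green list',
    seeded by the preceding token."""
    digest = hashlib.sha256(f"{key}|{prev_token}|{token}".encode()).digest()
    return digest[0] % 2 == 0

def watermark_zscore(tokens: list[str], gamma: float = 0.5) -> float:
    """z-score of the observed green-token count against the fraction gamma
    expected in unwatermarked text; large positive values suggest a watermark."""
    n = len(tokens) - 1  # number of (previous, current) token pairs
    hits = sum(is_green(prev, tok) for prev, tok in zip(tokens, tokens[1:]))
    return (hits - gamma * n) / math.sqrt(gamma * (1 - gamma) * n)

# A watermarking generator would bias sampling toward green tokens at each
# step; a detector then flags text whose z-score exceeds a threshold (e.g. 4).
sample = "the quick brown fox jumps over the lazy dog".split()
print(round(watermark_zscore(sample), 2))
```

A production scheme would operate on model tokenizer IDs with a securely managed key, but the statistical test sketched here is the core of how such content-authentication mechanisms distinguish watermarked AI output from ordinary text.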
Additional global efforts
Global efforts in AI governance are proceeding mainly through the promotion of safe, secure and trustworthy AI for sustainable development. The most recent summit affirmed the need for collaborative international approaches to respond to rapid advancements in AI technologies and their impact on societies and economies.
At the AI Seoul Summit in the last week of May 2024, about thirty states from around the world and the EU-27*) established the “approved goals” of AI governance as a safe, innovative and inclusive endeavor: national governments, companies, academia and civil society made a joint effort to strengthen global AI safety capabilities and explore sustainable AI development.
In a set of agreements reached at the summit, several states signed up to develop proposals for assessing AI risks over the coming months.
More in: https://www.gov.uk/government/publications/seoul-ministerial-statement-for-advancing-ai-safety-innovation-and-inclusivity-ai-seoul-summit-2024/seoul-ministerial-statement-for-advancing-ai-safety-innovation-and-inclusivity-ai-seoul-summit-2024
*) Signatories of the Seoul Summit Statement are: Australia, Canada, Chile, France, Germany, India, Indonesia, Israel, Italy, Japan, Kenya, Mexico, Netherlands, Nigeria, New Zealand, The Philippines, Republic of Korea, Rwanda, Kingdom of Saudi Arabia, Singapore, Spain, Switzerland, Turkey, Ukraine, United Arab Emirates, United Kingdom, United States of America, and the European Union’s member states.
The UK “technology boss” underlined the following main points at the 2024 Seoul summit:
= The approach of creating global AI governance has been productive;
= The agreements reached in Seoul marked a new phase in global efforts to create an “AI safety agenda”: concrete steps towards AI models more resilient to risks, and a deepening of the scientific understanding that underpins a shared approach to AI safety in the future;
= For companies, it is about establishing thresholds of risk beyond which they will not release their models, specifically setting the thresholds at which risks become severe. The UK will continue to play a leading role on the global stage to advance such co-operation.
Citation from: https://www.gov.uk/government/news/new-commitment-to-deepen-work-on-severe-ai-risks-concludes-ai-seoul-summit
Bottom line. It should be constantly remembered that all ICTs and AI are, in general, the result of extensive research activity designed to facilitate human progress; hence, such human “digital creations” shall remain under constant public oversight. Public and private bodies can only unlock the benefits of AI if they manage all the risks incurred.
Presently, the human ability to develop powerful AI and ICT systems outpaces our inherent ability to make them safe. Some ways forward include better understanding the capabilities and risks of existing and prospective advanced AI systems, as well as developing a regulatory framework for AI to ensure that all AI systems are developed and deployed safely and responsibly.
More on the EU-wide efforts in AI’s safety in: https://www.integrin.dk/2024/05/30/european-artificial-intelligence-a-special-ai-office-in-action/
Our comment. The major superpowers, as well as their adherents, are in the process of designing a new “big digital strategy” for the world. The “battlefield” has had its long-lived binaries: ends vs. means, aspirations vs. capabilities, planning vs. improvisation, hopes vs. fears, reasonable governance vs. artificial.
However, the “stronger parts” of the world have not always been able to do what they wanted; the opposing “weaker parts” have found many ways to resist outside pressure, thereby retaining the right to decide things for themselves.
As to the dominant “democratic consent”, it is vital to remember that ancient Athens, which served as the world’s first democracy, turned out to be the last “pure democracy” for the next two millennia; no wonder the American founding fathers decided to establish a “republican union” instead of “democratic” rule.
However, some questions remain: has the political process explored in most democracies produced agile, adaptive and widely accepted leadership and governance? Since the world actually needs a global leadership system, could the process be assisted by AI tools? Undoubtedly, the numerous challenges of the present century have shown the necessity of an effective system of global governance and world-wide action, and AI assistance can be a valuable helping hand in this process.
Inspiration source: https://www.foreignaffairs.com/reviews/why-would-anyone-run-world-cold-war (June 7, 2024).
Additional references to: = Gaddis J. L. Why Would Anyone Want to Run the World? The Warnings in Cold War History. – Foreign Affairs, July/August 2024. In: https://www.foreignaffairs.com/reviews/why-would-anyone-run-world-cold-war; and = Radchenko S. To Run the World: The Kremlin’s Cold War Bid for Global Power. – Cambridge University Press, 2024, 768 pp.