Responsible AI in education: European approach


Last September the Council of Europe approved the Framework Convention on Artificial Intelligence, which is regarded as a valuable addition to EU-wide AI legislation that is already partially in force. Taken together with the World Economic Forum's recent "Future of Jobs Report 2025", the combination of structural digital transformations in national political economies and the digital transition in education appears both complicated and compelling.

Background
The Council of Europe (CoE) Framework Convention on Artificial Intelligence is the first international legally binding framework focused on regulating AI systems in a way that emphasizes human rights, democracy and the rule of law.
The CoE specifically noted “accelerating developments in science and technology and the profound changes brought about through activities within the lifecycle of artificial intelligence systems, which have the potential to promote human prosperity as well as individual and societal well-being, sustainable development, gender equality…as well as other important goals and interests, by enhancing progress and innovation”. However, it also raised concern “that certain activities within the lifecycle of artificial intelligence systems may undermine human dignity and individual autonomy, human rights, democracy and the rule of law”.
In addition, the Convention builds on several earlier international human rights instruments, such as the 1948 Universal Declaration of Human Rights, the 1950 Convention for the Protection of Human Rights and Fundamental Freedoms, the 1966 International Covenant on Civil and Political Rights, the 1966 International Covenant on Economic, Social and Cultural Rights, the 1961 European Social Charter and its protocols, and the 1996 European Social Charter (Revised).
It is interesting to see the Convention’s approach to defining AI: “for the purposes of this Convention, “artificial intelligence system” means a machine-based system that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations or decisions that may influence physical or virtual environments. Different AI systems vary in their levels of autonomy and adaptiveness after deployment”.
Accordingly, “each Party shall encourage and promote adequate digital literacy and digital skills for all segments of the population, including specific expert skills for those responsible for the identification, assessment, prevention and mitigation of risks posed by artificial intelligence systems” (Article 20).
Consequently, each Party to the Convention shall provide a report to the Conference of the Parties within the first two years after joining the Convention, and periodically thereafter, with details of its AI-related activities, as noted in Article 24.
Source and citation from: https://rm.coe.int/1680afae3c

Moreover, the Convention aims to address AI’s broader societal implications; in this respect it is distinct from European AI regulations that focus more narrowly on technical or market-driven aspects. Experts argue that the new Convention therefore provides an important roadmap, with a human-centric approach to AI systems, for public and private institutions alike.
Source: https://www.universityworldnews.com/post.php?story=20250626130409989

On the Council of Europe (CoE)
The Council of Europe (CoE; in French: Conseil de l’Europe, CdE) is an international organization with the goal of upholding human rights, democracy and the rule of law in Europe. Founded in 1949, it is Europe’s oldest intergovernmental organization, representing 46 European member states with a population of approximately 675 million as of 2023; it operates with an annual ordinary budget of approximately 500 million euros.
The CoE is distinct from the European Union’s structures, although the two organizations are sometimes confused, partly because the EU has adopted the original European flag, designed for the Council of Europe in 1955, as well as the European anthem. No country has ever joined the EU without first belonging to the Council of Europe. The Council of Europe is an official United Nations observer.
Unlike the EU, the Council of Europe cannot make binding laws; however, the CoE has produced a number of international treaties, including the Convention for the Protection of Human Rights and Fundamental Freedoms (European Convention on Human Rights, ECHR) of 1953. Provisions from the convention are incorporated in domestic law in many participating countries; the best-known body of the Council of Europe is the European Court of Human Rights, which rules on alleged violations of the ECHR.
More on the ECHR in: R. Spano (2018). “The Future of the European Court of Human Rights—Subsidiarity, Process-Based Review and the Rule of Law”. Human Rights Law Review. 18 (3). Oxford University Press: 473–494. doi:10.1093/hrlr/ngy015. Additionally, in: https://academic.oup.com/hrlr/article-abstract/18/3/473/4999870?redirectedFrom=fulltext&login=false.
The CoE headquarters, as well as its Court of Human Rights, are situated in Strasbourg, France, which is also the official seat of the European Parliament.
Source and citation from: https://en.wikipedia.org/wiki/Council_of_Europe

CoE on ethics, human rights and curricula
As Katerina Klimoska (an EU researcher and international higher education expert) notes, “the CoE AI Convention is a forward-thinking framework that places ethics, accountability and human rights at the center of AI development”; this means universities not only have legal obligations but also a “moral and academic imperative”. Hence universities “must embrace the ethical guidelines outlined by the Convention to ensure they continue to lead in the development of AI systems that are not only innovative but also responsible. By embracing the CoE AI Convention, universities not only ensure compliance with emerging legal AI frameworks but also strengthen their role as responsible leaders in AI development”.
Additionally, universities have to update their AI-related curricula to integrate the critical ethical, legal and societal aspects of AI, preparing students for the ethical challenges of working with AI.
Furthermore, universities involved in AI development, notes K. Klimoska, “must adopt risk management frameworks, ensuring that they anticipate and reduce potential harms, especially in AI applications that could affect human rights, democracy, the rule of law and security”. Therefore, she concludes, “universities will need to create policies that balance innovation with risk assessment”.
Citations from: https://www.universityworldnews.com/post.php?story=20250626130409989

It is worth mentioning the World Economic Forum’s recent report on the future of jobs (2025). The report carries two vital messages: a) the need to redirect national political-economic growth patterns in response to modern challenges, since “the landscape of work is undergoing a profound transformation”, with “emerging skills and rapidly evolving industries leading the change”, as the report indicates; and b) the need to “understand the critical skills that will define success in the workplace of the future”. The latter will substantially shape the most effective national education policies.
Source and citations from: https://www.weforum.org/videos/what-are-the-most-essential-skills-in-the-workplace-of-tomorrow/

More on education and skills perspectives in: https://www.integrin.dk/2025/07/01/skills-for-the-future-global-and-european-transformations/

 
