The general opinion is that, alongside its positive aspects, the new EU-wide AI legislation carries certain risks for consumers: some AI practices have already been banned because they were deemed to pose “unacceptable risks”. Other risk-based systems could be classified as high-risk in the near future if they prove to have a negative effect on people’s health, safety and/or fundamental rights. But the so-called “AI companions” are not forbidden, and providers can still exert “subliminal, manipulative or deceptive” influence over, or exploit “specific vulnerabilities” of, customers and decision-makers.
Background
From the perspective of complex economics, AI models represent a strongly “transformative” tool, both in science and in practice; trained on extensive datasets and numerous examples of socio-economic patterns, these models provide great analytical capacity. Advanced digital technologies have already “revolutionized” and facilitated politico-economic analysis by delivering deeper insights and more accurate predictions in national economics.
Using large volumes of data, AI models are “trained” to identify optimal and subtle correlations that traditional methods miss and can consequently support better decision-making by national economists dealing with dynamic market forces. Thus, AI models and tools have become indispensable for understanding deep-rooted and complex market dynamics and for predicting future development trends.
AI in political economy
AI models and machine learning have become integral parts of key economic sectors, including manufacturing, agriculture, the automotive and aerospace industries, and the life sciences.
Integration of AI models into economics is not only “boosting” the efficiency of economic fundamentals; it is also optimizing resource allocation and, ultimately, enhancing politico-economic decisions. AI’s “transformative” capabilities are still at an initial stage in economics, but these models have already shown a profound impact on both the theoretical and practical aspects of the field, making AI both an analytical tool and an integral part of national growth patterns. As we delve into these applications, we uncover how AI is not just a tool but an intricate player in driving economic innovation and insight.
More in “10 ways AI is being used in economics” (2025) in: https://digitaldefynd.com/IQ/ai-use-in-economics/
Here are some figures: ChatGPT recorded 1 billion web visits within the first two months of its launch at the end of November 2022; global revenue from AI services and sales is expected to reach $900 billion in 2026, compared with $318 billion in 2020; and AI could contribute up to $15.7 trillion to the global economy by 2030.
Source: BofA Global Research, “Me, Myself and AI—Artificial Intelligence Primer,” February 28, 2023; BofA Global Research, IDC; and PwC. More in: https://camoinassociates.com/resources/ai-in-action-part-1/
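The growth implied by these projections can be sanity-checked with a few lines of arithmetic. A minimal sketch (the inputs are the BofA/IDC/PwC estimates cited above, not new data; the function name is illustrative):

```python
def cagr(start: float, end: float, years: int) -> float:
    """Compound annual growth rate implied by a start value, an end value,
    and the number of years between them."""
    return (end / start) ** (1 / years) - 1

# AI services-and-sales revenue: $318bn (2020) -> $900bn (2026)
growth = cagr(318, 900, 2026 - 2020)
print(f"Implied CAGR 2020-2026: {growth:.1%}")  # roughly 19% per year
```

In other words, the cited forecast assumes the AI market nearly triples over six years, compounding at close to one fifth per year.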
There are four main types of AI: a) reactive machines, b) limited memory machines, c) theory-of-mind systems, and d) self-awareness models. In sectoral terms, for example, “retail AI” is being used to personalize the shopping experience, recommend products and manage inventory; “transportation AI” is being used to develop self-driving cars and improve traffic management; while “energy AI” is being used to improve energy efficiency and predict energy demand.
Among the five AI “big ideas” behind the “algorithmic transition” are: perception (computers perceive the world through sensors); representation and reasoning (intelligent systems construct representations of the world and use them to reason); learning (computers learn from data); natural interaction; and societal impact.
Looking ahead, multi-modal AI will enable agencies to analyze local and state-level data and combine it with data from other sources, such as Google Earth Engine, Google Maps, Waze and public datasets, to improve decision-making, pre-empt climate-related risks and improve public infrastructure, to name a few applications.
Consumers’ effect; digital avatar
Decision-makers in national governance systems and in the EU legislative institutions are now aware that “AI companions” could be classified as high-risk AI systems. This would impose a series of obligations on the digital providers and AI companies developing the bots, including an obligation to assess how these AI models affect people’s fundamental rights.
Thus, the Dutch Greens’ member of the European Parliament, Kim van Sparrentak (who, among others, co-negotiated the EU AI Act), mentioned that when these issues were discussed with the Commission’s AI Office during the drafting of the GenAI guidelines, the negotiators were fairly sure they were dealing with “high-risk AI systems”.
Source: https://www.politico.eu/article/ai-friends-experts-worried-artificial-intelligence-chatbot-digital-technology/
The Italian data protection authority ordered the developer of the AI chatbot Replika, Luka Inc., to suspend data processing in the country in 2023, on the grounds of “too many risks for minors and emotionally vulnerable individuals.” The company had unlawfully processed personal data, and Replika lacked a tool to block access by users who declared they were underage.
Italy’s data protection agency fined the developer of the AI chatbot Replika €5 million in May 2025 for breaching rules designed to protect users’ personal data. In addition, it opened a new investigation into the training of the AI model that underpins Replika.
Launched in 2017, San Francisco-based startup Replika offers users customised avatars that can hold conversations with them: the “virtual friend” was marketed as being able to improve users’ emotional wellbeing.
The Garante, Italy’s data protection authority, is one of the European Union’s most proactive regulators in assessing AI platforms’ compliance with the bloc’s data privacy rules. In 2024, it fined ChatGPT maker OpenAI €15 million, after briefly banning the use of the popular chatbot in Italy in 2023 over alleged breaches of EU privacy rules.
Citation from: https://www.reuters.com/sustainability/boards-policy-regulation/italys-data-watchdog-fines-ai-company-replikas-developer-56-million-2025-05-19/
But experts fear that even the EU’s extended regulatory framework could fall short in dealing with AI companion chatbots: some argue that “artificial intimacy” slips through the EU’s digital framework because it is treated not “as a functional risk, but an emotional one”. As the argument goes, “the law regulates what systems do, not how they make people feel and the meaning they ascribe to AI companions.”
Other experts also note that this is what makes regulation challenging: anyone who seeks to regulate such AI models is inevitably “touching people’s feelings, relationships and daily lives”.
The above-mentioned Replika has already made its services off-limits to under-18s; according to its official statement, the company “enforces strict protocols to prevent underage access.”
The company is, however, still in dialogue with data protection authorities to ensure it “meets the highest standards of safety and privacy,” the firm’s spokesperson concluded.