Human rights in the face of emerging AI technologies
The Council of Europe Commissioner for Human Rights brought together a group of experts working on or engaging with artificial intelligence (AI) to discuss the risks and opportunities of emerging AI technologies, and how to ensure a human-centred approach to AI governance. The interdisciplinary composition of the group brought a range of perspectives to the discussion. The Commissioner chaired the meeting.
The consultation aimed to understand current trends in AI technological development and the associated risks and opportunities for human rights. It also aimed to understand the role of regulation in ensuring safeguards for human rights in the design, development and deployment of AI systems. The consultation centred on two main themes:
- Theme 1: Emerging AI technologies: risks and opportunities for human rights
- Theme 2: Embedding human rights in AI governance
This report provides a non-exhaustive overview of the consultation’s main points and conclusions in the form of a chairperson’s summary.
- The rapid development of AI requires continued engagement from policymakers. 2025 marked an inflection point, with widespread discussions about AI-powered technology. In such a fast-paced context, it is imperative to ensure that the design, development and deployment of AI follow a human-centred approach, fostering opportunities and preventing the risks AI poses to human rights, democracy and the rule of law.
- The opaque use of AI technology poses a fundamental challenge for two key reasons. Firstly, individuals are often unaware of its deployment. For example, surveillance cameras with face recognition capabilities used in public assemblies interfere with individuals’ right to privacy. Secondly, individuals may be aware of the use of AI technology but be unaware of its discriminatory design. For example, biased data fed into AI systems can result in a violation of the right to non-discrimination. The absence of explicability can lead to a lack of contestability, which in turn can hinder access to justice and the right to an effective remedy. In any case, greater transparency in AI systems would contribute to better protection of human rights.
- Generative AI has shifted the focus from the automation of tasks to the autonomy of machines, raising concerns about the potential absence of human control. Fully autonomous or agentic AI, which is subject to little or no oversight, could have profound adverse social consequences. A concerning trend is the potential use of agentic AI in military and security contexts. In this respect, human oversight is paramount to avoid gross human rights violations, including loss of life. Furthermore, individuals should always have the right to challenge decisions made by machines. Similarly, in content moderation, the discretionary judgment involved in identifying harmful content may require human intervention to prevent AI technology from amplifying harm.
- There is a significant risk associated with operating large language model (LLM)-powered technology, especially when it acts as a substitute for humans in discussions or educational settings. Such technology increasingly produces so-called “careless speech”, a type of hallucination whereby the information a user receives is subtly incorrect, incomplete or biased towards a particular viewpoint. In other words, inaccurate information is repackaged and presented to individuals in a clearer, more confident form, which can then be consumed uncritically. Detecting this phenomenon requires specific domain expertise. The AI-powered amplification of this intangible degradation of information quality could contribute to a reduction in the plurality of ideas and opinions.
- Similarly, the use of generative AI in education can have a long-term impact on society. The way history and truth are presented can be a powerful tool for shaping a collective identity, or for fostering distinct clusters of opinions that may become disengaged from one another. If misused, such technology can rapidly amplify problems that, in turn, affect the human rights of targeted groups, such as migrants or minorities. Likewise, the use of LLM-powered technology, particularly in an educational context, can lead to concerning levels of skill degradation, whereby individuals become reliant on such technology to make everyday decisions and lose the ability to think, judge, discern and reason independently. All of the above also has implications for how information is controlled centrally by those who design, develop and deploy such AI systems.
- Beyond education, personalised or targeted information delivered through algorithms can contribute to the creation of separate informational spaces. This creates two main problems. Firstly, it isolates individuals from one another. Secondly, it potentially makes individuals more susceptible to manipulation and diminishes their critical thinking skills. Recent research attests to the impact of increased isolation and decreased social interaction on individuals’ cognitive systems and resilience. In this context, using chatbots and algorithms to amplify disinformation could push individuals towards extremism. These elements affect not only individuals, but also the way our societies are organised to safeguard human rights, including freedom of expression and access to information, which are vital pillars of any democracy.
- According to the Harvard Business Review, the number one use of generative AI in 2025 is for companionship and therapy purposes. Companion AI chatbots are said to be having a positive effect on the so-called “crisis of loneliness and isolation”, and companies are designing and marketing them in highly anthropomorphic and personified ways. However, the long-term effects of using such technology may further exacerbate human isolation and contribute to the breakdown of the social fabric of our societies. In other words, such technology introduces structural social distancing. While it is common for humans to ascribe human features to new things, anthropomorphising AI technology carries serious risks and implications regarding what humans expect from it. This technology could exploit people in vulnerable situations, such as those coping with death and loss, who may develop a strong emotional attachment to it. The elderly and children are particularly at risk in this regard, as research shows that these groups are highly susceptible to algorithmic influence and addiction. Children are adversely affected at an early stage of development, most notably when they form expectations around human relationships and social skills.
- AI technologies in general, and LLMs in particular, use vast amounts of data, including personal data. This data originates from what individuals voluntarily and involuntarily post on the internet. LLM-powered technology, increasingly used by individuals and the public sector alike, deploys LLM-based multimodal data aggregation and prediction with the help of advanced simulation techniques. Recent research demonstrates that these systems could enable the 360-degree profiling of individuals, resulting in so-called “data cages” in which individuals’ personal data is aggregated and categorised to provide detailed insights into human behaviour at scale. These “data cages”, which also facilitate collective profiling, enable LLM-powered technology to predict what individuals might wish to purchase, view or react to, thereby assisting in the targeted delivery of content with purportedly outstanding accuracy. This technology thus poses risks to human dignity and autonomy (e.g. disempowering individuals from making choices), as well as to human rights, particularly the right to private life (e.g. through the tracking of individuals’ personal and sensitive data). Understanding how data is compressed within an LLM can provide a pathway for governing its outputs.
