What remains human in the age of AI?

Insights | January 7, 2026

What happens to judgment, knowledge, and community when artificial intelligence becomes a constant companion in our lives?

This question framed a conversation between anthropologist Katarina Grafman, philosopher Pii Telakivi, and professor of theoretical philosophy Åsa Wikforss.

Åsa, professor at Stockholm University and a member of both the Royal Swedish Academy of Sciences and the Swedish Academy, began by grounding the discussion in democracy. Democracy, she said, is not only about expressing preferences, but about having the knowledge needed to form them. “What you want depends on what you know,” Åsa said. Without knowledge of history, science, and society, political power becomes shallow. For decades, democratic societies relied on shared institutions to distribute reliable knowledge. That infrastructure, she noted, has been radically transformed.

The explosion of digital information has shifted responsibility from institutions to individuals. “We now have the task of finding reliable sources ourselves,” Åsa said. That task is increasingly difficult in an environment where artificial intelligence produces convincing fake images, videos, news, and even scientific claims. The risk, she emphasized, is not that people are becoming less intelligent, but that misplaced confidence makes them vulnerable to downward spirals of disinformation.

“People say the chatbot has no mind, but they treat it like a social being.”

Pii Telakivi

Katarina, an anthropologist who works primarily with companies, approached the future through observed behavior rather than prediction. Her greatest concern is what she calls the outsourcing of judgment. “People are increasingly asking AI to mirror their decisions,” she said. Where advice once came from friends or colleagues, it now comes from algorithms. In her view, this threatens independent thinking, particularly among younger generations. “The biggest changes are usually soft and social, and therefore very hard to predict,” she said.

Pii, a philosopher at the University of Helsinki working on artificial intelligence in mental health care, focused on the psychological consequences of humanlike technologies. Although people insist they know chatbots have no emotions, research shows they behave as if they do. “People say the chatbot has no mind, but they treat it like a social being,” Pii said. This creates a powerful feedback loop. Unlike objects people once anthropomorphized, artificial intelligence responds in ways that feel emotionally appropriate.

This has profound implications for loneliness. Despite being more connected than ever, people report increasing isolation. AI companions and therapy chatbots are often marketed as solutions, but Pii warned they may deepen the problem. “What we want from a friend is not just belief, but genuine compassion,” Pii said. That kind of empathy cannot be replicated by a system that does not experience emotions.

“You cannot leave these decisions entirely to technology companies.”

Åsa Wikforss

Across the discussion, a shared concern emerged. Artificial intelligence promises personalization and freedom of choice yet often delivers standardization. Algorithms feed users more of what they already like, narrowing perspectives and reinforcing echo chambers. “It feels like more options, but it actually limits us,” Katarina said. Health apps, recommendation systems, and productivity tools quietly push people toward the same ideals of efficiency and optimization.

Responsibility, the panel agreed, cannot rest on individuals alone. While personal awareness matters, regulation and design choices are essential. “You cannot leave these decisions entirely to technology companies,” Åsa said. Features such as simulated empathy or humanlike interfaces are ethical choices, not neutral ones, and they shape how deeply people become emotionally entangled with machines.

Despite the risks, the conversation was not pessimistic. All three expressed hope that physical presence, books, and shared spaces will regain importance. If people can no longer trust whether a text, song, or conversation partner is human, the desire for authenticity may grow stronger. “We read to know that we are not alone,” Åsa said. That meaning is lost if authorship disappears.

Looking ahead ten years, the panel agreed that artificial intelligence will drive major advances, particularly in science and medicine. But whether it strengthens or weakens society depends on how it is used. Judgment, creativity, and democratic responsibility cannot be outsourced without consequence. The most important task ahead may be the simplest one: to keep meeting, thinking, and caring together, without mediation.

Read more insights from the panel discussions held as part of our Stockholm 20 years celebrations.
