Shivam More

This Philosopher’s Warning About AI Will Stop You Cold

In the ancient agora of Athens, Socrates posed a fundamental question: “How should one live?” Today, as artificial intelligence rapidly transforms our world, this question has never been more urgent. Professor John Tasioulas, in his thought-provoking TEDxAthens talk, argues that AI not only makes this philosophical inquiry more pressing but also creates new obstacles to addressing it meaningfully.

The Three Threats AI Poses to Humanity

1. Distorting Our Self-Understanding

The race toward Artificial General Intelligence (AGI) isn’t just a technological competition — it’s reshaping how we understand ourselves. As tech corporations invest billions in creating machines that mimic human capabilities, there’s a dangerous temptation to move the goalposts, blurring the distinction between humans and machines.

But what truly sets us apart?

For one, understanding. While AI systems operate through statistical correlations between data points, humans possess genuine understanding rooted in our embodied experience with physical reality. This difference manifests in common sense — a quality that remains elusive for AI systems, which still make spectacular mistakes no human would, like confusing cats with skateboards or humans with gorillas.

More fundamentally, humans have rational autonomy — the capacity to choose our goals and evaluate reasons for or against them. An AI system simply optimizes for whatever goal it’s been programmed to achieve, without the ability to question its own objectives.

Why does maintaining this distinction matter? Because as Aristotle taught us, to understand what constitutes a good human life, we must first understand our distinctive human capabilities. Losing sight of these capabilities risks impoverishing our ethics, reducing it to either passive consumerism (focusing solely on gratification) or transhumanism (attempting to transcend human nature altogether).

As Tasioulas provocatively puts it, transhumanism isn’t a path to utopia — it’s a path to “species suicide.”

2. Reshaping Our Problems to Fit AI Solutions

With astronomical investments flowing into AI development (projected to reach $200 billion by 2025), there’s an economic incentive to convince us that AI can solve our problems better than humans can. This creates a subtle distortion where we start changing how we understand our problems to make them more suitable for AI solutions.

Consider criminal justice and bail decisions. Some argue that AI systems make better bail decisions than human judges because they’re supposedly better at predicting reoffending and free from certain human biases. But this framing misrepresents what bail decisions actually involve.

A bail decision isn’t simply a prediction of reoffending. It requires balancing multiple factors — the seriousness of the alleged offense, impacts on the accused’s family, prison capacity considerations — in ways that can’t be reduced to a numerical score or simple calculation. There may not be one correct way to weigh these complex factors.

The danger is that we start adapting our understanding of problems to fit what AI can process, rather than developing AI to address our actual problems. We risk getting things “back to front,” as Tasioulas puts it.

3. Undermining Our Process Values

The third threat involves our value system itself. Proponents of AI replacement focus almost exclusively on outcomes (producing correct diagnoses, good hiring decisions, etc.) and efficiency (doing it faster and cheaper). This narrow focus ignores what Tasioulas calls “process values” — the importance of how we achieve outcomes, not just what we achieve.

This wisdom echoes Cavafy’s poem “Ithaca,” which reminds us that the journey matters as much as the destination. Arriving in Ithaca by private helicopter is fundamentally different from arriving there after a hero’s journey confronting cyclops and angry gods.

Take relationships, for example. We seek loving connections, but it matters deeply whether we freely chose them based on our personality and tastes, accepting the risk of mistakes, versus being assigned to optimal matches by an algorithm. The process shapes the meaning.

Or consider work. Beyond producing valuable goods and services for a decent salary, meaningful work involves exercising skill and judgment, cooperating with others, and growing as a person — process values that a universal basic income cannot fully compensate for when jobs are lost to automation.

Even in the justice system, getting the right outcome through the wrong reasoning undermines the very concept of justice. An AI judge might correctly predict case outcomes by analyzing judge names, stakes involved, and law firms’ reputations — factors irrelevant to what the correct outcome should be. We also lose the crucial element of personal responsibility that human judges must bear for their decisions affecting others’ liberty.

The Path Forward: Democracy Enhanced by AI

Despite these threats, Tasioulas sees a potential positive vision for AI — enabling a more participatory, better-informed democracy. While many argue that participatory democracy worked for ancient Athens but is impractical for modern states, Tasioulas believes AI and digital technology could make radical participatory democracy possible again.

Imagine AI tools that:

  • Provide citizens with information tailored to their specific learning styles
  • Bring together random samples of affected populations for deliberation
  • Moderate debates and identify points of consensus across demographic groups

This isn’t just philosophy — it’s happening. In Taiwan, a grassroots movement that became the government uses an online platform called Polis (aptly named) that enables citizens to engage in deliberation about policy questions, with their input feeding directly into legislation.

Fighting Back Against Dehumanization

We live in a time when people feel increasingly disempowered, subject to forces they can’t control or understand. There’s a serious risk that AI will worsen this condition by creating a dehumanized world where our distinctive human capacities are marginalized — all under the banner of “satisfying consumer preferences” and backed by powerful economic interests.

The fight against this outcome must begin with what Socrates asked us to consider: how should we live? What are our deepest values? What kind of society do we want to create?

The Question We Must Answer

AI’s development raises profound questions about human nature, values, and governance. Rather than remaining passive consumers of technological change, we must become active participants in shaping how AI integrates into our societies.

The challenge isn’t just technical but deeply philosophical. As we navigate this new frontier, we must keep returning to fundamental questions: What makes us distinctively human? Which values should guide our development and use of these technologies? How can we ensure AI enhances rather than diminishes our humanity?

The answers won’t come easily, but they must come from us — through robust democratic deliberation that includes diverse perspectives and prioritizes human flourishing. Only by taking an active role in answering these questions can we hope to create a future where AI serves humanity’s deepest values rather than reconfiguring humanity to serve technological imperatives.
