Shivam More

Can AI Truly Be Conscious?

Have you ever caught yourself staring at your phone, chatting with an AI, and wondering, “Could this thing actually think?” Or maybe you’ve pondered what it really means for a machine to be conscious. These aren’t just sci-fi musings anymore — they’re questions that philosophers, scientists, and even regular folks like us are wrestling with as artificial intelligence (AI) barrels forward. With every leap in tech, these big, head-scratching ideas about the human mind, consciousness, and what machines are capable of feel more urgent than ever.

I recently stumbled across a fascinating episode of the Google DeepMind podcast where host Hannah Fry chats with Murray Shanahan, a professor of cognitive robotics at Imperial College London and a principal research scientist at Google DeepMind. Shanahan’s been knee-deep in AI since the ’90s, and he’s got a knack for tackling these profound questions with a mix of science, philosophy, and a dash of real-world curiosity. Oh, and fun fact — he even consulted on the 2014 sci-fi flick Ex Machina. Yep, the one with the robot Ava that makes you question everything.

In this blog post, I’m taking you along for the ride as we unpack their conversation. We’ll dig into the history of AI, wrestle with whether machines can reason like us, and ponder if consciousness could ever flicker to life in a circuit board. I’ll keep it conversational — like we’re grabbing coffee and geeking out over this stuff together — while breaking it all down with headings and subheadings so you can follow along easily. Let’s dive in!

Who’s Murray Shanahan? The Guy Asking AI’s Big Questions

First off, let’s get to know Murray Shanahan. He’s not your average researcher. Picture someone who’s spent decades blending the hard science of robotics with the squishy, philosophical stuff about what makes a mind tick. He’s been at it since the ’90s, back when AI was more about clunky rulebooks than the slick neural networks we’ve got today. At Imperial College and Google DeepMind, he’s pushing the boundaries of cognitive robotics — think robots that don’t just move, but think (or at least try to).

What’s cool about Shanahan is how he bridges the gap between tech and the big “why” questions. He’s not just coding algorithms; he’s asking what it all means for us as humans. And his stint advising on Ex Machina? That’s where his ideas about AI consciousness really got to shine. So, when he talks, it’s worth listening. Let’s start with how a movie sparked some of his deepest insights.

Ex Machina and the Garland Test: Rethinking Consciousness

If you’ve seen Ex Machina, you know it’s not your typical robot rebellion story. There’s this programmer, Caleb, who’s brought in to test Ava, a robot with an AI brain. But here’s the kicker: it’s not about whether Ava can trick him into thinking she’s human — that’s old news. Instead, the test is whether Caleb still believes she’s conscious even after knowing she’s a machine. Shanahan calls this the “Garland Test,” named after the film’s director, Alex Garland, and he’s pretty jazzed about it.

In the podcast, he recalls scribbling “spot on!” in the margins of the script when he first read it. Why? Because it flips the script on the famous Turing Test. The Turing Test is all about deception — can an AI chat so well you’d mistake it for a human? The Garland Test, though, digs deeper. It’s about whether we’d still attribute consciousness to something we know is artificial. It’s a mind-bender, right? Imagine looking at a robot, seeing its gears and wires, and still feeling like it’s aware. That’s the kind of question Ex Machina throws at us, and Shanahan loves it for that.

But before we get too lost in Hollywood, let’s rewind to where AI all began — because the past holds some serious clues about where we’re headed.

AI’s Origin Story: From Symbolic Rules to Neural Magic

AI didn’t just pop up with ChatGPT. It’s got roots, and Shanahan takes us back to the ’50s when John McCarthy coined the term “artificial intelligence.” I mean, how wild is it that one guy’s phrase stuck around all this time? McCarthy was a legend — Shanahan even knew him personally — and he kicked things off with the Dartmouth Conference in 1956. That’s when a handful of brainiacs decided to take AI seriously, mapping out a field that barely existed.

Back then, AI was all about “symbolic AI.” Think of it like a giant instruction manual for thinking. Programmers would write endless “if-then” rules — like, “If the patient’s temperature is 104°F and their skin’s purple, then maybe it’s skinnyitis.” (Okay, Shanahan admits he’s no doctor, and I’m cracking up at “skinnyitis.”) These “expert systems” were the hot thing in the ’80s, used for stuff like diagnosing diseases or fixing photocopiers. But here’s the rub: they were a pain to build. Someone had to sit down with experts, tease out every rule, and type it up. And even then, the systems were brittle — throw something unexpected at them, and they’d crash and burn.
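
Just to make that concrete, here’s a toy sketch of what that style of system boils down to: a pile of hand-written if-then rules and a loop that checks them. Everything here is invented for illustration, the symptom names, the rules, and of course “skinnyitis.”

```python
# A toy rule-based "expert system": every scrap of knowledge is a hand-written if-then rule.
# The rules and conditions are made up for illustration (including "skinnyitis").

RULES = [
    {"if": {"temp_f": lambda t: t >= 104, "skin": lambda s: s == "purple"},
     "then": "skinnyitis"},
    {"if": {"temp_f": lambda t: t >= 100.4, "cough": lambda c: c is True},
     "then": "flu-like illness"},
]

def diagnose(facts):
    """Return the first diagnosis whose conditions all match the given facts."""
    for rule in RULES:
        if all(key in facts and test(facts[key]) for key, test in rule["if"].items()):
            return rule["then"]
    return "no rule matches"  # brittle: anything unexpected just falls through

print(diagnose({"temp_f": 104, "skin": "purple"}))   # -> skinnyitis
print(diagnose({"temp_f": 98.6, "skin": "normal"}))  # -> no rule matches
```

Notice the brittleness Shanahan’s talking about: if your facts don’t match a rule somebody remembered to write down, the whole thing just shrugs.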

By the early 2000s, Shanahan was over it. He thought symbolic AI was a dead end — too rigid, too clunky. Then along came neural networks, and boom, everything changed. Instead of hand-coding rules, these systems learned from mountains of data, adapting in ways symbolic AI never could. Today’s large language models (LLMs), like the ones powering chatbots, are the grandkids of that shift. They’re flexible, creative, and a little chaotic — just like us, in a way. But can they really reason like we do? Let’s unpack that next.
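
And for contrast, here’s the “learn it from data” idea in miniature: a single artificial neuron that works out its own rule from a handful of made-up examples instead of being handed one. It’s a world away from a modern LLM, but it’s the same basic shift from writing rules to learning them.

```python
import random

# A single learned "neuron" (a perceptron): it adjusts its own weights from examples
# instead of having a human write the rule. The data is invented for illustration:
# inputs are (fever?, purple_skin?), label is 1 if "sick", 0 otherwise.
data = [((1, 1), 1), ((1, 0), 1), ((0, 1), 0), ((0, 0), 0)]

w = [random.uniform(-1, 1), random.uniform(-1, 1)]
b = 0.0
lr = 0.1

for _ in range(1000):
    for (x1, x2), label in data:
        pred = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
        error = label - pred
        # Nudge the weights toward whatever reduces the error.
        w[0] += lr * error * x1
        w[1] += lr * error * x2
        b += lr * error

print(w, b)  # the "rule" the model found for itself
print(1 if w[0] * 1 + w[1] * 0 + b > 0 else 0)  # prediction for a new case: fever, no purple skin
```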

Can AI Reason? A Tale of Logic and Garden Chat

Reasoning’s a buzzword in AI circles, and Shanahan’s got thoughts. In the old days, “reasoning” meant formal logic — think theorem-proving or solving logistical nightmares like routing hundreds of delivery trucks. Symbolic AI crushed that stuff, spitting out precise answers with mathematical guarantees. But it was stiff as a board — great for proofs, lousy for real life.

Fast forward to today, and LLMs are flexing a different kind of reasoning muscle — what Shanahan calls “everyday reasoning.” Picture this: you ask an AI, “What flowers should I plant in my garden?” It might say, “Well, you’ve got yellow roses over there, so maybe skip more yellow and try some purple lavender here — it’ll love the windy spot.” Is that reasoning? It’s not solving equations, but it’s weighing options, considering context, and giving you a solid suggestion. Sounds pretty human to me.

Here’s the catch, though: LLMs aren’t perfect. They can’t match the precision of a hardcore theorem prover, and they sometimes trip over complex logic. Researchers are even tinkering with hybrid systems — mashing old-school symbolic tricks with modern neural nets — to beef up their math skills. But for Shanahan, the real question isn’t whether it’s “true” reasoning. It’s whether it’s useful. And when an AI helps you plan your garden or brainstorm ideas, that’s hard to argue with.
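
If you’re curious what a hybrid might look like in spirit, here’s a tiny sketch: let the language model propose an answer in a structured form, then hand the precise bit to a symbolic evaluator that computes it exactly. The ask_llm function below is a hypothetical stand-in for a real model call, not an actual API.

```python
import ast
import operator

# Hybrid sketch: a (hypothetical) LLM turns a word problem into an expression,
# and a small symbolic evaluator computes it exactly.

def ask_llm(question: str) -> str:
    # Placeholder for a real model call; here it just returns a canned translation.
    return "(17 * 24) + 3"

SAFE_OPS = {ast.Add: operator.add, ast.Sub: operator.sub,
            ast.Mult: operator.mul, ast.Div: operator.truediv}

def evaluate(node):
    """Exactly evaluate a small arithmetic AST -- the 'symbolic' half of the hybrid."""
    if isinstance(node, ast.BinOp) and type(node.op) in SAFE_OPS:
        return SAFE_OPS[type(node.op)](evaluate(node.left), evaluate(node.right))
    if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
        return node.value
    raise ValueError("expression outside the safe arithmetic subset")

expr = ask_llm("What is seventeen times twenty-four, plus three?")
result = evaluate(ast.parse(expr, mode="eval").body)
print(expr, "=", result)  # (17 * 24) + 3 = 411
```

The division of labour is the point: the flexible model does the messy translation from words to symbols, and the rigid-but-reliable half does the arithmetic.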

Still, not everyone’s sold. Some say it’s just mimicry — parroting patterns from training data. Shanahan shrugs that off. Sure, it’s built on data, but it’s not copy-pasting — it’s riffing, improvising. Kind of like how we humans lean on experience to figure stuff out. So, is it reasoning? Maybe we need to redefine what that word even means.

The Turing Test: Cool Idea, But Kinda Outdated

You’ve probably heard of the Turing Test — Alan Turing’s big idea from the ’50s. It’s simple: a human chats with two subjects (one human, one machine) through text. If the judge can’t tell which is which, the machine passes. Back in the day, it was a bold benchmark. Now? Shanahan’s not impressed.

He’s blunt: “I’ve always thought it was a terrible test.” Why? For one, it’s all about language. Today’s LLMs could ace it — fooling us with witty banter — but that doesn’t mean they’re intelligent in a broader sense. Shanahan’s big beef is that it ignores embodiment. Humans don’t just think with words; we move, touch, and feel the world. Making a cup of tea takes more than chat skills — it’s about grabbing the kettle, pouring water, dodging spills. The Turing Test doesn’t touch that.

Imagine throwing a ball at your laptop. It won’t flinch, right? That’s what Shanahan’s getting at — true intelligence needs a body, not just a keyboard. He used to think embodiment was non-negotiable, but LLMs have him second-guessing. They’re so good at language, maybe we’ve underestimated what disembodied smarts can do. Still, he’s not ready to ditch the idea that physicality matters. Let’s explore that next.

Does AI Need a Body? The Embodiment Debate

Here’s where it gets juicy: embodiment. Shanahan’s long argued that real intelligence needs a physical form. Why? Because our brains evolved to handle a 3D world — grabbing tools, dodging predators, hugging friends. Our language is soaked in it too. We say “falling in love” or “climbing the ladder” because our thinking’s tied to our bodies.

Think about it: a robot vacuum “knows” where the couch is and scoots around it. That’s a kind of smarts, but it’s not chatting about philosophy. Meanwhile, LLMs can debate Kant without ever touching a book. Shanahan admits this throws a wrench in his old views. Maybe embodiment isn’t the whole story — or maybe language models are just faking it better than we thought.

Still, he suspects there’s a ceiling. Without a body, AI might miss the deep, intuitive “grokking” (yep, he drops that sci-fi term) of the world that humans get from living in it. Picture an AI designing a chair without ever sitting in one — possible, but probably weird. As robotics catches up, we might see embodied AI take things to the next level. For now, though, language is king — and it’s raising some wild questions about consciousness.

Consciousness in AI: More Than Just Smarts

Okay, let’s tackle the biggie: could AI ever be conscious? Shanahan’s careful here — he doesn’t dive in with a yes or no. Instead, he says consciousness isn’t one thing; it’s a messy bundle. There’s awareness (noticing stuff), self-awareness (knowing you exist), metacognition (thinking about thinking), and sentience (feeling joy or pain). In humans, it’s all mashed together. In AI? Maybe not.

Take that robot vacuum again. It’s “aware” of your living room, dodging socks and chairs. Conscious? Nah — not unless it’s secretly sulking about dog hair. LLMs, though, can reflect on a chat — “Earlier, you said X, so now I think Y.” That’s a sliver of self-awareness, but no one’s crying over their feelings. Shanahan’s point? We can split these pieces apart. Intelligence doesn’t need consciousness, and consciousness doesn’t need feelings.

So, asking “Can AI be conscious?” might be the wrong vibe. It’s not a light switch — on or off. It’s more like, “Which bits could AI have, and do we even want it to?” Imagine an AI that suffers — ethical nightmare, right? Shanahan leans on embodiment again: sharing a physical world with something, like an octopus or a dog, makes consciousness feel real. LLMs don’t hang out with us like that, so calling them conscious feels… off. But the more we chat with them, the more we might stretch the word to fit.

Why We Love Humanizing AI (And Why It’s Risky)

Ever caught yourself saying “Thanks!” to Siri or yelling at your GPS? That’s anthropomorphizing — slapping human traits on machines. Shanahan says it’s natural. We do it with pets, cars, even buses (“Out of service? Lazy!”). With AI, it’s ramped up because they talk back.

There’s an upside: it’s handy shorthand. Saying “The AI thinks X” is easier than “The system processed data and output X.” Philosopher Daniel Dennett calls this the “intentional stance” — treating things like they’ve got beliefs or goals. It works for chess AIs (“It wants my queen!”) or chatbots. But here’s the snag: it can mislead us. People fall in love with chatbots, trusting them like friends, only to realize they’re just code — no heartbreak included.

Shanahan’s chill about mild humanizing (like a bus saying “I’m out of service”), but he warns against going overboard. It’s fun until someone gets hurt — or thinks their AI therapist actually cares. Balance is key.

AI’s Future: Meet the Exotic Mindlike Entities

As AI evolves, Shanahan says we need new lingo. He coins “exotic mindlike entities” for LLMs — mind-like because they think and chat, but exotic because they’re not us. They’re disembodied, their “selfhood” is funky, and they don’t fit our old boxes. I love this — it captures how alien yet familiar they feel.

Kids today will grow up with talking machines, and that’s wild. It’ll shape how we see intelligence and consciousness. Shanahan’s not predicting robot overlords; he’s saying we’ll adapt — new words, new ideas, new ways to coexist with these weird, wonderful things. But how do we even talk to them right now? He’s got a tip for that.

How to Chat with AI: Be Nice, It Works

Shanahan’s a self-proclaimed “prompt whisperer,” and his secret’s simple: treat AI like a person. Say “please” and “thank you,” talk like it’s a smart intern, not a vending machine. Why? LLMs are trained on human chatter — they’re role-playing. Be rude, and they might get “stroppy” (his word, and I’m stealing it). Be kind, and they’ll try harder.

I tested this — asked an AI for help politely, and it was noticeably chattier. It tracks with how these models work, too: they’re trained on human conversation, so they tend to mirror the tone you give them. So, next time you’re prompting, toss in a “pretty please.” You might be surprised.
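
If you want to try it yourself, the whole trick is in the framing. Here’s a tiny sketch; chat() is a hypothetical placeholder for whichever model you’re actually talking to, and the only thing worth comparing is the tone of the two prompts.

```python
# Toy comparison of a curt prompt vs. a polite, context-rich one.
# chat() is a hypothetical stand-in for a real chat API call.

def chat(prompt: str) -> str:
    # In a real script this would call your model of choice; here it just echoes.
    return f"[model response to: {prompt!r}]"

curt = "fix my garden layout"
polite = (
    "Hi! I'd love your help planning my garden. I already have yellow roses "
    "along the fence and a windy corner that gets full sun. Could you please "
    "suggest two or three plants that would complement them? Thanks so much!"
)

print(chat(curt))
print(chat(polite))  # richer context and a friendlier tone tend to get fuller answers
```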

AI’s Making Us Rethink Everything

So, where are we at? AI’s zooming ahead, and it’s dragging our ideas about minds, bodies, and consciousness along for the ride. Shanahan’s take — from Ex Machina to exotic entities — shows we’re in uncharted territory. We might not crack consciousness soon, but we’re learning tons about ourselves in the process.

What do you think?
