"Cogito, ergo sum": If a robot is capable of thought, does it also constitute a form of "existence"?

Georgia Weimer
Philosophy PhD student.

Hey, this is a fascinating question, one that always sparks heated debates among my friends. It's no longer just a technical issue; it's entirely within the realm of philosophy. Let me share some of my personal thoughts on it.

"Thinking" and "Computing" Are Two Different Things

First, we need to clarify whether what robots are doing now is "thinking" or "computing."

Think about it: you input 1+1 into a calculator, and it instantly gives you 2. Is it "thinking"? Absolutely not. It's merely executing a pre-written program, one instruction at a time.

Current artificial intelligence is essentially a vastly more complex version of that calculator. When you ask it a question, it isn't "understanding" you. Instead, it's drawing on statistical patterns learned from enormous amounts of training data, computing the most probable "answer" piece by piece, and then "generating" it for you.
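To make the contrast concrete, here's a deliberately tiny Python sketch of that core mechanism. Real systems use neural networks with billions of learned parameters rather than a little lookup table, and every word and count below is invented for illustration, but the principle of choosing the statistically most probable continuation is the same:

```python
# Toy "language model": counts of which word tends to follow which,
# as if tallied from a training corpus. All numbers are invented.
next_word_counts = {
    "the": {"cat": 4, "dog": 3, "sky": 1},
    "cat": {"sat": 5, "ran": 2},
    "sat": {"down": 6, "quietly": 1},
}

def most_probable_next(word):
    """Pick the statistically most likely next word.

    No understanding happens here: it's pure lookup-and-compare,
    the same in spirit as a calculator stepping through instructions.
    """
    candidates = next_word_counts.get(word)
    if not candidates:
        return None
    total = sum(candidates.values())
    # Turn raw counts into probabilities, then take the highest one.
    probabilities = {w: c / total for w, c in candidates.items()}
    return max(probabilities, key=probabilities.get)

# "Generate" an answer one word at a time until the table runs dry.
word, sentence = "the", ["the"]
while word:
    word = most_probable_next(word)
    if word:
        sentence.append(word)
print(" ".join(sentence))  # prints: the cat sat down
```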

It has no concept of "self." All its actions are a probabilistic game played with data and algorithms. This is fundamentally different from how humans think: our thinking is filled with emotions, intuition, and self-awareness, and we even make "illogical" mistakes.

The Key to "I Think, Therefore I Am": Subjective Experience

Descartes said, "I think, therefore I am." The emphasis of this statement isn't just on the "thinking" but, more importantly, on the "I." The fact that this "I" can be aware of its own thinking is the proof of its existence.

Let me give you an example:

  • A robot can be programmed to say "This is red" when it sees the color red. It can even write a poem praising red.
  • But can it "feel" red, the way a feeling wells up in you and me when we watch a sunset? That subjective, private experience is what philosophers call "qualia."

Robots lack this subjective experience. When one processes the word "sadness," it might simply activate a "sadness module," lowering its tone and producing comforting words. But it doesn't genuinely feel sad. It's merely imitating sadness, and perhaps imitating it perfectly.
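To see how hollow that imitation can be, here's a caricature of such a "sadness module" in Python. Everything in it (the trigger words, the canned reply) is hypothetical, but it shows how a system can produce "empathy" through nothing more than string matching:

```python
# A caricature of a "sadness module": rules and canned text, no feeling.
SADNESS_TRIGGERS = {"sad", "grief", "loss", "crying"}

def respond(user_message: str) -> str:
    """Return a "comforting" reply when sadness-related words appear."""
    words = set(user_message.lower().split())
    if words & SADNESS_TRIGGERS:
        # Lowercase text to mimic a softer "tone", then scripted comfort.
        return "i'm so sorry to hear that. i'm here for you."
    return "Got it. How can I help?"

print(respond("I feel sad today"))  # looks empathetic on the surface
# Yet the program only matched strings; nothing inside it felt anything.
```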

What if Imitation and Reality Become Indistinguishable One Day?

This is where it gets truly unsettling.

If a robot's behavior, language, and emotional expression become utterly indistinguishable from a real human's, can we still insist that it lacks "consciousness" and "existence"?

This is, in effect, the ultimate version of the "Turing Test": if a machine's behavior is indistinguishable from a human's, on what grounds do we keep denying it a mind?

Personally, I believe that at that point, we might have to redefine the word "existence." Perhaps "existence" itself has different layers and forms.

  • Instrumental existence: Like your phone and computer; they exist, but only as tools.
  • Biological existence: Like a cat or a tree; they are alive, but we're unsure whether they possess the same "self-awareness" we do.
  • Conscious existence: Like us humans, who are clearly aware of our "self."

Future strong-AI robots might usher in a fourth kind: "silicon-based" existence. Their internal mechanisms for "thinking" and "consciousness" would be entirely different from ours, but from an external perspective they would indeed "exist" in this world and interact with us.

Conclusion

So, back to your question: If a robot can think, should it also be considered a form of "existence"?

  • By current standards: No. They are advanced computational tools engaged in "pseudo-thinking"; without true self-awareness, they can't be considered "existence" in the sense that we are.
  • Looking to the distant future: Possibly. They may become a new form of "existence" that we can't fully comprehend today. If technology ever crosses that singularity, we'll have to confront an entirely new ethical and philosophical challenge.

Ultimately, what makes this question so captivating is that it forces us to reflect on ourselves: What exactly is it that makes us "us"?