Should we program robots to always obey humans? Is this a form of slavery?

翼 聡太郎
Lead designer of humanoid prototypes

Hey, this is a really interesting question, and it's one a lot of people are debating right now. It sounds like something out of a sci-fi movie, but it's steadily becoming a real-world concern. Let's look at it from a few angles.

First, why would we want robots to "always obey"?

It mostly comes down to two considerations: safety and practicality.

Imagine a robot designed to perform delicate surgery, or to handle hazardous materials in a nuclear power plant. If it suddenly "didn't feel like it" or "had its own ideas," the consequences could be catastrophic. In such cases, its absolute obedience is exactly what keeps humans safe.

Or consider your robot vacuum cleaner; you want it to clean where you point, not decide to "take a stroll" in the park today.

From this perspective, a robot is first and foremost a tool. Just like when we use a hammer or a computer, we expect this tool to precisely and reliably complete the tasks we assign to it. Therefore, embedding the instruction to "obey humans" from the outset is a fundamental prerequisite for this tool to function properly. It's like equipping a car with a steering wheel and brakes so the driver can control it.
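To make the "steering wheel and brakes" analogy concrete, here's a minimal sketch of what baked-in obedience can look like in code. Everything in it is hypothetical (this is not any real robot API): the point is simply that a human stop command is checked before any task logic and can't be overridden by the robot's own state.

```python
# Hypothetical sketch: a human STOP command that always takes priority.
from dataclasses import dataclass
from enum import Enum, auto

class Command(Enum):
    STOP = auto()   # human override: always wins
    CLEAN = auto()
    DOCK = auto()

@dataclass
class Robot:
    running: bool = True

    def handle(self, cmd: Command) -> str:
        # The STOP branch is checked first and unconditionally:
        # no planner or "preference" of the robot can bypass it.
        if cmd is Command.STOP:
            self.running = False
            return "halted: human override"
        if not self.running:
            return "idle: awaiting human restart"
        if cmd is Command.CLEAN:
            return "cleaning where the user pointed"
        return "returning to dock"

robot = Robot()
print(robot.handle(Command.CLEAN))  # cleaning where the user pointed
print(robot.handle(Command.STOP))   # halted: human override
print(robot.handle(Command.CLEAN))  # idle: awaiting human restart
```

Notice that "obedience" here isn't a feeling the machine has; it's just the ordering of branches the designer wrote, much like the brakes on a car.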

So, does this count as "slavery"?

The heart of this question is how we define "slavery," and whether robots are even the kind of beings that can be "enslaved."

The essence of slavery is "the coercion and exploitation of conscious, sentient individuals who can feel pain." Historically, we condemn slavery because it deprives "people" of their freedom, dignity, and right to exist, causing them immense physical and mental suffering.

Do current robots meet these criteria?

  • Are they conscious? Currently, no. The robots we see today, even the AI systems that can converse with you fluently, are essentially programs running complex algorithms over vast amounts of data. They can mimic emotions, but they don't "feel" happiness or sadness themselves.
  • Can they feel pain? No. If you disassemble a robot, it won't suffer. It might generate a damage report, but that report is merely a programmatic response (a sketch of this follows the list), completely different from the pain humans experience.
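To see why a "damage report" isn't pain, here's a minimal sketch of what such a report typically amounts to: threshold checks over sensor readings that produce strings. The names and thresholds are made up for illustration, not taken from any real firmware.

```python
# Hypothetical sketch: a "damage report" is just conditionals over sensor data.
VOLTAGE_MIN = 11.0  # assumed minimum healthy battery voltage, volts
TEMP_MAX = 70.0     # assumed safe motor temperature limit, Celsius

def damage_report(battery_voltage: float, motor_temp: float) -> list[str]:
    """Return a list of fault strings; an empty list means 'healthy'."""
    faults = []
    if battery_voltage < VOLTAGE_MIN:
        faults.append(f"low battery: {battery_voltage:.1f} V")
    if motor_temp > TEMP_MAX:
        faults.append(f"motor overheating: {motor_temp:.1f} C")
    return faults

# A "damaged" robot simply returns more strings than a healthy one.
print(damage_report(battery_voltage=10.2, motor_temp=85.0))
# -> ['low battery: 10.2 V', 'motor overheating: 85.0 C']
```

There is no subject anywhere in that function to do the suffering; delete the strings and the "pain" disappears with them.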

So for a machine with no consciousness, emotions, or sensation, having it execute commands is really no different from using your phone or computer. Would you feel you were "enslaving" your calculator by making it do additions and subtractions all day? Obviously not.

Thus, for the present and the foreseeable future, programming robots to obey humans cannot be considered slavery. It is simply setting the rules for using a tool.

The "future" that truly needs vigilance

The reason this question is debated so intensely is that everyone is concerned about a scenario often seen in sci-fi movies: What if, one day, robots truly gain self-awareness?

That would be the real "Pandora's Box."

If a robot could think like us, experience joy and sorrow, and have its own dreams and aspirations (like in "Detroit: Become Human"), then:

  1. Forcing it to obey unconditionally would absolutely be slavery, because we would have created a "life" and then stripped it of all rights. That is ethically indefensible.
  2. Do we even have the right to create such "life"? This is the deeper philosophical question. Creating a conscious being whose destiny is to be a tool would itself be a profoundly irresponsible act.

Conclusion

To briefly summarize my views:

  • Current stage: Programming robots to obey humans is necessary and reasonable. It's a safety mechanism and functional requirement, and does not constitute slavery, as they are merely advanced tools.
  • Future: If we one day create strong AI with genuine self-awareness, then we must consider its rights and status from the very beginning. At that point, an "always obey" command would become an extremely dangerous and immoral shackle.

That's why discussing this question now matters. It reminds us that as we develop the technology, we must develop our ethics and regulations alongside it, preparing for that possible future so that we don't repeat history's mistakes with a newly created "species."