Can current AI technology truly endow robots with reliable, unbiased judgment? Or will it amplify and entrench human societal biases?
Can Robots Be Absolutely Impartial, or Will They Become a Magnifying Glass for Our Biases?
Hello. My view on this question is: far from endowing robots with reliable, unbiased judgment, current AI technology is highly likely to amplify and entrench the biases already present in human society.
This might sound disappointing, but the reasons are not complicated. We can think of the whole thing as teaching a student.
Where Does AI Bias Come From? Problems with Both the "Textbook" and the "Teacher"
AI and robots have no "thoughts" or "opinions" of their own. Their "judgment" comes entirely from the data we feed them. You can picture the AI as an exceptionally diligent student, and the data we provide as its "textbook."
- Biased Data (a problematic "textbook"): The materials an AI learns from, such as the vast text and image corpora of the internet, historical court records, or company recruitment data, all originate from human society, and our society is full of explicit and implicit biases.
  - For example: train an AI on a few decades of news images to recognize the role of "CEO," and it will very likely conclude that CEOs are mostly white men, because that is what its "textbook" shows it. If that same model is later used for initial resume screening, it may quietly score female or minority candidates lower, not because it "discriminates," but because in its learned "knowledge" they don't fit the typical "CEO" image (a toy sketch of this dynamic follows this list).
- Biased Creators (a problematic "teacher"): The engineers who design AI algorithms are human too, and they carry their own unconscious biases. When they choose which data to train on, set learning objectives, or define what counts as "success" or "excellence," they inevitably embed their values and blind spots into the algorithm, much like a teacher who unconsciously favors certain kinds of students when writing and grading exams.
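To make the resume-screening example concrete, here is a minimal sketch in Python. All data and numbers are invented for illustration; the point is only that a standard classifier trained on skewed historical decisions will faithfully reproduce the skew.

```python
# A minimal sketch of biased data producing a biased model.
# All data and numbers are invented for illustration only.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

# Two features: a genuine "skill" score, and a group label (0 or 1)
# that should be irrelevant to a fair hiring decision.
skill = rng.normal(0.0, 1.0, n)
group = rng.integers(0, 2, n)

# Historical labels: past human recruiters hired mostly on skill, but
# also penalized group 1 -- the bias we do NOT want the model to learn.
hired = (skill - 0.8 * group + rng.normal(0.0, 0.5, n)) > 0

# Train a standard classifier on the biased history, with the group
# label included as a feature.
X = np.column_stack([skill, group])
model = LogisticRegression().fit(X, hired)

# The model faithfully reproduces the historical penalty: given two
# candidates with identical skill, the group-1 candidate scores lower.
same_skill = np.array([[0.0, 0.0], [0.0, 1.0]])
print(model.predict_proba(same_skill)[:, 1])  # group-1 probability is lower
print(model.coef_)                            # negative weight on "group"
```

Nothing in this code "discriminates" on purpose; the negative weight on the group feature is simply the best fit to a biased history, and that is exactly the problem.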
Why Does AI "Amplify" and "Entrench" Biases?
Where human bias is sporadic, AI bias is systematic and operates at scale, and that is what makes it far more alarming.
- Amplification: A biased HR manager can only affect a handful of candidates at a time. A biased AI recruitment system, however, can reject tens of thousands of resumes from specific groups in a single day, all under the banner of "efficiency" and "objectivity." It multiplies a small bias ten-thousandfold (the back-of-the-envelope calculation after this list makes the scale concrete).
- Entrenchment: An AI's decision-making process is often a complex "black box." When it makes a call, people tend to trust it blindly because "a computer calculated it" or "it's based on big data." This "algorithmic authority" makes the hidden biases harder to question and challenge, and over time they harden from unspoken social rules into "machine-certified facts."
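The scale argument in the "Amplification" item is simple arithmetic; here it is spelled out as a short Python sketch. Every figure is hypothetical, chosen only to show how a small per-candidate penalty becomes a large absolute disparity once a system screens everyone.

```python
# Back-of-the-envelope sketch of the scale effect.
# Every number here is hypothetical; only the arithmetic matters.
applicants_per_day = 50_000
share_group_b = 0.5            # half of applicants belong to group B

pass_rate_group_a = 0.20       # baseline screening pass rate
bias_penalty = 0.03            # a "tiny" 3-percentage-point penalty
pass_rate_group_b = pass_rate_group_a - bias_penalty  # 0.17 for group B

# Extra rejections of group-B applicants caused by the penalty alone:
extra_per_day = applicants_per_day * share_group_b * bias_penalty
print(extra_per_day)           # 750.0 additional rejections per day
print(extra_per_day * 365)     # 273,750.0 per year, from a 3-point bias
```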
What Can We Do?
Of course, this doesn't mean we should abandon AI. The entire industry is working hard to address this issue:
- Cleaning the "Textbook": Striving to create more balanced and diverse datasets.
- Opening the "Black Box": Developing "Explainable AI" (XAI) to help us understand why AI makes certain decisions, thereby uncovering biases.
- "Human-in-the-Loop": In critical decision-making areas (such as justice, recruitment, credit approval), AI should only serve as an auxiliary tool, with the final decision-making power remaining in human hands.
In summary, expecting AI to be inherently unbiased is as unrealistic as expecting a mirror to reflect a more perfect image than the one standing in front of it. Today's AI is a mirror of our social biases, and often a magnifying glass. The real challenge is therefore not just technical but social: we must first confront and correct the injustices in our own society before we can "teach" a more just AI student.