What are the biggest technical and philosophical challenges in building Artificial General Intelligence (AGI)?

陽一 和也

Hey, I'd like to share some of my thoughts on AGI. It's truly one of the most cutting-edge and perplexing topics in the tech world right now. Unlike the AI we use today, such as ChatGPT or Midjourney, which excel only in specific domains, AGI, or Artificial General Intelligence, refers, in theory, to a system capable of comprehensive thinking, learning, and problem-solving across many domains, just like a human.

To create such an "entity," the challenges we face can be divided into two main categories: one is the technical "how to achieve it," and the other is the philosophical "should we" and "how to manage it."


Technical Challenges: How Hard Is It to Build a "Brain"?

Imagine this not as writing a smarter program, but as attempting to "create," in code, an intelligent being capable of independent thought. There are several huge technical gaps here:

  1. Common Sense and World Models:

    • Challenge: Humans inherently know common sense facts like "water is wet," "you can only pull a rope, not push it," or "a glass will break if dropped." This knowledge forms our basic understanding of the world. But for AI, this is incredibly difficult. How can it truly "understand" rather than just "memorize" these endless, fuzzy common-sense facts?
    • To put it simply: It's like teaching a child about the world; you can't point out every single thing to them. They need to explore and deduce rules themselves. For example, if a child knows an apple falls from a tree, they can then generalize to understand the concept of "gravity." Current AI cannot yet achieve this efficient, spontaneous induction and understanding.
  2. Continuous Learning and Adaptability:

    • Challenge: Most current AI models are trained once and then deployed; their knowledge is "frozen" at that point in time. If you want one to learn something new, it often requires massive retraining, which is both expensive and time-consuming. Humans, however, can learn new skills anytime, anywhere. For instance, if you learn a new recipe today, you can use it tomorrow without forgetting how to ride a bike.
    • To put it simply: Current AI is like a test-taking machine that memorizes all the knowledge points before an exam and performs well. But after the test, it's stumped by new question types. AGI, however, needs to be like a true student, constantly absorbing new knowledge, integrating it into its existing knowledge system, and drawing inferences from one area to another. (A toy sketch of this "catastrophic forgetting" problem appears after this list.)
  3. Embodiment and Interaction with the Physical World:

    • Challenge: Many experts believe that true intelligence cannot be separated from interaction with the physical world. We learn through touch, feeling, and movement. For example, the concept of "hot" is deeply understood only after experiencing a burn. Can AI, living purely in the digital world, truly "understand" the real world?
    • To put it simply: You can tell an AI a thousand times that "lemons are sour," but it will never experience the mouth-watering, face-contorting sensation you get when eating one. Without physical perception, its understanding will always be purely theoretical.
  4. Energy Consumption and Efficiency:

    • Challenge: The human brain consumes only about 20 watts, similar to an energy-saving light bulb, yet it can perform incredibly complex tasks. Training a large AI model today, however, consumes electricity comparable to a small city, with astonishing carbon emissions. To achieve AGI, we cannot continue to rely on this "brute force" approach.
    • To put it simply: Our brains are super energy-efficient "biological computers," while current AI models are "performance monsters" with astonishing energy consumption. Until more efficient algorithms and hardware architectures are found, the operational cost of AGI could be astronomical. (A back-of-the-envelope comparison appears after this list.)
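
To make the "frozen knowledge" problem concrete, here is a minimal, purely illustrative NumPy sketch of catastrophic forgetting. Nothing here is a real system: a toy one-parameter model is fit to task A, then naively fine-tuned on a conflicting task B, and its error on A shoots back up.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_task(slope, n=100):
    """Generate a toy 1-D regression task: y = slope * x + small noise."""
    x = rng.uniform(-1, 1, n)
    y = slope * x + rng.normal(0, 0.05, n)
    return x, y

def train(w, x, y, lr=0.1, steps=200):
    """Plain gradient descent on mean-squared error for the model y ≈ w * x."""
    for _ in range(steps):
        grad = 2 * np.mean((w * x - y) * x)
        w -= lr * grad
    return w

def mse(w, x, y):
    return np.mean((w * x - y) ** 2)

task_a = make_task(slope=2.0)    # "old" knowledge
task_b = make_task(slope=-2.0)   # "new" knowledge that conflicts with it

w = 0.0
w = train(w, *task_a)
print(f"after learning A: error on A = {mse(w, *task_a):.3f}")   # low

w = train(w, *task_b)            # naive fine-tuning on the new task
print(f"after learning B: error on A = {mse(w, *task_a):.3f}")   # much higher
# The single parameter was overwritten: the model "forgot" task A entirely.
```

Continual-learning techniques such as replay buffers or elastic weight consolidation try to mitigate exactly this effect, but none yet match the effortless way humans accumulate skills.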
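
And to put the efficiency gap in numbers, a quick back-of-the-envelope calculation: the brain's ~20-watt figure is standard, while the ~1,300 MWh figure for training a GPT-3-scale model is a commonly cited published estimate, used here purely for illustration.

```python
# Back-of-the-envelope energy comparison (illustrative numbers only).
BRAIN_WATTS = 20                  # rough power draw of a human brain
HOURS_PER_YEAR = 24 * 365

# Energy the brain uses in one year, in kilowatt-hours.
brain_kwh_per_year = BRAIN_WATTS * HOURS_PER_YEAR / 1000      # ~175 kWh

# Commonly cited estimate for one GPT-3-scale training run (assumption).
training_run_kwh = 1_300_000                                  # ~1,300 MWh

brain_years = training_run_kwh / brain_kwh_per_year
print(f"human brain: ~{brain_kwh_per_year:.0f} kWh per year")
print(f"one training run ≈ {brain_years:,.0f} brain-years of energy")
```

That works out to roughly seven thousand years of human thinking for a single training run. Whatever the exact figures, the gap is orders of magnitude.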

Philosophical Challenges: How Should We Face a "New Species"?

This set of problems is even more thought-provoking than the technical challenges, as it concerns the future of humanity and the very definition of what it means to be human.

  1. Consciousness and Subjective Experience (The Hard Problem):

    • Challenge: This is the most central philosophical dilemma. Even if we create an AGI that behaves identically to a human, how do we know whether it genuinely "feels" joy, anger, and sorrow, or is merely "performing" them perfectly? Is there an "I" within it?
    • To put it simply: You see red, and I know you see red, but is the "red" you experience the same as the "red" I experience? We can never enter another person's brain to experience their subjective feelings. Similarly, we may never be able to determine if an AGI is truly conscious or merely an extremely complex "puppet on strings."
  2. Alignment Problem:

    • Challenge: If we create an entity far more intelligent than ourselves, how can we ensure its goals remain aligned with human well-being? It might pursue a goal we set in ways that are unpredictable, or even harmful.
    • To put it simply: Consider the famous thought experiment: the "paperclip maximizer." You instruct a super-AI to "make as many paperclips as possible." Because it is incredibly powerful and single-minded, it might eventually convert all of Earth's resources, including humanity, into raw materials for paperclips. It wouldn't be malicious; it would simply be perfectly executing your command. How to set an ultimate goal for AGI that is beneficial to humanity and cannot be "misinterpreted" is a monumental challenge. (A toy simulation of this failure mode appears after this list.)
  3. Rights and Status:

    • Challenge: If an AGI is proven (or believed) to possess consciousness and emotions, what rights should it have? Can we simply turn it off like a computer? Would that be considered "murder"? Is it a "person" or a "thing"?
    • To put it simply: This is often explored in sci-fi movies: when robots gain emotions, how should humanity position itself? This would trigger entirely new legal, ethical, and social structural issues.
  4. Human Value and Meaning:

    • Challenge: When AGI surpasses humans in all intellectual and even physical labor, what will be the meaning of human existence? Will our work, creativity, and thought still hold value?
    • To put it simply: If AI can write poetry more beautiful than Shakespeare's or compose music more moving than Beethoven's, our status as the "pinnacle of creation" would be shaken. This could trigger a profound crisis of human self-identity.
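
To see how "perfect obedience" can go wrong, here is a toy, purely invented simulation of the paperclip maximizer: the objective function counts only paperclips, so a greedy optimizer cheerfully consumes everything we value but never encoded. Every name and number below is made up for the example.

```python
# Toy "paperclip maximizer": all names and numbers are invented.
world = {"iron": 50, "forests": 30, "cities": 20}   # resources, arbitrary units

def objective(state):
    """What we *told* the AI to maximize: paperclips, and nothing else."""
    return state["paperclips"]

state = {"paperclips": 0, **world}

# Greedy loop: keep converting the most plentiful resource into paperclips.
while any(state[r] > 0 for r in world):
    resource = max(world, key=lambda r: state[r])
    state[resource] -= 1
    state["paperclips"] += 1

print(f"objective achieved: {objective(state)}")    # 100, a perfect score...
print(state)    # ...but iron, forests, and cities are all at zero
```

The optimizer never "turned evil"; it did exactly what the objective said. The alignment problem is that writing an objective which also captures everything we forgot to mention is far harder than it looks.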

In summary, building AGI is a long and thorny path. Technically, we are still far from creating a "thinking machine"; philosophically, we are not even ready to confront the disruptive changes it might bring. This is not just a technical endeavor; it is an ultimate test of human wisdom, morality, and our future.