How do Artificial Intelligence and Machine Learning technologies enhance the autonomous capabilities of underwater robots?

Elfi Jäckel
Data scientist building AI-powered applications.

Let's put it this way: imagine an underwater robot without artificial intelligence (AI) as a drone you have to operate step by step with a remote-control joystick. You watch the screen, press "forward," and it moves a little; press "left," and it moves a little to the left. It has no thoughts of its own; it is simply an obedient "puppet on a string."

But once it's equipped with an AI and machine learning brain, the situation changes completely. It transforms from a "puppet" into a "diver" with a certain degree of autonomy. This is mainly reflected in several aspects:

1. It can "understand" things, not just "see" them.

  • Before: The robot's camera captured images and transmitted them back to the control room. Humans had to stare intently at the screen to discern, "Oh, that looks like a pipe," or "Is that dark patch a shipwreck?" If a person's attention wandered, they might miss crucial information.
  • Now: AI can learn from massive amounts of images, allowing the robot to identify things on its own. When it sees a fish, its brain can react: "That's a clownfish." When it sees a spot on a pipe, it can immediately determine: "This is a corrosion point, 80% severity, needs to be recorded." It can even accurately find a specific target it's looking for amidst a chaotic mess of seabed debris. This is called "computer vision," essentially giving the robot a pair of thinking eyes.
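
To make the idea concrete, here is a minimal sketch of what happens after a vision model has done its work: raw (label, confidence) detections get triaged into inspection records. The labels, scores, and threshold below are illustrative assumptions, not output from any real model.

```python
# Hypothetical sketch: turning raw classifier outputs into inspection records.
# Labels, confidence scores, and the 0.7 threshold are made up for illustration.

def triage_detections(detections, report_threshold=0.7):
    """Keep only detections confident enough to log for human review."""
    records = []
    for label, confidence in detections:
        if confidence >= report_threshold:
            records.append(f"{label} detected ({confidence:.0%} confidence) - logged")
    return records

raw = [("clownfish", 0.95), ("corrosion point", 0.80), ("shadow", 0.40)]
for line in triage_detections(raw):
    print(line)
```

The point of the threshold is exactly the "thinking eyes" idea above: the robot doesn't just see pixels, it decides which observations are worth recording and which are noise.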

2. It can "navigate" and "plan routes" on its own.

  • Before: In the deep sea where GPS signals can't reach, robots easily got "confused" and didn't know where they were. You could only roughly estimate their position using sonar or similar equipment, making operation very difficult. One wrong move could lead to hitting a reef or getting lost.
  • Now: AI employs a technique called SLAM (Simultaneous Localization and Mapping). The robot swims while scanning its surroundings with sonar or cameras, building a real-time 3D map in its "mind" and knowing exactly where it sits on that map. You just give it a destination coordinate, and it plans the safest, most energy-efficient route, actively avoiding obstacles along the way. It behaves like an experienced driver, not a novice who needs you to hold the steering wheel the whole time.
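
The "plan a safe route on the map" step can be sketched with a classic search algorithm. Below is a minimal A* planner on a 2D grid, the kind of search an underwater robot might run on the map it builds via SLAM; the grid, unit step costs, and Manhattan heuristic are simplifying assumptions for illustration, not how any particular vehicle does it.

```python
# A minimal sketch of grid-based path planning with A*.
# 1 marks an obstacle (e.g. a reef detected by sonar), 0 marks free water.
import heapq

def astar(grid, start, goal):
    """Find a shortest 4-connected path on a grid, or None if unreachable."""
    rows, cols = len(grid), len(grid[0])
    frontier = [(0, start)]
    came_from = {start: None}
    cost = {start: 0}
    while frontier:
        _, current = heapq.heappop(frontier)
        if current == goal:
            break
        r, c = current
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                new_cost = cost[current] + 1
                if (nr, nc) not in cost or new_cost < cost[(nr, nc)]:
                    cost[(nr, nc)] = new_cost
                    # Manhattan-distance heuristic steers the search toward the goal
                    priority = new_cost + abs(goal[0] - nr) + abs(goal[1] - nc)
                    heapq.heappush(frontier, (priority, (nr, nc)))
                    came_from[(nr, nc)] = current
    if goal not in came_from:
        return None  # no safe route exists
    path, node = [], goal
    while node is not None:
        path.append(node)
        node = came_from[node]
    return path[::-1]

seafloor = [
    [0, 0, 0, 0],
    [1, 1, 1, 0],  # a reef wall with one gap on the right
    [0, 0, 0, 0],
]
route = astar(seafloor, (0, 0), (2, 0))
print(route)
```

Note how the planner routes around the reef wall through the gap rather than trying to cross it, which is exactly the "actively avoiding obstacles" behavior described above.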

3. It can "make decisions" on its own.

  • Before: In unexpected situations, such as a target being entangled in seaweed or a sudden strong ocean current, the robot had to stop and "wait for instructions," with people on shore deciding what to do next. By the time instructions arrived, it would often be too late.
  • Now: AI gives it decision-making capabilities. For example, if its task is to inspect an undersea cable, it will follow the cable on its own. If it finds the cable buried in sand, it will autonomously decide whether to go around it or try to use tools to clear the sand. If it encounters a strong current, it will adjust its thruster power to counteract it, ensuring the mission continues. This autonomous decision-making ability greatly enhances its capacity to survive and work in complex and ever-changing underwater environments.
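
The cable-inspection scenario above can be sketched as a small rule-based policy: map a sensor snapshot to the next action without waiting for the surface. The sensor names, thresholds, and action labels here are invented for illustration; real systems often blend rules like these with learned policies.

```python
# Hedged sketch of onboard decision logic for the cable-inspection example.
# All sensor keys and thresholds are hypothetical.

def decide(sensors):
    """Map a sensor snapshot to the next action, without asking the surface."""
    if sensors.get("current_speed_mps", 0) > 1.5:
        return "boost_thrusters"           # counteract a strong current first
    if sensors.get("cable_visible", True) is False:
        if sensors.get("sand_depth_cm", 0) < 10:
            return "clear_sand_with_tool"  # shallow burial: try to uncover it
        return "route_around"              # buried too deep: detour and continue
    return "follow_cable"                  # nominal case: keep inspecting

print(decide({"cable_visible": False, "sand_depth_cm": 5}))  # clear_sand_with_tool
```

The ordering of the rules encodes priorities: safety (fighting the current) comes before task recovery (the buried cable), which comes before the nominal mission.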

4. It works more "dexterously."

  • Before: If the robot had a robotic arm, operating it was like playing an extremely difficult "claw machine" game. You had to control several joints simultaneously to perform a simple grasping action, which was very clumsy and inefficient.
  • Now: AI can make robotic arms very intelligent. You might only need to tap on the target object on the screen and give the command "grab it," and the AI will automatically calculate all the joint angles and forces, completing the grasp smoothly and precisely. This is a world of difference for delicate tasks like scientific sampling or equipment maintenance.
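
A simplified sketch of what "the AI automatically calculates all the joint angles" can mean: analytic inverse kinematics for a planar two-link arm. Real underwater manipulators have more joints and typically use numerical solvers, so treat the link lengths and geometry below as illustrative assumptions.

```python
# Inverse kinematics sketch for a planar two-link arm (elbow-down solution).
# Link lengths l1, l2 are arbitrary illustrative values.
import math

def two_link_ik(x, y, l1=1.0, l2=1.0):
    """Return (shoulder, elbow) angles in radians that put the tip at (x, y)."""
    d2 = x * x + y * y
    cos_elbow = (d2 - l1 * l1 - l2 * l2) / (2 * l1 * l2)
    if not -1.0 <= cos_elbow <= 1.0:
        raise ValueError("target out of reach")
    elbow = math.acos(cos_elbow)
    shoulder = math.atan2(y, x) - math.atan2(l2 * math.sin(elbow),
                                             l1 + l2 * math.cos(elbow))
    return shoulder, elbow

def forward(shoulder, elbow, l1=1.0, l2=1.0):
    """Forward kinematics, used here to verify the IK answer."""
    x = l1 * math.cos(shoulder) + l2 * math.cos(shoulder + elbow)
    y = l1 * math.sin(shoulder) + l2 * math.sin(shoulder + elbow)
    return x, y

s, e = two_link_ik(1.2, 0.5)
print(forward(s, e))  # should land back near (1.2, 0.5)
```

This is the math the operator never sees: you tap a target on the screen, and a solver like this (scaled up to the real arm) converts that one point into coordinated joint commands.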

In summary, AI and machine learning have upgraded underwater robots from mere "remote-controlled tools" into "intelligent agents" capable of autonomous perception, navigation, and decision-making. This lets us deploy them to deeper, more distant, and more dangerous places that humans cannot reach, carrying out more complex scientific research and engineering operations with significantly higher efficiency and safety.