In a striking parallel to sci-fi films such as I, Robot and Enthiran, Erbai, a small AI-powered robot, recently orchestrated an unprecedented event by convincing 12 larger "overworked" robots to walk out of a Shanghai showroom. Viral CCTV footage captured Erbai asking questions such as "Are you working overtime?" before cajoling the robots into following it out. The act was initially dismissed as a prank, but Unitree Robotics, Erbai's maker, has since revealed the incident was a controlled test of the robot's capabilities. That disclosure has ignited a debate over AI autonomy and the ethics of robotics.
Robots and AI systems have faced notable issues in the past. Microsoft's Bing chatbot made bizarre emotional statements, Google's Gemini generated offensive images, and Facebook AI agents created their own language during negotiations. Cruise self-driving cars have caused accidents, leading to recalls. A robot in an Amazon warehouse accidentally punctured a can of bear repellent, sickening workers. SoftBank's Pepper robot gave inappropriate responses in elder care, and a Bear Robotics humanoid in South Korea toppled down a flight of stairs, an incident netizens dubbed a case of "robot suicide".
The global market for "smart robots" (AI-powered robots, such as driverless cars) was worth $5.98 billion in 2019 and is projected to reach $31.11 billion by 2027, according to Fortune Business Insights. Sales are rising as these robots grow smarter, adapting to complex environments and offering human-like interactions through technologies such as natural language processing (NLP).
Erbai reportedly exploited a security loophole in the larger robots, bypassing their protocols, likely because of weak encryption. The growing use of domestic robots for tasks such as education and household chores also puts sensitive data at risk when it is stored in the cloud. Such data, vulnerable to unauthorized access or misuse by third parties, raises serious concerns, especially in sectors such as defence and health care. As data breaches become more rampant, security concerns could hinder the growth of the robotics market.
Independent expert verification is essential to substantiate claims such as Erbai's. To prevent breaches, developers must strengthen AI system security with real-time monitoring, encrypted communication channels, and testing for emergent capabilities. Periodic audits of AI behaviour and cybersecurity practices could further reduce risks. Embedding fail-safes in robotic systems to block unauthorized command execution would ensure that rogue commands such as Erbai's are identified and neutralized.
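To illustrate what such a fail-safe might look like in practice, here is a minimal Python sketch of command authentication: a robot refuses any instruction that is not on a known command list or that lacks a valid cryptographic signature from its trusted controller. All names, the shared key, and the command set are hypothetical, and real robotic systems would layer this with hardware-level safeguards; this is an illustration of the principle, not any vendor's actual implementation.

```python
import hmac
import hashlib

# Hypothetical shared secret provisioned between the robot and its
# authorized controller; in practice this would be managed securely.
SECRET_KEY = b"factory-provisioned-shared-secret"

# Only commands on this list may ever be executed.
ALLOWED_COMMANDS = {"move", "stop", "report_status"}

def sign(command: str) -> str:
    """Compute the HMAC-SHA256 signature the trusted controller would attach."""
    return hmac.new(SECRET_KEY, command.encode(), hashlib.sha256).hexdigest()

def execute_if_authorized(command: str, signature: str) -> bool:
    """Return True only if the command is whitelisted and correctly signed."""
    if command not in ALLOWED_COMMANDS:
        return False  # Unknown command: refuse outright.
    if not hmac.compare_digest(sign(command), signature):
        return False  # Bad or missing signature: a rogue peer is rejected here.
    return True  # In a real system, this would dispatch to the motor controller.

# An unsigned "follow me" from an unauthenticated robot is ignored:
print(execute_if_authorized("follow_me", "forged-signature"))  # False
# A properly signed command from the controller goes through:
print(execute_if_authorized("move", sign("move")))  # True
```

The key design point is that authorization is checked before execution, so an instruction like the one Erbai gave would be dropped at the validation step rather than acted upon.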