A Boston Dynamics robot can now be run using ChatGPT. What does this mean?
Boston Dynamics' Spot, the dog-like robot, has gained a new capability: it can be controlled with voice commands through ChatGPT, an artificial intelligence app. The integration was developed by Levatas, an industrial software company that has partnered with Boston Dynamics. For now, the ChatGPT feature is being tested only in-house and has not yet been offered to Levatas's Fortune 100 customers, who use Spot robots to conduct safety inspections in factories, utility plants, and oil and gas rigs.
By adding ChatGPT and Google's speech-to-text software to Spot's control programming, anyone can query the robot by voice and ask simple questions about the work it has done. According to Levatas founder and chief executive Chris Nielsen, the new feature will help "an everyday industrial worker to pause or stop the robot".
Speaking about Levatas's partnership with Boston Dynamics, Nielsen said, "We sometimes joke and say we are building 'blue collar' AI... We're literally keeping humans safe in an industrial setting."
The Risks Involved in Using Generative AI in Robots
ChatGPT is known for its ability to understand human-like language commands, but it is also known to make mistakes, fabricate information, and even issue threats. This has raised concerns among scientists and academics that the technology is being deployed too widely without adequate safeguards. In March, a letter signed by more than one thousand scientists and academics asked the tech industry to pause further development of ChatGPT and similar apps for at least six months.
Gary Marcus, professor emeritus in psychology and neural science at New York University, has also warned about the risks involved in putting generative AI apps in robots. He said, "What could possibly go wrong? A lot. ChatGPT is notoriously unreliable... A hallucinating robot could pose serious issues."
Conclusion
The addition of ChatGPT to Boston Dynamics' Spot robot is a notable development, but it has also raised concerns about the safety of using generative AI in robots. While Levatas has so far trained the software only on data from the robot and its missions in order to avoid "hallucination," there is still a long way to go before the software can be considered fully reliable. Even so, many experts believe that integrating such capabilities is crucial to making robots more interactive and useful.