Artificial intelligence software expands capabilities of Boston Dynamics’ Spot robot (2024)

“Right now, the two companies are working with a customer that has approximately 100,000 analog gauges, which are very expensive to replace,” says Daniel Bruce, Founder and Chief AI Officer, Vinsa. “With this software, Spot can navigate to each gauge and capture images, and the software converts that into a digital reading using a built-in optical character recognition tool, so plant operators don’t have to do manual readings.”
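To make that image-to-reading step concrete, here is a minimal sketch of how a captured gauge image could be converted into a numeric value. It uses the open-source Tesseract OCR engine and OpenCV as stand-ins for Vinsa’s built-in OCR tool, which is not publicly documented; the file name, preprocessing, and numeric parsing are illustrative assumptions rather than Vinsa’s actual implementation.

```python
# Minimal sketch of the OCR step described above, using open-source tools
# as stand-ins for Vinsa's built-in OCR. Assumes the gauge face has numeric
# markings or a digital readout visible in the captured image.
import re

import cv2               # pip install opencv-python
import pytesseract       # pip install pytesseract (requires the tesseract binary)


def read_gauge_value(image_path: str) -> float | None:
    """Return the first numeric value OCR finds on the gauge face, or None."""
    img = cv2.imread(image_path)
    if img is None:
        raise FileNotFoundError(image_path)

    # Basic preprocessing: grayscale + Otsu threshold helps OCR on glossy dials.
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    _, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

    text = pytesseract.image_to_string(binary, config="--psm 6")
    match = re.search(r"-?\d+(?:\.\d+)?", text)
    return float(match.group()) if match else None


if __name__ == "__main__":
    # "gauge_0001.jpg" is a hypothetical capture from Spot's camera.
    print(read_gauge_value("gauge_0001.jpg"))
```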

In an environment with 100,000 gauges, all sorts of novel situations pop up. If glare makes a gauge hard to read, if a gauge is broken or bent, or if the robot encounters a gauge it hasn’t seen before, the software must decide what to do and how to adapt. Vinsa’s Alira engine handles situations like these by looping in a human subject matter expert and asking for help, so it can handle such situations better in the future. Alerts depend on operator preference and come through the human-machine interface, through a text message, through e-mail, or through platforms like Amazon or Google, which have pre-built software for handling such situations.
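The escalation pattern Bruce describes can be sketched as a simple confidence gate: confident readings are accepted automatically, while anything ambiguous is routed to a human expert over the operator’s preferred channel. The threshold, channel names, and notify helpers below are assumptions for illustration, not Alira’s actual API.

```python
# Hedged sketch of human-in-the-loop escalation: low-confidence or unreadable
# detections are sent to a subject matter expert via a configurable channel.
from dataclasses import dataclass
from typing import Callable


@dataclass
class Detection:
    gauge_id: str
    value: float | None    # None when no reading could be extracted
    confidence: float      # 0.0 - 1.0 model confidence
    image_path: str


def notify_hmi(det: Detection) -> None:
    print(f"[HMI] review needed for {det.gauge_id}: {det.image_path}")


def notify_sms(det: Detection) -> None:
    print(f"[SMS] gauge {det.gauge_id} needs a human look")


ALERT_CHANNELS: dict[str, Callable[[Detection], None]] = {
    "hmi": notify_hmi,   # channel names are illustrative assumptions
    "sms": notify_sms,
}


def handle_detection(det: Detection, channel: str = "hmi",
                     min_confidence: float = 0.85) -> None:
    """Accept confident readings; escalate anything else to a human expert."""
    if det.value is not None and det.confidence >= min_confidence:
        print(f"{det.gauge_id}: accepted reading {det.value}")
    else:
        # Novel or ambiguous situation (glare, bent needle, unseen gauge):
        # ask a subject matter expert and keep the image for later retraining.
        ALERT_CHANNELS[channel](det)
```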

Vinsa’s software leverages industry frameworks such as YOLO (https://bit.ly/VSD-YOLO) and Inception (https://bit.ly/VSD-INCEP), which build on top of TensorFlow (www.tensorflow.org). The engine uses deep neural networks that improve over time as they see more examples and instances. Models train over a period of 30 to 45 days, during which Vinsa calibrates its base intelligence layers around customer-specific environments: the system learns the appearance of normal equipment to establish a baseline and raises an alert when something falls outside that baseline.
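As a rough illustration of that calibration pattern, the sketch below fine-tunes a pre-trained InceptionV3 backbone from TensorFlow/Keras on a small set of customer images. The directory layout, image size, class labels, and training schedule are assumptions; Vinsa’s actual base intelligence layers and training setup are not public.

```python
# Sketch of the general transfer-learning pattern: start from a pre-trained
# backbone and fine-tune a small classification head on customer-specific data.
import tensorflow as tf

IMG_SIZE = (299, 299)   # InceptionV3's native input size
NUM_CLASSES = 2         # e.g. "normal" vs. "out of baseline" -- an assumption

base = tf.keras.applications.InceptionV3(
    include_top=False, weights="imagenet", input_shape=IMG_SIZE + (3,))
base.trainable = False  # keep the pre-trained layers frozen during calibration

model = tf.keras.Sequential([
    tf.keras.layers.Rescaling(1.0 / 127.5, offset=-1),  # InceptionV3 expects [-1, 1]
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# A few thousand customer-specific images; the directory name is an assumption.
train_ds = tf.keras.utils.image_dataset_from_directory(
    "customer_images/", image_size=IMG_SIZE, batch_size=32)
model.fit(train_ds, epochs=10)
```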

“It is important to mention that one thing Vinsa is not doing during this training period is trying to find all corner cases [situations that occur only outside of normal operating parameters],” says Bruce. “Many corner cases exist in production and some companies take a different approach by spending much more time training a model to learn all such cases.”

He continues, “What is unique about the Vinsa model is that it trains on these cases as they come up by looping in human expertise, which allows us to get models into production much faster, but also in a safe way where clients know when the models identify something previously unseen.”
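One plausible way to realize that loop is to queue every escalated image together with the expert’s label and fold the queue into the next fine-tuning run. The sketch below is an assumption about how such a queue might look, not a description of Vinsa’s pipeline.

```python
# Hedged sketch of "train on corner cases as they come up": images the model
# could not handle are stored with the expert's label for later fine-tuning.
import json
import shutil
from pathlib import Path

QUEUE_DIR = Path("corner_case_queue")  # hypothetical location
QUEUE_DIR.mkdir(exist_ok=True)


def queue_corner_case(image_path: str, expert_label: str, note: str = "") -> None:
    """Store an escalated image together with the human expert's label."""
    dest = QUEUE_DIR / Path(image_path).name
    shutil.copy(image_path, dest)
    record = {"image": dest.name, "label": expert_label, "note": note}
    with open(QUEUE_DIR / "labels.jsonl", "a") as f:
        f.write(json.dumps(record) + "\n")


# A periodic job can then read labels.jsonl and fine-tune the deployed model
# on these examples, so the same situation no longer needs escalation.
```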

Training data consists of both images and video, the latter of which gets split into individual frames. For instance, detecting a leak in a pipe may not be possible by looking at a static frame. It may be necessary to see the sequence of a water droplet falling from a location in a pipe, something only detectable by video data, explains Bruce. Base models train on anywhere from 10,000 images to several million. During the calibration period, Vinsa generally looks for 2,000 to 3,000 examples of customer data to establish a baseline.
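Splitting video into frames is straightforward with a library such as OpenCV; the sketch below shows the general idea, with the sampling rate and output paths chosen arbitrarily as assumptions.

```python
# Minimal sketch of turning video data into individual training frames.
import os

import cv2  # pip install opencv-python


def extract_frames(video_path: str, out_dir: str, every_n: int = 10) -> int:
    """Save every n-th frame of a video as a JPEG; return the number saved."""
    os.makedirs(out_dir, exist_ok=True)
    cap = cv2.VideoCapture(video_path)
    saved = 0
    index = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if index % every_n == 0:
            cv2.imwrite(f"{out_dir}/frame_{index:06d}.jpg", frame)
            saved += 1
        index += 1
    cap.release()
    return saved


# Consecutive frames can then be grouped into short sequences so that
# temporal events, such as a droplet falling from a pipe, stay detectable.
```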

Processing can be done in three different modes. In edge processing, all models and inference run on an onboard GPU on Spot. The system also supports on-premise processing, where Spot streams digital data back to a central location for processing and specific communication gets pushed back to Spot. Lastly, the system supports cloud processing, for example on a virtual private network on Amazon Web Services (AWS; Seattle, WA, USA; www.aws.amazon.com).
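The three modes can be thought of as a routing decision made per deployment. The sketch below illustrates that decision; the mode names, endpoints, and placeholder functions are assumptions rather than Vinsa’s actual configuration.

```python
# Illustrative routing of an inference request to one of three processing modes:
# Spot's onboard GPU, an on-premise server, or a cloud deployment.
from dataclasses import dataclass


@dataclass
class InferenceConfig:
    mode: str             # "edge", "on_prem", or "cloud"
    endpoint: str | None  # None for edge; URL for on-prem or cloud


CONFIGS = {
    "edge": InferenceConfig("edge", None),
    "on_prem": InferenceConfig("on_prem", "http://10.0.0.5:8500"),       # hypothetical
    "cloud": InferenceConfig("cloud", "https://inference.example.com"),  # e.g. AWS
}


def run_onboard(image_bytes: bytes) -> dict:
    # Placeholder for a model running on Spot's onboard GPU.
    return {"mode": "edge", "reading": None}


def send_to_endpoint(url: str, image_bytes: bytes) -> dict:
    # Placeholder for streaming the image to a remote service and receiving
    # the reading (or an instruction) back for Spot.
    return {"mode": url, "reading": None}


def run_inference(image_bytes: bytes, cfg: InferenceConfig) -> dict:
    if cfg.mode == "edge":
        return run_onboard(image_bytes)
    return send_to_endpoint(cfg.endpoint, image_bytes)
```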

Not only does this collaborative effort provide additional productivity benefits for customers, it also takes human workers out of harm’s way by eliminating the need to send people to do screenings in a nuclear facility, do rounds inside a containment zone, or deal with highly pressurized equipment on an oil rig. In the last example, notes Bruce, the company must depressurize the equipment and shut down operations to make it safe enough for humans to go in.

“Not only are there the soft savings of not putting people into dangerous scenarios, but also the hard savings of being able to inspect equipment that is in full operation without having to shut down. This results in higher uptime, productivity levels, and efficiency.”

Now, let’s delve into the key concepts from the article about Vinsa’s software and its collaboration with Boston Dynamics’ Spot robot. The key components include:

  1. Problem Statement: Vinsa addresses a problem faced by a customer with approximately 100,000 analog gauges that are very expensive to replace. The challenge is to capture digital readings from these gauges efficiently, without manual intervention.

  2. Solution Overview:

    • Vinsa's software, coupled with Spot, navigates to each gauge and captures images.
    • Optical character recognition (OCR) is employed to convert captured images into digital readings.
    • The software eliminates the need for manual readings by plant operators.
  3. Adaptability and Novel Situations:

    • The software encounters various situations, including glare, broken or bent gauges, and unfamiliar gauges.
    • Vinsa's Alira engine loops in human subject matter experts to handle novel situations and learns from them for future adaptations.
  4. Alert Mechanisms:

    • Alerts are generated based on operator preferences and communicated through various channels, including the human-machine interface, text messages, emails, and platforms like Amazon or Google.
  5. Technological Framework:

    • Vinsa's software leverages industry frameworks like YOLO and Inception, built on TensorFlow, to implement deep neural networks.
    • Models undergo training for 30 to 45 days, utilizing images and video data to improve over time.
  6. Training Approach:

    • Vinsa's unique approach involves not trying to find all corner cases during the training period.
    • Human expertise is looped in during the training, allowing for faster model deployment while ensuring safety.
  7. Data Synthesis and Calibration:

    • Training includes both images and video data, with video data synthesized into individual frames.
    • Calibration involves identifying normal equipment appearances and establishing baselines using customer-specific data.
  8. Processing Modes:

    • The processing can be done in three modes: edge processing on Spot's onboard GPU, on-premise processing with data streamed centrally, and cloud processing on platforms like AWS.
  9. Safety and Productivity Benefits:

    • The collaborative effort between Vinsa and Spot enhances productivity and eliminates the need for human workers in hazardous scenarios.
    • Examples include screenings in nuclear facilities, rounds in containment zones, and inspections of highly-pressurized equipment on oil rigs.

In summary, Vinsa's software, in collaboration with Spot, offers a comprehensive solution for efficiently managing a large number of analog gauges in industrial settings, showcasing the integration of AI, robotics, and advanced data processing techniques.
