Q&A: Boston Dynamics’ Latest Atlas Videos
Boston Dynamics is the master of dropping amazing robot videos with no warning, and last week, we got a surprise look at the new electric Atlas going "hands on" with a practical factory task.
This video is notable because it's the first real look we've had at the new Atlas doing something useful, or really doing anything at all: the introductory video from back in April (the first time we saw the robot) was less than a minute long. And the amount of progress that Boston Dynamics has made is immediately obvious, with the video showing a blend of autonomous perception, full-body motion, and manipulation in a practical task.
We sent over some quick questions as soon as we saw the video, and we've got some extra detail from Scott Kuindersma, senior director of robotics research at Boston Dynamics.
If you haven’t seen this video yet, what kind of robotics person are you, and also here you go:
Atlas is autonomously moving engine covers between supplier containers and a mobile sequencing dolly. The robot receives as input a list of bin locations to move parts between.
Atlas uses a machine learning (ML) vision model to detect and localize the environment fixtures and individual bins [0:36]. The robot uses a specialized grasping policy and continuously estimates the state of manipulated objects to achieve the task.
There are no prescribed or teleoperated movements; all motions are generated autonomously online. The robot is able to detect and react to changes in the environment (e.g., moving fixtures) and action failures (e.g., failure to insert the cover, tripping, environment collisions [1:24]) using a combination of vision, force, and proprioceptive sensors.
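To make that pipeline a bit more concrete, here's a minimal runnable sketch (ours, not Boston Dynamics') of the perceive-grasp-insert-recover loop described above. ToyRobot and everything on it are invented stand-ins for the subsystems named in the description:

```python
"""Hypothetical sketch of the autonomy loop described above. Nothing
here is a Boston Dynamics API: ToyRobot and all of its methods are
invented stand-ins for the subsystems named in the video description
(ML perception, grasping policy, online motion generation, learned
failure detection, recovery)."""

import random

class ToyRobot:
    def perceive(self):
        # Stand-in for the ML vision model that detects and localizes
        # fixtures and individual bins (0:36 in the video).
        return {"part_pose": (0.4, 0.1, 0.9), "slot_pose": (1.2, 0.0, 0.7)}

    def grasp(self, bin_name, part_pose):
        # Stand-in for the specialized grasping policy.
        print(f"grasping cover in {bin_name} at {part_pose}")

    def insert(self, bin_name, slot_pose):
        # Insertion is monitored with vision, force, and proprioception;
        # the random snag here plays the role of the slip at 1:24.
        ok = random.random() > 0.2
        print(f"insert into {bin_name}: {'ok' if ok else 'caught on bin'}")
        return ok

    def recover(self):
        # Stand-in for the general-purpose recovery controller.
        print("failure detected; running recovery controller")

def move_part(robot, source_bin, target_bin):
    # No prescribed motions: each attempt re-perceives the scene and
    # retries until the insertion succeeds.
    while True:
        world = robot.perceive()
        robot.grasp(source_bin, world["part_pose"])
        if robot.insert(target_bin, world["slot_pose"]):
            return
        robot.recover()

if __name__ == "__main__":
    # The robot's input is just a list of bin locations to move between.
    for src, dst in [("supplier_container_3", "sequencing_dolly_slot_1")]:
        move_part(ToyRobot(), src, dst)
```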
Eagle-eyed viewers will have noticed that this task is very similar to what we saw hydraulic Atlas (Atlas classic?) working on just before it retired. We probably don’t need to read too much into the differences between how each robot performs that task, but it’s an interesting comparison to make.
For more details, here’s our Q&A with Kuindersma:
How many takes did this take?
Kuindersma: We ran this sequence a couple times that day, but typically we’re always filming as we continue developing and testing Atlas. Today we’re able to run that engine cover demo with high reliability, and we’re working to expand the scope and duration of tasks like these.
Is this a task that humans currently do?
Kuindersma: Yes.
What kind of world knowledge does Atlas have while doing this task?
Kuindersma: The robot has access to a CAD model of the engine cover that is used for object pose prediction from RGB images. Fixtures are represented more abstractly using a learned keypoint prediction model. The robot builds a map of the workcell at startup, which is updated on the fly when changes are detected (e.g., a moving fixture).
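As a rough picture of how that world knowledge might be organized, here's a hypothetical sketch with a CAD-backed part model, keypoint-based fixtures, and a workcell map that gets updated when something moves. All of the types and fields are our own invention:

```python
"""Hypothetical sketch of the world knowledge Kuindersma describes: a
CAD-backed model of the engine cover for pose prediction, fixtures
represented abstractly as learned keypoints, and a workcell map built
at startup and updated on the fly. All types and fields are invented."""

from dataclasses import dataclass, field

@dataclass
class PartModel:
    # Known CAD geometry lets a vision model predict a full 6-DoF pose
    # for the part from RGB images alone.
    cad_file: str
    pose: tuple  # (x, y, z, roll, pitch, yaw) from the pose predictor

@dataclass
class Fixture:
    # Fixtures are represented more abstractly: a learned model predicts
    # a handful of keypoints rather than fitting full geometry.
    name: str
    keypoints: list

@dataclass
class WorkcellMap:
    fixtures: dict = field(default_factory=dict)

    def update(self, detected: Fixture):
        # Built at startup, then refreshed whenever a change is detected,
        # e.g. a fixture wheeled to a new spot mid-task.
        self.fixtures[detected.name] = detected

# Usage sketch:
cell = WorkcellMap()
cell.update(Fixture("sequencing_dolly", keypoints=[(1.1, 0.2, 0.0)]))
cover = PartModel(cad_file="engine_cover.stl", pose=(0.4, 0.1, 0.9, 0, 0, 0))
```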
Does Atlas’ torso have a front or back in a meaningful way when it comes to how it operates?
Kuindersma: Its head/torso/pelvis/legs do have “forward” and “backward” directions, but the robot is able to rotate all of these relative to one another. The robot always knows which way is which, but sometimes the humans watching lose track.
Are the head and torso capable of unlimited rotation?
Kuindersma: Yes, many of Atlas’ joints are continuous.
How long did it take you folks to get used to the way Atlas moves?
Kuindersma: Atlas’ motions still surprise and delight the team.
OSHA recommends against squatting because it can lead to workplace injuries. How does Atlas feel about that?
Kuindersma: As might be evident by some of Atlas’ other motions, the kinds of behaviors that might be injurious for humans might be perfectly fine for robots.
Can you describe exactly what process Atlas goes through at 1:22?
Kuindersma: The engine cover gets caught on the fabric bins and triggers a learned failure detector on the robot. Right now this transitions into a general-purpose recovery controller, which results in a somewhat jarring motion (we will improve this). After recovery, the robot retries the insertion using visual feedback to estimate the state of both the part and fixture.
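One way to picture that process is as a tiny state machine: nominal insertion, a learned failure detector firing, the recovery controller, visual re-estimation, and a retry. The states and transitions below are our abstraction of Kuindersma's description, not how Atlas is actually structured:

```python
"""A tiny state-machine framing of the 1:22 recovery sequence: nominal
insertion, a learned failure detector firing, the general-purpose
recovery controller, visual re-estimation, and a retry. The states and
transitions are our own abstraction, not how Atlas is actually built."""

from enum import Enum, auto

class Mode(Enum):
    INSERT = auto()      # nominal, online-generated insertion motion
    RECOVER = auto()     # general-purpose recovery controller
    REESTIMATE = auto()  # re-localize part and fixture from vision
    DONE = auto()

def step(mode, snagged):
    if mode is Mode.INSERT:
        # The learned failure detector flags the cover catching on the
        # fabric bin; otherwise the insertion completes.
        return Mode.RECOVER if snagged else Mode.DONE
    if mode is Mode.RECOVER:
        # The somewhat jarring motion in the video is this controller.
        return Mode.REESTIMATE
    if mode is Mode.REESTIMATE:
        # With fresh pose estimates for part and fixture, try again.
        return Mode.INSERT
    return Mode.DONE

# Walk through the event: first attempt snags, second succeeds.
mode, snagged, trace = Mode.INSERT, True, [Mode.INSERT]
while mode is not Mode.DONE:
    mode = step(mode, snagged)
    trace.append(mode)
    if mode is Mode.INSERT:
        snagged = False
print(" -> ".join(m.name for m in trace))
# INSERT -> RECOVER -> REESTIMATE -> INSERT -> DONE
```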
Were there other costume options you considered before going with the hot dog?
Kuindersma: Yes, but marketing wants to save them for next year.
How many important sensors does the hot dog costume occlude?
Kuindersma: None. The robot is using cameras in the head, proprioceptive sensors, IMU, and force sensors in the wrists and feet. We did have to cut the costume at the top so the head could still spin around.
Why are pickles always causing problems?
Kuindersma: Because pickles are pesky, polarizing pests.