
MATTHEW JOHNSON-ROBERSON (ASSOCIATE PROFESSOR, UNIVERSITY OF MICHIGAN): So I think one of the biggest problems still facing driverless cars is having to interact with the uncertainty of people crossing the street, people jaywalking, people crossing against the light. And knowing what someone’s going to do is an incredibly hard problem for any system.

RAM VASUDEVAN (ASSISTANT PROFESSOR, UNIVERSITY OF MICHIGAN): Self-driving cars need to figure that out. You need the cars to understand what a person is going to do.

TEXT ON SCREEN: RESEARCHERS WANT TO TEACH SELF-DRIVING CARS HOW TO PREDICT WHAT A PEDESTRIAN WILL DO NEXT.

MATTHEW JOHNSON-ROBERSON: One of the things that’s been most important for self-driving cars up till now has been just seeing the world around them. Autonomous systems are incredibly good at identifying every single pedestrian around them, right? Is that a person? Is this a person? Is that a person? But the challenge is that that’s not really how we drive. We as humans have a far subtler understanding of the world.

We make incredibly sophisticated decisions all the time when we’re driving. So we’ll actually see somebody and not only know what they’re doing at the moment, but we have a pretty good sense of what they’re going to do in the next couple of seconds.

If I see a kid playing with a ball near the side of the street, I’m going to be more cautious because I know that maybe they’re not aware that there’s a car that’s passing near them, and they might make a move or do something that would put them at risk.

All of these things are the kind of subtle things that we incorporate as human beings when we drive, that are very difficult to code into a system.

TEXT ON SCREEN: BY USING FOOTAGE OF PEOPLE, THEY TEACH SOFTWARE TO GUESS WHERE A PERSON’S NEXT STEPS WILL BE.

RAM VASUDEVAN: It allows us to predict the behavior of a person over five to ten seconds. All of their joints become an important thing.

MATTHEW JOHNSON-ROBERSON: Where are their arms? Where are they facing? Where are their shoulders? What are they looking at? And those are cues that we can build into our autonomous system to get better at predicting where we think someone’s going to be.
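[The idea described here, using a pedestrian’s body-joint positions as input features to forecast their future position, can be sketched roughly as follows. This is an illustrative example only; the skeleton size, frame rates, horizons, and the simple linear predictor are assumptions for demonstration, not the researchers’ actual system.]

```python
import numpy as np

# Illustrative sketch (not the researchers' actual model): represent each
# observed frame by the pedestrian's 2D joint positions, then fit a simple
# linear predictor that maps a short pose history to the person's position
# a few seconds ahead.

NUM_JOINTS = 17          # assumed skeleton size (e.g., COCO-style keypoints)
HISTORY_FRAMES = 10      # assumed: 1 second of pose history at 10 Hz
HORIZON_FRAMES = 50      # assumed: predict 5 seconds ahead at 10 Hz

def pose_features(joint_history):
    """Flatten a (HISTORY_FRAMES, NUM_JOINTS, 2) array of joint x/y
    coordinates into a single feature vector."""
    return np.asarray(joint_history).reshape(-1)

def fit_linear_predictor(feature_matrix, future_positions):
    """Least-squares fit from pose-history features to the pedestrian's
    (x, y) position HORIZON_FRAMES later; a stand-in for a learned model."""
    # Add a bias column, then solve X @ W ~= Y.
    X = np.hstack([feature_matrix, np.ones((feature_matrix.shape[0], 1))])
    W, *_ = np.linalg.lstsq(X, future_positions, rcond=None)
    return W

def predict_future_position(W, joint_history):
    """Predict the (x, y) position, in meters, HORIZON_FRAMES from now."""
    x = np.append(pose_features(joint_history), 1.0)
    return x @ W
```

[In a real system, the linear map would be replaced by a learned sequence model trained on footage of people walking, and the output would be a full trajectory rather than a single future point.]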

TEXT ON SCREEN: THE RED FIGURES SHOW WHERE PEOPLE HAVE ACTUALLY WALKED.

THE GREEN FIGURES ARE AN ALGORITHM’S PREDICTION OF WHERE A PERSON WILL WALK.

MATTHEW JOHNSON-ROBERSON: One of the things we’re aiming for is to make sure that we’re within a couple of centimeters of where the person actually ends up. And that’s really important from a driving perspective, so that we know that we can operate around them safely.
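[The “within a couple of centimeters” goal is essentially a displacement-error metric: how far the predicted path lands from where the person actually walked. A minimal sketch, assuming predicted and ground-truth positions are given in meters; the numbers below are made up for illustration.]

```python
import numpy as np

def displacement_errors(predicted, actual):
    """Per-timestep Euclidean distance (meters) between predicted and
    actual pedestrian positions, each an array of shape (T, 2)."""
    return np.linalg.norm(np.asarray(predicted) - np.asarray(actual), axis=1)

# Example: average and final error over a short predicted walking path.
pred = np.array([[0.0, 0.0], [0.5, 0.1], [1.0, 0.2]])
true = np.array([[0.0, 0.0], [0.48, 0.12], [0.97, 0.23]])
err = displacement_errors(pred, true)
print(f"mean error: {err.mean() * 100:.1f} cm, final error: {err[-1] * 100:.1f} cm")
```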

And so that really brings up this question of, how far off are we? How far off are we from a system that’s going to be reliable or safe enough, that we know that it’s going to be at least as safe as human beings, and probably, hopefully, safer. Otherwise it wouldn’t make sense to replace humans with it.

TEXT ON SCREEN: THE RESEARCH IS ONGOING AND TESTING HAS EXPANDED INTO CARS, BUT IS STILL EXPERIMENTAL.

(END)