
Tabletop Tracking

 

The following videos show object tracking and HMM-based behavior estimation:

 

Videos of three simple scenarios are shown. In the first two, a robot views a tabletop that a person is using while they reach for food and homework-related items. In the third scenario, the robot watches a person extinguish a trash can fire. In all scenarios, the system tracks the subject’s hand as it engages with the objects. The top-left image shows the robot’s point of view, with detected regions outlined by ellipses. The blue line between the hand and the laptop indicates that an “interaction” has been detected between those two objects. The lower-left image displays the results of the segmentation algorithm. The list on the right shows the recognized objects detected within the robot’s field of view. When objects are part of an interaction, their names rise out of the list, and the predicted action is shown below the highlighted object. All processing is done on-line, in real time.
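The hand-object interaction test described above could be implemented along the following lines. This is a minimal sketch, assuming each tracked region is summarized by the center and axes of its fitted ellipse, and that an interaction is declared when the hand’s ellipse comes within a small margin of an object’s ellipse; the field names and the pixel threshold are illustrative assumptions, not the system’s actual parameters.

```python
# Minimal sketch (not the authors' implementation) of the hand-object
# "interaction" test: each tracked region is represented by its fitted
# ellipse's center and semi-axes, and an interaction is flagged when the
# hand's ellipse nearly touches an object's ellipse. The margin value and
# all names here are illustrative assumptions.
from dataclasses import dataclass
import math

@dataclass
class TrackedRegion:
    name: str      # e.g. "hand", "laptop"
    cx: float      # ellipse center, in pixels
    cy: float
    major: float   # semi-major axis length
    minor: float   # semi-minor axis length

def approx_radius(region: TrackedRegion) -> float:
    """Coarse circular approximation of the fitted ellipse."""
    return 0.5 * (region.major + region.minor)

def is_interacting(hand: TrackedRegion, obj: TrackedRegion,
                   margin_px: float = 15.0) -> bool:
    """Flag an interaction when the hand ellipse is within margin_px of the object ellipse."""
    dist = math.hypot(hand.cx - obj.cx, hand.cy - obj.cy)
    return dist <= approx_radius(hand) + approx_radius(obj) + margin_px

if __name__ == "__main__":
    hand = TrackedRegion("hand", cx=310, cy=240, major=40, minor=25)
    laptop = TrackedRegion("laptop", cx=350, cy=260, major=120, minor=80)
    bottle = TrackedRegion("bottle", cx=80, cy=100, major=30, minor=15)
    for obj in (laptop, bottle):
        print(obj.name, "interaction" if is_interacting(hand, obj) else "no interaction")
```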

 

Homework Scenario:  The system identifies when a person reaches toward a book, a bottle, a mouse, and a laptop. The book and laptop display changes in state (“open” or “closed”), while the remaining objects are found in only one state.

 

Eating Scenario:  The system identifies when a person reaches toward a pitcher, a glass, and a plate. The plate and glass display changes in state (“full” or “empty”), while the remaining object is found in only one state.

 

Fire Scenario:  The system identifies when a person reaches toward a bag of chips and a fire extinguisher. Both objects are found in only one state.
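As a rough illustration of how HMM-based behavior estimation might combine these cues, the sketch below decodes a sequence of per-frame observations (which object the hand is engaging, and that object’s detected state) into a most-likely action sequence using Viterbi decoding. The hidden states, observation symbols, and probabilities are placeholder assumptions loosely based on the eating scenario; they are not the model actually used by the system.

```python
# Minimal HMM sketch with assumed parameters: hidden action states,
# observation symbols, and probabilities below are illustrative placeholders,
# not the system's actual model. Observations stand in for the per-frame
# interaction and object-state cues described above.
import numpy as np

states = ["idle", "reach_for_glass", "pour_from_pitcher"]          # hypothetical actions
observations = ["no_interaction", "hand_near_glass", "hand_near_pitcher",
                "glass_full", "glass_empty"]                        # hypothetical cues

# Illustrative model parameters (each row sums to 1).
start_p = np.array([0.8, 0.1, 0.1])
trans_p = np.array([[0.7, 0.2, 0.1],
                    [0.3, 0.6, 0.1],
                    [0.3, 0.1, 0.6]])
emit_p = np.array([[0.6, 0.1, 0.1, 0.1, 0.1],
                   [0.1, 0.5, 0.1, 0.1, 0.2],
                   [0.1, 0.1, 0.5, 0.2, 0.1]])

def viterbi(obs_idx):
    """Return the most likely hidden-state path for a sequence of observation indices."""
    n_states = len(states)
    T = len(obs_idx)
    logp = np.log(start_p) + np.log(emit_p[:, obs_idx[0]])
    back = np.zeros((T, n_states), dtype=int)
    for t in range(1, T):
        scores = logp[:, None] + np.log(trans_p)   # previous state x next state
        back[t] = scores.argmax(axis=0)
        logp = scores.max(axis=0) + np.log(emit_p[:, obs_idx[t]])
    path = [int(logp.argmax())]
    for t in range(T - 1, 0, -1):
        path.append(int(back[t][path[-1]]))
    return [states[s] for s in reversed(path)]

if __name__ == "__main__":
    seq = [observations.index(o) for o in
           ["no_interaction", "hand_near_pitcher", "hand_near_pitcher", "glass_full"]]
    print(viterbi(seq))
```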
