Automated sports cameras are watching human-controlled cameras to learn how to better follow the action. Automated sports cameras are less adept at filming soccer because player formations yield less information about where the ball is most likely to go next. Photo by Dziurek/Shutterstock
PITTSBURGH, June 21 (UPI) -- For mammals, including humans, much of learning is mimicry. The same goes for computers and robots.
Computer engineers with Disney Research and the California Institute of Technology are improving the performance of automated cameras by having them watch and mimic the moves of human sports camera operators.
By watching humans film basketball and soccer games, researchers are teaching automated cameras to follow the action more smoothly and recover from mistakes with grace.
"Having smooth camera work is critical for creating an enjoyable sports broadcast," Peter Carr, senior research engineer at Disney Research, said in a news release. "The framing doesn't have to be perfect, but the motion has to be smooth and purposeful."
"This research demonstrates a significant advance in the use of imitation learning to improve camera planning and control during game conditions," added Jessica Hodgins, vice president at Disney Research. "This is the sort of progress we need to realize the huge potential for automated broadcasts of sports and other live events."
Currently, automated cameras don't follow the ball. They do their best to follow the action of a match by analyzing the movement of players and anticipating where the ball will travel and how the competition will unfold. When the game's flow doesn't match the anticipatory movement of the camera, the broadcast can appear jerky.
Researchers are trying to make the software that governs automated sports cameras more aware of its flaws by having it follow along with human-operated cameras. Scientists programmed the software to analyze how and why its algorithm-dictated movement deviates from the motions made by human-controlled cameras.
In doing so, researchers hope to create an automated camera that more evenly balances the need for accurate framing with the need for smooth motion.
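The trade-off described above can be sketched in code. This is a minimal illustration, not Disney Research's actual system: it assumes a hypothetical per-frame model that predicts a camera pan angle from player positions, smooths those predictions over time, and scores them against a human operator's angles with a loss that penalizes both framing error and jerky frame-to-frame motion. All function names and parameters here are illustrative.

```python
def predict_pan(player_xs, weights, bias):
    """Hypothetical per-frame model: a weighted sum of player x-positions
    stands in for whatever features a real planner would use."""
    return bias + sum(w * x for w, x in zip(weights, player_xs))

def smooth(raw_angles, alpha=0.2):
    """Exponential smoothing: trades framing accuracy for smoother motion.
    Lower alpha follows the raw prediction less aggressively."""
    smoothed = [raw_angles[0]]
    for angle in raw_angles[1:]:
        smoothed.append(alpha * angle + (1 - alpha) * smoothed[-1])
    return smoothed

def imitation_loss(predicted, human, lam=1.0):
    """Score a camera trajectory against the human operator's:
    framing error plus a smoothness penalty, weighted by lam."""
    framing = sum((p - h) ** 2 for p, h in zip(predicted, human))
    motion = sum((predicted[i + 1] - predicted[i]) ** 2
                 for i in range(len(predicted) - 1))
    return framing + lam * motion
```

Training such a model amounts to adjusting the weights so that the loss against recorded human camera work is minimized, which is the imitation-learning idea the researchers describe.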
Carr teamed up with researchers at Caltech to design and implement the learning software. The scientists are scheduled to present their findings this week at the IEEE Conference on Computer Vision and Pattern Recognition in Las Vegas.