American football coaches and players can tell you that manually combing through hours of game footage is a tedious process.
The Kansas City Chiefs and Philadelphia Eagles, for example, will most likely spend many hours in the next few days poring over film to study plays, formations and weaknesses before facing off in Super Bowl LVII on Sunday, Feb. 12.
New artificial intelligence technology being developed by Brigham Young University engineering professor D.J. Lee, master’s student Jacob Newman and Ph.D. students Andrew Sumsion and Shad Torrie, however, could help automate the process of analyzing and annotating football game footage.
The deep-learning algorithm focuses on detecting player locations, labeling each player with their position (quarterback, safety, etc.) and identifying the offensive formation — a process that currently requires multiple video assistants.
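The paper's actual pipeline is not reproduced in the article, but the final step it describes — mapping detected player positions to a formation label — can be sketched in simplified form. The function and thresholds below are illustrative assumptions, not the researchers' method, and they assume player coordinates have already been extracted from the footage by an upstream detector:

```python
from dataclasses import dataclass

@dataclass
class Player:
    """A detected offensive player, in field coordinates.

    x: yards relative to the line of scrimmage (negative = behind it)
    y: lateral offset from the center of the formation
    """
    x: float
    y: float

def classify_offense(players):
    """Toy formation classifier over detected player positions.

    Counts players lined up deep in the backfield and receivers split
    out wide, then picks a label. The thresholds (1.5 yards deep,
    10 yards wide) are made up for illustration only.
    """
    backs = [p for p in players if p.x < -1.5]
    wide = [p for p in players if abs(p.y) > 10]
    if len(backs) >= 2:
        return "I-formation"
    if len(wide) >= 3:
        return "spread"
    return "pro-set"
```

A real system would feed the output of a player-detection network into a classifier trained on annotated formations, rather than hand-written rules like these.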
The BYU algorithm — detailed in the article “Automated Pre-Play Analysis of American Football Formations Using Deep Learning” recently published in an issue of “Advances of Artificial Intelligence and Vision Applications in Electronics” — could also have applications in other sports, Lee and Newman reported in a BYU news release.
“Once you have this data there will be a lot more you can do with it; you can take it to the next level,” Lee said. “Big data can help us know the strategies of this team or the tendencies of that coach. It could help you know if they are likely to go for it on fourth down and two, or if they will punt. The idea of using AI for sports is really cool, and if we can give them even 1% of an advantage, it will be worth it.”