In these days of social distancing, as millions cloister at home to binge-watch TV over the internet, Stanford researchers have unveiled an algorithm that demonstrates a significant improvement in streaming video technology.
The new algorithm, called Fugu, was developed with the help of volunteer viewers who watched streams of video served up by the researchers, while machine learning scrutinized the data flow in real time, looking for ways to reduce glitches and stalls.
In a scientific paper, the researchers describe how they created an algorithm that pushes out only as much data as the viewer's internet connection can receive without degrading quality.
“In streaming, avoiding stalls depends heavily on these algorithms,” says Francis Yan, a doctoral candidate in computer science and first author of the paper, which received the 2020 USENIX NSDI Community Award.
Many of the prevailing systems for streaming video are based on something called the Buffer-Based Algorithm, known as BBA, which was developed seven years ago by then-Stanford graduate student Te-Yuan Huang, along with professors Nick McKeown and Ramesh Johari.
BBA simply asks the viewer’s device how much video it has in its buffer. For example, if it has less than 5 seconds stored, the algorithm sends lower quality footage to guard against interruptions. If the buffer has more than 15 seconds stored, the algorithm sends the highest quality video possible. If the number falls in between, the algorithm adjusts the quality accordingly.
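As a rough illustration, here is a minimal sketch of that buffer-based logic. The 5- and 15-second thresholds come from the example above; the bitrate ladder, function name and linear interpolation are invented for illustration and are not the production BBA implementation.

```python
# Minimal sketch of a buffer-based bitrate chooser in the spirit of BBA.
# The bitrate ladder is invented for illustration; the 5 s and 15 s
# thresholds come from the example in the text.

QUALITY_LADDER_KBPS = [300, 750, 1500, 3000, 6000]  # lowest -> highest quality

def choose_bitrate(buffer_seconds: float,
                   reservoir: float = 5.0,
                   cushion: float = 15.0) -> int:
    """Pick a bitrate (kbps) using only the amount of video in the buffer."""
    if buffer_seconds <= reservoir:
        # Buffer nearly empty: send the lowest quality to guard against stalls.
        return QUALITY_LADDER_KBPS[0]
    if buffer_seconds >= cushion:
        # Plenty of video stored: send the highest quality available.
        return QUALITY_LADDER_KBPS[-1]
    # In between: scale quality with buffer occupancy.
    fraction = (buffer_seconds - reservoir) / (cushion - reservoir)
    index = round(fraction * (len(QUALITY_LADDER_KBPS) - 1))
    return QUALITY_LADDER_KBPS[index]

print(choose_bitrate(3.0))   # 300  -> protect against an interruption
print(choose_bitrate(10.0))  # 1500 -> mid-ladder quality
print(choose_bitrate(20.0))  # 6000 -> highest quality
```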
Although BBA and similar algorithms are widespread in the industry, there have been repeated attempts by researchers over the years to develop more sophisticated algorithms using machine learning — a form of artificial intelligence in which computers teach themselves to optimize some process.
But in a modern variation of the old garbage-in, garbage-out computer adage, these machine learning algorithms are generally trained on simulated data rather than on the real thing delivered over the real internet. Therein lies a problem.
“The internet turns out to be a much messier place than our simulations can model,” said Keith Winstein, an assistant professor of computer science who supervised the project and advised Yan along with associate professor of computer science and electrical engineering Philip Levis. “What Francis found is that there can be a gulf between making one of these algorithms work in simulation versus making it work on the real internet.”
To create a realistic microcosm of the TV-viewing world, Winstein’s team erected an antenna atop Stanford’s Packard Building to pull in free, over-the-air broadcast signals, which they then compressed and streamed to volunteers who signed up to participate in the research project, known as Puffer.
Starting in late 2018, the volunteers streamed and watched TV programs via Puffer and the computer scientists simultaneously monitored the data stream using their own machine learning algorithm, Fugu, and four other leading contenders, including BBA, that were trained to adjust their performance based on the actual quality conditions the viewers were experiencing.
At the start of their stream, each viewer was randomly assigned one of the five streaming algorithms and the Stanford team recorded streaming data like the average video quality, the number of stalls and the length of time the viewer tuned in.
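In rough terms, the trial resembled a randomized A/B test. The sketch below is a hypothetical illustration of that bookkeeping, not the Puffer codebase: Fugu and BBA are named in the article, the other entries are placeholders, and every function and field name here is invented.

```python
import random
from dataclasses import dataclass

# Hypothetical sketch of the randomized-trial bookkeeping described above.
# "Fugu" and "BBA" are named in the article; the other entries are
# placeholders, and all names below are invented for illustration.
ALGORITHMS = ["Fugu", "BBA", "contender_3", "contender_4", "contender_5"]

@dataclass
class SessionRecord:
    algorithm: str            # which of the five algorithms served this stream
    quality_sum: float = 0.0  # running sum of per-chunk quality scores
    chunk_count: int = 0
    stall_count: int = 0
    watch_time_seconds: float = 0.0

def start_session() -> SessionRecord:
    """Assign the viewer one of the five algorithms uniformly at random."""
    return SessionRecord(algorithm=random.choice(ALGORITHMS))

def log_chunk(record: SessionRecord, quality: float,
              stalled: bool, seconds: float) -> None:
    """Accumulate per-chunk measurements for this viewing session."""
    record.quality_sum += quality
    record.chunk_count += 1
    record.stall_count += int(stalled)
    record.watch_time_seconds += seconds

def summarize(record: SessionRecord) -> dict:
    """Report the metrics mentioned in the article for one session."""
    return {
        "algorithm": record.algorithm,
        "avg_video_quality": record.quality_sum / max(record.chunk_count, 1),
        "stalls": record.stall_count,
        "watch_time_seconds": record.watch_time_seconds,
    }
```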
The results disagreed with some earlier research studies that had been based on simulations or on smaller tests. When the supposedly sophisticated machine learning algorithms were tested against BBA in the real world, the simpler standard held its own. By the end of the trial, however, Fugu had outperformed the other algorithms, including BBA, in terms of least time spent stalled, highest image resolution and most consistent video quality. What’s more, those improvements appear to have the power to keep viewers tuned in: viewers watching Fugu-fed video streams lingered an average of 5 to 9 percent longer than those watching streams served by the other tested algorithms.
“We’ve found some surprising ways in which the real world differs from simulation, and how machine learning can sometimes produce misleading results. That’s exciting in that it suggests a lot of interesting challenges to be solved,” Winstein says.
###
Source: https://www.eurekalert.org/pub_releases/2020-04/ssoe-csc042120.php