University of Illinois Professor’s Expertise in Machine Learning for Audio Benefits Creation of New Beatles Documentary


From the first time he used a synthesizer, Illinois Computer Science professor Paris Smaragdis knew that he wanted to learn how technology could make or alter music.

What’s followed is a career in academia centered on one artificial intelligence research question: What does it mean to take a stream of sound and break it down into its individual components? The answers he found over the years helped him produce widely published research and more than 40 patents. But nothing he’s accomplished has been more “mind-bending” than the work he recently completed with a team of engineers to boost the audio quality of director Peter Jackson’s documentary “The Beatles: Get Back.”

“I remember growing up as a kid, listening through Beatles cassettes while I sat in the backyard. I began to understand that The Beatles weren’t just a big deal because of the music. They were light years ahead of others because of the music production,” Smaragdis said. “Being in a position in which I could not only see how they produced their music, but also deconstruct it and undo the mixing, was definitely a mind-bending experience.”

Smaragdis heard from the engineering team working on the documentary, who asked if he could help clean up a treasure trove of old Beatles audio. The original audio came mainly from a single microphone that picked up an entire room full of musicians, producers, friends, and collaborators. The filmmakers wanted to extract the speech and music from these recordings to properly portray the development of an iconic body of music for today’s audiences.

“That first phone call started with them asking me if this could even be done. I told them that if they tried it 10 years ago, I would say there was no way it was possible. But we’ve had several great advancements over these years that, I believed, could make this possible,” Smaragdis said. 

The entire process took the team nine months to complete.

Most of this work leaned on the research developments Smaragdis noted from the last 10 years. Before then, computers struggled to make sense of complex signals such as music, where multiple things happen simultaneously. More recently, the data-oriented methods Smaragdis works with have tapped machine learning models to build more sophisticated systems, now good enough to deliver seamless results.
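The article doesn’t detail the documentary’s pipeline, but the separation question Smaragdis describes has a classical illustration from his own research area: non-negative matrix factorization (NMF) of a spectrogram, which explains a recording as a sum of spectral building blocks and their activations over time. The sketch below is a minimal, hypothetical example of that idea; the file name, component count `K`, and iteration budget are assumptions for illustration, not details of the film’s actual system.

```python
import numpy as np
import librosa

# Hypothetical sketch of spectrogram-domain source separation via
# non-negative matrix factorization (NMF). The file name, component
# count K, and iteration budget are illustrative assumptions.
y, sr = librosa.load("room_recording.wav", sr=None, mono=True)

D = librosa.stft(y, n_fft=2048, hop_length=512)   # complex spectrogram
S = np.abs(D)                                     # magnitude
phase = np.exp(1j * np.angle(D))                  # keep the mixture's phase

# Factor S ~= W @ H: W holds K spectral templates (e.g., a voice's
# timbre, a guitar's partials), H holds their activations over time.
K, n_iter, eps = 8, 200, 1e-10
rng = np.random.default_rng(0)
W = rng.random((S.shape[0], K)) + eps
H = rng.random((K, S.shape[1])) + eps

for _ in range(n_iter):
    # Standard multiplicative updates for the Euclidean NMF objective.
    H *= (W.T @ S) / (W.T @ W @ H + eps)
    W *= (S @ H.T) / (W @ H @ H.T + eps)

# Resynthesize each component with a soft (Wiener-style) mask over the
# mixture, reusing the original phase.
total = W @ H + eps
sources = [
    librosa.istft(np.outer(W[:, k], H[k]) / total * S * phase,
                  hop_length=512, length=len(y))
    for k in range(K)
]
```

In the modern data-driven systems the article alludes to, learned neural models take the place of these fixed templates, but the goal is the same: explain one microphone’s mixture as a sum of identifiable parts.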

“This was a large team, and I only worked on a narrow aspect of a system called ‘Mal,’” Smaragdis said. “My focus was on how we could get all of the sounds we encountered separated and identifiable. We could identify Paul (McCartney) speaking over here, or George (Harrison) speaking over there.”
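The article doesn’t describe how “Mal” labels voices once they are separated, but the “identifiable” half of the task can be sketched simply: summarize each separated speech segment with a compact feature vector and cluster the vectors so each cluster maps to one speaker. Everything below, from the file names to the MFCC features and k-means clustering, is an assumption chosen for a short, self-contained illustration.

```python
import numpy as np
import librosa
from sklearn.cluster import KMeans

# Hypothetical sketch: group already-separated speech segments by voice.
# File names, feature choice, and cluster count are illustrative
# assumptions, not the documentary's actual method.
def segment_embedding(path, sr=16000):
    y, _ = librosa.load(path, sr=sr, mono=True)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=20)
    # Summarize the segment as the mean and spread of its MFCCs.
    return np.concatenate([mfcc.mean(axis=1), mfcc.std(axis=1)])

segments = ["seg_001.wav", "seg_002.wav", "seg_003.wav"]  # placeholder clips
X = np.stack([segment_embedding(p) for p in segments])

# Cluster the embeddings; ideally each cluster corresponds to one speaker.
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
for path, label in zip(segments, labels):
    print(f"{path} -> speaker {label}")
```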

In the end, the months-long process resulted in a clean sound worthy of the documentary and the monumental band.
