When the composer and vocalist Jen Wang took the stage at the Monk Space in Los Angeles to perform Alvin Lucier’s “The Duke of York” (1971) earlier this year, she sang with a digital rendition of her voice, synthesized by artificial intelligence.
It was the first time she had done that.
“I thought it was going to be really disorienting,” Wang said in an interview, “but it felt like I was collaborating with this instrument that was me and was not me.”

Isaac Io Schankler, a composer and music professor at Cal Poly Pomona, conceived the performance and joined Wang onstage to monitor and manipulate Realtime Audio Variational autoEncoder, or R.A.V.E., the neural audio synthesis algorithm that modeled Wang’s voice.
R.A.V.E. is an example of machine learning, a specific category of artificial intelligence technology that musicians have experimented with since the 1990s — but one now defined by rapid development, the arrival of publicly available, A.I.-powered music tools and the dominating influence of high-profile initiatives by large tech companies.