The 2020 Joint Conference on AI Music Creativity was entirely virtual and live-streamed.
Various convincing evolutionary theories across the sciences and humanities posit a central role for musicking in the evolution of the social, biological and technical patterns of modern humanity. In our contemporary milieu, one of existential, ecological and other crises, musical experimentation plays a similarly critical role, providing a free and fertile space for making sense of the world, ourselves and our emerging technologies. In this talk I’ll share a range of projects, from singing cybernetic homeostats and ecosystemic evolutionary art through feedback cellos to acoustic biodiversity assessment, to illustrate some of the ways that techno-musicking plays a creative role in ways of knowing and making our selves and our worlds.
Dr. Alice Eldridge is interested in how sound organises systems. Her research cross-fertilises ideas and methods from music, cognitive science, technology & ecology to develop biologically inspired approaches to new musical instruments, and acoustic methods for conservation, within the emerging science of ecoacoustics. She holds a BSc in Psychology, an MSc in Evolutionary and Adaptive Systems and a PhD in Computer Science and AI, and is currently Lecturer in Music Technology at the University of Sussex, where she is Co-Director of the Sussex Humanities Lab. Alice has appeared on BBC TV and BBC Radio 4 as a soundscape ecologist; on BBC Radio 3 as a free jazz cellist; on BBC 6 Music as a contemporary chamber composer; and on BBC Radio 1’s John Peel show as a pop bassist. www.ecila.org
Analysis by synthesis is a classical tool in many areas of research. I will talk about how I have enjoyed using it in three research areas. In one, Björn Lindblom and I applied it in a study of the compositional style of nursery tunes composed by the Swedish composer Alice Tegnér at the beginning of the 20th century. A second application was music performance, where the musician Lars Frydén, Anders Friberg and I developed a digital performance grammar that controlled a synthesizer; the synthesizer thus performed music examples in accordance with a set of performance rules. A third area has been the singing voice, the purpose being to explore how various acoustic properties affect the sound of a singer’s voice.
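To make the idea of a performance grammar concrete, here is a minimal sketch of a single performance rule, assuming a note format of (MIDI pitch, duration) pairs and a rule strength parameter k; this is an illustrative invention, not Sundberg and colleagues’ actual rule system:

```python
# Minimal sketch of one rule in a rule-based performance grammar.
# Illustrative only: the rule, its parameter k and the note format
# are assumptions, not the KTH rule system itself.

def duration_contrast(notes, k=0.1):
    """Exaggerate contrasts between note values: lengthen notes that
    are longer than average and shorten the rest."""
    mean = sum(d for _, d in notes) / len(notes)
    return [(pitch, d * (1 + k) if d > mean else d * (1 - k))
            for pitch, d in notes]

# A nominal score as (MIDI pitch, duration in beats) pairs.
score = [(60, 0.5), (62, 0.5), (64, 1.0), (67, 2.0)]
print(duration_contrast(score))
```

In an analysis-by-synthesis loop, a parameter such as k is tuned by synthesizing the result, listening, and comparing the rendering with musicians’ judgements.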
Johan Sundberg (born in 1936; PhD in musicology, Uppsala University, 1966; doctor honoris causa, University of York, UK, 1996) holds a personal Chair (Emeritus) in Music Acoustics at the Department of Speech, Music and Hearing, KTH Royal Institute of Technology, Stockholm. He became interested early in the acoustical aspects of music, starting with a doctoral dissertation on organ pipes. Since the dissertation, the singing voice and music performance have been his main research topics. He led the music acoustics research group from 1970 to 2004. In Musikens Ljudlära (The Science of Musical Sounds) Sundberg presents music acoustics in popularized form for the interested layman. In Röstlära (The Science of the Singing Voice) he explains functional anatomy as well as vocal acoustics. Both texts have been seminal works in the field of voice science.
Improvisation can be seen as a major driving force in human interactions, strategic in every aspect of communication and action. In its highest form, improvisation is a mixture of structured, planned, directed action and of hardly predictable local decisions and deviations that optimize adaptation to the context, express the creative self in a unique way, and stimulate coordination and cooperation between agents. Setting up powerful and realistic human-machine environments that allow co-improvisation requires going beyond state-of-the-art signal processing or reactive systems. We have proposed the expression “co-creativity between human and artificial agents” to emphasize that creativity is a phenomenon of emergence, resulting from cross-feedback and cross-learning processes between complex agents (human, artificial) in interaction. This neutralizes the endless question (and philosophical aporia) of whether artificial entities can be qualified as “creative” by themselves, and shifts the research interest to what is, in our view, a more useful approach: promoting the conditions of co-creative emergence in cyber-human encounters and putting the musician in control of this “machine musicianship”.
AI composition has come a long way, and the technology is starting to write convincing music. But to what end? Technology serves little purpose if it isn’t used. In this talk, we look at the range of products that AI composition technology has been applied to, what some of the pitfalls are when moving from research to product development in this nascent field, and what to think about when designing AI composition products.
How can ML augment how we interact with music? In this talk, I’ll take us on a tour through a series of research projects at Magenta to illustrate how we can design generative models to make music more interactive, accessible and fun for novices, and how these models can also help extend current musical practices for musicians and enable new musical experiences.
Many AI applications in music rely on music-theoretical models, in the sense of musical concepts and ideas about how music works, that are not always made explicit, discussed or questioned. The choice of representations of music and sound, as well as the selection of data for training models, can greatly influence models and their creative output. Here we will give some examples of problems that arise when modelling basic musical features, drawing on the experience of developing automatic music notation.
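As a small hedged illustration of how much the representation alone matters, the sketch below encodes the same invented four-note figure in two common symbolic forms (both the melody and the encodings are assumptions made for this example, not material from the talk):

```python
# Two symbolic encodings of the same four-note figure (an invented
# example). What a model sees, and hence what it can learn, differs
# markedly between the two representations.

# Event-based: (MIDI pitch, duration in sixteenth notes) tuples keep
# each note as one token but leave metrical position implicit.
events = [(60, 4), (62, 2), (64, 2), (65, 8)]

# Piano-roll: one slice per sixteenth note makes meter explicit but
# turns one long note into a run of repeated symbols.
roll = [pitch for pitch, dur in events for _ in range(dur)]

print(events)  # 4 tokens
print(roll)    # 16 tokens: [60, 60, 60, 60, 62, 62, ...]
```

A model trained on the event list and one trained on the roll face quite different learning problems, even though the underlying music is identical.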
Attention, AI music researcher! Are you trapped in the ivory tower doing 100% research and 0% music? Come down and join the fun. Get a glimpse into what it’s like being a Bot Band circa 2020: neural synthesis, SoundCloud bots, viral livestreams, music videos, art exhibits, hackathons, tours, and legal hurdles. You will leave feeling inspired to cold-email your favorite musicians about high-tech collaboration.
Decades of discovery in the field of artificial intelligence have prioritized intelligence as the lodestar of research and development. What if folkloristic authenticity were substituted as the guiding goal of our collective labours? In this playful performance, involving both storytelling and song, Henry Adam Svec will explore this and other questions through the recounting of a most unlikely scenario: the time when, in Dawson City, Yukon, he co-invented the world’s first artificially intelligent database of Canadian folksong.
This talk will highlight AI algorithms applied to musical analysis, deep learning approaches to generative music, and the application of COVID-19 data sets to musical composition.
Over the last few years Holly Herndon has developed music for her ensemble of human and inhuman vocalists. In this artist talk, she will discuss her unique approach to machine learning and explore its limitations, as well as the opportunities it might reveal for a 21st-century approach to music.
Published with ISBN 978-91-519-5560-5
MON OCT 19 16-18 CEST (Chair: Róisín Loughran)
TUE OCT 20 10-12 CEST (Chair: David Meredith)
TUE OCT 20 16-18 CEST (Chair: Shlomo Dubnov)
Carmine-Emanuele Cella, Luke Dzwonczyk, Alejandro Saldarriaga-Fuertes, Hongfu Liu and Hélène-Camille Crayencour. A Study on Neural Models for Target-Based Computer-Assisted Musical Orchestration [pdf] [video]
WED OCT 21 10-12 CEST (Chair: Steven Jan)
WED OCT 21 16-18 CEST (Chair: Jean-Pierre Briot)
THU OCT 22 10-12 CEST (Chair: Ollie Bown)
THU OCT 22 16-18 CEST (Chair: Robin Laney)
WIP Manos Plitsis, Kosmas Kritsis, Maximos Kaliakatsos-Papakostas, Aggelos Pikrakis and Vassilis Katsouros. Towards a Classification and Evaluation of Symbolic Music Encodings for RNN Music Generation [pdf] [video]
FRI OCT 23 16-18 CEST (Chair: Philippe Pasquier)
WIP Darrell Conklin and Geert Maessen. Aspects of pattern discovery for Mozarabic chant realization [pdf]