The 2020 Joint Conference on AI Music Creativity

October 19-23, 2020, organized and hosted virtually by the Royal Institute of Technology (KTH), Stockholm, Sweden

Selected works

  1. Champernowne: “Music from EDSAC” (circa 1960)
  2. Ben-Tal: “Notes for a future self”
  3. Laidlow: “Alter” for mezzo-soprano and ensemble
  4. Collaborative Electroacoustic Composition with Intelligent Agents (CECIA)
  5. Kokoras: “AI Phantasy”
  6. Coelho: “Music Transformer and DDSP Etude”
  7. Hayes: “Moon via Spirit” for live electronics
  8. Lopez: “The Journey”
  9. Frisk: “pvm”

D. G. Champernowne: “Music from EDSAC” (circa 1960)

This string quartet arose from two computer programs written by David G. Champernowne, Professor of Economics at Cambridge University, around 1960. One program harmonized melodies in the style of Victorian hymns; the other generated serial-style music. A performance and recording of the string quartet were reportedly made by Lejaren A. Hiller in the 1960s, but both have been lost.

D. G. Champernowne Music from EDSAC


Sincere gratitude to the Champernowne family for permission to record this piece and include it in this conference.



Oded Ben-Tal: “Notes for a future self”

This work was composed using deep learning tools, specifically folk-rnn and Magenta. Both are deep learning models of symbolic music, which I used in the composition process to generate material that I then transformed and adapted. Most of the material given to the percussionists came out of Magenta models. I experimented with both the melody-generation and drum-pattern-generation modes and discovered that the most interesting results came from confusing the two: seeding the model with a melodic fragment but setting it to generate drum patterns, or the other way round. The sequences generated in these ways were further transformed by mapping them onto groups of percussion instruments. The player is provided with guidelines about the composition of the set but is free to construct their own percussion set. Both the flute and the clarinet have extended solo moments in the piece, whose melodic material was generated by folk-rnn and further extended with Magenta. This material was again transformed in the composition, away from the folk idiom of its origin.
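The cross-seeding trick described above can be sketched with Magenta's Python API. What follows is a minimal illustration, not the composer's actual code: it disguises a short melodic fragment as a drum primer and asks a pretrained DrumsRNN bundle to continue it as a drum pattern. The melody, the pitch-folding scheme, the timing grid, and the local path to the drum_kit_rnn.mag bundle are all assumptions made for the example.

    # Sketch: seed Magenta's drum-pattern model with a *melodic* fragment.
    # Illustrative only; the pitch-to-drum mapping below is an assumption.
    import note_seq
    from magenta.models.drums_rnn import drums_rnn_sequence_generator
    from magenta.models.shared import sequence_generator_bundle
    from note_seq.protobuf import generator_pb2

    melody = [60, 62, 64, 62, 67, 64]  # a short melodic fragment (MIDI pitches)

    # Disguise the melody as a drum primer: flag each note as a drum hit and
    # fold its pitch into the General MIDI percussion range (35-81).
    primer = note_seq.NoteSequence()
    primer.tempos.add(qpm=120)
    for i, pitch in enumerate(melody):
        primer.notes.add(
            pitch=35 + (pitch % 47),  # crude fold into the GM drum range
            start_time=i * 0.25,
            end_time=(i + 1) * 0.25,
            velocity=90,
            is_drum=True,
        )
    primer.total_time = len(melody) * 0.25

    # Load the pretrained drum-pattern model from its bundle file.
    bundle = sequence_generator_bundle.read_bundle_file('drum_kit_rnn.mag')
    drums_rnn = drums_rnn_sequence_generator.get_generator_map()['drum_kit'](
        checkpoint=None, bundle=bundle)
    drums_rnn.initialize()

    # Ask the model to continue the "drum" primer for another eight seconds.
    options = generator_pb2.GeneratorOptions()
    options.args['temperature'].float_value = 1.1
    options.generate_sections.add(start_time=primer.total_time,
                                  end_time=primer.total_time + 8.0)
    drum_pattern = drums_rnn.generate(primer, options)
    note_seq.sequence_proto_to_midi_file(drum_pattern, 'melody_seeded_drums.mid')

The reverse direction Ben-Tal mentions would prime a melody model (for example Magenta's MelodyRNN) with a sequence derived from a drum pattern in the same way.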

This is the third piece I have composed using machine learning tools, following Bastard Tunes (2017) and Between the Lines (2018). In each piece the interaction between my own creative ideas and the machine learning system is different, but some general themes are emerging. The default output of these systems is mostly useless. The initial phase involves a significant amount of learning, on my part, about their supposed learning. Interesting results start to appear when I learn how to subvert the model away from its training set. Starting the co-creative process with strong ideas about what I want to get out of the model is not going to work: if I know what I need, I should either create it myself or program it directly.

Oded Ben-Tal Notes for a future self

Oded Ben-Tal is a composer and researcher working at the intersection of music, computing, and cognition. His compositions include acoustic pieces, works combining instruments with electronics, and multimedia works. Since 2016 he has been working on a multidisciplinary research project applying state-of-the-art machine learning to music composition. He is a senior lecturer in the Performing Arts Department at Kingston University.



Robert Laidlow: “Alter” for mezzo-soprano and ensemble

Alter for mezzo-soprano and ensemble utilises several generative machine learning algorithms as collaborative and interactive tools in the compositional process. In some sense, it is a field test of these algorithms and their place within the creative process: how adaptable they are to a specific project, what effect including them has on the composer and performers, and how an audience versed in contemporary music might respond to them. It combines models in the symbolic-generative (MuseNet), audio-generative (WaveNet), and text-generative (WordRNN and GPT-2) domains to create a musical structure defined by the machine learning process, in which the results of neural networks at different stages of training sit on a spectrum between being showcased without alteration and being radically transformed by the composer.

Robert Laidlow Alter




Collaborative Electroacoustic Composition with Intelligent Agents (CECIA)

CECIA is an innovative music project that integrates the creative agency of five composers and machine learning algorithms, leading to the creation of a unique composition of electroacoustic music. The project explores collaborative music creation, harnessing the creativity of electroacoustic composers and intelligent agents through an online platform. The collaborative process was conducted remotely and iteratively: the composers anonymously submitted and evaluated sound material, ideas, and suggestions; these data were used to train the machine learning algorithms, which generated new sonic structures that were in turn fed back to the composers as suggestive material. The project implements a synergistic framework between humans and algorithms, introducing a novel experimental sound practice for the creation of electroacoustic music.


The CECIA team is composed of internationally active composers, sound artists, and researchers who worked remotely on the project in 2019. The project was organized by ZKM within the framework of the »Interfaces« project, with the support of the Creative Europe program of the European Union.




Panayiotis Kokoras: “AI Phantasy”

AI Phantasy was composed at the GRIS multichannel studio at the University of Montreal in Quebec, Canada; at the MEIT sound-dome theater of the Center for Experimental Music and Intermedia, University of North Texas; and in my home studio. One of the main sound-producing mechanisms in the piece is a vacuum cleaner.

Panayiotis Kokoras AI Phantasy




Guilherme Coelho: “Music Transformer and DDSP Etude of Composition and Digital Performances”

This work explores the use of the Music Transformer and DDSP models to introduce compositions, sound objects, and performances into the practice of human–computer music. It follows work on machine musicianship by researchers such as David Cope and George E. Lewis, and explores AI as an instigator of compositional and performance explorations. This practice-based research takes compositional repertoires from the Music Transformer model as a formalised, structural instigator to which the author gives form. These pieces are transformed, recombined, and curated by the author in a fundamentally aesthetic and contextualising manner. They are then used as input structures, translated into tenor saxophone, trumpet, violin, and flute performances rendered by the DDSP model, and explored further to form new pieces. My role in this practice is that of curator and producer, providing behaviours, arrangements, and context to these pieces, turning the scores and audio outputs into sonic explorations and performances through computational means.



Lauren Hayes: “Moon via Spirit” for live electronics

This piece was commissioned as part of the Fluid Corpus Manipulation (FluCoMa) project at the University of Huddersfield. The project studies how creative coders and technologists work with and incorporate new digital tools for signal decomposition and machine learning in novel ways. In this piece, I explore these tools through an embodied approach to segmentation, slicing, and layering of sound in real time. Using the FluCoMa toolkit, I was able to incorporate novel machine learning techniques in Max/MSP for exploring large corpora of sound files. Specifically, the work uses machine learning, among other relevant AI techniques, to train on my preferences, to sort and select sounds by audio descriptors, and to concatenate percussion sounds from a large collection of samples.
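Hayes works with the FluCoMa objects inside Max/MSP; the patch itself is not reproduced here. As a rough illustration of one strand of the workflow she describes, the Python sketch below sorts a corpus of percussion samples by a single audio descriptor (spectral centroid) and concatenates the result. It uses librosa and soundfile in place of FluCoMa, and the folder name, sample rate, and descriptor choice are assumptions made for the example.

    # Sketch: sort a sample corpus by an audio descriptor, then concatenate.
    # A Python analogue of one FluCoMa-style operation, not Hayes's patch.
    import glob
    import librosa
    import numpy as np
    import soundfile as sf

    corpus = []
    for path in glob.glob('percussion_samples/*.wav'):  # assumed folder name
        y, sr = librosa.load(path, sr=48000, mono=True)
        # Describe each sample by its mean spectral centroid ("brightness").
        centroid = float(np.mean(librosa.feature.spectral_centroid(y=y, sr=sr)))
        corpus.append((centroid, y))

    # Sort the corpus from dark to bright and splice the samples end to end.
    corpus.sort(key=lambda item: item[0])
    sequence = np.concatenate([y for _, y in corpus])
    sf.write('concatenated_dark_to_bright.wav', sequence, 48000)

In the FluCoMa toolkit the analogous steps run inside Max via its analysis and dataset objects, with the preference-trained model doing the selection in real time.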

Lauren Hayes Moon via Spirit

Lauren Hayes is a Scottish musician and sound artist who builds hybrid analogue/digital instruments and unpredictable performance systems. As an improviser, her music has been described as ‘voracious’ and ‘exhilarating’. Her research explores embodied music cognition, enactive approaches to digital instrument design, and haptic technologies. She is currently Assistant Professor of Sound Studies within the School of Arts, Media and Engineering at Arizona State University, where she leads PARIESA (Practice and Research in Enactive Sonic Art). She is a Director-at-Large of the International Computer Music Association and a member of the New BBC Radiophonic Workshop.



Alvaro Lopez: “The Journey”

The Journey results from Alvaro E. Lopez’s real-time performance on AMG (Algorithmic Music Generator), a Max patch that generates music adaptively. It illustrates several techniques described in Lopez, “Algorithmic Interactive Music Generation in Videogames”, SoundEffects 9(1), 2020.

Alvaro Lopez The Journey




Henrik Frisk: “pvm”

pvm is an improvisation based on interactions with the Vietnamese master musician Pham Van Mon. These interactions took place on numerous trips to the southern parts of Vietnam, online in virtual-presence sessions, and through sending material back and forth. The material has been further developed in online performances and concerts in Sweden and in Hanoi. The piece is part of Transformations, an artistic research project that investigates the impact of musical traditions in transformation and involves the Vietnamese-Swedish group The Six Tones along with several other collaborators, including Pham Van Mon.

Henrik Frisk pvm

