People have always used new technology to experiment with new forms of music creation. Recent developments in artificial intelligence (AI) suggest, however, that machines are on the verge of being viewed as more than mere tools—they are increasingly seen as co-creators. A major shift in public perception is already underway in the text and image domains, and there is little reason to doubt that a similar shift is imminent within sound and music. Historically, music technology research has tended to focus on the technical aspects of development; accordingly, the canon of computer music literature is heavily weighted towards describing new technologies and their implications from a tool-oriented perspective. Now that it is becoming increasingly accepted to ascribe creative agency to technology itself, we wish to shift the focus away from the technology and towards the musicians influenced by it.
Our vantage point is Co-Creative Spaces—an artistic research project that followed four musicians through a six-month collaborative process. The musicians created music by improvising with each other and with computer-based musical agents that had been trained, through machine learning, to improvise in the style of the musicians. We wanted to explore what happens to musical co-creation when AI is included in the creative cycle. The musicians in the project are from Norway and Kenya—two countries with fundamentally different musical traditions. This gave us an opportunity to examine how the collaboration could be affected by cultural biases inherent in the technology and in the musicians themselves. In an earlier publication (Thelle & Wærstad, 2023), these questions were examined through focus groups held during two five-day workshops in 2021–22. The analysis in that study revealed that the musicians moved between an understanding of machine as tool and machine as co-creator, and between the idea of music as object and music as process. These interpretative repertoires were used interchangeably and painted a complex picture of what it is like to stand at the intersection of different musical and cultural paradigms. In the study covered in this article, we gave the musicians an opportunity, some ten months after the practical part of the project had ended, to reflect upon how it had influenced their musicianship and impacted their views on creativity, ownership, cultural exchange, and technology in general. We analyse these retrospections with the goal of informing the overarching theme of reassessing the history of technology-based music.
Keywords: Music AI, co-creativity, human-computer interaction, improvisation, creative computing.
As ever more impressive AI technologies are introduced, most of the attention is dedicated to the generative capabilities of the models, and how authentically they can replicate various genres of music. However, there is relatively little research on how the creative process itself may be affected by shifting the perspective and letting the machine act as a musical co-creative partner. What happens to musical co-creation when AI is included in the creative cycle? Co-Creative Spaces was a research project that took this question as a vantage point and followed four musicians through a six-month musical collaboration that resulted in two concert performances in 2022. The musicians created new music through interaction with each other and with virtual AI collaborators. We refer to these virtual collaborators as musical agents, defined as algorithmic entities that fully or partially perform creative music tasks (Tatar & Pasquier, 2018). The musical agents in Co-Creative Spaces were created by using machine learning on previous recordings of the musicians improvising with each other and could thus imitate the style of the musicians in the group. In addition to focusing on technology’s role in the creative process, Co-Creative Spaces also had an intercultural dimension involving musicians from countries with different music traditions (Norway and Kenya). Therefore, we directed a critical lens at cultural biases manifest both in the technology and among the musicians themselves.
This is the second publication about the Co-Creative Spaces project. The first (Thelle & Wærstad, 2023) focused on two five-day workshops during which the musicians co-created music with the musical agents. Analysis of a series of focus group interviews conducted as part of the workshops showed how the musicians oscillated between traditional and new understandings of music, creativity, and culture under the influence of technology’s dual role as tool and co-creator. The current publication is retrospective: ten months after the project’s conclusion, we asked the musicians to reflect on the implications of AI in music after having experienced human-computer co-creativity first-hand. Through this retrospective lens, we see that the use of new technology has long-lasting implications that take time to unfold fully. We argue that longitudinal studies are necessary to understand how technology and practice mutually influence one another. In a brave new world where technology itself is edging towards an appearance of creative agency, this is a perspective we should strive to uphold.
2.1 Co-creative musical agents
Although machine-generated music has a long history that stretches back several centuries to experiments with mechanical musical automata (Koetsier, 2001), musical co-creativity is a more recent phenomenon. Here, we define co-creativity as a phenomenon that occurs in collaborative contexts where both humans and machines contribute to a process or product that is considered creative (Jordanous, 2017). Early examples of co-creative interactive musical agents, emerging in the 1970s, include CEMS (Chadabe, 1997) and The League of Automatic Composers (Brown & Bischoff, 2002). Since then, musical agents have proliferated. Many of these are identified and categorized by Tatar and Pasquier (2018), including several pioneering systems (Rowe, 1992; Lewis, 2000; Pachet, 2003; Assayag et al., 2006). Of particular relevance for Co-Creative Spaces is trombonist George Lewis’ improvisation system Voyager, which he developed towards the end of the 1980s (Lewis, 2000). Voyager was conceived as Lewis’ autonomous co-performer and is still in use more than 30 years later.
Central to Lewis’ music philosophy is the concept of multidominance, which he sets up as an opposition to the Western aesthetic that often involves letting a dramatic foreground dominate over background elements. In much African music, however, there are many discursive layers in the music: multi-rhythms and parallel melodies that do not necessarily harmonize according to the principles of Western art music. Lewis believes that Eurocentric music education does not equip its students with the ability to perceive multidominant rhythmic and melodic elements as anything other than noise or chaos. Multidominance requires an inclusive attitude towards the voices that contribute to the collective, where the music emerges from the interaction. Lewis’ focus on the musical interaction between human and machine and its social and cultural ramifications, rather than the technology itself, may be one explanation for the Voyager project’s enduring relevance.
2.2 Co-creative Spaces
Co-Creative Spaces consisted of the musicians Morten Qvenild (piano and electronics), Gyrid Nordal Kaldestad (vocals and electronics), Bernt Isak Wærstad (electric guitar and electronics), Labdi Ommes (vocals and orutu), and project leader Notto J. W. Thelle. Thelle, Qvenild, and Wærstad had experience from previous projects experimenting with various forms of musical co-creation between humans and machines (Thelle, 2022; Grydeland et al., 2020; Wærstad, 2020). The experiences, knowledge, and tools from these projects were consolidated into new software developed specifically for Co-Creative Spaces, named CCCP (Co-Creative Communication Platform). A comprehensive technical description of this software is beyond the scope of this paper, but a brief overview follows.
The musical agents in the CCCP platform were trained on several recordings of the project’s four musicians engaging in collective improvisation sessions. Before training, the audio was segmented into slices based on an onset detection algorithm. The length of these slices varied depending on the style of the material but would typically be in the range of 150 to 3000 milliseconds. Using feature extraction techniques, these slices were subsequently labelled according to loudness, rhythmic, spectral, melodic, and harmonic content. The feature vectors were then categorized using a self-organizing map (SOM)—a type of artificial neural network that utilizes unsupervised learning to map high-dimensional feature vectors onto a two-dimensional topological grid (Kohonen, 1990). Thus, similar-sounding audio slices could be grouped together at the same coordinates in the SOM. Finally, the original audio files were encoded as sequences of indices serving as pointers to potential audio slices in the SOM.
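To make the training pipeline concrete, the SOM stage can be sketched in a few lines of Python. This is a minimal illustration, not the CCCP implementation: the grid size, training schedule, and feature dimensionality are invented for the example, and the real system’s feature extraction (loudness, rhythmic, spectral, melodic, and harmonic descriptors) is abstracted into generic vectors.

```python
import numpy as np

def train_som(features, grid=(8, 8), epochs=200, seed=0):
    """Train a minimal self-organizing map on audio-slice feature vectors.

    Hyperparameters (grid size, learning-rate schedule) are illustrative
    assumptions, not values taken from the CCCP platform.
    """
    rng = np.random.default_rng(seed)
    h, w = grid
    dim = features.shape[1]
    weights = rng.random((h, w, dim))  # one prototype vector per grid cell
    coords = np.stack(np.meshgrid(np.arange(h), np.arange(w), indexing="ij"), axis=-1)
    for t in range(epochs):
        lr = 0.5 * (1 - t / epochs)                       # decaying learning rate
        radius = max(1.0, (max(h, w) / 2) * (1 - t / epochs))  # shrinking neighbourhood
        for x in features:
            # best-matching unit: the cell whose prototype is closest to x
            d = np.linalg.norm(weights - x, axis=-1)
            bmu = np.unravel_index(np.argmin(d), d.shape)
            # pull the BMU and its topological neighbours towards x
            dist = np.linalg.norm(coords - np.array(bmu), axis=-1)
            influence = np.exp(-(dist ** 2) / (2 * radius ** 2))[..., None]
            weights += lr * influence * (x - weights)
    return weights

def encode(features, weights):
    """Encode each slice as the grid coordinate of its best-matching unit."""
    out = []
    for x in features:
        d = np.linalg.norm(weights - x, axis=-1)
        out.append(np.unravel_index(np.argmin(d), d.shape))
    return out
```

After training, similar-sounding slices land on the same or neighbouring cells, and each original recording becomes a sequence of grid coordinates, mirroring the paper’s description of audio files encoded as sequences of SOM indices.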
At run time, these SOM sequences could be recombined in countless ways using various sequence modelling techniques, resulting in output that sometimes appeared to mimic the style of the material in the corpus and at other times served up near-matches to the audio in the input stream. The musical agents used a set of music information retrieval algorithms to “listen” to the human musicians and respond with the recombined material according to principles that varied between pure imitation, contrasting phrases, and an initiative-taking behaviour largely independent of what the agent heard. The machine listening happened on multiple temporal levels, including onset-by-onset, phrases, and longer segments. The principles governing the shifts between imitation, call-and-response, and initiative-taking were based on a number of studies of how musicians in different genres improvise together by alternating between following and leading in the interaction (Thelle, 2022). More in-depth descriptions of the algorithms that constitute these musical agents can be found in other publications (Thelle & Pasquier, 2021; Thelle, 2022).
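The run-time recombination can likewise be illustrated with a toy model. The sketch below pairs a first-order Markov chain over SOM grid indices with a single imitate/initiate switch; the actual agents use richer sequence modelling and multi-level machine listening, so the class and method names here are assumptions made for illustration only.

```python
import random
from collections import defaultdict

class AgentSketch:
    """Toy recombination of SOM index sequences with two behaviours.

    'imitate' echoes heard material; 'initiate' continues from the heard
    index using corpus transition statistics. This is a deliberately
    simplified stand-in for the CCCP agents' behaviour repertoire.
    """
    def __init__(self, corpus_sequences, seed=0):
        self.rng = random.Random(seed)
        self.transitions = defaultdict(list)  # SOM index -> observed successors
        for seq in corpus_sequences:
            for a, b in zip(seq, seq[1:]):
                self.transitions[a].append(b)
        self.states = sorted({s for seq in corpus_sequences for s in seq})

    def respond(self, heard, mode="imitate"):
        if mode == "imitate":
            # echo the heard index if it is known material, else fall back
            return heard if heard in self.states else self.rng.choice(self.states)
        # "initiate": sample a continuation from the corpus statistics
        successors = self.transitions.get(heard)
        return self.rng.choice(successors) if successors else self.rng.choice(self.states)
```

Switching `mode` per phrase or per segment would give the kind of alternation between following and leading that the agents were designed around; each returned index would then be decoded back into an audio slice from the corpus.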
2.3 The first study
The first publication about Co-Creative Spaces (Thelle & Wærstad, 2023) used data transcribed from ten focus group discussions between the musicians and the main author, conducted daily during two five-day workshops. The first workshop took place in December 2021 at the very start of the project, while the second was held in May 2022 as the project was being finalized. In the analysis of the transcriptions, we focused on how the musicians adapted their language as interaction with the musical agents became an integrated part of their creative process. We identified two pairs of interpretive repertoires in the transcribed material that demonstrated how the musicians balanced between different and partly conflicting ways of interpreting music-creating practice. We presented the repertoires as dichotomies along which the musicians were apparently pulled between different interpretations of music, technology, and creativity. One dichotomy was the understanding of the machine as tool versus the machine as co-creator; the other was the tension between viewing music as object and viewing music as process.
One main finding of this study was the gradual acknowledgement of machines as musical co-creators. The musicians became more willing to adapt to the aesthetics of the musical agents without necessarily manipulating them to bend to their will. By playing less and giving the musical agents more space, the musicians in Co-Creative Spaces demonstrated that striking a balance between viewing the machine as a tool and co-creator can take musical creation in directions that are different from interactions between people. When taking the agency of the machine seriously, co-creative spaces between humans and machines emerged, and this provided valuable new perspectives for musical co-creation in general. We surmised that creativity arises in the absence of full control and emerges when one’s own will is attuned to what the environment affords and leads to surprises.
Another finding was a subtle yet apparently essential difference in how the musicians conceptualised music. The Norwegian musicians were seemingly more aligned with the view of music as a structural concept, with an emphasis on the progression of “form”. According to Ommes, much traditional Kenyan music is, by contrast, conceived as an integral part of various activities and centres on the repetition of themes. In the analysis, we contrasted the interpretive repertoires of music as object and music as process and found that the cultural biases manifest in the musical agents reflected the biases inherent in the developers’ (Thelle and Wærstad) aesthetic preferences. In a broader sense, this could be said to represent the predominantly Western view of music as object, with the consequence that the musical agents were developed with the idea of creating something rather than doing something. After acknowledging that Ommes was outnumbered in terms of cultural influence, the software was tweaked to allow for more repetition, and the musicians agreed to let Ommes take the lead to a larger extent, to offset the cultural asymmetry. Through the focus groups, the musicians gradually turned from formal thinking to thinking in terms of types of activity. In this regard, they drew more vocabulary from the music as process repertoire towards the end of the project.
Whereas the first study summarized in the previous section was a concurrent reflexive process, this study focused on hindsight. To acquire a retrospective dimension to Co-Creative Spaces, we asked the four musicians to answer a series of questions submitted to them via email approximately ten months after the end of the project in May 2022. The questions posed were:
- Looking back at CCS, what kind of impact would you claim your participation in the project had on your: a) musicianship?, b) creative process?, c) view of idea ownership/intellectual property?, d) view of cross-cultural collaboration?
- Considering the current pace of AI development happening in the text, image, and coding domains, what are your reflections on the implications of similar AI capabilities in the music domain?
- Do you have any examples of how participation in CCS has shaped your views in other ways than purely artistically?
- Has your participation in CCS made you think differently about your use of music technology in general, such as recording and editing software, synthesizers, instruments, etc.?
- Other comments/reflections?
The questions were formulated and distributed by Thelle. The musicians were asked not to share their answers with each other to ensure that their reflections were their own. Being one of the participating musicians as well as a co-author of this paper, Wærstad has more of an insider role in the research side of the project. However, the results and ensuing discussion have been written by Thelle alone to maintain a degree of separation between the researcher and the research subjects.
The results and discussion focus mainly on themes related to questions 1, 2, and 4. In particular, we have chosen to categorize the data according to five themes: artistic impact, ownership, cross-cultural collaboration, implications of AI on music, and reassessing music technology. In the discussion, we aim to use the lens of hindsight to inform the overarching theme of rethinking the history of technology-based music.
With Co-Creative Spaces in the proverbial rear-view mirror, the musicians all report that participation in the project has changed their perspective on their own musicianship and creativity, as well as their views of ownership, cross-cultural collaboration, and music technology in general.
4.1 Artistic impact
The common denominator for all the musicians in Co-Creative Spaces, in terms of how participation influenced their musicianship, appears to be a sense of having been challenged to adopt another way of adapting to a musical situation. The subjective experience of this adaptation process differed between the musicians owing to their widely different vantage points. For Ommes, playing free improvisation was as novel as playing with the musical agents. She explains that it felt like “a deep-dive” into a musical domain she was learning as she went along. Wærstad and Qvenild both highlight that interacting with the musical agents increased their sensitivity to their own roles in the musical context and their awareness of what occurs outside their sphere of direct influence. For example, Qvenild realised that “I can’t be everyone, nor can the machine”. Similarly, Wærstad has become more conscious of what he adds to the musical situation while accepting input that is beyond his control, leading him to now try to “play less, think less, and listen more”. He emphasises that this is not the same as avoiding initiative, but an awareness of giving space to others’ contributions. Kaldestad claims that working with the musical agents made her and her co-performers tune into each other in a different way, building a bridge between their cultures and genres, and “stretched our listening in a new direction”. As a result of the adaptation process, Qvenild points out that he has become less afraid and less preoccupied with being “correct” in a traditional musical sense. He mentions that his notions of being “good”, “creative”, “emotional”, and “virtuosic” have all been challenged.
In terms of creativity, the musicians report that participation in Co-Creative Spaces has influenced them in different ways. Although Qvenild has worked creatively with music technology for a long time, he claims that the interactional style he experienced with the musical agents in Co-Creative Spaces has made him more concerned with simplicity and limitations. He discovered a balancing act between “letting the machinery play out” without too much interaction and counteracting its output with what he calls a “very strong human musicality”. He describes this balance as contributing to the overall originality of the output. In Ommes’ view, both exploring free improvisation and collaborating with the musical agents increased the quality of her creativity, enticed her to explore new output, and made her more flexible in the creative process. For his part, Wærstad sums up that he has gained a better understanding of creativity as a collective process. Although this was not an entirely new discovery for him, he acknowledges that this project has lessened his ego and made him more attentive to the contribution of others and how they affect the creative process. He has also become more aware of the agency of non-human things. In her reply about how participation in the project had impacted her creativity, Kaldestad focuses on how she had to “open up my ears to other ways of working with and thinking about sound aesthetics and musical form”.
4.2 Ownership

The musicians’ views on ownership and intellectual property were seemingly challenged during Co-Creative Spaces. Perhaps unsurprisingly, there is a clear sense of uncertainty around this theme among the musicians. While maintaining a progressive outlook on the future of intellectual property rights in the face of increased automation, they are also stakeholders in the prevailing paradigm of tracking idea ownership and securing income based on the notion of intellectual property. Qvenild expresses ambivalence about the fact that the development takes away possibilities for humans, while at the same time being optimistic that AI could be a positive step towards forming an interactive dialogue with machines as opposed to merely reacting to them. Wærstad claims to have moved towards feeling less entitled to idea ownership after experiencing the complexity of creative interaction with the humans and musical agents involved in Co-Creative Spaces, and he wonders whether the concept of intellectual ownership is becoming antiquated. He argues that the project has made him believe more firmly in the collective creative process and lean further towards advocating change. Ommes thinks that the debate around who owns and earns royalties when collaborating with AI highlights the value of having the same conversation even in spaces where AI is not involved. Participation in the project thus allowed her to question her own ownership and creativity in human-to-human collaboration as well.
4.3 Cross-cultural collaboration
In the aforementioned paper on Co-Creative Spaces (Thelle & Wærstad, 2023), the analysis of focus groups conducted during the collaborative process showed the group grappling with the issue of cultural bias inscribed in the software and data structure. While trying to compensate for the musical agents’ propensity to behave very much like Western experimental free improvisation musicians by tweaking the software itself, they found that the most practical strategy for balancing the cultures represented by the participants was to give Ommes enough space to take the lead in several segments of the performance. Thus, the Kenyan influence in the musical collaboration was preserved “manually”. A year later, Wærstad still feels that there is a lot of unresolved potential in the cross-cultural aspect of the project. Having worked with Ommes in previous collaborations and successfully achieved a unique blend of Western and African styles, he thinks Co-Creative Spaces did not reach the same level of cross-cultural synergy in its first iteration. He wonders whether it might be impossible to separate technology from culture, and whether cross-cultural collaborations simply need time to be done properly. Qvenild points out that it was very interesting to experience how Ommes’ musical input played out in a domain where the tools had been designed by white males in their 40s. He reflects that he could sense the friction between the tools and the musical input, giving a new direction to the music which he found inspiring. Because the musical agents and Ommes’ music both represented logics that were alien to him, the contrast provided a space for him to express himself musically in ways he found emotionally refreshing. He adds that this has given him a perspective on how musical agents could be developed further, with other value systems, cultural predispositions, and gender balances.
Ommes acknowledges that the musical agents create from a different cultural reference and found it fascinating to contribute her own musical culture into a project where the output of the agents is defined by Western culture. She suggests that this shines a light on what she refers to as a “global inhibitor”, namely equal access to technology and data representing all cultures. She expresses the opinion that AI output and data will transform as access extends to other parts of the world.
4.4 Implications of AI on music
In the year since the finalization of Co-Creative Spaces in May 2022, the development of generative AI has exploded. Little did the musicians know a year ago that ChatGPT would take the world by storm, and that prompt-engineering new music would become a reality only a few months later (Agostinelli et al., 2023). As such, the timing of Co-Creative Spaces was interesting: in many ways, it could be seen as having been launched at the tail end of a paradigm. Looking back at the project a year on thus brings a perspective that would have been difficult to imagine at the time. For Ommes, the stand-out point in recent public discourse is that humans are sometimes afraid of losing ownership and control of intellectual creativity. In online discussions, she has noted that people think AI has become so good at replicating human artwork and music that in some cases the two are becoming indistinguishable. She feels there is a need for some kind of demarcation between them. Kaldestad, for her part, does not think machines can replace humans, but after her experience with Co-Creative Spaces she is certain that AI can be used as a creative tool for making music. Wærstad points out that one of the lessons of the project was that it became boring if the musical agents were too similar to the human musicians. He thinks that one of the major reasons for doing creative work with AI is to learn something about ourselves and to be pushed to explore pathways we would not normally seek out. Qvenild shares a similar view, emphasising the considerable social component in music. He thinks AI-generated music will continue to be coloured by an absence of social capabilities in interactive contexts. For him, this is precisely what is intriguing, because it provokes “some kind of gaze into oneself” for the human interacting with the AI.
The lack of an evaluating counterpart creates a situation where he thinks self-evaluation kicks in: “What do I want to do musically, and what is just an input in the interplay situation to gain the gaze of others?”
4.5 Reassessing music technology
Finally, the musicians were asked to reflect on how Co-Creative Spaces may have impacted their attitude to music technology in general, such as recording and editing software, synthesizers, or instruments. Both Wærstad and Qvenild state clearly that they have become more open to the agency of tools, with Wærstad advocating an attitude of “less control and more dialog”, or “a controlled lack of control”. Similarly, Qvenild has become more open to “hearing out the instrument”, while at the same time trying to critically assess whether such an attitude is beneficial. For Ommes and Kaldestad, participation in Co-Creative Spaces has inspired them to experiment more with synthesizers and sound processing. For instance, Ommes now wants to explore her traditional instrument—the orutu—and process its sound beyond the recognisable. She explains that the project has inspired her to develop her own live improvised show involving the orutu, synthesizers, and MIDI controllers.
A remarkable common thread in the reflections offered by the participants in Co-Creative Spaces is how much they claim to have learned about themselves and their relation to other musicians. They all emphasise having gained an increased awareness of what they bring to the collective in co-performances and a greater acceptance of the contributions of others. Despite individual differences in how they describe this heightened attentiveness, they converge on the insight that “playing less and listening more” leads to a creative transformation. Interestingly, the musical agents seem to have been catalysts in this growing appreciation of the collective creative process. Qvenild’s point about the machine’s lack of social awareness functioning as a sort of mirror is pertinent. People project intentions and desires; machines, at least as we conceptualize them for the time being, appear to do so only insofar as we anthropomorphize them. Hence, contributions from the musical agents could be heard not as projections of an internal “machine will”, but as relayed and distorted manifestations of human musicality with distinct machine aesthetics, such as jarring transitions or gesture combinations that are difficult for humans to achieve. Perhaps interacting with these agnostic yet strangely expressive agents crystallises the human contributions to a degree where the complexity of what Bown (2014) calls the “jumble of social interaction” becomes apparent. When Wærstad claims to have less “right” of ownership, this is not a submissive statement given the context. Rather, it implies the acknowledgement of a set of conditions that has perhaps always been hidden in plain sight.
By exploring how creating music collaboratively with AI has affected musicians over a period of time, both during and after the project, Co-Creative Spaces has avoided focusing overly on the technology itself. The study of human-computer musical co-creativity stands somewhat apart from the mainstream of literature on technology-based music, which has tended to offer speculation about the future implications of state-of-the-art technology. Even so, there are relatively few longitudinal studies of how musical agents have shaped the musicianship of their users. One notable exception is the work of George E. Lewis, whose first prominent reflections on his musical coevolution with the Voyager system were published more than ten years after he began improvising with it live, and two decades after he began programming it (Lewis, 2000). He has continued both to perform with and to write about Voyager up to the present. He has also dedicated much of his authorship to issues of cultural heritage, and promotes Voyager as embodying the aesthetics of African-American music. In Co-Creative Spaces, we also had a goal of preserving a cultural perspective. Given the multicultural background of the participants, the practice of reflecting on cultural biases in the software served a useful role in identifying both commonalities and differences between the musicians. We discovered that despite an intention of developing musical agents that would learn the style of any musical material submitted to the machine learning algorithms, the feature extraction and sequence modelling algorithms made the output of the musical agents take on the free-improvisation aesthetics typical of Western experimental musicians. We see this as a reflection of the programmers’ cultural bias.
The fact that the musical agents were initially more or less agnostic to rhythm and groove is a good example of how algorithms can reproduce cultural asymmetry, as has been documented in many technology fields (Buolamwini & Gebru, 2018; Noble, 2018; Benjamin, 2019; Costanza-Chock, 2020; D’Ignazio, 2020; Bender et al., 2021). It is worth noting that the software developers (who are also the authors of this paper) were not a particularly diverse group, which made the issue of cultural bias in technology directly relevant to the project.
The reflections offered by the participants of Co-Creative Spaces in terms of their own musicianship and creativity, along with their views of ownership and cross-cultural collaboration show us a process of gaining insight. While the normative view of technology is one of delivering novelty ex nihilo, it can also be seen as shining a light on pre-existing yet undiscovered creative, social, and cultural dynamics. This is one potential avenue for rethinking the history of technology-based music. Retrospective accounts of how technologies have changed the artistic outlook of their users and how cultures have been formed through coevolution between people and their tools are still very much needed. By looking back, new discoveries can be made regarding the symbiosis between humans and technology. In turn, the knowledge gained by making these discoveries may be a helpful reminder of the very humanness in creativity.
With the recent breakthroughs in generative AI, we appear to be living through a watershed moment in the creative arts. These technologies undoubtedly have huge implications for how music and art will be made in the future. It is easy to succumb to a sense of vertigo in the face of this development. By performing a retrospective analysis of reflections offered by the four musicians in Co-Creative Spaces, we have gained a better understanding of the longer-term impact of having created music in collaboration with musical agents. The consensus seems to be that participation in the project has made the musicians more keenly aware of their place in co-performative contexts, as well as more appreciative of the contribution of their peers. Although they expressed some ambivalence about how AI will affect the concept of musical ownership, there is little doom to be traced in their overall reflections. On the contrary, there is a sense of excitement about the creative affordances of new technologies. The musicians also experienced how being co-creative with the musical agents laid bare the cultural biases both in the software and in their own aesthetics. The opportunity to perform a comprehensive analysis of the musicians’ reflections almost a year after the end of the project provided a perspective that was different from the spontaneous reflexivity of the focus groups conducted during the project. As such, this study demonstrates the value of retrospection. The effects of interacting with technology are long-lasting and not always immediately apparent. In the same way as the musicians made new discoveries about themselves and their relation to other musicians, a renewed look at the history of technology-based music may reveal artistic, social, and cultural dynamics yet undiscovered.
Co-Creative Spaces has been supported by Arts and Culture Norway, the Norwegian Composers' Fund, the Norwegian Academy of Music, and Norsk jazzforum.
The musicians have given consent to be identified by their full names in this publication.
Norwegian research projects that collect personal information must be submitted to the Data Protection Official for Research at the Norwegian Centre for Research Data (NSD) for approval. NSD has approved the project, and we have abided by NSD's guidelines for safe storage of personal information.
Agostinelli, A., Denk, T.I., Borsos, Z., Engel, J., Verzetti, M., Caillon, A., Huang, Q., Jansen, A., Roberts, A., Tagliasacchi, M., Sharifi, M., Zeghidour, N., & Frank, C. (2023). MusicLM: Generating Music From Text. https://doi.org/10.48550/arXiv.2301.11325
Assayag, G., Bloch, G., Chemillier, M., Cont, A., & Dubnov, S. (2006). OMax brothers: A dynamic topology of agents for improvisation learning. [Conference paper]. AMCMM '06: Proceedings of the 1st ACM Workshop on Audio and Music Computing Multimedia, Santa Barbara, CA, USA (pp. 125–132). https://doi.org/10.1145/1178723.1178742
Bender, E. M., Gebru, T., McMillan-Major, A., & Shmitchell, S. (2021). On the dangers of stochastic parrots: Can language models be too big? [Conference paper]. FAccT '21: Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency. https://doi.org/10.1145/3442188.3445922
Benjamin, R. (2019). Race after technology: Abolitionist tools for the New Jim Code. Polity.
Bown, O. (2014). Empirically grounding the evaluation of creative systems: Incorporating interaction design. [Conference paper]. Proceedings of the Fifth International Conference on Computational Creativity, Ljubljana, Slovenia (pp. 112–119). http://computationalcreativity.net/iccc2014/proceedings/proceedings-pdf/
Brown, C., & Bischoff, J. (2002). Indigenous to the net: Early network music bands in the San Francisco Bay area. Crossfade. http://crossfade.walkerart.org/brownbischoff/index.html
Buolamwini, J., & Gebru, T. (2018). Gender Shades: Intersectional Accuracy Disparities in Commercial Gender Classification. Proceedings of the 1st Conference on Fairness, Accountability and Transparency, Proceedings of Machine Learning Research. https://proceedings.mlr.press/v81/buolamwini18a.html
Costanza-Chock, S. (2020). Design justice: Community-led practices to build the worlds we need. MIT Press.
D’Ignazio, C., & Klein, L. F. (2020). Data feminism. MIT Press.
Grydeland, I., Neumann, A., Qvenild, M., Endresen, S., Frisk, H., & Pollen, B. O. (2020). Goodbye Intuition. https://www.researchcatalogue.net/view/974962/974963
Jordanous, A. (2017). Co-creativity and perceptions of computational agents in co-creativity. [Conference paper]. Proceedings of the Eighth International Conference on Computational Creativity, Atlanta, Georgia, USA (pp. 159–166). https://kar.kent.ac.uk/61658/
Koestler, A. (1964). The Act of Creation. Hutchinson & Co.
Kohonen, T. (1990). The self-organizing map. Proceedings of the IEEE, 78(9), 1464–1480. https://doi.org/10.1109/5.58325
Noble, S. U. (2018). Algorithms of oppression: How search engines reinforce racism. New York University Press.
Pachet, F. (2003). The Continuator: Musical interaction with style. Journal of New Music Research, 32(3), 333–341. https://doi.org/10.1076/jnmr.32.3.333.16861
Rowe, R. (1992). Interactive music systems: Machine listening and composing. The MIT Press.
Tatar, K., & Pasquier, P. (2018). Musical agents: A typology and state of the art towards Musical Metacreation. Journal of New Music Research, 48(1), 56–105. https://doi.org/10.1080/09298215.2018.1511736
Thelle, N. J. W., & Pasquier, P. (2021). Spire Muse: A Virtual Musical Partner for Creative Brainstorming. [Conference Paper]. New Interfaces for Musical Expression (NIME 2021), Shanghai, China. https://doi.org/10.21428/92fbeb44.84c0b364
Thelle, N. J. W. (2022). Mixed-initiative music making: Collective agency in interactive music systems. [PhD, The Norwegian Academy of Music].
Thelle, N. J. W., & Wærstad, B. I. (2023). Co-Creative Spaces: The machine as a collaborator. [Conference paper]. New Interfaces for Musical Expression (NIME 2023), Mexico City, Mexico.
Wærstad, B. I. (2020). Instrument design using machine learning and artificial intelligence. [Conference Paper]. International Conference on Live Interfaces, Trondheim, Norway. https://doi.org/10.5281/zenodo.3932927
About the authors
Notto Johannes Winju Thelle:
Notto J. W. Thelle is a researcher and a musician who is currently Head of Section at OsloMet Makerspace. His PhD thesis "Mixed-Initiative Music Making", defended at the Norwegian Academy of Music in 2022, examined the creative trade-off between user control and computational autonomy in interactive music systems. Before starting his PhD, he was Director of NOTAM – Norwegian Centre for Technology, Art and Music (2012–17) and Board Chairman of PNEK – Production Network for Electronic Art in Norway (2015–19).
Bernt Isak Wærstad (1984) is an Oslo-based interdisciplinary artist, musician, and producer. He obtained an MA in Music Technology from the Norwegian University of Science and Technology (NTNU). Known for his fluidity across genres and mediums, Wærstad consistently ventures into the experimental frontiers of the performance arts. His interdisciplinary approach often leads him to assume multiple roles within a production. In addition to his solo work, Wærstad is an integral part of various enduring collaborations, including the pop duo Unganisha and the experimental sound-art-technology think tank Vingelklang.
Technology as a medium lies at the core of Wærstad's artistic practice, and he is continuously exploring new and critical perspectives on technology's role in the arts. In recent years, one of his main focuses has been artistic co-creation with algorithms and artificial intelligence, most recently in the project Co-Creative Spaces. Wærstad has been actively involved in teaching music technology courses and workshops at NTNU, NMH, and UiO since 2011. He was awarded a government working grant for younger artists in 2018.
The orutu is a traditional Kenyan string instrument with a single string.