New developments for spatial music in the context of the ZKM Klangdom: A review of technologies and recent productions

DOI: 10.5920/divp.2015.36

Abstract

The Institute for Music and Acoustics is a production and research facility of the ZKM | Center for Art and Media Karlsruhe. In this paper, we present some general thoughts on spatial music and its implementations as a motivation for our efforts. We outline the development of the ZKM Klangdom, a multi-loudspeaker facility for spatial sound diffusion that aims to provide artists and composers with new possibilities. We present the software project for controlling the Klangdom, Zirkonium, together with the key innovations in its new version Zirkonium MK2. A survey of examples of recent productions by guest artists is presented, and the Klangdom’s achievements in the context of real-time control and diffusion are portrayed.


Composing with space and the diffusion of spatial sound

The intentional incorporation of spatiality into the composition of European art music was first undertaken by Francesconi Ruffino d’Assisi in the sixteenth century and was continued by Adrian Willaert at St. Mark’s Basilica in Venice (Blankenburg, 1995, p. 771).

The spatial division and distribution of voices developed in these contexts led to concepts that incorporated the effect of spatialisation into the musical experience. Examples include composers' experiments with distant orchestras, sounds from neighbouring rooms, and so on.

The topic gained particular relevance with the emergence of electronic music, through the introduction of the loudspeaker as a universal sound transducer. There were various strands in this area: from Pierre Henry and Jacques Poullin’s Pupitre d’espace of 1951, to Stockhausen’s rotating loudspeaker of 1958 at the West German Radio, to the approaches embodied by the Philips Pavilion in Brussels, built by Le Corbusier and Iannis Xenakis for Edgard Varèse’s electronic work Poème électronique, and the spherical auditorium at the 1970 World’s Fair in Osaka, all of which extended the tonal dimension of music into space. Soon afterwards, around 1970, John Chowning developed the first abstract method for simulating motion in sound by means of digital algorithms.

All these developments of the 1950s to the 1970s constitute the precursors of a global trend around the turn of the century. The primary focus of research and reception was no longer sound synthesis, but rather how sound is performed and triggered. It is not surprising that research institutions and artists increasingly strive to include and exploit the parameter of spatiality and to push the possibilities of perception to their limits, both technically and acoustically.

On the basis of this historical and practical body of knowledge, a team of software and hardware engineers at the ZKM | Institute for Music and Acoustics (IMA) [1] has been developing the concept of the Klangdom under the direction of Ludger Brümmer since 2003. Keeping in mind composers’ need for straightforward usage, together with the concrete conditions of the typical concert environment, the team maintained that the resulting system should ease these tasks: the spatialisation software should adapt easily to different spatial conditions and to the changing loudspeaker installations of each concert setting.

The result was the Klangdom, installed in the Cube theatre of the ZKM, and the interactive control software Zirkonium (Ramakrishnan, Gossmann, Brümmer, and Sturm, 2006, p. 3; Ramakrishnan, 2009). The Klangdom in the Cube is composed of 47 loudspeakers that are arranged around the listeners in the shape of a dome. Using Zirkonium, it is possible to send sounds across any given room with unprecedented precision; for example, to have the sounds circle around the listener.

During the production of his work LICHT-BILDER at the ZKM | IMA, Stockhausen frequently expressed his visions and conceptions of spatial music to Brümmer. Many of these ideas were put into practice at high quality for the first time, although unfortunately Stockhausen himself was never able to use the Klangdom.

This article is generally intended for a broader audience. While some general motivations for the use of space in electroacoustic composition are presented, some parts address the more technically interested reader. However, the later sections are intended to serve as accessible summaries of practical aspects of works that have been realised using this technology.

Why space?

Spatiality in music is more than a parameter for the realisation of aesthetic concepts. Spatiality aids in the presentation, the perception, and the comprehension of music, and is thus by no means an end in itself. In order to pursue this topic, different dimensions of the term “space” can be distinguished.

There are spatial phenomena that can be generally subsumed under the concept of acoustics. The size of the room, its geometry, and its reflecting surfaces are perceived in the form of reverberation and echo. For example, the perception of the size of a space is related to the time interval between the direct sound and its first reflection (Neukom, 2003, p. 85), as well as a few other parameters, such as how the reverberation progresses. If this interval is short, the listener intuitively forms the impression of a small room.
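This relationship can be made concrete with a simple back-of-the-envelope formula (our illustration of the effect described above). The detour taken by the first reflection translates the measured delay into a distance:

\Delta d = c \, \Delta t, \qquad c \approx 343\ \text{m/s},

so a gap of 3 ms between the direct sound and its first reflection corresponds to an extra path of only about one metre, which the ear reads as a nearby reflecting surface and hence a small room.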

The most basic aspects of the electronic music composer’s work – the construction of sounds and their presentation under the general conditions of architectural acoustics – are augmented by the possibility of explicitly determining the sounds’ positions in space. Auditory localisation is based on the distance and direction from which a sound emanates. Human hearing is capable of simultaneously perceiving several independently moving objects, or of detecting groups of a large number of static sound sources and following changes within them. Spatial positioning is thus well suited for compositional use. While the acoustics of a particular room are perceived in a holistic manner, the parameters of position and movement can be used polyphonically in highly complex formations.

This relates directly to the reasons why musicians orchestrate sound events using spatial location and movement. The observation that nearly all sound sources in our environment are in motion is of particular interest in this context. The most diverse movements of sounds – such as those of birds, automobiles, speech, or song – can serve as examples for this; the sources of these sounds cannot be attributed to a fixed position. Smaller movements or gestures in space, such as those made by strings, woodwinds, and so on in the course of their musical interpretation, are constantly perceptible during the production of sound. As a result of this movement, the sound obtains a constantly changing phase configuration, which gives rise to vitality in the sound. In contrast, static, immobile instruments appear uninteresting; their sound seems flat and unreal. These aspects indicate that the movement of sound sources and the accompanying phase shifts are common in auditory perception. Thus, the movement of sounds in space corresponds to our perceptual patterns, and the use of these parameters naturally suggests itself for musical contexts.

The use of spatial information within a composition clearly has an impact on compositional decisions. Examples from the sixteenth century demonstrate a clear logic in the use of these parameters. Spatial information helped give additional form to musical information and helped structure the course of the music at least as effectively as phrasing, articulation, and instrumentation. The best examples of the use of spatiality within a compositional structure are the duet and the concerto grosso (Scherliess and Forchert, 1996, p. 642), whose compositional form is perceived as clearly structured through the use of spatiality in conjunction with instrumentation. The expanded and more differentiated approach to the spatial area available to the sound, as well as the integration of motion, allows new possibilities for the dramatic development of a composition to emerge.

A further aspect in favour of the deliberate use of spatiality is the fact that human hearing is capable of perceiving more information when it is distributed in space than when it is only slightly spatially dispersed. The reason for this phenomenon is that sounds are capable of concealing or masking each other (Bregman, 1990, p. 320). For example, if one plays back the signal of a loud bang and the signal of a quiet beep at the same time, the bang would normally completely conceal the other signal. If both sounds are equally loud, they would in ideal circumstances blend together. However, both sounds would remain separately perceptible, without merging with or masking each other, if they were to be played back on the left and right sides of a listener’s head. This example can be multiplied, until a sound situation arises in which 20, 30, or more sounds are audible, distributed throughout the space. Spatially distributed, such a situation sounds transparent and clear, while the stereophonic playback of the same sounds appears muffled, with little detail. The listener becomes capable of geometrically grouping different events and perceiving spatial formations while actively interpreting the acoustic information. This “interactive listening” provides various alternatives depending on the position of the listener, resulting in multiple variants of how the sound is received. In addition to this, the human mind also has the ability to focus on specific sound objects, in what is called the “cocktail party effect” (Cherry, 1953). However, it only has this ability in conjunction with spatially positioned sound objects. If the listeners are ideally surrounded by sounds, they are, so to speak, able to dissect complex sound structures. In the extreme case, sounds distributed in space can also create an immersive experience, the sound sphere no longer being an object of perception.

If one summarises the various acoustic and psycho-acoustic aspects of human hearing, it becomes clear that the ear’s ability to differentiate sounds increases considerably with the use of spatial information. If sounds are distributed over a large area, complex sound information can be designed in a transparent and easily audible manner. The listeners are thus able to grasp more acoustic information and to structure their listening flexibly. Inside a spatial environment a listener is able to choose different locations for listening to the sound, or he or she may even move while listening. This creates a completely different attitude in the perceptual process that facilitates interactive listening.

History of the ZKM Klangdom

Topoph: An early implementation of spatial concepts at the ZKM

Space and multichannel sound have been a main focus of the ZKM | IMA right from its start in 1989. As early as 1991, Sabine Schäfer and Sukandar Kartadinata presented their system Topoph (Kartadinata, n.d.; Schäfer, 2007), which was developed in cooperation with the University of Karlsruhe. It was one of the first computer-controlled systems for sound spatialisation over a large array of loudspeakers, and in this sense it can be seen as a precursor of the Klangdom. Topoph was an open system, controlled via MIDI. Its main accomplishment was a very precise synchronisation of sound and movement, while integrating different sound modules such as samplers, MIDI sequencers, or tape machines. Topoph gave the composer a very flexible way to route a number of input channels to an arbitrary number of output channels at any given moment, and to create smooth transitions between different routings. The developers described this as a “path-based” approach, in which the sounds could move along a path freely defined by the composer. The composer was given control over the speed of the movement and over the amplitude, so that the amplitude could be adjusted when a movement used differing numbers of simultaneous speakers at different points. Because of this path-based approach, Topoph was not bound to a specific speaker setup, as other systems are – most notably vector base amplitude panning (VBAP) and Ambisonics systems, which rely on a spherical set-up. Schäfer herself made extensive use of this flexibility. Many of her projects are hybrids between sound installations and concert pieces, and she typically conceived a different speaker setup for each work (Schäfer, n.d.). More recently, Schäfer has adapted some of her works to the Klangdom using Zirkonium (for more on Schäfer’s work, see below).

What made Topoph a very powerful tool for spatialisation was the opportunity for the composer to use high-level instructions for the movements, such as: generate a movement along speakers 2, 4, 6, 13, 11, and 7, at speed 25, overlap 3.5 and volume 90. The system took care of calculating all the low-level parameters that were needed in order to drive the speakers. One further aspect of the system was especially important for Schäfer: the spatialisation was fully integrated into the compositional process, so that the sounds and the movements were created together. This was in contrast to other systems, where the spatialisation is done as the final step of the composition – as for instance in the breakpoint editor of the “classic” Zirkonium (see figure 6).
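The Topoph source code is not reproduced here, but the behaviour of such a high-level instruction can be sketched. The following Python fragment is our illustration, not Kartadinata’s implementation; the parameter names and the equal-power crossfade rule are assumptions made for the sketch:

```python
import math

def path_gains(path, position, overlap=1.0):
    """Per-speaker gains for a source at a fractional position along a path.

    path     -- ordered list of speaker numbers, e.g. [2, 4, 6, 13, 11, 7]
    position -- 0.0 (first speaker of the path) .. len(path) - 1 (last)
    overlap  -- width of the crossfade region, in path segments
    """
    gains = {}
    for index, speaker in enumerate(path):
        d = abs(position - index)   # distance from this speaker, along the path
        if d < overlap:
            # Cosine ramp: equal-power between adjacent speakers when
            # overlap == 1; larger values spread the sound over more speakers.
            gains[speaker] = math.cos(0.5 * math.pi * d / overlap)
    return gains

# Sweep a sound along speakers 2, 4, 6, 13, 11 and 7:
path = [2, 4, 6, 13, 11, 7]
for step in range(11):
    print(path_gains(path, position=step * 0.5, overlap=1.5))
```

Driving the position from a clock at the requested speed then moves the sound smoothly along the path, independently of where the speakers actually stand – the property that freed Topoph from any fixed geometry.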

The first version of Topoph, Topoph16, was already able to control 16 speakers. This was extended in 1992 to 24 speakers (Topoph24). The last release of the system in 1999 comprised 40 output channels. This was an impressive number at a time when few multichannel electroacoustic pieces exceeded eight discrete output channels. This last release (Topoph40d) was completely digital, as opposed to the previous hybrid systems.

The Klangdom project

The Klangdom of the IMA (see figure 1) is based on strategies that were inspired by the spherical auditorium of Osaka in 1970. In this specific spatial arrangement, the listener is immersed in a dome-shaped loudspeaker setup that allows sounds to be placed throughout the entire space of the concert hall. The reticulated configuration of the loudspeakers enables the continuous movement of sound sources around the listener using VBAP (Pulkki, 1997), irrespective of the size of the space and the number of loudspeakers (see figure 2).
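For readers unfamiliar with VBAP, the following Python sketch shows the core of the technique in its simplest, two-dimensional form (the Klangdom operates on speaker triplets on the dome surface, but the principle is the same): the direction of the virtual source is expressed as a weighted sum of the directions of the enclosing speakers, and the weights become the gains.

```python
import numpy as np

def vbap_2d(source_az, speaker_az_a, speaker_az_b):
    """Gains for one speaker pair via 2D vector base amplitude panning
    (after Pulkki, 1997). All angles in degrees."""
    def unit(az):
        a = np.radians(az)
        return np.array([np.cos(a), np.sin(a)])

    # Solve p = g_a * l_a + g_b * l_b for the gain vector g.
    basis = np.column_stack([unit(speaker_az_a), unit(speaker_az_b)])
    g = np.linalg.solve(basis, unit(source_az))
    g = np.clip(g, 0.0, None)     # negative gain: source lies outside the pair
    return g / np.linalg.norm(g)  # normalise for constant overall power

# A source at 20 degrees, panned between speakers at 0 and 45 degrees:
print(vbap_2d(20.0, 0.0, 45.0))   # -> roughly [0.78, 0.63]
```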

Figure 1 The Klangdom in the ZKM_Cube theatre, with 47 loudspeakers and 4 subwoofers

The Zirkonium software provides the interface between the flexible loudspeaker array and the sound movements and placements created by the composer. Particular attention was paid to the parametric definition of movements. Until recently it was a challenge to control, for example, 38 sound sources moving simultaneously across 50 to 60 loudspeakers. A great deal of effort was also put into the design of the interface, in order to make it intuitively understandable. Zirkonium thus represents, for the first time, an instrument that can be used to perform spatial music flexibly, even with modest resources and a small number of loudspeakers. Whereas composers had previously been dependent on a proprietary format or a strictly fixed loudspeaker arrangement, it was now possible for them to adapt to the individual conditions of the performance space and to work on corresponding productions without being tied to a specific location. The system has already been used internationally at several locations. Moreover, an increasing number of concert halls possess multichannel sound systems that can be transformed into a sound-dome environment with little effort. An example is a setup at the Stuttgart State Opera, in which the loudspeakers were reconfigured into a sound dome for a theatre festival in less than one hour, without moving a single loudspeaker.

Zirkonium’s basic conception and initial development took place between 2003 and 2006; development is ongoing. Some newly developed concepts and implementations are presented in this paper.

Figure 2 The smaller 24-channel dome in the studio

Zirkonium

The IMA is developing Zirkonium as free software for controlling the ZKM Klangdom. The central aim of Zirkonium is to simplify the use of space as a compositional parameter. The positions and movements of sounds can be created and arranged in an event-based timeline, or controlled remotely via Open Sound Control (OSC). Zirkonium is designed as a standalone application for Apple OS X and handles multichannel sound files or live audio. For the spatial rendering of virtual sound sources it uses VBAP within a user-defined loudspeaker setup, or a head-related transfer function (HRTF) simulation for headphones. When working with real speakers it is moreover possible to modify the size of a sound source by using a technique called sound surface panning (Ramakrishnan et al., 2006). To avoid comb-filter effects a source can optionally be snapped to the nearest speaker.
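The OSC remote control can be illustrated with a few lines of Python. Note that the address pattern, the argument order, and the port below are placeholders chosen for this sketch; the actual OSC namespace is defined in the Zirkonium documentation.

```python
# Requires the python-osc package: pip install python-osc
import time
from pythonosc.udp_client import SimpleUDPClient

# Machine and port on which the spatialisation server listens (assumed).
client = SimpleUDPClient("127.0.0.1", 50000)

# Circle source 0 once around the dome at ear level in ten seconds.
for step in range(200):
    azimuth = 360.0 * step / 200.0          # degrees
    client.send_message("/source/position", [0, azimuth, 0.0])
    time.sleep(0.05)
```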

Zirkonium has proved a reliable tool for a wide variety of productions. Over the course of these distinct projects, involving different developers, it was constantly updated, to the extent that it had become, to a certain degree, a patchwork-like source package.

In 2012 the IMA began to re-engineer the system, taking into account the experience of staff and guest composers. The result is a strictly modular, client–server based toolkit that includes a hybrid spatialisation server, a trajectory editor, and an application for creating speaker set-ups as its core components.

Why a client–server system?

The need for a flexible system for composing spatial music is thoroughly discussed in Penha and Oliveira (2013) and Peters et al. (2009). The following motivations highlight aspects that coincide with our own experiences, with the original Zirkonium serving as the background against which the new features are described.

Maintainability

The audio engine and spatial rendering system in Zirkonium are implemented using a version of the Apple Core Audio library that is becoming more and more outdated. To remain compatible with contemporary OS X updates, it is necessary to constantly revise a set of rather low-level functions.

Zirkonium MK2 combines these kinds of tasks within the previously mentioned spatialisation server. They are mainly realised with Max/MSP. This way the audio functionality can be easily accessed with a few clicks, while the respective bindings to the operating system and hardware updates are maintained via the Max/MSP environment.

Flexibility

The variety of compositional and technical needs, especially in the area of computer music, requires a system that can be easily modified by a programmer or by composers themselves. This can be achieved through more immediate access to the logical components of the software. Such access also encourages the use of creative performance techniques, including mobile devices, controllers, and sensors.

The original Zirkonium takes a relatively rudimentary approach to positioning and moving virtual sound sources: in an event-based timeline, or breakpoint editor, one can create spherical rotations with a fixed speed. Zirkonium MK2 contains a graphical trajectory editor inspired by HoloEdit (cf. Peters et al., 2009), which uses quadratic Bézier curves for defining movements around the speaker set-up as well as for modifying speed and acceleration over time. Furthermore, it is capable of recording live panning instructions into the underlying representation.
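A quadratic Bézier curve is a standard construction, defined by a start point P_0, an end point P_2, and a single control point P_1 that pulls the path towards it:

B(t) = (1-t)^2 P_0 + 2(1-t)\,t\,P_1 + t^2 P_2, \qquad t \in [0,1].

Keeping this geometry fixed while reparametrising t as a non-linear function of time is what allows speed and acceleration to vary along an otherwise unchanged path.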

The graphical editor is a feature that has been strongly demanded by many guest composers, since it is a very intuitive and natural way of describing movements, which can also be exported as compositional patterns. Because the event-based data structure of the classic Zirkonium has been retained, old pieces can easily be imported by resampling the text-based rotational figures as Bézier curves, which can then be extended or modified just like newly created paths.

Extensibility

By employing a client–server structure, different programming languages and development environments can easily be used to extend the software. New ZKM or third-party developers need not break into someone else’s code. This paradigm was already anticipated in the original Zirkonium, which could receive and send OSC messages for fully externalised control of the spatialisation.

When working with Max/MSP – which is itself a modular framework to a certain extent – it is much easier to re-use existing software and libraries, since its community is built on sharing. Zirkonium MK2 provides a hybrid spatial audio rendering engine through modified versions of current VBAP and Ambisonics implementations. Simultaneous combinations of these techniques can be chosen according to their aesthetic properties. Furthermore, Zirkonium MK2 is optimised for the integration of third-party clients such as the ZirkOSC plugin (see the following section) or the Spatium interfaces (Penha and Oliveira, 2013). New project-based extensions can easily be linked with the system thanks to its distributed and open architecture.
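The two rendering families differ in what they compute per speaker. VBAP distributes a gain among the speakers nearest the source, as sketched earlier; first-order Ambisonics, in its traditional B-format form, instead encodes a source signal S at azimuth θ and elevation φ into four channels that are subsequently decoded for the whole array:

W = \tfrac{1}{\sqrt{2}}\,S, \qquad X = S\cos\theta\cos\varphi, \qquad Y = S\sin\theta\cos\varphi, \qquad Z = S\sin\varphi.

(These are the standard first-order equations, given here for orientation; the modified implementations in Zirkonium MK2 are not detailed in this paper.)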

ZirkOSC

The open-source software project ZirkOSC was initiated in 2012 by the Canadian composer Robert Normandeau, who has worked intensively with Zirkonium over a number of visits to the ZKM. The software was developed by the Groupe de recherche en immersion spatiale (GRIS) at the Faculté de musique of the Université de Montréal. ZirkOSC is an Audio Unit plugin that works with the original Zirkonium as well as with the Zirkonium MK2 system. It allows the user to record movements of sound sources in real time as automation data in the digital audio workstation (DAW) in which the plugin is launched. At the same time it forwards the positions to Zirkonium by means of the OSC protocol, while the corresponding audio tracks are connected via Jack or Soundflower. Recent developments have yielded a revised version which now includes VST support, an iPad app for remote control of the plugin, and new ways of arranging individual sources in groups.

The set-up described allows the composer to continue working on the actual sounds while implementing the spatialisation. It was used in a number of productions at the ZKM, by Normandeau himself, Gilles Gobeil, and Douglas Henderson. Since the resulting piece is strongly dependent on the respective DAW software, it is not well suited for archiving; this is mostly because project files from closed-source software companies are difficult to maintain over a long period of time, and there is no guarantee that the newest updates will always remain fully backward compatible. This problem can be countered by recording the spatialisation data into the underlying representation of the trajectory editor (similar to the solution described in IRCAM – Spatialisateur, n.d.). Together with the exported sound files, the spatialisation created with the plugin is then independent of the DAW and can be easily archived and recreated thanks to a text-based XML description.

Example works

This section presents a number of example works by composers who have made use of the Klangdom and Zirkonium.

Gottfried Michael Koenig

In 2010 we invited the German composer and Giga-Hertz prize winner Gottfried Michael Koenig to adapt one of his works for the ZKM Klangdom. Koenig decided on Funktion Rot (1968), which belongs to a series of works in which all sounds and all sound-transforming signals belong to the same (mathematical) function. “It seemed appropriate to distribute the appearance of the same or closely related sounds to different positions in the room. Funktion Rot belongs to a group of four pieces (Grün, Gelb, Orange, Rot) and is the formally most structured work within these four, which made it particularly suitable to me for this purpose” (Koenig, 2013). Before Koenig started to work on the adaptation he visited the ZKM and was introduced to the functionality of the Zirkonium software. He also listened to various Klangdom pieces that demonstrate the effect and capabilities of its hemispherical speaker array. “Based on this information and impression I distributed Funktion Rot’s various types of sound in relation to the speaker arrangement of the Klangdom. I divided these sound-types into five categories of Hauptklänge (main-sounds) and six categories of Nebenklänge (sub-sounds). Concerning the speakers of the Klangdom I discerned four concentric rings and two axes where I would position my sounds or along which I would let them move. What was left to do was to assign my sound categories (taking into account also their duration) to patterns, directions, and velocities of movements using groups of loudspeakers of different size and shape. The structural moments of the original score were adopted where possible; at least they were considered” (Koenig, 2013). Koenig created a score for the spatial movements in the form of a table. The table’s legend is shown in figure 3. It defines shortcuts for each position, for example points (“P”), directions (“N” = north, “O” = east, etc.), and quadrants (“Q”). The figure shows the working copy that we used in the realisation of Koenig’s spatial score with Zirkonium. The handwritten part denotes the respective Zirkonium coordinates.

Figure 3 Rot für den Klangdom im ZKM Karlsruhe

Regarding the differences between the original (four-channel) version of Funktion Rot and the new Klangdom version, the composer states:

The four-channel playback causes a spatial “equalisation”; superimposed sounds come from different directions and expose the complexity of the structure of the sound. In addition to that the Klangdom allows for further equalisation into more and smaller sonic areas (Schallinseln) as well as movement patterns, which, in addition, differentiate the sounds by speed and direction. (Koenig, 2013)

The Klangdom version premiered in 2010 as part of the Giga-Hertz-Award ceremony.

Video Example 1

Sabine Schäfer

The Topoph system developed by Sabine Schäfer and Sukandar Kartadinata in the 1990s has been described above. In 2013 Schäfer adapted several of her works that had originally been composed and spatialised with the Topoph system to the Klangdom. In the course of the adaptation she followed three different principles. In the first approach, which she used most frequently, she mapped the spatial structure of the original individual speaker layout to a hemisphere and represented the original speaker positions as virtual speakers in the Klangdom. These positions showed a great similarity to the original loudspeaker arrangements, which were variations of circles, straight lines, and twisted loops. Depending on the individual piece this resulted in about 10–30 virtual speakers; figure 4 shows one of these settings in Zirkonium MK2.

Video Example 2

Schäfer used the virtual speakers for playing back the original sound files. These sound files already contained spatialised sounds, which were now moving among the virtual speakers.

Figure 4 Screenshot of the trajectory editor of Zirkonium MK2. Each colour represents a group of related virtual speakers

The second approach was based on the first, with additional movement of groups of virtual speakers. In this way Schäfer created a superposition of the original movement contained in the sound files and the additional movement of the virtual speakers.

Video Example 3

This method is very efficient, since it builds on the already existing spatialisation, and it is quite popular among composers – in a later section we will see another example, by Jens Hedman. In her third approach Schäfer did not use the sound files created by the Topoph system. Instead she went back to the mono source material, before it had been spatialised by Topoph, and defined the movements exclusively in Zirkonium MK2; figure 5 shows an example. When converting her Topoph-based compositions to the Klangdom, Schäfer often combined all three approaches in a flexible way.

Figure 5 The path of a mono source

Bruno Friedmann

One quite popular strategy for sound diffusion and spatialisation is the idea of supporting musical gestures through spatialisation. To this end, the composer or performer often directly specifies the spatialisation that supports a given musical gesture. In a very interesting project conducted in 2013, Bruno Friedmann chose a different solution. He took a mono recording of Luciano Berio’s Sequenza III for woman’s voice (1965) and let the computer choose the appropriate spatialisation. Friedmann’s approach was based on audio descriptors, which provide an automatic numerical analysis of the musical content of the original recording. He used IRCAM’s Max/MSP external “ircamdescriptors~” (IRCAM, n.d.), which comprises around 50 descriptors, documented in Peeters (2004); Max/MSP in turn remote-controlled the spatialisation, which was realised by Zirkonium MK2. Friedmann aimed at a tight connection between the sound and the spatialisation. As a first step he manually defined the temporal boundaries of the relevant “musical gestures”: he divided the score into a number of consecutive parts, the shortest lasting around one second, the longer ones up to 20–30 seconds. He then assigned two different audio descriptors to each of the consecutive parts, so that each part and its respective musical gesture was analysed by its own set of descriptors. In the last step he defined how the numbers resulting from the descriptors’ analyses should be mapped to spatial movements. In some cases the numbers were mapped directly onto the space; in others they were filtered or otherwise treated algorithmically before being applied to the spatialisation.
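As an illustration of the principle (not Friedmann’s actual patch, which relied on the far richer ircamdescriptors~ set), the following Python sketch computes one classic descriptor, the spectral centroid, and maps it to an azimuth with a hypothetical log-frequency rule:

```python
import numpy as np

def spectral_centroid(frame, sample_rate):
    """'Centre of mass' of the magnitude spectrum, in Hz."""
    windowed = frame * np.hanning(len(frame))
    spectrum = np.abs(np.fft.rfft(windowed))
    freqs = np.fft.rfftfreq(len(frame), 1.0 / sample_rate)
    return float(np.sum(freqs * spectrum) / (np.sum(spectrum) + 1e-12))

def centroid_to_azimuth(centroid_hz, lo=200.0, hi=4000.0):
    """Hypothetical mapping rule: brighter sound -> further around the dome,
    log-scaled so equal musical intervals give equal angular steps."""
    x = np.clip(np.log(centroid_hz / lo) / np.log(hi / lo), 0.0, 1.0)
    return 360.0 * x

# One analysis frame of the recording (white noise as a stand-in here):
frame = np.random.randn(2048)
print(centroid_to_azimuth(spectral_centroid(frame, 44100)))
```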

György Kurtág Jr.

One of the first large events in which the Klangdom figured prominently was the festival Remembering Newtopia – Creating our Future, a Hungarian–German cooperation that took place in September 2007 (ZKM, n.d.). An interesting project by György Kurtág Jr. originated in the context of this festival: Lux Nox 4. Kurtág took an existing production and created a choreography of spatial instructions for it, using the breakpoint editor of the “classic” Zirkonium. Kurtág followed three quite different strategies for how musical gesture and spatialisation might be combined. In the first part of the piece the spatial movements proceed fast and rather abruptly and are not in sync with the musical gestures. Instead they form a completely separate layer, functioning as a counterpoint to the musical gestures and animating them. Figure 6 shows some of the Zirkonium instructions for this part. The figure illustrates that every second a new movement instruction is issued, each lasting exactly one second.

Video Example 4

In the next part of the piece Kurtág uses the more traditional method of diffusion. Here the movements are in sync with the gestures, with the effect of underlining and emphasising them. In the third and last part of the piece gestures and movements are again out of sync, but in the opposite way: here the spatial movements have a longer duration than the musical gestures. They develop very slowly, thus unifying the consecutive musical gestures and emphasising their connectivity.

Figure 6 György Kurtág Jr. – “Lux Nox 4”, screenshot of the Zirkonium session (“classic” breakpoint editor; comments added by the authors)

Ričardas Kabelis

Another more recent project was conducted by Ričardas Kabelis in 2013. Following a rather conceptual approach he created a series of small studies with noise as the only material. In most of the studies Kabelis used white, pink, and brown noise, sustained and completely unvarying; 

Video Example 5

in the rest of the studies he worked with loops of noise bursts, again completely unvarying.

Video Example 6

This approach was especially intriguing because the only variations in the sound emerged from the spatialisation. The listener’s attention was therefore fully drawn to the spatial movement of the sounds. The movement followed a kind of blossom shape of loops and circles propagating through the whole Klangdom. Kabelis started by defining the movements with just one or two lines of instructions, using the classic Zirkonium breakpoint editor. Afterwards the sessions were transferred to Zirkonium MK2, where he could modify the movements and accelerate and decelerate them. Kabelis used this to break the perfect synchronisation of two or more different sounds moving along symmetrical paths, resulting in special phasing effects of the spatial movements.

The Morning Line

On the occasion of the opening ceremony in September 2013, Ludger Brümmer customised his piece Repetitions for diffusion in the sound pavilion The Morning Line. The piece had originally been produced for the Klangdom and consists of five distinct four-channel stems which already comprise an intra-spatialisation in the form of transitions between adjacent channels. The Morning Line is a large outdoor sound pavilion and audio-visual performance system by Matthew Ritchie, Aranda\Lasch, and Arup AGU (Advanced Geometry Unit), commissioned by Thyssen-Bornemisza Art Contemporary (TBA21). Since 2008 it has been exhibited in Seville (Spain), Istanbul (Turkey), and Vienna (Austria), and since 2013 it has been located at the ZKM_Forecourt in Karlsruhe, Germany. The installation consists of four 3D and two 2D loudspeaker arrangements that Tony Myatt refers to as sound fields or rooms (see figure 7). Each of these rooms renders sound objects in a virtual hemisphere by means of the VBAP algorithm, similarly to the approach used by Zirkonium.

Figure 7

For this reason, the stems of Repetitions could easily be placed within the rooms with an offset in height and horizontal orientation. The room for each stem was selected with the intention that the listener should experience not only the spatial hearing within the room he or she currently occupies, but also the relationship of the different sound fields to each other.

Real-time diffusion and the use of control interfaces

To the same extent that electronics have liberated sound from its physical requirements, thereby enabling the existence of sounds that are freed from their place of origin, technology has also broken down the physical connection between interface and sound generation. The specific physical properties of the piano keyboard result to some extent from the assignment of each of the strings and hammers to one individual key. Such physical requirements led to the development of a different interface for the harp, for example, than for the flute. Through the introduction of digital signals, these dependencies can be dissolved, and movements of sounds can be controlled using any kind of interface. If sounds are triggered by abstract control signals alone, performers and composers have the freedom to experiment with instruments and sounds.

Current endeavours comprise the development of interfaces that accommodate the intuition and physical capabilities of users, while at the same time allowing for nuanced musical results. The design and implementation of interfaces is a lengthy and complex undertaking, influenced by traditional aspects as well as by technical and purely human factors.

How long did it take to develop the technology necessary to change the size of an object by spreading two fingers, as is possible on the iPhone? The development of such seemingly natural correspondences requires much experience, intuition, and cutting-edge technical resources.

Figure 8 Sensors on the wrist: Stevie Wishart – ‘The Sound of Gesture’

Another example of the use of cutting-edge developments could be seen at the Newtopia festival mentioned above. Several works were created specifically for the Klangdom, among them Around and above, weightless … by Todor Todoroff and The Sound of Gesture by Stevie Wishart. Todoroff and Wishart both used live diffusion for their pieces. They used various sensors – for example accelerometers, gyroscopes, a digital theremin, and contact microphones – to control various parameters of the sound creation and processing. The same sensor data, plus an additional real-time analysis of the sound, were used to control the spatialisation. The goal was, as Todoroff put it, to create a strong connection between the nature of the sounds and their manifestation in space.

Figure 9 Multi-sensor system: Todor Todoroff – ‘Around and above, weightless …’

The violinist Wishart used sensors on her bow hand. This way she could communicate with the computer and continue to play the piece even when her bow was not touching the strings (see figure 8). [2] Todoroff controlled the computer using accelerometers and positioning sensors together with a theremin. Through the simultaneous use of several controllers, he created complex patterns of control data like those of Wishart. These data patterns allow the performer to operate the computer in an intricate way, enabling the generation of vibrant sound and movement patterns (see figure 9). Other examples of advanced development can be seen in the artistic use of lasers. Through the use of laser light, dancers can produce sounds by touching the light beams, thereby translating their movement into sound and spatial position, as in Ludger Brümmer’s work Shine (see figure 10).

Figure 10 Csaba Horváth and Andrea Ladányi in Ludger Brümmer’s Shine

IRMAT 2.0

IRMAT 2.0 is a research project from the Hochschule für Musik Basel (HSM) that deals with the question of performance practice in contemporary electronic music. The research team is defining and developing concepts and experimental environments to enable performers of electronic and electroacoustic music to interact with sound more intuitively and naturally. 

The collaboration between ZKM and HSM resulted in the development of software for a large multi-touch screen surface that allows the performer to spatialise many sound sources concurrently and in real time. The goal was to create a meta-instrument to control the overall behaviour of the movement and dynamics rather than individual parameters like gain or x/y-position for each sound source. The implementation is based on a physics engine (a mass-spring model) in which the sound sources can either move around freely on a 2D plane or be attracted by virtual gravities arranged on predetermined grids. The user can set the characteristics of the sound sources on the multi-touch surface by adjusting attributes like friction, gravity, collision energy, and size. With the selectable interaction modes of force, push, or catch, the performer can decide whether to attract the sound sources or push them away by touching the surface, or to catch them and move them around the Klangdom. The IRMAT software sends out its data as an OSC stream to ZKM’s Zirkonium playback and spatialisation software, which applies the spatialisation calculations accordingly.
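The behaviour of such a source can be sketched in a few lines of Python (a toy model under the stated assumptions, not the IRMAT code itself): a point mass is pulled towards a grid point by a spring, slowed by friction, and nudged by touch impulses; the integrated position is what would be streamed to the spatialisation server.

```python
import numpy as np

class SourceBody:
    """A sound source as a point mass on a 2D plane: spring attraction
    towards a 'virtual gravity' point plus viscous friction."""

    def __init__(self, pos, anchor, mass=1.0, stiffness=4.0, friction=0.8):
        self.pos = np.array(pos, dtype=float)
        self.vel = np.zeros(2)
        self.anchor = np.array(anchor, dtype=float)  # grid point attracting the source
        self.mass, self.stiffness, self.friction = mass, stiffness, friction

    def push(self, impulse):
        """A 'push'-mode touch: an impulse applied away from the finger."""
        self.vel += np.asarray(impulse, dtype=float) / self.mass

    def step(self, dt=0.01):
        """One Euler integration step; the returned position would be
        forwarded to the spatialisation software as an OSC message."""
        force = -self.stiffness * (self.pos - self.anchor) - self.friction * self.vel
        self.vel += force / self.mass * dt
        self.pos += self.vel * dt
        return self.pos

body = SourceBody(pos=[1.0, 0.0], anchor=[0.0, 0.0])
body.push([0.0, 2.0])        # a touch sends the source swinging
for _ in range(5):
    print(body.step())       # it spirals back towards its anchor
```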

The IRMAT 2.0 Klangdom application was first used by Brümmer at the 2013 Beyond 3D-Festival, where he interpreted one of his own 24-channel acousmatic pieces, Repetitions. The application and an early prototype of a large multi-touch screen are available to the IMA’s guest artists for experimentation.

Live interpretation using Zirkonium

When the concept of the Klangdom was first developed, one important aspect was that it should make it easy to play the existing repertoire of electroacoustic pieces. A large number of existing pieces are either stereo or use speakers placed in a circle around the audience. At a very basic level they are directly compatible with the Klangdom, in that their original speaker locations are subsets of the Klangdom layout. However, we were looking for ways to exploit the additional features provided by the Klangdom in order to enhance the presentation of these pieces. Ramakrishnan (2009, section 8.1) has described a very effective method for translating existing multichannel fixed-media pieces to the Klangdom. This requires the DAW session of the piece to still be accessible in a state just before its final mix. Using this method, the composer puts several sub-mixes as separate layers onto the Klangdom and moves these layers around independently of each other, using Zirkonium. This is usually done prior to the performance, so that the result is a fixed-media Klangdom version of the piece. Often, however, the DAW session is no longer accessible and the only material available is the final mixdown of the piece. In this case we tend to do the diffusion live, especially if the composer is not present to authorise a given adaptation of his or her piece. In this way the composition remains untouched and the diffusion is part of the performance. Having only the final mix at our disposal, we usually use only a single layer in the Klangdom. We define the original speaker channels as virtual speakers in this layer. The relative spatial arrangement of the original speakers is kept intact, but we constantly change the position and the width of the group as a whole. This is done live using Zirkonium and additional software such as Max/MSP or SuperCollider, together with a MIDI or OSC controller such as a fader box or a touch screen like the Lemur interface. There are typically four parameters available, namely the horizontal and vertical position and the horizontal and vertical span. We believe that this method is perfectly suited to many traditional pieces that have been composed for a ring of eight speakers or a quad setup.
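The four-fader mapping can be made concrete with a small Python sketch (our illustration; the scaling curves actually used at the ZKM may differ): the original channel layout is treated as a rigid group of virtual speakers that is rotated, lifted, widened, or narrowed as a whole.

```python
def place_group(channels, h_pos, v_pos, h_span=1.0, v_span=1.0):
    """Map the four live-diffusion parameters onto a group of virtual speakers.

    channels       -- list of (azimuth, elevation) pairs in degrees:
                      the piece's original speaker layout
    h_pos, v_pos   -- faders 1/2: rotate the group, raise or lower it
    h_span, v_span -- faders 3/4: widen (>1) or narrow (<1) the group
    """
    placed = []
    for azimuth, elevation in channels:
        placed.append(((h_pos + h_span * azimuth) % 360.0,
                       max(0.0, min(90.0, v_pos + v_span * elevation))))
    return placed

# A classic ring of eight speakers at ear level:
ring = [(i * 45.0, 0.0) for i in range(8)]
# Rotate the whole ring by 90 degrees, lift it by 30, and halve its width:
print(place_group(ring, h_pos=90.0, v_pos=30.0, h_span=0.5))
```

Each resulting (azimuth, elevation) pair is then sent to Zirkonium as a virtual-speaker position, so the piece’s internal channel relationships stay intact while the group as a whole moves through the dome.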

More recent fixed-media pieces can also be candidates for this type of live diffusion. One example is Jens Hedman’s piece THE BEAST WITH TWO HEADS (2012). Hedman played this piece on the ZKM Klangdom during a festival in June 2014. He used a fixed 12-channel rendering of the piece, comprising an 8-channel ring at ear level and a 4-channel ring at the ceiling. However, the 12 channels were not simply placed onto dedicated speakers in the Klangdom. Instead, we put the channels as virtual speakers onto the Klangdom, and Hedman controlled the horizontal orientation of the complete virtual 12-channel structure. What made this piece especially suited to this treatment was the fact that it actually consists of two pieces that can be played simultaneously. Hedman therefore used four faders to control the diffusion: one for the horizontal orientation of each 12-channel group and one for the gain of each group. This worked remarkably well and might serve as a model for future projects. It also demonstrates how stem-based compositions may be played live on the Klangdom, similarly to the multichannel diffusion of stem-based compositions on the BEAST system as described by Wilson and Harrison (2010, section 6.3).

Traditional diffusion in the Klangdom

This section addresses another method, which adapts ideas from systems like the BEAST (Harrison, 1999) or the Acousmonium (GRM, n.d.) and transfers them to the Klangdom. This method mainly targets stereo pieces. The standard Klangdom speaker layout is axially symmetrical along the longitudinal axis, so opposite speakers form left/right pairs that can naturally be used as stereo pairs. An obvious approach would use several horizontal rings of eight speakers, each on top of the other. We have had positive experiences using three such rings as the core of the resulting loudspeaker instrument. This instrument can be extended by applying methods from the Acousmonium and similar systems, such as turning selected speakers outwards to the wall in order to obtain indirect sound, applying filtering or delay effects to certain speakers, adding speakers of a different model, and adding speakers at locations away from the Klangdom surface (closer to the audience or more distant, even outside the main concert hall, audible through an open door). The performer responsible for the diffusion controls all the levels in the traditional Acousmonium manner from a mixing desk. Experts in the field of acousmatic music, such as Francis Dhomont, Daniel Teruggi, and Gilles Gobeil, have used the Klangdom in this way.

Fortunately, it is very easy to combine pieces played in this way with dedicated Klangdom pieces in the same concert. In our concert practice in the ZKM_Cube it has proved very easy to switch between different pieces, from such a setup to a standard Klangdom setup and back again, even when some turning of speakers was involved.

Conclusion

Ignited by the possibilities of digital control, and in parallel with developments in the consumer domain, multichannel technology has made itself particularly heard among composers of electroacoustic music. It is clearly evident that new approaches are being tested and adapted for the normal concert setting. In addition to the Klangdom, the traditional Acousmonium (Jaschinski and Mielke-Gerdes, 1999) and wave field synthesis (Berkhout, 1988) must also be mentioned here, as well as modified, extended, and combined versions of spatial sound projection. Against this background it seems promising to combine, for example, Ambisonics and VBAP techniques in one and the same speaker environment, using each technique for the purpose to which it is best suited. Another such extension has already been mentioned with respect to the sound pavilion The Morning Line and its combination of multiple VBAP Klangdom units.

The compositions and productions presented in this article demonstrate the interest of composers in spatial sound environments, as well as the desire to control spatially mapped parameters through interactive gestural devices. Yet the musical potential of spatiality is only beginning to unfold. The ability to listen consciously to and make use of space will continue to develop, larger installations will become more flexible and more readily available, and the capabilities of the parameter space will be further explored through research and artistic practice. This will make it easier for composers and event organisers to stimulate and challenge the audience’s capacity for experience, as the recently introduced object-based Dolby Atmos standard indicates. Composers will also find more refined techniques and aesthetics that take advantage of the full power of spatial distribution. If this happens, audiences will follow, looking for new excitement in the perception of sound and music.


Artists who have realised works at the ZKM Klangdom include:

Lars Åkerlund, Natasha Barrett, Jérôme Bertholon, Maurilio Cacciatore, Omer Chatziserif, Marko Ciciliani, Alvin Curran, Michael Edwards, Aaron Einbond, Arturo Fuentes, James Gille, Gilles Gobeil, Helene Hedsund, John Helsberg, Douglas Henderson, Michael Iber, Christoph Illing, Hiromi Ishii, Wilfried Jentzsch, Shinji Kanki, Orestis Karamanlis, Mr Koenders, Panayiotis Kokoras, Phivos-Angelos Kollias, Joachim Krebs, Yannis Kyriakides, Leigh Landy, Chelsea Leventhal, Fernando Lopez-Lezcano, Eric Lyon, Damian Marhulets, Daniel Mayer, Andrea Molino, Chikashi Miyama, Valerio Murat, Lise-Lotte Norelius, Robert Normandeau, Matthias Ockert, Junya Oikawa, Åke Parmerud, Hugo Paquete, Sean Reed, Oliver Schneller, Alexander Schubert, Bernd Schultheis, Gerriet K. Sharma, Johannes S. Sistermanns, Gerhard Stäbler, Reto Stadelmann, Kotoka Suzuki, Andrea Szigetvári, Hans Tutschku, Shing-Kwei Tzeng, Horacio Vaggione, Trevor Wishart, Gerhard Wolf Stieg, YongJoon Yang

Special thanks to:

Bruno Friedmann, Sukandar Kartadinata, Gottfried Michael Koenig, and Sabine Schäfer


Copyrights

All images: © ZKM; Figures 1 and 2: Photographer: Bernhard Sturm

All videos: © ZKM; the materials have been used with kind permission of the artists


References

Berkhout, A.J. (1988) A Holographic Approach to Acoustic Control. Journal of the Audio Engineering Society, 36, pp. 977–995.

Bregman, A.S. (1990) Auditory Scene Analysis: The Perceptual Organization of Sound. Cambridge, Mass.: MIT Press.

Cherry, E.C. (1953) Some Experiments on the Recognition of Speech, with One and with Two Ears. The Journal of the Acoustical Society of America, 25(5), pp. 975–979.

GRM (n.d.) The acousmonium. An orchestra of loudspeakers. [Online]. Available at: <http://www.inagrm.com/accueil/concerts/lacousmonium> [Accessed 27 November 2013].

Harrison, J. (1999) Diffusion: Theories and Practices, with Particular Reference to the BEAST System. [Online]. Available at: <http://cec.sonus.ca/econtact/Diffusion/Beast.htm> [Accessed 27 November 2013].

IRCAM (n.d.) Max Sound Box [Online]. Available at: <http://forumnet.ircam.fr/product/max-sound-box/?lang=en> [Accessed 27 July 2014].

IRCAM (n.d.) Spatialisateur [Online]. Available at: <http://www.ircam.fr/1043.html?&L=1> [Accessed 27 November 2013].

Jaschinski, A. and Blankenburg, W. (1995) Chor und Chormusik. Die Musik in Geschichte und Gegenwart. Kassel: Bärenreiter.

Jaschinski, A. and Mielke-Gerdes, D. (1999) Bayle, François. Die Musik in Geschichte und Gegenwart. Kassel: Bärenreiter.

Kartadinata, S. (n.d.) The Topoph24 [Online]. Available at: <http://www.topophonien.de/2.5-w-e-topoph24.html> [Accessed 27 November 2013].

Koenig, G.M. (2013) E-mail correspondence with Holger Stenschke.

Neukom, M. (2003) Signale, Systeme und Klangsynthese: Grundlagen der Computermusik (Zürcher Musikstudien). Bern: Peter Lang.

Penha, R. and Oliveira, J.P. (2013) Spatium, Tools for Sound Spatialization. [Online]. Available at: <http://cycling74.com/project/spatium-%C2%B7-tools-for-sound-spatialization/> [Accessed 8 August 2014].

Peeters, G. (2004) A Large Set of Audio Features for Sound Description (Similarity and Classification) in the CUIDADO Project. [Online]. Available at: 
<http://recherche.ircam.fr/anasyn/peeters/ARTICLES/Peeters_2003_cuidadoaudiofeatures.pdf> [Accessed 27 July 2014].

Peters, N., Lossius, T., Schacher, J., Baltazar, P., Bascou, C., and Place, T. (2009) A Stratified Approach for Sound Spatialization. In Gouyon, F., Barbosa, Á., and Serra, X. (eds) SMC 2009. Proceedings of the 6th Sound and Music Computing Conference. 23–25 July 2009 Casa Da Música, Porto, pp. 219–224.

Pulkki, V. (1997) Virtual Source Positioning Using Vector Base Amplitude Panning. Journal of the Audio Engineering Society, 45, pp. 456–466.

Ramakrishnan, C. (2009) Zirkonium: Noninvasive Software for Sound Spatialisation. Organised Sound, 14, pp. 269–276.

Ramakrishnan, C., Gossmann, J., Brümmer, L., and Sturm, B. (2006) The ZKM Klangdom. In Proceedings of the 2006 Conference on New Interfaces for Musical Expression. IRCAM – Centre Pompidou, Paris, France, pp. 140–143.

Roads, C. (1996) The Computer Music Tutorial. Cambridge, Mass.: MIT Press.

Schäfer, S. (2007) TopoPhonien. Heidelberg: Kehrer.

Schäfer, S. (n.d.) Homepage. Available at: <http://www.sabineschaefer.de/index_en.php> [Accessed 29 November 2013].

Scherliess, V. and Forchert, A. (1996) Konzert. Die Musik in Geschichte und Gegenwart. Kassel: Bärenreiter.

Wilson, S. and Harrison, J. (2010) Rethinking the BEAST: Recent Developments in Multichannel Composition at Birmingham ElectroAcoustic Sound Theatre. Organised Sound, 15(3), pp. 239–250.

ZirkOSC (n.d.) Audio Unit plug-in to control the Zirkonium. [Online]. Available at: 
<http://code.google.com/p/zirkosc/> [Accessed 27 November 2013].

ZKM (n.d.) Remembering Newtopia: Creating our Future. [Online]. Available at: 
<http://on1.zkm.de/zkm/musikfestivals/newtopia> [Accessed 3 December 2013].


Notes

[1] The Institute for Music and Acoustics (IMA) is a production and research institute of the ZKM | Center for Art and Media Karlsruhe. The early development team comprised Chandrashekar Ramakrishnan, Bernhard Sturm, and Joachim Gossmann.

[2] Video material for Stevie Wishart’s The Sound of Gesture can be seen at <http://youtu.be/5OJw0s9ou-s>.


About the Authors: 

Ludger Brümmer. Born and raised in Werne, Germany. Master’s in psychology/sociology at the University of Dortmund. Composition studies with Nicolaus A. Huber and Dirk Reith at the Folkwang Hochschule Essen. Collaboration with choreographer Susanne Linke and the Nederlands Dans Theater on “Ruhrort”, with his work “Riti Contour” for orchestra. International performances at GRM Paris and at ICMCs in San Jose, Tokyo, Banff, and Thessaloniki. Visiting scholar at CCRMA, Stanford University; teaching assistant at the Folkwang Hochschule, TU Berlin, and the School of Design Karlsruhe; research fellow at Kingston University; lecturer in composition at the Sonic Arts Research Centre Belfast. Since 2003 head of the Institute for Music and Acoustics at ZKM | Karlsruhe and guest professor at the School of Design. Member of the Academy of the Arts, Berlin.

Götz Dipper, born 1966 in Stuttgart, studied cello at the Hanover Academy of Music and at the Mozarteum in Salzburg. He then turned his interests to computer music, with a focus on sound installations. Since 2001 he has been a member of the artistic/scientific staff at the ZKM | Institute for Music and Acoustics.

David Wagner studied Media Technologies at the Technical University of Ilmenau and received his engineering degree in 2011. In his studies he focused on audio technologies and interactive music applications, while gaining early working experience at the Fraunhofer IDMT and Celemony Software GmbH. In 2012 he started working for the ZKM | IMA as a software developer.

Holger Stenschke (*1975 in Kaiserslautern, Germany) studied Sound Engineering at the Art University in Graz and Audiodesign at Basel Music University, where he received his diploma in 2006. From 2003 to 2007 he was employed as an audio consultant at ‘Fabrica’, Benetton’s creative think-tank near Venice. In 2007 he founded ‘avcreatives’, a media production network specialised in contemporary music, film, and theatre. Since April 2009 he has been Tonmeister at the ZKM | Zentrum für Kunst und Medientechnologie in Karlsruhe. In his role as sound director he is responsible for many concerts, festivals, and productions of the Institute for Music and Acoustics. Since 2011 he has worked as a research associate at Basel Music University, conducting his own research and coordinating projects in the field of human–machine interaction. In 2013 he started his PhD in Science and Technology of the Arts at the Universidade Católica in Porto.

Jochen Arne Otto studied systematic musicology with Uwe Seifert. With a focus on cognitive musicology, he is interested in the cognitive basis of musical meaning, particularly in the context of body representations. He is currently a project coordinator and editor at the Institute for Music and Acoustics of the ZKM | Center for Art and Media Karlsruhe.