Reinventing Harmony in Splintered Echoes (2014)

DOI: 10.5920/divp.2015.46

Abstract

This paper discusses the new harmonic possibilities enabled through the implementation of Sethares’ theory of the dissonance curve in MAX and its use in the live electronic composition Splintered Echoes, created with Monty Adkins (composer), Jonny Axelsson (composer and percussionist) and Adrian Gierakowski (programmer).


Introduction

Much of the research underpinning the composition of Shards (Sundin & Adkins, 2012) has been informed by the notions of consonance and dissonance outlined by William A. Sethares in his book ‘Tuning, Timbre, Spectrum, Scale’. Sethares writes that the notion of sensory consonance and dissonance has two implications. Firstly, individual complex tones will have an intrinsic or inherent dissonance:

Since dissonance is caused by interacting partials, any tone with more than one partial inevitably has some dissonance. This is a stark contrast to all the previous notions, in which consonance and dissonance were properties of relationship between tones. [1]

Secondly, consonance and dissonance depend not only on the interval between tones but also on the spectrum of the tones used:

Since intervals are dissonant when the partials interact, the exact placement of these partials is crucial. [2]

The latter is something Pierce was already aware of more than thirty years earlier, in the 1960s, when working with arbitrary scales and their corresponding sounds. [3] Building on these experiments, our research examines how sounds with other kinds of spectral relationships work together to derive a new sense of harmonic consonance and dissonance in our electroacoustic compositional practice.

Sethares Dissonance Curves

Composer and theorist Harry Partch begins chapter nine of his ‘Genesis of a Music’ with the following:

According to Galileo, “agreeable consonances are pairs of tones which strike the ear with a certain regularity; this regularity consists in the fact that the pulses delivered by the two tones, in the same interval of time, shall be commensurable in number, so as not to keep the eardrum in perpetual torment, bending in two different directions in order to yield to the ever-discordant impulses.” The fairly “perpetual” torment which is our heritage in Equal Temperament has long obscured this aural axiom. [4]

Partch’s work and research is based on a tradition dating back to the ancient Greeks, the Pythagoreans and Ptolemy in particular, and continuing through music theorists and mathematicians such as Zarlino, Rameau, Galileo, Kepler and Helmholtz into the early 1900s. Partch was interested in creating music based on scales with more than 12 notes per octave. He built his own instruments, such as the Chromelodeon, a reed organ, in order to play the music he had composed using a scale of 43 steps per octave. He tuned his reed organ to this 43-tone-per-octave scale, with the focus on Just Intonation, with “no other aid than the ability of the ear to distinguish pulsations ‘commensurable in number’ and those which bend its tympanum ‘in two different directions’”. [5] By doing this he, as summarised by Sethares, “classified and categorised all the 43 intervals in terms of their comparative consonance”. [6]

A dissonance curve portrays perceived consonance and dissonance as a function of the musical interval between tones. Helmholtz’s roughness curve [7], Plomp and Levelt’s consonance curve [8] and Partch’s ‘One-Footed Bride’ [9] are all examples of dissonance curves. Each of these curves shows how the ear perceives sounds with harmonic spectra, or with no overtones at all (pure sine tones), as sensory consonant at certain traditionally “consonant” scale steps, provided the scale is tuned in Just Intonation rather than equal temperament. The points of maximum sensory consonance occur at these scale steps, which demonstrates the correspondence between spectrum and scale. Sethares’ dissonance curve, however, is mathematically constructed to portray perceived consonance and dissonance versus musical interval for sounds containing any spectrum. A comparison of Sethares’ dissonance curve (see Figure 2) with an experiment carried out by Kameoka and Kuriyagawa [10] shows that Sethares’ calculations are consistent with their experimental results (see Figure 1). In Kameoka and Kuriyagawa’s third experiment, presented in 1969, chords of two identical complex tones were used. One tone, containing eight partials, was fixed at 440 Hz, while the other was played against it at frequencies from 440 Hz to 880 Hz (an octave), divided into fifteen steps. The degree of dissonance was judged experimentally at each step (the circles in Figure 1) and compared with values calculated in advance (the solid line). The experimental minima and maxima occurred at the same steps as those calculated in advance.

Figure 1: Kameoka and Kuriyagawa’s experiment, in which chords of two identical complex tones were used. The solid line represents the calculated values and the circles represent the experimental values. The graph has been turned upside-down compared to the original in order to clarify the similarities with Sethares’ calculations (Figure 2). 

In order to compare the two, I used the same partials and respective amplitudes as Kameoka and Kuriyagawa had used in their experiment and calculated the corresponding curve with Sethares’ algorithm. I turned the graph of the Kameoka and Kuriyagawa experiment upside down, so that its y-axis portrays the degree of dissonance instead of consonance and the two graphs can be compared directly. Both dissonance curves showed minima and maxima at the same steps. My conclusion was that Sethares’ dissonance curve agrees with Kameoka and Kuriyagawa’s research.

Figure 2: Sethares’ dissonance curve using the same input as Kameoka and Kuriyagawa’s experiment (see Figure 1). 
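To make the calculation behind Figures 1 and 2 concrete, the following is a minimal Python sketch of the pairwise roughness model on which Sethares’ dissonance curve is based. It is an illustration rather than the code of our MAX tools: the numerical constants follow the parametrisation Sethares publishes alongside the book (treated here as an assumption), and the eight-partial test tone with its amplitude roll-off is a stand-in rather than Kameoka and Kuriyagawa’s exact values.

```python
import numpy as np

def diss_measure(freqs, amps):
    """Sensory dissonance of one set of partials, summed over all pairs.
    Constants follow Sethares' published parametrisation of the
    Plomp-Levelt roughness curve (an assumption, not a reference)."""
    d_star, s1, s2 = 0.24, 0.0207, 18.96     # interval of maximum roughness
    c1, c2, a1, a2 = 5.0, -5.0, -3.51, -5.75
    order = np.argsort(freqs)
    f = np.asarray(freqs, float)[order]
    a = np.asarray(amps, float)[order]
    total = 0.0
    for i in range(len(f)):
        for j in range(i + 1, len(f)):
            s = d_star / (s1 * f[i] + s2)    # critical-band scaling at the lower partial
            fdif = f[j] - f[i]
            total += min(a[i], a[j]) * (c1 * np.exp(a1 * s * fdif) +
                                        c2 * np.exp(a2 * s * fdif))
    return total

def dissonance_curve(freqs, amps, ratios):
    """Dissonance of the spectrum sounded against a copy of itself
    transposed by each interval ratio in `ratios`."""
    return [diss_measure(list(freqs) + [r * f for f in freqs],
                         list(amps) * 2) for r in ratios]

# Two identical eight-partial tones, one fixed at 440 Hz, the other swept
# over an octave (cf. Kameoka and Kuriyagawa's third experiment).
partials = [440.0 * n for n in range(1, 9)]
amps = [0.9 ** n for n in range(8)]          # illustrative roll-off only
ratios = np.linspace(1.0, 2.0, 600)
curve = dissonance_curve(partials, amps, ratios)
```

Plotting `curve` against `ratios` produces a curve of the same general shape as Figure 2, with dips at the intervals the spectrum favours.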

Sethares states that “a spectrum and a scale are related if the dissonance curve for the spectrum has minima at the scale steps”. [11] Sethares’ dissonance curve allows further investigation of the relationship between inharmonic spectra and scales. As he puts it:

The idea of relating spectra and scales is useful to the electronic musician who wants precise control over the amount of perceived dissonance in a musical passage. For instance, non-harmonic sounds are often extremely dissonant when played in the standard 12-tet tuning. By adjusting the intervals of the scale, it is often possible to reduce (more properly, to have control over) the amount of perceived dissonance. It can also be useful to the experimental musician or the instrument builder. Imagine being in the process of creating a new instrument with an unusual (i.e., non-harmonic) tonal quality. How should the instrument be tuned? To what scale should the finger holes (or frets, or whatever) be tuned? The correlation between spectrum and scale answers these questions in a concrete way. [12]

Sethares’ research is based on using the original, analysed sound as the basic sound material for a piece. Therefore he suggests creating a “virtual” instrument:

Sound begins in a digital sampling keyboard (a sampler) as a waveform stored in a computer-like memory. This is processed, filtered and modulated in a variety of ways, and then spread across the keyboard so that each key plays back the “same” sound, but at a different fundamental frequency. [13]

The dissonance curve is used as a tool to explore new harmonies and to work with intervals based on inharmonic spectra in both a linear and a vertical manner – as a melodic scale as well as in sensory consonant chords.

How to establish sensory consonance

Sethares summarises, with reference to Norman Cazden, that sensory consonance and dissonance play no role in the important aspects of musical movement. Traditional functional musical consonance does, but it is irrelevant when composing with sounds whose spectra differ from the simple integer ratios of a harmonic spectrum, and with scales constructed from these non-harmonic spectra. The notion of creating a whole new functional musical consonance based on new scales and spectra is an appealing one, but it is beyond the scope of this paper. The task has therefore been to explore different scales constructed from the spectra of non-harmonic sounds and to see how their scale steps work together when played as chords. The idea has also been to use the scale steps and chords as a way of structuring the harmonic development in Shards (2012) and Splintered Echoes (2014).

A musical interval is generally considered to be consonant if it sounds pleasant or restful; a consonant interval has little or no musical tension or tendency to change. Dissonance is the degree to which an interval sounds unpleasant or rough; dissonant intervals generally feel tense and unresolved. [14]

In order to create a harmony based on an inharmonic spectrum, the first step was to establish some kind of consonance. This was needed in order to structure the sound material on a vertical axis, as chords. Using chords requires a way of creating intervals that, when played simultaneously, interact in a sensory consonant manner. Creating sensory dissonant chords is no challenge; creating a chord from two or more sounds with an inharmonic overtone spectrum that is not perceived as sensory dissonant is. It is here that Sethares’ dissonance curve was a helpful tool for exploring further the correlation between inharmonic spectra and sensory consonance. Sethares suggests, in his research related to tuning and scales, that one must do the following to get the most accurate results from the dissonance curve (a code sketch of step 4 follows the list below): [15]

1. Choose a sound;

2. Find the spectrum of the sound;

3. Simplify the spectrum;

4. Draw the dissonance curve and choose a set of intervals (a scale) from the minima;

5. Create an instrument that can play the sound at the appropriate scale steps;

6. Play music.
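As a sketch of step 4, the local minima of a dissonance curve can be read off as candidate scale steps. The helper below, reusing the `ratios` and `curve` values from the earlier sketch, is a simplified assumption: in practice very shallow or closely spaced minima are also pruned by eye and by ear before a scale is fixed.

```python
def scale_from_minima(ratios, curve):
    """Return the interval ratios at local minima of a dissonance curve.
    A simplified stand-in for choosing scale steps 'from the dips'."""
    steps = []
    for i in range(1, len(curve) - 1):
        if curve[i] < curve[i - 1] and curve[i] <= curve[i + 1]:
            steps.append(ratios[i])
    return steps

# scale_steps = scale_from_minima(ratios, curve)
# For a harmonic spectrum the deepest dips fall near familiar Just
# intervals (e.g. 1.5 and 2.0); for an inharmonic spectrum they do not.
```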

Steps 1 to 4 were the starting point for Splintered Echoes. The piece then further explores how to develop music using Sethares’ dissonance curve, first as part of the initial stage of the compositional process and then, at a more structural level, through transpositions and interpolations of the analysed dissonance curves.

Sethares Patches in MAX

The following patches (see Figures 3 and 4), built in MAX, were developed from ‘From dissonance to consonance’, a research project supported by the Royal College of Music in Stockholm and led by Paulina Sundin (2009-2011). The Sethares algorithm for calculating dissonance curves was first implemented by Sten-Olof Hellström and Paulina Sundin as a Max/MSP patch that worked in non-real time as an analysis tool. The project was then extended (2011-14) by Monty Adkins and Adrian Gierakowski, who re-implemented the algorithm as a Max external and created patches that allow real-time analysis and resynthesis, and which can therefore be used as tools for live electronic composition. The main patch (see Figure 3) is divided into three steps: 1) spectral analysis of the chosen sound; 2) calculation of Sethares’ dissonance curve based on the spectral analysis; 3) finding the most suitable scale steps from the dips in the curve. The additive synthesis patch (see Figure 4) is designed so that transformations between different spectra are possible. One may import frequency and amplitude data from the analysis patch for the spectrum one wants to use (Partial set 1) and for the spectrum one wants to transform towards (Partial set 2). In order to prevent the transformation between two spectra from sounding like a glissando, transposition of either spectrum is possible (for instance, transposing Partial set 2 to match the fundamental of Partial set 1).

The interpolation of each partial from one set to the other can be controlled using a break point function. The global movement from one data set to the other is managed with a single (automatable) slider, but each partial follows its own break point function over the course of the transition. In this way, different partials can be programmed to reach their destinations at different times. Another important feature is the algorithm for matching partials for interpolation, based on the interval between them. The algorithm finds the pairs of partials from the two sets that are closest to each other and then checks how far apart they are. If the interval between them is below a given (user-variable) threshold, they are paired and both their frequencies and amplitudes are interpolated during the transition from one set to the other. Partials that are not matched are simply faded in or out. This avoids partials sliding over large intervals during interpolation. Partial matching can be disabled in case a glissando effect is desired.
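The matching step can be sketched as follows. This is an illustration of the idea described above rather than the patch’s actual code; the greedy pairing strategy, the cents-based distance and the 100-cent default threshold are assumptions.

```python
import math

def match_partials(set_a, set_b, threshold_cents=100.0):
    """Pair each partial in set_a with its nearest unused partial in set_b,
    keeping only pairs whose interval lies below the (user-variable) threshold.
    set_a, set_b: lists of (freq_hz, amp) tuples."""
    pairs, used_b = [], set()
    for ia, (fa, _) in enumerate(set_a):
        candidates = [(abs(1200.0 * math.log2(fb / fa)), ib)
                      for ib, (fb, _) in enumerate(set_b) if ib not in used_b]
        if not candidates:
            continue
        cents, ib = min(candidates)          # nearest remaining partial
        if cents <= threshold_cents:
            pairs.append((ia, ib))           # frequency and amplitude interpolated
            used_b.add(ib)
    unmatched_a = [i for i in range(len(set_a)) if i not in {a for a, _ in pairs}]
    unmatched_b = [i for i in range(len(set_b)) if i not in used_b]
    return pairs, unmatched_a, unmatched_b   # unmatched partials fade out / in

def interpolate_pair(pa, pb, t):
    """Linear interpolation between two matched partials at position t in [0, 1];
    in the patch, t follows each partial's own break point function."""
    (fa, aa), (fb, ab) = pa, pb
    return fa + t * (fb - fa), aa + t * (ab - aa)
```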

Figure 3: ‘Discurve’ – main analysis patch for calculating Sethares dissonance curves from sound input.

Shards

Shards (2012) is a composition by Sundin and Adkins for objects and electronics. Two glass vases were chosen as source objects due to their rich but differing spectra. The recordings of the vases being struck were analysed and the resulting spectra simplified, producing two dissonance curves. The composition uses a variety of glass objects (including wine and cocktail glasses, marbles and the original two vases) as sonic materials to be processed through the dissonance curves.

Figure 4: Additive synthesis patch to enable sound interpolation. 

Multiple articulations of the objects were explored to develop a rich gestural vocabulary. The dissonance curves filter the gestural and textural materials generated, creating unified harmonic fields. The structure of the piece is then governed by the transposition of each dissonance curve onto its constituent harmonics and by the interpolation between the two data sets originating from the analysis of each vase. Through this method a rigorous and perceivable harmonic language is developed that is unique to the composition and the objects used within it.

 

Splintered Echoes 

In this second piece, the technical and artistic ambition of the project was extended further. Following the compositional success of Shards, which acted as a proof-of-concept work, Sundin and Adkins collaborated with percussionist and composer Jonny Axelsson and programmer Adrian Gierakowski to produce a new 20-minute work for percussion and electronics. The work is constructed in five movements and is the result of an intensive period of collaboration. As a result of this working method there is no score for the work. Sundin, Adkins and Axelsson have created a compositional framework for the piece which acts as the skeletal frame for a bounded improvisation. This framework comprises the harmonic foundation, sound world, gesture types and instruments to be used for each movement. Sundin and Adkins visited Axelsson’s percussion studio in XXX, Sweden, and made some initial recordings of a wide variety of instruments. Whilst these materials were being edited and analysed to see what harmonies could be developed from them, Sundin and Gierakowski finalised the analysis patch in MAX. The final patch optimised the analysis of sounds and the way in which the frequencies and amplitudes of partials are displayed. The front page of the patch enables the extraction of dissonance curve scale steps from the analysis of any sound.

Figure 5: Main page of final analysis patch. 

The patch allows the user to see which is the most prominent partial in the analysis window and then to transpose the spectrum of the sound from this fundamental onto the other scale steps. In addition, the amplitudes of the resonators and filters applied to the sound can be independently adjusted (see Figure 6).

Figure 6: Sub-window of patch that allows amplitude scaling of filters and resonators.
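A sketch of that transposition step is given below, assuming the scale steps are expressed as ratios relative to the analysed fundamental. The function name, the example figures and the per-partial gain list (mirroring the independent amplitude controls of Figure 6) are hypothetical, not taken from the patch itself.

```python
def transpose_onto_scale(partials, scale_ratios, gains=None):
    """For each scale step, return the (frequency, amplitude) pairs that a bank
    of resonators/filters would be tuned to. `partials`: (freq_hz, amp) tuples
    from the analysis, anchored on the most prominent partial."""
    gains = gains if gains is not None else [1.0] * len(partials)
    return [[(f * ratio, a * g) for (f, a), g in zip(partials, gains)]
            for ratio in scale_ratios]

# e.g. an inharmonic spectrum whose most prominent partial is 220 Hz,
# transposed onto three hypothetical scale steps derived from its curve
banks = transpose_onto_scale([(220.0, 1.0), (563.0, 0.6), (947.0, 0.3)],
                             [1.0, 1.27, 1.62])
```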

Following a detailed analysis of many of the instruments recorded by Axelsson, interesting spectra that could form the basis of individual sections within the work began to emerge. These spectra were derived from the analysis of five instruments: a broken temple block hit with wooden sticks; a gong; two small bells (which Axelsson performed with one in each hand, using the opening and closing of his hands as filters); a drum; and the recording of a bell tree transposed down four octaves. We used the dissonance curve data to produce a large data set of frequencies. Figure 7 shows a working spreadsheet of the frequency data, with colour coding to highlight perceptually close pitches that could act as modulation points between data-sets.
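The colour coding in Figure 7 was done by hand in the spreadsheet; the underlying idea of flagging perceptually close pitches between two data-sets can be sketched as below, where the 25-cent tolerance is an assumed value rather than the one actually used.

```python
import math

def modulation_points(scale_a_hz, scale_b_hz, tolerance_cents=25.0):
    """List pitch pairs from two frequency data-sets that lie within a small
    interval of one another; such near-coincidences can act as pivot points
    when 'modulating' between data-sets."""
    points = []
    for fa in scale_a_hz:
        for fb in scale_b_hz:
            cents = abs(1200.0 * math.log2(fb / fa))
            if cents <= tolerance_cents:
                points.append((fa, fb, round(cents, 1)))
    return points
```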

During this period we also worked on the performance tool. This was again coded in MAX and utilised the iPad MIRA app to control the capture, processing and playing of sound materials.

Figure 7: Frequency data used in Splintered Echoes.

The performance of the work involved the live manipulation of the percussionist’s sounds, filtered through the scale steps derived from the analysis, as well as the triggering of materials made in the studio. The MIRA app programmed by Gierakowski also enabled transitions between different scales, allowing ‘modulation’ between the frequency data-sets. Each partial in a given set of pitch data could be controlled independently to govern the rate of transition to a new set (see Figure 8).

It is this ability to ‘modulate’ and transform from one set of analysis data to another that distinguishes this kind of work from more traditional spectral music. The data allows relational sonorities to be established and prominence to be given to the fundamental. The creation of differing centres of specific sonorities allows large-scale formal processes to be developed that rely on these relationships and on their transitions through common scale-steps.

Figure 8: Sub-patch for controlling partial movement from one pitch data-set to another.

The final stage in the realisation of the work was to collaborate with Axelsson to develop ‘families’ of percussion for each movement. Each of the five movements (which play continuously) was based around one of the four chosen pitch data-sets (the ‘dry bell’ scale was not used in the final composition), with the final movement freely mixing elements from the previous four. Some of the movements also superimpose the data-sets, enriching the ‘harmonic’ tension in the work rather than simply presenting ‘harmonic fields’ within which events happen. In each of the movements the aim was to use instruments that not merely fitted the chosen spectra but also complemented and added to them.

Figure 9: Instruments chosen for the ‘gong scale’.

The ‘gong scale’ movement uses one big cowbell; Almglocken (d-sharp); one metal plate (low f-sharp); one Burma gong (low f); and two Javanese gongs (low d and e-1/4-tone). On all of the instruments a variety of hard and soft sticks was employed in order to emphasise either the lower or the higher harmonics.

Figure 10: Instruments chosen for the ‘temple block scale’. 

The ‘temple block scale’ movement uses various small woodblocks as well as two medium-sized ceramic flower pots, which worked perfectly with this scale. In addition, five Mongolian cymbals were used; these were bowed and delicately struck with soft sticks.

Figure 11: Instruments chosen for the ‘low bells scale’. 

The ‘low bells scale’ was the richest and, as a result, the most complex to deal with compositionally. The analysis software produced a wealth of interesting spectra, which had to be carefully filtered with regard to their amplitude in order to produce a useful pitch data-set. This movement used three Korean gongs; one Burma gong (low f); two dubacci (high d and g); two pairs of Tibetan bells; and one bell tree.

Figure 12: Instruments chosen for the ‘drum scale’. 

Finally, the ‘drum scale’ uses four octobans; one African drum; three medium congas; and one bass drum. The sound of the African drum and octobans was also interesting when the resonance was stopped by covering them with a blanket. The opening movement of the work makes use of this, the blanket being removed later in the work.

Figure 13: Broken temple block that is the basis for movement 3. 

Sound Examples 2, 3 & 4.


Future work

Having created Shards, which utilised sounds processed in the studio together with amplified objects played live, and then Splintered Echoes, which uses live electronic processing of a range of percussion instruments as well as triggered sound files, the next project in this cycle will be a composition by Sundin and Adkins commissioned by the Stockholm Saxophone Quartet (2015). The move to predominantly pitch-based instruments has its challenges, but it offers an exciting opportunity to demonstrate the validity of this system, extending it from a means of developing bespoke harmonic relationships within a studio environment to one that can be used for all types of new music.


Notes

[1] Sethares, W. A., Tuning, Timbre, Spectrum, Scale, Springer-Verlag London Limited, (1998), p. 75.

[2] Ibid. 1, p. 49.

[3] Pierce, J. R., ‘Attaining Consonance in Arbitrary Scales’, Journal of the Acoustical Society of America, Vol. 40, No 1, (1966), p. 249. http://dx.doi.org/10.1121/1.1910051

[4] Partch, H., Genesis of a Music – an account of a creative work, its roots and its fulfilments, Second Edition, Da Capo Press, New York, (1974), p. 138.

[5] Ibid. 4, p. 138.

[6] Ibid. 4, p. 83.

[7] Helmholtz, H., On the Sensations of Tone, (1877); trans. A. J. Ellis, Dover, New York, (1954).

[8] Plomp, R. and Levelt, W. J. M., ‘Tonal consonance and critical bandwidth’, Journal of the Acoustical Society of America, Vol. 38, (1965), pp. 548-560. http://dx.doi.org/10.1121/1.1909741

[9] Partch, H., Genesis of a Music, Da Capo Press, New York, (1974).

[10] Kameoka, A. & Kuriyagawa, M., ‘Consonance theory, part II: Consonance of complex tones and its computation method’, Journal of the Acoustical Society of America, Vol. 45, No. 6, (1969b), pp. 1464-1465. http://dx.doi.org/10.1121/1.1911624

[11] Ibid. 1, p. 89.

[12] Ibid. 1, p. 90.

[13] Ibid. 1, p. 130.

[14] Sethares, W. A., ‘Local consonance and the relationship between timbre and scale’, Journal of the Acoustical Society of America, Vol. 94, No. 3, Part 1, (September 1993), p. 1218. http://dx.doi.org/10.1121/1.408175

[15] Ibid. 1, p. 124.


About the Author:

Paulina Sundin is a freelance composer of instrumental and electroacoustic music based in Sweden. Her PhD research sought to reinvent harmony in electroacoustic music through the work of William Sethares. This research, funded by Kulturbryggan, Statens Musikverk and Helge Ax:son Johnsons Stiftelse, led to the original Beyond Pythagoras compositional project (https://beyondpythagoras.wordpress.com), to this symposium and to the publication of selected papers in this journal edition. Sundin’s paper documents her theoretical research and ongoing collaboration with Monty Adkins, Jonny Axelsson and Adrian Gierakowski.