Residential symposium at Royal Holloway, University of London, Egham - 12-14 April 2007

 

PROGRAMME

Day 1

Thursday 12 April 2007
4.15-4.45pm: Tea/coffee
7-9pm: Dinner

Day 2

Friday 13 April 2007
10.45-11.15am: Tea/coffee
1.30-2.30pm: Lunch
4.45-5.00pm: Tea/coffee
7-9pm: Dinner

Day 3

Saturday 14 April 2007
11.00-11.30am: Tea/coffee
11.30am-1.00pm: Final discussion
1.00pm onwards: Lunch

ABSTRACTS

 

Daniel Barolsky

Josef Hofmann's Beethoven and "Beethoven's" Hofmann

On April 7, 1938, the pianist Josef Hofmann gave a performance of Beethoven’s Waldstein Sonata op. 53 in the Casimir Hall at the Curtis Institute. Recorded by the Curtis librarian and subsequently released on LP and CD, this performance offers us a unique perspective on this pianist’s distinctive style.

In the third movement, Hofmann isolates and projects three notes in an utterly unconventional way. This paper will consider what kinds of questions we can ask of this moment. How does Hofmann’s interpretation comment on the musical phrase structure or steer us through a unique formal narrative? What does this moment show us about contrasting performances and changing perceptions of Beethoven? My goal is to consider how recordings give us a window into the creative world of canonic performers and to recognize how their performances both reflect and contribute to conversations on the history of musical style, analysis, and aesthetics.

Daniel Barolsky is currently a postdoctoral fellow at Lawrence University in Appleton, Wisconsin where he has taught courses on Glenn Gould and on the relationship between performance and analysis. Daniel’s work focuses on the creative role of the performer, especially his/her influence on musical criticism and analysis. In particular, he is interested in the reciprocal relationship between the development of compositional styles and the changing styles of performance. Daniel received his BA in History from Swarthmore College and his PhD in Music History and Theory from the University of Chicago.


Michael Casey and Tim Crawford

Automatic analysis of recorded collections in OMRAS2

The new OMRAS2 project (Goldsmiths and Queen Mary, University of London, 2007-10, funded by the EPSRC) aims to provide user-oriented tools, principally intended for music researchers, that integrate many methods and resources of music information retrieval (MIR) on large, distributed collections of digital music, both audio and score. It is hoped that OMRAS2 will lead to a major enrichment of the way musicology is carried out. In this talk, we will show some examples of the kind of collection-level analysis that is being undertaken using the tools within OMRAS2.

OMRAS2 is closely associated with CHARM and the Mazurkas project, and has been looking at repeat structures in recorded performances of Chopin’s piano music. In particular, we have been able to determine automatically, and very quickly, which pianists performed which repeats in 1400 recordings of the Mazurkas. Similar tools may be used on other recorded repertoires, such as the vast and growing number of downloaded MP3s consisting largely of older music sampled or ‘mashed-up’ to create new works. Certain aspects of historical performance practice, as interpreted by modern players from written treatises and scores, may also be studied in this way; a basic requirement is to identify places where a performer adds notes that are not written in the score, for example when playing trills and other ornaments.
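
The OMRAS2 tools themselves are not reproduced here, but the kind of repeat detection described above can be sketched in a few lines. The following illustrative example (assuming librosa, a hypothetical audio file and hypothetical passage timings, and not the OMRAS2 implementation) treats a notated repeat as taken when the audio on either side of the repeat point aligns cheaply in chroma space:

```python
# Illustrative sketch (not the OMRAS2 implementation): decide whether a
# performer takes a notated repeat by comparing the audio either side of
# the repeat point. If the repeat is played, the two passages should be
# near-duplicates in chroma space. File name and timings are hypothetical.
import librosa
import numpy as np

def segment_chroma(y, sr, start, end, hop=2048):
    """Chroma features for the excerpt between start and end (seconds)."""
    excerpt = y[int(start * sr):int(end * sr)]
    return librosa.feature.chroma_cqt(y=excerpt, sr=sr, hop_length=hop)

def repeat_taken(audio_path, first_pass, second_pass, threshold=0.25):
    """Return True if the passage appears to be played twice.

    first_pass / second_pass are (start, end) times in seconds where the
    two statements of the repeated section would fall if the repeat is
    observed; threshold is an assumed per-frame DTW cost cut-off.
    """
    y, sr = librosa.load(audio_path, sr=None, mono=True)
    a = segment_chroma(y, sr, *first_pass)
    b = segment_chroma(y, sr, *second_pass)
    # Align the two chroma sequences; a low normalised cost suggests the
    # second passage restates the first, i.e. the repeat was performed.
    D, wp = librosa.sequence.dtw(X=a, Y=b, metric='cosine')
    cost_per_frame = D[-1, -1] / len(wp)
    return cost_per_frame < threshold

if __name__ == '__main__':
    print(repeat_taken('mazurka_op17no4_rubinstein.wav',
                       first_pass=(0.0, 35.0), second_pass=(35.0, 70.0)))
```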

Professor Michael Casey is the Principal Investigator for OMRAS2 at Goldsmiths, University of London, and Director of the Media Futures Laboratory at Goldsmiths Digital Studios. His research investigates new ways to organise large multimedia content collections for research in digital humanities, end user interfaces and new creative production processes. Michael Casey is also a composer who employs the techniques of his research in his own musical works.

Tim Crawford is Senior Lecturer in Computational Musicology at Goldsmiths. From the mid 1970s he worked for 15 years as a lute and theorbo player before devoting himself to the musicological study of lute music; he is currently co-editor of the works of the 18th-century lutenist, Sylvius Leopold Weiss. In the conviction that computers could offer the means of doing such work in ways that were not hitherto possible, he was active at King’s College, London, in setting up the original OMRAS project (1999-2002) with Mark Sandler and others, as well as the series of International Symposia on Music Information Retrieval (ISMIR), of which the eighth is due to be held in Vienna in September 2007.


Martin Clayton

Analysing video recordings of musical performances

It is hardly surprising that most methods for analysing recordings of music involve the manipulation of audio data only. There is more to musical performance, however, than the sounds we produce, and recordings can and do frequently include a visual record of musicians at work. Video recordings present many opportunities – and challenges – for the study of musical performance, and in this paper I will present some examples of the ways in which video analysis can complement and clarify analyses of audio recordings and contribute to a broader understanding of musical performance.

Martin Clayton is Senior Lecturer in Music at the Open University, UK. He has written on numerous topics including rhythm and metre and the analysis of early field recordings. Clayton’s publications include Time in Indian Music (Oxford, 2000), The Cultural Study of Music (co-editor, New York, 2003) and Music, Time and Place (Delhi, 2007). He is currently director of the AHRC-funded research project Experience and Meaning in Music Performance, working on the synthesis of ethnographic research with audio and video-based analysis of musical performance.


Per Dahl

Tidying up tempo variations in Grieg's op. 5 no. 3

I have used the commercial gramophone recordings of Grieg’s “Jeg elsker Dig!” in order to study how the interpretation of this song has changed in the 20th century. In my analysis I wanted to see if I could find empirical data for Robert Philip’s convincing statement that ‘the most basic trend of all was a process of tidying up performance.’

There have been great individual differences all through the century, but the arithmetic mean tempo of performances has remained the same throughout the century. By defining the tempo of the first verse line as the basic tempo of the interpretation, I was able to calculate the deviations for each singer’s performance and obtain a deviation profile of the whole song. In addition I could compare deviations in each section of the song across all recordings, independent of their basic tempo. I then linked the data from each section together to obtain a total of the standard deviations in each era of recording technology. This total showed a decline across all four eras, indicating a tidying up of performances.
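
The deviation calculation described above can be made concrete with a minimal sketch. The following Python fragment uses invented tempo values and era labels purely for illustration; it normalises each recording's section tempi by its basic tempo and compares the spread of the resulting deviations era by era:

```python
# Minimal sketch of the deviation calculation described above, using
# invented numbers purely for illustration. Each row is one recording's
# section-by-section tempo (beats per minute); the first section is the
# "basic tempo" against which deviations are measured.
import numpy as np

# era label, then section tempi (hypothetical values)
recordings = [
    ('acoustic',  [60, 54, 48, 63, 58]),
    ('acoustic',  [66, 58, 50, 70, 62]),
    ('electric',  [62, 57, 53, 64, 60]),
    ('lp',        [61, 58, 56, 63, 60]),
    ('digital',   [60, 58, 57, 61, 59]),
]

def deviation_profile(tempi):
    """Per-section deviation from the basic (first-section) tempo, as a ratio."""
    tempi = np.asarray(tempi, dtype=float)
    return tempi / tempi[0] - 1.0

# Pool the deviation profiles by era and compare their spread: a shrinking
# standard deviation from era to era would indicate a 'tidying up'.
by_era = {}
for era, tempi in recordings:
    by_era.setdefault(era, []).append(deviation_profile(tempi))

for era in ('acoustic', 'electric', 'lp', 'digital'):
    profiles = np.vstack(by_era[era])
    print(f'{era:9s}  std of deviations = {profiles.std():.3f}')
```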

Dr Per Dahl (b. 1952) was educated at the University of Trondheim (musicology, philosophy and psychology), and has been working in Stavanger since 1979 (Music Conservatoire, now the Department of Music and Dance). He has been a consultant to The Norwegian Institute of Recorded Sound, Stavanger, since its opening in 1985. He became Associate Professor in 1986 and was Principal (Rector) of Stavanger University College from 2000 to 2003, during which period SUC laid the groundwork for becoming a fully fledged university. He completed his dissertation, entitled ‘Jeg elsker Dig! Lytterens argument. Grammofoninnspillinger av Edvard Griegs op. 5 no. 3’ (‘I love thee! The listener’s approach. Recordings of Edvard Grieg’s op. 5 no. 3’).


Simon Dixon

Extraction of musical timing from audio recordings

Studies of expressive music performance require precise measurements of the parameters (such as timing, dynamics and articulation) of individual notes and chords. Particularly in the case of the great performers, the only data usually available to researchers are audio recordings and the score, and digital signal processing techniques are employed to estimate the higher level “control parameters” from the audio signal. In this presentation, I describe two techniques for extracting timing information from audio recordings. The first is finding the times of the beats in the music, for which the interactive beat tracking and annotation system BeatRoot was developed; BeatRoot was rated best in audio beat tracking in the MIREX 2006 evaluation. The second is audio alignment, implemented in the software MATCH, whereby multiple interpretations of a musical excerpt are synchronised, giving an index of corresponding locations in the different recordings which can be used to transfer content-based metadata from one recording to another. MATCH can also be used for following a live performance, and is currently being extended to implement an automatic page turner for musicians.
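
Neither BeatRoot nor MATCH is reproduced here, but the two operations can be illustrated with a short sketch using librosa as a stand-in (file names are hypothetical):

```python
# Sketch of the two operations described above, using librosa as a
# stand-in for BeatRoot and MATCH (which are separate tools). File names
# are hypothetical.
import librosa
import numpy as np

def beat_times(path):
    """Estimate beat times (seconds) for one recording."""
    y, sr = librosa.load(path, sr=None, mono=True)
    tempo, beats = librosa.beat.beat_track(y=y, sr=sr)
    return librosa.frames_to_time(beats, sr=sr)

def align(path_a, path_b, hop=1024):
    """Map times in recording A to corresponding times in recording B."""
    ya, sra = librosa.load(path_a, sr=None, mono=True)
    yb, srb = librosa.load(path_b, sr=None, mono=True)
    ca = librosa.feature.chroma_cqt(y=ya, sr=sra, hop_length=hop)
    cb = librosa.feature.chroma_cqt(y=yb, sr=srb, hop_length=hop)
    _, wp = librosa.sequence.dtw(X=ca, Y=cb, metric='cosine')
    wp = wp[::-1]  # warping path from start to end
    ta = librosa.frames_to_time(wp[:, 0], sr=sra, hop_length=hop)
    tb = librosa.frames_to_time(wp[:, 1], sr=srb, hop_length=hop)
    return ta, tb  # e.g. annotations at ta in A transfer to tb in B

if __name__ == '__main__':
    print(beat_times('hofmann_1938.wav')[:8])
    ta, tb = align('hofmann_1938.wav', 'schnabel_1935.wav')
    print(np.interp(30.0, ta, tb))  # where is A's 30-second point in B?
```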

Simon Dixon is a lecturer at the Centre for Digital Music, in the Department of Electronic Engineering at Queen Mary, University of London. He has a PhD in computer science from the University of Sydney, as well as AMusA and LMusA diplomas in classical guitar. He lectured in computer science at Flinders University of South Australia, before moving to Vienna to work as a research scientist in the Intelligent Music Processing and Machine Learning Group at the Austrian Research Institute for Artificial Intelligence (OFAI) from 1999 to 2006. His research interests focus on the extraction and processing of musical content (particularly rhythmic content) in audio signals, and he has published over 40 papers covering areas such as tempo induction, beat tracking, onset detection, automated transcription, genre classification and the measurement and visualisation of expression in music performance.


Nicolas Donin

Studying recordings of performances, capturing the musical experience of the analyst

Performance analyses mostly convey not only data about the musicians under study, but also elements of another individual musical experience: that of the analyst. As a violinist, a singer or a pianist, he/she will have been able to inspect the performance in fine detail thanks to his/her own instrumental skills; as a listener, he/she will have developed a very specific listening practice comprising repeated listening to chosen extracts, close reading of the score, comparison of different performances of a piece, and so on. Once summarized in graphs and verbal assertions, the dynamics of that specific kind of musicological experience often remain hidden behind analytical facts. This would not be a problem if the reader of the analysis were then able to understand the facts by re-doing some of the decisive musical operations which gave birth to the original analyst’s conclusions: for example, listening to the same sounds while reading the same duration graphs, or having the same possibilities of varying the visualization interfaces. But, alas for scientific (or simply musical) contestation, one can rarely re-do what the analyst did.

Nowadays, promising tools are being developed for performance analysis, but in parallel some kind of applied phenomenology of the analyst’s musical activity must be encouraged – not as a way to strengthen the narcissism of the musicologist, but as a matter of methodological hygiene for the growing sub-discipline of performance analysis. As a first step towards this ambitious programme, these issues will be illustrated by a few examples drawn from recent work carried out by the Analysis of Musical Practices research group at IRCAM, in the fields of both performance studies and the design of multimedia tools for music analysis.

Nicolas Donin is Head of the IRCAM Analysis of Musical Practices research group (www.ircam.fr/apm.html). His research centres on the history and practices of music analysis and attentive listening from the end of the 19th century, and on the analysis of contemporary musical practices, particularly composition and performance. His recent work has been published in Acta Musicologica, Circuit, Musiques contemporaines, Musurgia and Revue d’histoire des sciences humaines.


Andrew Earis

Semi-automated extraction of expressive performance information from acoustic recordings of piano music

Measurable features of expressive musical performance include timing, dynamics, articulation and pedalling. This paper concerns the measurement of expressive timing and dynamics in acoustic recordings of piano music, with reference to a digitized musical score of the work being performed. A multi-stage, semi-automated expression extraction process is described. Initial synchronisation of score and recording is achieved using a simple manual beat-tapping system. The continuous wavelet transform (CWT) is then employed, with a Morlet wavelet, to measure the tapped beat times more precisely, and any errors are then corrected manually. Precise onset times and dynamics of all notes and chords in the performance are then calculated using the CWT. The different analysis parameters are described in detail. Sample results of the analysis of expression in keyboard music by Bach and Chopin are given.
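
As an illustration of the wavelet step (not the author's implementation), the following sketch refines a roughly tapped beat time by locating the strongest transient in a Morlet-CWT magnitude sum within a small window around the tap; it assumes a SciPy version in which signal.cwt is still available, and a hypothetical file name:

```python
# Illustrative sketch (not the author's implementation): refine a roughly
# tapped beat time by looking for the strongest transient in a Morlet-CWT
# magnitude sum within a small window around the tap. Assumes SciPy's
# signal.cwt/morlet2 are available; the file name is hypothetical.
import numpy as np
import soundfile as sf
from scipy import signal

def refine_tap(path, tap_time, window=0.08, w=6.0):
    y, sr = sf.read(path)
    if y.ndim > 1:                      # mix to mono
        y = y.mean(axis=1)
    lo = int((tap_time - window) * sr)
    hi = int((tap_time + window) * sr)
    seg = y[lo:hi]
    # CWT over a handful of scales; summing the magnitudes gives a crude
    # measure of transient energy at each sample.
    freqs = np.array([200.0, 400.0, 800.0, 1600.0])
    widths = w * sr / (2 * np.pi * freqs)
    coeffs = signal.cwt(seg, signal.morlet2, widths, w=w)
    strength = np.abs(coeffs).sum(axis=0)
    return (lo + int(np.argmax(strength))) / sr

if __name__ == '__main__':
    print(refine_tap('bach_prelude.wav', tap_time=12.34))
```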

Andrew Earis graduated in 2000 with a first class honours degree from Imperial College London and the Royal College of Music. He has recently completed a PhD at the University of Manchester on the measurement and analysis of expression in recorded piano performance. Within the RCM’s Centre for Performance History, Andrew works as Research Associate in the Museum and is responsible for coordinating the joint research into the acoustics of historical harpsichords with Imperial College. He was on the staff of the AHRC Centre for the History and Analysis of Recorded Music (CHARM) from 2005-06. In addition to his research and teaching work, Andrew is Director of Music of St Sepulchre-without-Newgate, the National Musicians’ Church in the City of London, Treasurer of the Church Music Society, a visiting organ tutor at Dulwich College, and Editor of the newsletter and website of the Royal Musical Association (RMA). Andrew has given organ recitals in venues including King’s College Chapel, Cambridge and Westminster Abbey. He is a Fellow of Trinity College London and an Associate of the Royal College of Organists.


Dorottya Fabian

Early recordings of Chopin's Nocturne in E flat op. 9 no. 2: tools to measure individual difference

A series of case studies explores the usefulness of the tapping method versus note-onset measurements, as well as different spectrogram analyses of bowing and vibrato. The goal of the analyses is twofold: 1) to investigate the accuracy and usefulness of the tools and 2) to account for individual difference.

The results indicate that 1) error (lack of consistency) within my own tapping is not significant, and 2) the correlation between tapped and measured durations is not significantly different when graphed but is significantly different when a psychoacoustic criterion is considered.
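
A minimal sketch of the two checks, with invented numbers and an assumed 30 ms perceptual threshold rather than the author's actual criterion, might look like this:

```python
# Sketch of the two checks described above, with invented numbers: how
# well do tapped beat durations track measured note-onset durations, and
# how many of the differences exceed a perceptual (JND) threshold? The
# 30 ms threshold is an assumption, not the author's criterion.
import numpy as np
from scipy import stats

measured = np.array([0.52, 0.55, 0.61, 0.49, 0.58, 0.66, 0.71, 0.54])  # seconds
tapped   = np.array([0.53, 0.57, 0.60, 0.50, 0.60, 0.64, 0.74, 0.52])

r, p = stats.pearsonr(measured, tapped)
diff_ms = np.abs(measured - tapped) * 1000.0
jnd_ms = 30.0

print(f'correlation r = {r:.3f} (p = {p:.3f})')
print(f'mean |difference| = {diff_ms.mean():.1f} ms')
print(f'{(diff_ms > jnd_ms).sum()} of {len(diff_ms)} beats exceed {jnd_ms:.0f} ms')
```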

Regarding bowing and vibrato, the practice observed in these recordings of late 19th- and early 20th-century artists (Sarasate, Drdla, Elman, Heifetz, Casals) seems to support only marginally the view that posits a decisive change in vibrato usage at the turn of the century. The Spectrogram program is effective for the study of vibrato, but SIA Acoustic Tools may be better for scrutinizing portamento.

Dorottya Fabian lectures in musicology at the University of New South Wales in Sydney. Her research first focused on J. S. Bach performance on record, but recently she has started studying the performance of 19th-century repertoire. Apart from analyzing interpretative trends evidenced in sound recordings she is also involved in studying listeners’ perception of performance parameters. Currently she has an Australian Research Council grant to model expressiveness in baroque and romantic music performance. In collaboration with Emery Schubert, she uses the novel methodology of continuously measuring listeners’ responses to features as the commercially recorded performances unfold. The perceptual data is subjected to time series analysis and then compared with acoustic measurements of the same interpretations to identify stylistic characteristics.


Werner Goebl

Aural and visual display of expression in recordings of a Rachmaninoff Prelude

Computational methods for extracting expressive performance data from audio recordings have developed considerably over the past years, so that obtaining reliable information on onset timing and intensity from a CD has become comparatively easy. However, when it comes to making sense of the obtained tempo and loudness information, we still rely mainly on static tempo and loudness curve graphs. In order to accommodate the main property of music – its evolution over time – display techniques should make use of the animation capabilities of computers. In this presentation, I will explore and discuss ways to aurally and visually display performance information, introducing computational tools that use animated graphics in parallel with the music. I will start with an expressive click track (with and without loudness and metrical information), follow with animated curve displays, and finally show the “Performance Worm”. To demonstrate these techniques, recordings of Rachmaninoff’s Prelude op. 23 no. 6 will be used.
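
A minimal sketch of a tempo-loudness trajectory animation in the spirit of the Performance Worm (not Goebl's software, and with invented tempo and loudness values) can be written with matplotlib:

```python
# Minimal sketch of a tempo-loudness trajectory animation in the spirit of
# the 'Performance Worm' (not the original software). Tempo and loudness
# values are invented; a real analysis would take them from beat tracking
# and a loudness measure, smoothed over a short window.
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.animation import FuncAnimation

rng = np.random.default_rng(0)
n = 120                                   # one point per beat
tempo = 72 + np.cumsum(rng.normal(0, 0.8, n))       # beats per minute
loudness = 65 + np.cumsum(rng.normal(0, 0.5, n))    # dB (arbitrary)

fig, ax = plt.subplots()
ax.set_xlim(tempo.min() - 2, tempo.max() + 2)
ax.set_ylim(loudness.min() - 2, loudness.max() + 2)
ax.set_xlabel('tempo (BPM)')
ax.set_ylabel('loudness (dB)')
trail, = ax.plot([], [], alpha=0.4)       # fading tail of the worm
head, = ax.plot([], [], 'o', markersize=8)

def update(i):
    start = max(0, i - 20)                # show the last 20 beats
    trail.set_data(tempo[start:i + 1], loudness[start:i + 1])
    head.set_data([tempo[i]], [loudness[i]])
    return trail, head

anim = FuncAnimation(fig, update, frames=n, interval=100, blit=True)
plt.show()
```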

Werner Goebl is currently a post-doctoral fellow at McGill University in Montréal under the auspices of Caroline Palmer. He holds master’s degrees in both piano performance (Vienna Music University) and Systematic Musicology (University of Vienna), and obtained his PhD at the University of Graz with Richard Parncutt. Over a period of six years, he worked in the research group of Gerhard Widmer at the Austrian Research Institute for Artificial Intelligence in Vienna. His work addresses aspects of expressive performance from analysis to visualisation, as well as aspects of piano acoustics. For his current research on movement analysis in piano performance at McGill, he received a prestigious Erwin Schrödinger Fellowship from the Austrian Science Fund. For more information, see http://www.ofai.at/~werner.goebl.


Nicolas Gold and Neta Spiro

In search of performance motives - a computational approach

Motives have been fundamental to many approaches to the analysis of music. Here we concentrate on the term’s fundamental interpretation, as a unit identified through its repetition, and explore it in the new context of performance. This combination leads to the investigation of ‘performance motives’, which we explore by searching for repeated temporal patterns in recorded performances.

Computational techniques of data analysis are tested and harnessed in this CHARM project in order to develop an approach to identifying repeated temporal patterns for the study of motives. Here we present initial considerations, methodologies, and results of a pattern-matching algorithm implemented for Chopin’s Etude op. 10 no. 3.
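
The kind of pattern matching involved can be illustrated with a small sketch (not the project's algorithm): normalise each window of beat durations to remove overall tempo, then report pairs of windows whose timing shapes nearly coincide. The durations below are invented:

```python
# Sketch of one way to search for repeated temporal patterns (not the
# algorithm used in the project): normalise each window of inter-onset
# intervals to remove overall tempo, then report pairs of windows whose
# timing shapes are very close. Durations are invented.
import numpy as np

durations = np.array([0.50, 0.55, 0.70, 0.48, 0.52, 0.51, 0.56, 0.72,
                      0.49, 0.53, 0.60, 0.58, 0.66, 0.64, 0.62])  # seconds per beat

def normalised_windows(d, size=4):
    """Tempo-normalised timing shape of every window of `size` beats."""
    wins = np.array([d[i:i + size] for i in range(len(d) - size + 1)])
    return wins / wins.mean(axis=1, keepdims=True)

def repeated_patterns(d, size=4, tol=0.05):
    """Pairs of non-overlapping window positions with near-identical shape."""
    wins = normalised_windows(d, size)
    pairs = []
    for i in range(len(wins)):
        for j in range(i + size, len(wins)):       # skip overlapping windows
            if np.abs(wins[i] - wins[j]).mean() < tol:
                pairs.append((i, j))
    return pairs

print(repeated_patterns(durations))
```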

Nicolas Gold is a lecturer in the Software Engineering Group in the Department of Computer Science at King’s College London. He is Learning and Teaching Co-ordinator for the School of Physical Sciences and Engineering. He is also the deputy director of the Centre for Research in Evolution, Search and Testing (CREST), which undertakes research in a wide range of topics in software engineering. His interests are in software evolution “in the small” and “in the large” and in music and computing. Prior to joining King’s, he was a lecturer in the School of Informatics at the University of Manchester.

Neta Spiro is a member of the CHARM project on Motive in Performance, in which, together with John Rink, she is exploring the relationships between music theory, performance and perception. Previously, Neta studied for her BA/Mus at St Edmund Hall, Oxford (1997-2001), and a Masters in Cognitive Science and Natural Language at the University of Edinburgh (2001-2). Her PhD, on the perception of phrasing, was completed under the joint supervision of Rens Bod (University of Amsterdam) and Ian Cross (Centre for Music and Science, University of Cambridge, UK).


Colin Gough

A quantitative analysis of pitch, amplitude and timbre fluctuations in recorded violin tones

In parallel with an acoustical analysis of the fluctuations in pitch, amplitude and timbre of violin vibrato sounds and the influence of both player and instrument on such fluctuations, we have addressed the development of violin vibrato over the last 150 or so years, using evidence from recorded performances. Our analysis includes the use of Praat software to extract numerical data on the temporal variations in pitch, and audio filtering and additional curve-fitting software to extract quantitative information on vibrato rates and amplitudes of selected recordings. In particular, we compare the use of vibrato in early recordings by Joachim, Auer and Kreisler with that of modern performers. The preliminary results of these studies will be prefaced with a few remarks on why vibrato makes such a distinctive contribution to the perceived quality of the sound of the violin.
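
The curve-fitting stage can be sketched as follows: a sinusoid is fitted to a short stretch of an extracted pitch track to estimate vibrato rate and extent. The pitch track here is synthesised rather than taken from Praat, and the code is an illustration rather than the authors' software:

```python
# Sketch of the curve-fitting stage described above: fit a sinusoid to a
# short stretch of an extracted pitch track to estimate vibrato rate (Hz)
# and extent (cents). The pitch track here is synthesised; in practice it
# would come from Praat or similar.
import numpy as np
from scipy.optimize import curve_fit

def vibrato_model(t, rate, extent_cents, phase, centre_hz):
    """Pitch in Hz as a sinusoidal deviation (in cents) around a centre."""
    cents = extent_cents * np.sin(2 * np.pi * rate * t + phase)
    return centre_hz * 2.0 ** (cents / 1200.0)

# Synthetic 'measured' pitch track: 6.5 Hz vibrato, +/-40 cents around A4.
t = np.linspace(0.0, 1.0, 200)
noise = np.random.default_rng(1).normal(0, 0.5, t.size)
measured = vibrato_model(t, 6.5, 40.0, 0.3, 440.0) + noise

p0 = [6.0, 30.0, 0.0, float(measured.mean())]       # initial guess
params, _ = curve_fit(vibrato_model, t, measured, p0=p0)
rate, extent, _, centre = params
print(f'vibrato rate ~ {rate:.2f} Hz, extent ~ +/-{abs(extent):.1f} cents, '
      f'centre ~ {centre:.1f} Hz')
```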

Colin Gough is an Emeritus Professor of Physics at the University of Birmingham, where he led the UK's largest interdisciplinary research group on high temperature superconductors, in addition to teaching courses and undertaking research in Musical Acoustics. As a violinist, he led the NYO, the University Haywood String Quartet and a number of other chamber and orchestral groups. In 2001 he was awarded the American Acoustical Society's Science Writing Award for Professionals in Acoustics for an article on violin acoustics and has just completed the article on Musical Acoustics for a major new Handbook on Acoustics, to be published by Springer this summer.


Serge Lacasse

Phonographic narrative strategies in Eminem's "Stan"

Despite a large number of writings questioning the relevance of applying narrative theory to music (e.g. Nattiez 1990), very few scholars have attempted to approach popular music from a narratological perspective. Moreover, the few who have (e.g. Frith, Hirschi) seem to have neglected a crucial aspect of the genre: its phonographic nature. At the same time, scholars who do study narrativity in non-literary media have also neglected the case of audio recording (e.g. Marie-Laure Ryan). In this paper, I will propose a model for analysing popular music recordings from a narratological perspective that takes into account recorded music’s phonographic mode of existence. Using Eminem’s "Stan" as a case study, I shall expand notions proposed by Simon Frith/Philip Auslander (person, persona, character) and Rick Altman (supradiegetic space). Following Gérard Genette’s narrative categories, I will thus discuss in turn aspects of narrative time, space, mood and voice.

A popular music specialist, Serge Lacasse is Associate Professor of Musicology at Laval University in Quebec City, where he teaches popular music analysis, theory and history. In addition to his teaching activities, Serge is a researcher and member of the Executive for both the Centre de recherche interuniversitaire sur la littérature et la culture québécoises (CRILCQ) and the Observatoire international sur la création musicale (OICM), as well as a member of OMF's scientific committee (La Sorbonne) and CHARM's International Advisory Panel. Favouring an interdisciplinary approach, his research projects deal with many aspects of recorded popular music aesthetics and culture. He has published many chapters and articles, and is co-editor (with Patrick Roy) of Groove: Enquête sur les phénomènes musicaux contemporains (PUL, 2006).


Daniel Leech-Wilkinson

Schubert's young nun: a tale of two singers

The recorded performance history of Schubert’s ‘Die junge Nonne’ offers radically contrasting views of what the song’s text may mean. Is this young woman putting her stormy past behind her, entering the convent to be married to God; is she confusing sex and religion; or is she dying? How could such readings be implied in musical sound? As well as performances that seem to take the middle course, I shall look closely at Arleen Auger (1990) and Lula Mysz-Gmeiner (1928), who make the first and last readings surprisingly convincing. Using Sonic Visualiser, developed at Queen Mary, University of London with input from CHARM, I shall examine details of a number of recorded performances and, interpreting them in the light of theories of music perception, aim to gain a clearer understanding of how signs of emotional state are deployed in performance by singers.

Daniel Leech-Wilkinson is Professor of Music at King's College London and an Associate Director of CHARM. He studied at the Royal College of Music and King's College London, and did his PhD at the University of Cambridge on 14th-century compositional procedures. As a medievalist he has written books on Compositional Techniques in the Four-Part Isorhythmic Motets of Philippe de Vitry and his Contemporaries (Garland, 1989), Machaut's Mass: An Introduction (Oxford University Press, 1990; pb 1992), Guillaume de Machaut, Le Livre dou Voir Dit (with Barton Palmer) (Garland, 1998), and most recently The Modern Invention of Medieval Music (Cambridge University Press, 2002), which won the Royal Philharmonic Society Book Award for 2002. His first work on recorded performance dates from 1984 (an article in Early Music). While a lecturer at the University of Southampton (1985-97) he was instrumental in acquiring the Del Mar Collection of historic recordings and introduced (with José Bowen) a course in performance on record. At King's College he founded the King's Sound Archive with a donation of 150,000 78rpm discs from the BBC, and has introduced undergraduate and postgraduate courses in the study of music as performance. In 2001/02 he held an AHRB Innovation Award which enabled the development of techniques for studying details of performance expression, using original 78rpm discs as source material. He is currently completing a book on approaches to studying recorded performances.


Nicolas Magriel

Navigating intertonal space in recordings of North Indian khyal

For the last four years I have been working on an AHRC-funded project transcribing, translating and analysing around five hundred songs in the khyal genre of Hindustani vocal music. The recordings we have been working with have all been digitised, mostly from 78 rpm and LP commercial recordings. Transcription of each song has been accomplished by marking the rhythmic cycles on the waveform and repeated listening to short segments, often at quarter speed. The challenge has been to evolve a symbolic language which can approximate an accurate description of melodic nuance in this music: the bends, curves, shakes and plunges which shape the contours of tonal space in khyal. This paper will discuss some of the issues which have arisen in confronting the ambiguities of specific sound-shapes and the ambiguities of symbolic notation. It will also look at some of the discoveries which have been made possible by this way of working with recordings.

Nicolas Magriel has been playing the North Indian sarangi since 1970. He has spent around ten years in India studying both sarangi and vocal music with several eminent musicians. He has performed throughout the UK both as a soloist and as an accompanist to vocalists and Kathak dancers, appeared many times on television and contributed sarangi for numerous film and theatre scores. In 2001 Nicolas completed his PhD at the School of Oriental and African Studies in London, analysing sarangi style and its relationship with vocal music. Since 2002, he has been working on an AHRC-funded project transcribing and analysing the songs of khyal, the pre-eminent genre of Hindustani classical vocal music. He is also a psychotherapist.


Craig Sapp

Beat-level comparative performance analysis

Computational methods for observing similarities between multiple performances of the same music will be presented. First, beat timings and loudness are extracted from audio recordings of the performances. This data is then correlated at all timescales of the performances, from the single-beat level to the entire-work level. The raw analysis data is then used to construct plots showing the best performance match at each analysis measurement.

The resulting two-dimensional plots show both small-scale and large-scale relations between performances. Using these plots, hypotheses can be formed about the influence of one pianist on another’s performance. If a single performance maintains the best match to another performance over a wide range of timescales and durations in the music, then there is more likely to be a non-coincidental relationship between the two performances. For example, the following types of relationship can be examined: two re-releases of the same original recording, two performances by the same pianist, performances by student and teacher, and performances by two pianists with a similar aesthetic.
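
A simplified sketch of the correlation step (not the actual analysis code, and with invented beat durations) shows how a best match can be reported for each window; repeating this over all window lengths yields the data behind the two-dimensional plots:

```python
# Simplified sketch of the correlation step described above (not the
# author's implementation): for one window of beats, correlate a reference
# performance's beat durations against every other performance and report
# the best match. Beat durations are invented.
import numpy as np

performances = {                       # seconds per beat, hypothetical
    'rubinstein_1939': np.array([0.61, 0.63, 0.70, 0.58, 0.60, 0.66, 0.74, 0.57]),
    'rubinstein_1952': np.array([0.60, 0.62, 0.71, 0.57, 0.59, 0.67, 0.73, 0.56]),
    'friedman_1930':   np.array([0.55, 0.70, 0.62, 0.52, 0.68, 0.60, 0.80, 0.50]),
}

def best_match(reference, others, start, length):
    """Best-correlating performance over beats [start, start+length)."""
    ref = reference[start:start + length]
    scores = {name: np.corrcoef(ref, d[start:start + length])[0, 1]
              for name, d in others.items()}
    return max(scores.items(), key=lambda kv: kv[1])

ref_name = 'rubinstein_1939'
others = {k: v for k, v in performances.items() if k != ref_name}

# Scan every window at one timescale; repeating this over all timescales
# (window lengths) gives the data behind the two-dimensional plots.
for start in range(0, 5):
    name, r = best_match(performances[ref_name], others, start, length=4)
    print(f'beats {start}-{start + 3}: best match {name} (r = {r:.2f})')
```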

Craig Sapp joined CHARM in 2005 working alongside Nicholas Cook and Andrew Earis on the Style, performance, and meaning in Chopin’s Mazurkas project. Craig was educated at the University of Virginia and then Stanford University where he completed his PhD in computer-based music theory and acoustics. An avid composer and pianist, he enjoys hitting the Thames in his one-person foldable kayak.


Wim van der Meer

What you hear and what you see

The AUTRIM (Automatic Transcription for Indian Music) project, a collaboration between the National Centre for the Performing Arts in Mumbai and the University of Amsterdam, has focussed on refining pitch-line representation of classical Indian music. The project originally started in 1983 with the development of Bernard Bel’s Melodic Movement Analyser, which was later ported to Wim van der Meer’s PitchXtractor and Pitchplotter software. Presently PRAAT is used to do the basic extractions, while the final presentations are built in Flash/Quicktime. The method of constructing visual representations of melodic movement is based on open-ended collaborations with the musicians who have recorded the pieces. Rather than using the laboratory experiments that are typical of cognitive research, we prefer to work with the participant observation and commentary approach of the traditional anthropologist.

In this lecture the current state of melographic representation will briefly be shown. The main issue, however, will be the very tricky subject of the visual representation of fast melodic movements. Apparently, the auditory system is capable of making very rapid shifts from detailed hearing (zooming in) to large-scale interpretation (overview), probably even doing both at the same time. Whereas slow music can easily give the impression that “what you see is what you hear”, in fast music we lose track of the relation between graph and sound. Of course this means that the model for creating the graph may not be correct, which in turn raises the question of how far the correspondence between graph and sound in slow music is really an illusion.
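
The basic extraction stage can be sketched in Python using the Parselmouth wrapper around Praat (an assumption of this sketch; the project scripts Praat directly), plotting the pitch contour as a melographic line against an assumed tonic:

```python
# Sketch of the basic extraction stage: get a pitch contour from Praat and
# plot it as a melographic line. This uses the Parselmouth wrapper around
# Praat, which is an assumption of this sketch; the project itself scripts
# Praat directly. The file name and tonic are hypothetical.
import numpy as np
import matplotlib.pyplot as plt
import parselmouth

snd = parselmouth.Sound('khyal_excerpt.wav')
pitch = snd.to_pitch(time_step=0.01, pitch_floor=75.0, pitch_ceiling=600.0)

times = pitch.xs()
f0 = pitch.selected_array['frequency']
f0[f0 == 0] = np.nan                     # unvoiced frames: leave gaps in the line

# Plot in cents relative to an assumed tonic so that melodic movement,
# rather than absolute frequency, is what the eye follows.
tonic_hz = 146.8                          # hypothetical tonic (D3)
cents = 1200 * np.log2(f0 / tonic_hz)

plt.figure(figsize=(10, 3))
plt.plot(times, cents, linewidth=0.8)
plt.xlabel('time (s)')
plt.ylabel('pitch (cents above tonic)')
plt.tight_layout()
plt.show()
```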

Wim van der Meer studied anthropology and musicology at the University of Amsterdam. He did his PhD at the University of Utrecht in 1977 on Hindustani music. He received training in Hindustani classical vocal music from 1970 under Pandit Dilip Chandra Vedi (1902-1992). Wim van der Meer presently teaches musicology at the University of Amsterdam. His publications include Hindustani Music in the Twentieth Century (1980) and The Raga Guide (1999, co-author). From 1983 he worked with Bernard Bel at the NCPA to set up a computer-assisted research programme. The work at the NCPA continues in the form of the AUTRIM project, in collaboration with Dr Suvarnalata Rao. Since 2005 van der Meer has been chief editor of the Journal of the Indian Musicological Society.


Gerhard Widmer

On the use of AI and machine learning for music performance analysis

The importance of computational methods in the field of music performance research is growing (and increasingly being recognised also in the world of musicology), as documented by projects like CHARM. Computers help in gathering empirical data related to performance, and computational data analysis methods are required to effectively organise and make sense of these data.

In this presentation, special attention will be given to computational methods from fields like Artificial Intelligence and Machine Learning. A recent project will be reviewed that studied expressive music performance in a data-intensive way, by gathering large amounts of empirical measurements, and analysing these data with AI and machine learning methods. Some example results will be presented, the specific potential of intelligent data analysis methods will be demonstrated, and potential pitfalls will be discussed. A new research project will be introduced that is based on an unprecedented corpus of empirical performance data, and opportunities for interdisciplinary cooperation between computer scientists and musicologists will be identified.
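
As a toy illustration of the data-intensive approach, and at nothing like the project's scale, the following sketch trains a scikit-learn model to predict beat-level tempo deviation from a few invented score features:

```python
# Toy illustration of the data-intensive approach in miniature (nothing
# like the project's scale or models): learn to predict beat-level tempo
# deviation from simple score features using scikit-learn. Features and
# data are invented.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score

rng = np.random.default_rng(42)
n = 500

# Hypothetical score features per beat: melodic interval (semitones),
# metrical strength (0-1), distance to phrase end (beats).
X = np.column_stack([
    rng.integers(-7, 8, n),
    rng.choice([1.0, 0.5, 0.25], n),
    rng.integers(0, 16, n),
])
# Synthetic 'measured' tempo deviation: slower near phrase ends and on
# strong beats, plus noise, as a stand-in for real performance measurements.
y = -0.04 * (X[:, 2] < 2) - 0.02 * X[:, 1] + rng.normal(0, 0.01, n)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(X_train, y_train)
print('R^2 on held-out beats:', round(r2_score(y_test, model.predict(X_test)), 3))
print('feature importances:', np.round(model.feature_importances_, 3))
```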

Gerhard Widmer is Professor and Head of the Department of Computational Perception at Johannes Kepler University Linz, Austria, and also leads a research group at the Austrian Research Institute for Artificial Intelligence in Vienna. He has been active both in ‘mainstream’ Artificial Intelligence and machine learning, and their application to musical questions.

In 1998, he was the recipient of one of Austria’s most highly funded research awards (the START Prize) for his work on Artificial Intelligence and expressive music performance. He also won a national piano competition at age 13, but soon afterwards decided that classical music was ‘horrible’, and quit music school.


Simon Zagorski-Thomas

The analysis of multi-track master recordings in the musicology of record production

Almost all commercial recordings made over the past 40 years have involved a multi-track recording that is then mixed down to the final stereo master. Access to these recordings can be very restricted, and in many instances they may no longer exist or be playable. In this presentation I will seek to demonstrate the importance of the study of these recordings to the understanding of both the final musical product and the process of record production that creates it.

Drawing on examples from popular music, including Marvin Gaye and the Beatles, I shall discuss the kinds of information that the study of multi-track recordings can produce. This will include a demonstration of how studies of musical phenomena such as microtiming (e.g. Danielsen 2006, Butler 2006) might benefit from adopting this approach. I will also examine how this approach can be useful in the development of the musicology of record production.
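
One concrete measurement that multi-track access makes possible is microtiming between isolated parts. The following sketch (hypothetical stem files, with librosa standing in for any onset detector) measures how far bass onsets sit ahead of or behind the nearest drum onset:

```python
# Sketch of one measurement that multi-track access makes possible:
# microtiming between two isolated parts, here the offset of bass onsets
# relative to the nearest drum onset. File names are hypothetical and
# librosa is used as a convenient stand-in for any onset detector.
import librosa
import numpy as np

def onsets(path):
    y, sr = librosa.load(path, sr=None, mono=True)
    frames = librosa.onset.onset_detect(y=y, sr=sr, backtrack=True)
    return librosa.frames_to_time(frames, sr=sr)

kick = onsets('stems/drums.wav')
bass = onsets('stems/bass.wav')

# For each bass onset, how far ahead of or behind the nearest drum onset is it?
offsets_ms = [(b - kick[np.argmin(np.abs(kick - b))]) * 1000.0 for b in bass]
print(f'mean offset {np.mean(offsets_ms):+.1f} ms '
      f'(std {np.std(offsets_ms):.1f} ms, n = {len(offsets_ms)})')
```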

Simon Zagorski-Thomas is a senior lecturer in music and music technology at the London College of Music, Thames Valley University. His research centres on the musicology of record production and microtiming and groove in popular music. He has been instrumental in establishing and running the annual Art of Record Production conferences at Westminster University (2005), Edinburgh University (2006) and Queensland University of Technology (2007). He has also recently co-founded the on-line Journal of the Art of Record Production.
