Musical Instrument Interfaces

Mikkel Bech-Hansen, Dept. of Aesthetics and Communication, Aarhus University
January 2013
 
Controlling digital tools, instruments or appliances can be quite a tedious task. It can seem as if the huge computational and technological potential of digital technologies—often internalized and inaccessible—in many cases takes precedence over the very interface that is supposed to unleash that potential. The following is a preliminary overview of my motivation and some of the main issues within the context of my research on musical instrument interfaces.

My own experiences and frustrations as a musician and sound engineer are probably the primary driving force behind this project. Being originally a drummer, I have always had a very physical and tactile approach to creating music. Problems and difficulties arose, however, when I started working with other instruments, such as analog and digital synthesizers, tape machines and computer software. I am not particularly interested in the quality of analog vs. digital sound, though this is probably one of the most prevalent discussions within music technology discourse to this day. What I am interested in is the interaction between the musician and the relevant instruments or pieces of technology. Having worked with 4-track cassette tape recorders up through my early teenage years, I was naturally enormously excited when I first laid my hands on a computer with multitrack digital recording software. The vast feature set and the possibilities of virtually lossless digital recording, an almost infinite number of tracks, non-destructive editing, virtual instruments and effects processors, and so on, were astounding compared to an analog 4-track cassette recorder with very limited technical possibilities. After a while, however, I noticed that my workflow had actually become significantly slower after switching to computer-based recording. Tracking instruments, setting monitor and track levels, figuring out signal paths, etc. were suddenly much more time-consuming tasks than before the switch, and furthermore I had lost the very subjective feeling of objectifying the sound by committing it to a physical tape, instead laying it down as incomprehensible 0's and 1's distributed on a spinning metal plate.

I won't go into further detail about the latter of these issues, but the main reason for my working speed slowing down—I suspect—was the fact that all the buttons, wiring, switches, knobs and faders for controlling the recording and mixing of audio—when remediated to a computer interface—had to be accessed through either a 3-button mouse or the standard QWERTY keyboard. This resulted not only in longer execution times for each task, but also in tedious puzzle-solving when trying to figure out the logic of the digital signal paths of the audio, which I used to be able to work out by simply following the analog audio cables from inputs to outputs. Though the technical quality of my recordings was greatly improved, the production time for each recording went up as the enjoyment of using the recording device went down. This story, however, is hardly unique, and many peripheral control interfaces for computer music software have thus been developed over the years to enhance and speed up the workflow; early MIDI controllers, such as the Roland CF-10, CN-20 and CA-30 (see fig. 1), were arguably some of the earliest examples of tangible user interfaces. Over the past couple of years, however, research into tangible user interfaces for musical applications has focused heavily on tabletop interfaces and fiducial tracking technology, such as the Reactable (Jordá et al.), mixiTUI (Pedersen & Hornbæk) and D-Touch.

These token-based systems are highly versatile and efficient in translating the digital musical "objects" from the monitor into real tangible objects that can be directly manipulated. What these interfaces lack, however, is a clear physical relation between the physical and the digital representation. The token we manipulate may be a physical object that represents data in the digital realm, but the physical properties of the token always stay the same, as no data or instructions can be fed back to the token itself. To put it another way: the token can change the state of the computer, but the computer cannot change the state of the token. Certainly, some of the above-mentioned systems have visual feedback and can project visuals onto the token, but the very physicality and tangible qualities that are the core of the interaction mechanism are essentially static. Pedersen and Hornbæk have approached this conflict with their "Tangible Bots", actuating the physical tokens with robotics. This technology, though, seems to be focused more on automation and much less on establishing a haptic feedback relation between the computer and the physical extremities of the interface.

When electronic sound synthesis entered the world of musical instruments, a hitherto fundamental premise was instantly dissolved. Until then, musical instruments had relied purely on mechanical technology, and the unique sounds and timbres of the various instruments were a direct result of the acoustic properties of physical components such as pipes, strings, membranes and reeds that made up the instrument. The advent of electronic and digital audio technologies severed the ties between the physical form of the mechanical instrument artifact and the actual generated sound, thus paving the way for sound generation liberated from the confinements of physical acoustics.

The invention of electronic sound synthesis made it possible to create sounds never heard before, and it was adopted by sound artists, composers and musicians alike within virtually all musical genres, from experimental classical music to jazz, pop and rock. But however lush a palette of novel and other-worldly sounds this new electronic audio technology offered, the natural mappings between the bodily gestures of the musicians and the audible and haptic feedback determined by the very shape and materiality of acoustic instruments were nevertheless entirely missing. The musical instrument interface was no longer part of the sound-generating mechanism and retained only its role as a control mechanism for the instrument. This fundamentally new premise for interacting with these electronic instruments naturally introduced challenges for musicians and instrument manufacturers in terms of expression, playability and performance. Through the second half of the 20th century, as analog synthesizers became affordable, instrument manufacturers spent much effort developing interfaces and technical solutions to address these control issues. Discourses surrounding the challenges posed by electronic sound synthesis were quite well articulated, for instance, in synthesizer ads throughout the seventies and eighties (ARP Instruments; Yamaha Corporation, "Yamaha DX7 …" 42-43; Yamaha Corporation, "Freedom of Expression" 25), all focusing on issues related to the control, performance and expression of electronic musical instruments.

My research investigates haptic feedback and how it might be integrated purposefully into digital and electronic musical instruments. As Chang and O'Sullivan (3) have pointed out, however, there is a general lack of an oral vocabulary for describing haptic phenomena and sensations, and one of my working hypotheses is that by narrowing the span of haptic phenomena down to a musical interaction context, different sensations can more easily be categorized in terms of physicality and musical significance. Such a framework could prove useful when setting up experiments for exploring various ways of integrating haptic technology in musical instruments.

One way forward is to look into augmenting the feel—or the haptics—of interfaces for digital musical instruments, and more specifically into the design of proper haptic feedback. By varying the way the interface responds mechanically by means of actuation, we can change how the handling of the interface feels. If we are to enhance expression, engagement and playability, however, this feedback should be carefully designed so that it responds in musically meaningful ways. I suggest that a framework relating musical expression to physical gestures will be of great use in this endeavor to close the gap between sound and gesture created by electronic and digital technology. Proper integration and design of actuated haptic feedback in, for instance, synthesizers could thus be of great value. Not only would it be possible to mimic the mechanical properties of acoustic instruments, making the embodied aspects of the interaction bidirectional; it would also pave the way for new experimental interaction paradigms.

There should be little doubt that electronic musical instruments have some great advantages over acoustic instruments (and vice versa). There seems, nevertheless, to be a tendency in the music instrument industry to produce instruments that mimic analog and acoustic instruments by digital means, which hints that analog and acoustic instruments have some sought-after qualities. We see heaps of virtual analog (digital) synthesizers; vast libraries of simulated grand pianos, drum kits and symphony orchestras for digital sampler instruments; digital effects simulating vacuum tubes and tape recorders; and so on.

Admittedly, the sound quality of such digital instruments and effects units is constantly improving, but as the feature- and sound-richness expands—often packing hundreds of sounds into the same hardware- or software-based instrument—the limitations of the emphasis on generic control interfaces (typically piano keyboards and "buttons-knobs-and-sliders" interfaces) become increasingly obvious. We may be able to assign the keys, sliders, etc. of the interface to control whichever expressive parameter we desire, and the multitude of sound combinations on offer expands exponentially, but physically and mechanically the interface looks and feels the same. As the effort to integrate ever more computing power and feature-richness into new products continues, the interface becomes ever more alienated from the internal workings. In other words: the more sounds and expressive parameters a single musical instrument interface is to support, the more generic, and thus less musically significant, it seems to become.

An instrument that offers vast possibilities for generating various sounds, and which by its very nature completely lacks haptic feedback, is the synthesizer. The synthesizer is essentially a workbench for making synthetic sound, and traditionally one that liberated sound generation from its mechanical necessity, so the physicality of the interaction with it is very limited. A few tactile interaction technologies are nevertheless found in some synthesizers. Weighted keys are probably the most prevalent of these, and are essentially a simulation of the trigger action found in acoustic pianos (see fig. 2), intended to give a more realistic playing feel. Another technology is aftertouch — a feature often confused with pressure-sensitive keys — which enables the player to manipulate the sound after a key has been pressed, by varying the pressure applied to the pressed-down key and thereby controlling, for instance, pitch bend, filters or modulation depth.
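To make the mechanism concrete, the following is a minimal sketch of how incoming aftertouch data might be mapped to a sound parameter. The status bytes (0xA0 for polyphonic key pressure, 0xD0 for channel pressure) come from the MIDI specification, while the SynthVoice class and the mapping to filter cutoff are hypothetical illustrations rather than any particular instrument's behaviour.

```python
# Sketch only: SynthVoice is a hypothetical stand-in for a synthesizer
# voice with a controllable low-pass filter.
class SynthVoice:
    def trigger(self, note, velocity):
        print(f"note on: {note}, velocity {velocity}")

    def set_cutoff(self, amount, note=None):
        target = f"note {note}" if note is not None else "all notes"
        print(f"filter cutoff -> {amount:.2f} ({target})")


def handle_midi(message, voice):
    """Route raw MIDI bytes; aftertouch arrives after the note-on event."""
    status, data = message[0] & 0xF0, message[1:]
    if status == 0x90 and data[1] > 0:    # note on: the triggering gesture
        voice.trigger(note=data[0], velocity=data[1])
    elif status == 0xA0:                  # polyphonic key pressure (per key)
        voice.set_cutoff(amount=data[1] / 127.0, note=data[0])
    elif status == 0xD0:                  # channel pressure (whole keyboard)
        voice.set_cutoff(amount=data[0] / 127.0)


voice = SynthVoice()
handle_midi([0x90, 60, 100], voice)   # press middle C
handle_midi([0xA0, 60, 64], voice)    # lean into the held key
```

The point is simply that the pressure data arrives as a separate stream of control messages after the note has been triggered, rather than being part of the triggering gesture itself.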

There are, nevertheless, still a number of control issues relating to the lack of haptic feedback when we take a closer look at common synthesizer instruments. The synthesizer fundamentally changed the haptic aspects of musical performance by essentially eliminating them. At the same time, however, the synthesizer also augmented the sonic vocabulary, paving the way for new musical expression through sounds and timbres never heard before. Though the earliest experiments with synthesized sound took place in the early 1900s, the first commercially available synthesizers emerged in 1963-1964. Attempts to explore the possibilities of interaction with these new instruments had been going on for years, and though very interesting attempts were made—such as Léon Theremin's well-known theremin, which was played by varying the distance of the hands from two antennas, continuously controlling the pitch and volume of the sound—the concept of the piano keyboard quickly became the all-dominant interface for playing these new musical machines.

Compared to the acoustic piano, however, synthesizers offer almost unlimited possibilities for controlling and shaping the sound. Hence one would think that, in terms of performance and aesthetics, synthesizers would offer great expressive benefits over traditional instruments. In practice, however, it is quite hard to manually control these expressive variations during performance on a synthesizer, mainly because note triggering and sound manipulation, unlike on acoustic instruments, are not part of the same gesture. The sound can easily be designed as a 'preset' or a 'patch' (denoting the fact that early synthesizers were modular systems of interconnected sound-generating and -shaping parts, 'patched' together with cables), but the details of the sound are often very hard to control dynamically during actual performance, where the playing of notes and the manipulation of expressive parameters must take place at the same time. By expressive parameters, I refer specifically to the dynamic shaping of the sound during performance, such as bending notes, applying vibrato, modulating the timbre, etc.

In a so-called subtractive synthesizer (based on subtractive synthesis, a pioneering technique that is still widely used in many digital synthesizers today), users can tweak and modify many aspects of the sound, such as filtering, waveform and amplitude envelope, thereby (in principle) having control of all expressive parameters of the instrument. These parameters, however, are often controlled separately from the triggering of individual notes. Where the triggering of notes is mainly done by pressing the piano keys, the expressive parameters are controlled almost exclusively by sliders and knobs, or even through menus and buttons in some digital synthesizers, all placed at a good distance from the keys (see fig. 3).
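For readers unfamiliar with the technique, the following is a minimal sketch of a subtractive voice, assuming only NumPy; the oscillator, filter and envelope are deliberately crude, and the parameter values are illustrative rather than drawn from any actual instrument.

```python
import numpy as np

SR = 44100  # sample rate in Hz

def saw(freq, dur):
    """Naive sawtooth oscillator: the harmonically rich raw material."""
    t = np.arange(int(SR * dur)) / SR
    return 2.0 * ((t * freq) % 1.0) - 1.0

def lowpass(signal, cutoff):
    """One-pole low-pass filter: 'subtracts' high-frequency content."""
    alpha = 1.0 - np.exp(-2.0 * np.pi * cutoff / SR)
    out = np.zeros_like(signal)
    y = 0.0
    for i, x in enumerate(signal):
        y += alpha * (x - y)
        out[i] = y
    return out

def envelope(n, attack=0.01, release=0.3):
    """Simple attack/release amplitude envelope."""
    a, r = int(SR * attack), int(SR * release)
    env = np.ones(n)
    env[:a] = np.linspace(0.0, 1.0, a)
    env[-r:] = np.linspace(1.0, 0.0, r)
    return env

# One note: waveform, filter cutoff and envelope shape are the
# "expressive parameters" -- each is set before the note is rendered,
# not as part of the triggering gesture itself.
note = saw(220.0, 1.0)
note = lowpass(note, cutoff=800.0)
note *= envelope(len(note))
```

On most hardware these parameters live on separate knobs and sliders, which is precisely the decoupling of triggering and expression described above.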

It should be clear that expressive parameters are conceptualized and controlled very differently in acoustic and electronic/digital instruments. In acoustic instruments, the coupling between the triggering of notes and the control of the expressive parameters is very tight. A guitarist, for instance, would achieve vibrato by first triggering a note, placing the fingers of one hand on the desired frets and strings and picking these with the other hand, and then, more or less gently, bending the strings back and forth with the fretting fingers to achieve the vibration.

On most synthesizers the same effect could be achieved in a number of different ways. One is to program a low frequency oscillator (or LFO, a standard function in most synthesizers) to produce the vibration, which means that the musician is unable to arbitrarily start, stop or modify the vibrato. It could also be achieved by manually jerking the sliders or knobs for volume or pitch, which means that the hand doing this cannot simultaneously trigger any keys. Furthermore, anyone who has tried to simulate vibrato using a knob or a slider would probably agree that it is in fact quite difficult, both motorically and expressively.
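As a rough sketch of the first option, assuming NumPy and purely illustrative rate and depth values, LFO vibrato amounts to adding a slow sine wave to the oscillator frequency:

```python
import numpy as np

SR = 44100
t = np.arange(SR * 2) / SR        # two seconds of audio

base_freq = 440.0                 # the triggered note (A4)
lfo_rate = 5.0                    # vibrato speed in Hz
lfo_depth = 4.0                   # vibrato depth in Hz

# The LFO is just a slow sine wave added to the oscillator frequency.
lfo = lfo_depth * np.sin(2 * np.pi * lfo_rate * t)
freq = base_freq + lfo

# Integrate the time-varying frequency to obtain the oscillator phase.
phase = 2 * np.pi * np.cumsum(freq) / SR
tone = np.sin(phase)
```

Once lfo_rate and lfo_depth are set, the modulation runs on its own; nothing in the playing gesture shapes it from note to note.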

In all acoustic instruments, the human voice included, the depth and speed of a vibrato are proportional to the amount of force applied to the instrument. A strong vibrato on a guitar, for instance, requires the guitarist to bend the strings quite a bit in both directions, and the force exerted by the strings on the fingers increases with the amount of bending. This not only helps prepare the return motion required to finish one cycle of the vibrato, which in itself supports the very act of the vibrato, but also gives the player a sense of what is going on at a tactile, perceptual level. A knob or slider on a synthesizer that offers no resistance and no meaningful physical feedback other than the perceived sound thus seems like a poor design choice for an interface meant to achieve musical vibrato. The same point could be made for other common synthesizer controls such as filters, attack and sustain controls, to name a few, and even the pitch of the keys. In fact, the argument also applies to most music software and software synthesizers, where interaction can be based solely on mouse and keyboard, or simply on raw programming at the topmost layer of abstraction.
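To illustrate the kind of coupling this argument points toward, here is a minimal, purely hypothetical sketch of a control law for an actuated vibrato control that pushes back like a bent string; the stiffness and pitch-scaling constants are arbitrary assumptions, not measurements of any instrument.

```python
def restoring_force(displacement, stiffness=2.5):
    """Spring-like resistance: the further the 'bend', the harder the push-back."""
    return -stiffness * displacement

def vibrato_depth(displacement, semitones_per_unit=0.5):
    """Pitch deviation grows with the same displacement the hand feels."""
    return semitones_per_unit * displacement

# The same gesture that shapes the sound also produces the resistance
# felt by the hand, re-coupling gesture and sound.
for d in (0.0, 0.4, 0.8, 1.2):
    print(f"bend {d:.1f}: force {restoring_force(d):+.2f}, "
          f"pitch +{vibrato_depth(d):.2f} semitones")
```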

There clearly are issues concerning musical interaction with digital instruments. This is not to say that the notion of virtuosity is under attack, or that expression cannot be achieved on instruments with digital sound generation and no noteworthy haptic feedback mechanisms. However, research shows that performance can be enhanced—at least in quantitative terms (Askenfelt & Jansson 347)—by augmenting instruments with physical feedback. Research should nevertheless not be limited to investigating how to simulate feedback patterns in already known instruments, but should, on a more general note, explore how haptic feedback relates to musical phenomena. Haptic feedback may improve performance quantitatively in some cases, but the notion of a haptic vocabulary that can be applied on a more general level, adding an extra dimension to instrument design and interaction design in general, is a vastly promising perspective.


Appendix

Figure 1: The Roland CF-10 Digital Fader. Roland Corporation 1989.

[http://2.bp.blogspot.com/_rScBRKlTdoE/TIu5MRkXTDI/AAAAAAABgOM/NWtAMl0781A/s1600/04222c6ea1.jpg]

Figure 2: Key trigger-action diagram for a typical grand piano.

[http://2.bp.blogspot.com/-j6Wq8uxIdaw/T9Pw2Q_-0LI/AAAAAAAAAC8/f4bfLlqmi1M/s1600/grand+action+scetch.jpg]

Figure 3: The Minimoog analogue synthesizer. Moog Music, 1970 – …

[http://switchedonaustin.com/sites/default/files/styles/uc_product_full/public/IMG_0044.JPG]
Works Cited

ARP Instruments. “For Keyboard Players who Need an Extra Hand”. Contemporary Keyboard, Nov. 1976. Print.

Askenfelt, A. & Jansson, E. V. "On Vibration Sensation and Finger Touch in Stringed Instrument Playing." Music Perception 9:3 (1992). Print.

Chang, A. & O’Sullivan, C. “Describing Haptic Phenomena.” CHI 2005. New York: ACM.

Ishii, H. & Ullmer, B. “Tangible Bits: Towards Seamless Interfaces between People, Bits and Atoms.” CHI 1997. New York: ACM.

Jordá et al. “The reacTable: exploring the synergy between live music performance and tabletop tangible interfaces”. TEI ’07. New York: ACM.

Keightley, K. “Reconsidering Rock”. The Cambridge Companion to Rock and Pop. Eds. Frith, Straw & Street. Cambridge: Cambridge University Press, 2001:109-42

Pedersen, E. W. & Hornbæk, K. “mixiTUI: a tangible sequencer for electronic live performances”. TEI ’09 (2009). New York: ACM.

Pedersen, E. W. & Hornbæk, K. “Tangible Bots: Interaction With Active Tangibles in Tabletop Interfaces”. CHI ’11 (2011). New York: ACM.

Yamaha Corporation. “Yamaha DX7 – The Performance is about to begin”. Keyboard Magazine. Sept. 1983:42-43.

Yamaha Corporation. “Freedom of Expression”. Keyboard Magazine. Apr. 1982:25
