NIME :: Robin Fox

4 – 8 June 2006, Paris France


The New Interfaces for Musical Expression (NIME) conference provides an annual forum in which technology-based instrument builders, designers, programmers and practitioners can demonstrate and discuss recent developments in the field of human/machine interaction. The 'musical' factor limits this potentially enormous field by drawing attention to the distinctly music-based possibilities of gesture mapping, virtual instrument design, mobile technologies and networked performance, among a myriad of other technical concerns. The conference ran from Sunday June 4th until Thursday June 8th and morphed seamlessly between paper presentations, interface demonstrations, concerts and installations.

The 2006 manifestation of NIME was hosted by IRCAM in Paris as part of that institution's annual Agora festival. This meant that the NIME event was flanked by a number of other related events like concerts, installations and workshops, all of which were open to the public. First impressions, upon arriving at the venue, seemed to point to a recurring contradiction that plagued me during the presentations and performances and that is just starting to solidify in my mind now, in its aftermath. The first 'thing' that you see as you approach the IRCAM entrance is Jean-François Laporte's Tremblement de Mer (pictured right), a brilliant work that, when firing, completely swamped the Place Igor Stravinsky with a huge yet warmly sonorous flood of noise. Part of the great beauty of the work was its simple use of low frequency tones fed straight onto large 'thunder sheets', with different combinations of tones creating remarkably different clusters of harmonics. From this relatively simple yet sonically effective installation, participants cross the bridge into IRCAM, along which is installed Tom Mays' Acousmeaucorps, a camera-triggered, multi-speaker interactive installation. It was here that the contradiction began to reveal itself. At least at this point, there seemed to be a real disparity in the 'tech to outcome' ratio. I should point out that I did enjoy Mays' work (although thirty or so trips over the bridge later it was getting a little tired… or perhaps I was) but it seemed a little twee next to the grandeur of Laporte's efforts. It drove home, before I'd even entered the conference, that, although NIME focuses on the technologies necessary to facilitate new interfaces, it isn't necessary to get lost in a maze of circuit boards and sensor cables in order to produce a convincing sounding work. In fact, Mays' piece was evidence, to an extent, that a preoccupation with situation and hardware had left the 'sound' behind.


Day 1: Sunday 4th June
Sunday was dedicated to two concurrent day-long special sessions: one focusing on motion capture and movement analysis as a way forward for choreography, the other dealing with improvisation using computers. As an improviser working entirely in the computer domain I was keen to attend the latter, and was hoping that the debate had moved a little further than the old 'I don't see how you can even call the computer an instrument' nonsense that has been plaguing it for years now. Thankfully, there was only a modicum of recalcitrant intervention from humanists appalled by the notion that you could possibly elicit any sort of 'feeling' out of a computer-based performance…

Much of the first session was dedicated to the idea of producing AI-based algorithms that would render the computer an interesting (and even thoughtful) accompanist in an improvised music setting. Papers were delivered by members of the LAM (Live Algorithmic Music) Research Network. The most interesting discussion in this vein centred on the development of objects for gestural interaction that could incorporate and execute certain generative patterns of behaviour. The basic idea (and one worth pursuing even in its simplest form) was that simple linear mappings, where moving the hand up = higher pitch or greater sound intensity for example, didn't really capture the complexity of a gesture at all. By imbuing the movement with a 'behaviour' or 'tendency' the mapping can take on extra associations and move closer to a representation of gesture complexity. This discussion led neatly into David Wessel's (CNMAT Berkeley) discussion of what he calls 'finger space.' As a computer-based improviser his goal is to be able to play his machines at the level of speed and agility accessed by a virtuosic instrumentalist and 'go with' an instrumental collaborator as they switch from mode to mode. What Wessel pinpointed quite effectively is that, for all the hype surrounding the 'flexibility' of the computer interface, there is still work to be done in terms of real-time decision-making flexibility and physicality in performance.
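To make the distinction concrete, here is a minimal sketch in Python (my own illustration, not code from the LAM papers) contrasting a bare linear mapping with one given a simple 'tendency'. The behaviour here is inertia: the mapped pitch lags and overshoots the gesture rather than tracking it one-to-one, so the same hand movement yields a more complex sonic trajectory. The pitch range and spring constants are arbitrary choices.

```python
def linear_map(hand_height: float) -> float:
    """Naive one-to-one mapping: hand height (0..1) -> pitch (200..2000 Hz)."""
    return 200.0 + hand_height * 1800.0

class InertialMap:
    """A mapping object with a 'tendency': it chases the gesture with
    spring-like momentum, adding overshoot and settling time."""
    def __init__(self, stiffness: float = 0.2, damping: float = 0.85):
        self.value = 0.0        # current mapped position (0..1)
        self.velocity = 0.0
        self.stiffness = stiffness
        self.damping = damping

    def step(self, hand_height: float) -> float:
        # pull toward the hand position, damped each control frame
        self.velocity += (hand_height - self.value) * self.stiffness
        self.velocity *= self.damping
        self.value += self.velocity
        return 200.0 + self.value * 1800.0

# The same steady upward sweep of the hand, through both mappings:
inertial = InertialMap()
for i in range(21):
    h = i / 20.0
    print(f"hand={h:.2f}  linear={linear_map(h):7.1f} Hz  "
          f"inertial={inertial.step(h):7.1f} Hz")
```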

The Improvisation for Two Pianos and Brain-Computer Music Interface (Eduardo Miranda et al) took EEG readings from a 'subject' and claimed to create music based upon what the 'subject' was thinking. Of course, all of the musical results came down to the mapping of the EEG data to certain musical ideas and parameters, so the fact that the results were remarkably like Beethoven was about as convincing as recent research claiming to 'show' that mouse DNA sounds like Mozart! There is certainly potential for neurological control of sound parameters in performance; this has been evident since Alvin Lucier's enquiries in the 1960s. The BCI demonstration demonstrated nothing other than the fact that the digitisation of EEG data is getting better and more affordable.
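The point about mapping is easy to demonstrate: any measurable feature of an EEG signal can be bound to any musical parameter, so the 'musicality' of the output lives entirely in that binding. A minimal sketch follows (my own illustration, not the Miranda system; the sample rate, band edges and note-density rule are all assumptions):

```python
import numpy as np

def band_power(eeg: np.ndarray, fs: float, lo: float, hi: float) -> float:
    """Mean spectral power of the signal between lo and hi Hz."""
    spectrum = np.abs(np.fft.rfft(eeg)) ** 2
    freqs = np.fft.rfftfreq(len(eeg), d=1.0 / fs)
    mask = (freqs >= lo) & (freqs <= hi)
    return float(spectrum[mask].mean())

fs = 256.0                              # assumed EEG sample rate
eeg = np.random.randn(1024)             # stand-in for a real recording
alpha = band_power(eeg, fs, 8.0, 12.0)  # 'relaxed' band
beta = band_power(eeg, fs, 13.0, 30.0)  # 'alert' band

# The aesthetic decision happens in this line, not in the brain:
notes_per_bar = 2 if alpha > beta else 8
print(f"alpha={alpha:.1f}  beta={beta:.1f}  ->  {notes_per_bar} notes per bar")
```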

From the high-tech brain analysis machine to the violin: Mari Kimura of the Juilliard School, NYC, gave a presentation on pragmatism and integrity in computer music improvisation. While giving some interesting insight into the nature of her own approaches to performance and improvisation, the computer component consisted of a couple of amplitude-based triggers that turned a 'virtual conga player' (read: soundfile) on and off! Far more in-depth and complex was legendary trombonist, composer and innovator George Lewis's presentation on improvising with creative machines. Most of his presentation involved exposing the gradual development of his Voyager software environment (designed in collaboration with Damon Holzborn at Columbia University). Voyager is a music information generation machine designed to be able to 'improvise' in a meaningful way with a performer. Having worked on the system for a number of years now, the attention to detail in terms of the parametric breakdown of a performed note (in this case the note is 'performed' by a Disklavier) and the malleable behaviours or responses that the system is capable of bring Voyager quite close to an AI improvising accompanist. There are alternating 'listening' and 'ignoring' modes that can cause interesting motion between related and disjunct musical materials. There was certainly scope for game play and even a mutual antagonism in the brief trombone-driven demonstrations.

From a beautifully organised and controlled form of human machine musical interaction came the stuttering explosions of Michel Waisvisz and his now legendary Cracklebox, an interface developed some years ago at STEIM in the Netherlands but still a crowd pleaser. The Cracklebox is premised on the idea of human conduction of electrical information. While playing the box, Waisvisz refers to himself as the thinking or 'wet' part of an electronic circuit. His performance was energetic and powerful and threw a challenge up to the slicker and faster technologies to provide a more immediate and rewarding interface design/experience. Waisvisz, long considered a visionary experimentalist in the field of electronic and interactive music technologies, also outlined his plans for the future. He envisages an electronic sound generating technology that will remove the need to be 'plugged-in' in any form. These instruments, he posits, will be powered by the small amounts of electromagnetic activity emitted by the body and the brain, and will disconnect new performance from the inherently political notion of 'nudging' around electrons that are stored in large facilities and served up to your appliances at significant environmental cost. His points stirred some useful debate and it was certainly charming to hear this ardent technologist issuing cautionary tales about the terms 'engagement' and 'interactivity' being subsumed as technical jargon in this age of technocratically motivated research.

Day 2: Monday 5th June
Session 1: Mobile & Public
This session mapped the growing fascination that instrument designers, hardware developers and installation artists have with increasingly portable and, therefore, mobile multi-media devices. The first paper (Lalya Gaye et al) was designed to introduce the fundamental concerns of a growing community of artists in this emerging field. Gaye outlined that the emerging genre is about movement, ubiquitous computing, the use of pervasive and locative media, and the social and geographic dynamics implied by mobility. She then went on to highlight some of the issues facing the field:

  • Body/space/sound
  • Synchronicity across time and place
  • Foregrounded and backgrounded technologies and locations
  • Social acceptance of new behaviours

Gaye ended her presentation with the question 'is there an aesthetic associated with these new engagements with mobile and public technologies?' There was general agreement that, while the technology will definitely imprint itself aesthetically onto various processes, the practice is still in its infancy and it is unnecessary to pigeonhole its trajectory at this stage.

In the next presentation Atau Tanaka outlined a project that he is currently engaged in with the Sony Laboratories in Paris. The project involves participants roaming across the city with portable multi-media devices, the data from which is streamed back to a gallery hub where it can be viewed, manipulated and transformed by other participants. The transformations can then be sent to the mobile units for playback in public spaces. Tanaka is interested in the fact that our increasing technological mobility, designed to foster a sense of the 'placeless', has sparked an artistic fascination with the idea of location. The project engages with the mapping of space onto media and treats geography as the primary musical interface.

Michael Rohs and Georg Essl (Deutsche Telekom) presented some developments that they had made in turning the video-enabled mobile phone into an interactive musical instrument. Their research involves connecting the phones to a synthesis engine via Bluetooth and controlling the sound by scanning the phone across a series of impulse images. Of course, the irony was that the project, although engaging with a mobile technology, had rendered the phone unit fixed to a very particular location in order for it to function.

The final paper for the session was delivered by Greg Schiemer of Wollongong University, who spoke about and demonstrated his work for gyrating mobile phones that are able to produce various justly intoned scales. By swinging the phones gently around their heads the performers create a beautiful mix of sounds, all imbued with a hint of Doppler shift. The photo shown here (left) is actually taken from Schiemer's website; unfortunately, the photos I took at the live performance didn't turn out. As it happened, the gallery space pre-arranged for the performance was too small to accommodate the swinging phones, so the event was moved onto the street, making it the most 'public and mobile' event on the concert calendar for the conference.
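For a sense of how audible that shift is, here is a back-of-envelope sketch (my own, not Schiemer's; the tone frequency, cord length and swing rate are assumed values): a phone on a cord of radius r completing a revolution every T seconds moves at v = 2πr/T, and the heard frequency swings between f·c/(c−v) and f·c/(c+v).

```python
import math

c = 343.0        # speed of sound in air, m/s
f = 880.0        # assumed tone frequency, Hz
r = 1.0          # assumed cord length, m
period = 1.0     # assumed time per revolution, s

v = 2 * math.pi * r / period      # tangential speed, ~6.3 m/s
f_toward = f * c / (c - v)        # phone swinging toward the listener
f_away = f * c / (c + v)          # phone swinging away
print(f"heard pitch sweeps {f_away:.1f} Hz .. {f_toward:.1f} Hz "
      f"(about {100 * (f_toward - f) / f:.1f}% each way)")
```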

Session 2: Networked and Collaborative
The second session focused on networked performance environments, often with a collaborative focus. The first two papers (Bryan-Kinns et al and Gurevich) outlined networked performance environments designed for use by children. The idea was to retain some 'musicality' in the performance of electronic music making while taking the moment-to-moment decision-making processes out of the realm of standardised music notation and into the realm of the intuitively graspable graphic user interface. Bryan-Kinns' paper focused on the idea of decay in networked performance undertaken by children using the Daisy interface. Children at separate stations can add or subtract from the overall composition by adding or deleting graphic elements from a screen. Bryan-Kinns is interested in the performance dynamic fostered by such an environment. Gurevich's interface Jam Space is a little more sophisticated and involves the making of more consciously musical decisions. Gurevich is interested in 'technologically mediated shared experiences.'

On a completely different tangent, Ben Knapp's research is leading toward the creation of a network of what he calls 'integral' musical controllers. Here there is no collaborative aspect outside of the human machine interaction. These integral controllers are based primarily on bio- and neuro-feedback measurements like EEG readings and galvanic skin response. What Knapp is trying to achieve is a two-way feedback loop where the emotional state of the performer/listener can become a factor. His research is similar to the brain instrument mentioned above; however, unlike that research group, he is in no way claiming that the resulting sound will mirror what a person is thinking in any direct or tangible way.

The penultimate paper dealt with perturbation techniques for assessing or quantifying data output from multi-performer environments, and, finally, Ryan Aylward demonstrated and discussed his Sensemble wireless sensor system, still under development at MIT.

Day 3: Tuesday 6th June
Day three of the conference turned its focus to the notion of instrument design from both the software and hardware perspectives.

Session 1: Real, Virtual, Spatial, Graphic
This session began with a paper from the Karlsruhe-based ZKM collective on their Klangdom project. The Klangdom is a modular 39-speaker array configurable in a dome-like structure within the Blauer Kubus performance space. The collective is endeavouring to provide a truly flexible diffusion space, able to accommodate in-house systems as well as interface with external software and hardware developments.

Following that, Mike Wozniewski (et al) outlined their vision for the extension of the semiotic potential of immersive sound space that functions in standard virtual environments. Concerned with the spatial performance of sonic information, they are engaging with recent debates on locative data sonification and the potential for reducing the 'cognitive load' on VE users through enhanced and multiplexed sensory data transmission.

Later, in what was a long session, the papers moved from the virtual and spatial to the graphic end of user interfaces for the performance of computer-based music. Thor Magnusson (University of Sussex, Creative Systems Lab) has been working on the ixi interface systems since 2000 and gave a paper demonstrating recent developments and discussing the nature of screen-based instruments as semiotic machines. He argues that the structure and functionality of the screen-based design carries semiotic information and introduces causal and, therefore, formal tendencies to the resulting performances. His interfaces are designed to be intuitively graspable and each design draws the user into a particular way of working with sound. The 'spin drum' interface, for example, allows the user to grab and spin circular objects at varying speeds; its very circularity tends toward the creation of complex polyrhythmic loops.
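To see why such an interface tends toward polyrhythm, consider this toy reconstruction (mine, not ixi code; the cell counts and spin speeds are invented): each wheel triggers a sound every time a cell passes the playhead, so its trigger rate is cells × revolutions per second, and two wheels with different cell counts phase against each other automatically.

```python
def trigger_times(cells: int, revs_per_sec: float, duration: float):
    """Times (s) at which a wheel with `cells` pads, spinning at
    `revs_per_sec` revolutions per second, passes a pad under the playhead."""
    interval = 1.0 / (cells * revs_per_sec)
    t, times = 0.0, []
    while t < duration:
        times.append(round(t, 3))
        t += interval
    return times

# Two wheels spun at the same speed but with different cell counts:
wheel_a = trigger_times(cells=3, revs_per_sec=1.0, duration=2.0)
wheel_b = trigger_times(cells=4, revs_per_sec=1.0, duration=2.0)
print("wheel A:", wheel_a)   # triplets against...
print("wheel B:", wheel_b)   # ...quarters: a 3:4 polyrhythm per revolution
```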

One interface that attempts to circumvent this tendency for a performance interface to impose a semiotic framework on the performance itself is an, as yet, unnamed interface presented by Mark Zadel of McGill University. His interface allows a performer to draw the sounds via a Wacom tablet-style interface and determine certain sound manipulation properties by drawing in a particular manner or style. Zadel was keen to point out that the interface needs to be practised and learnt just like any other instrument. His primary aim is to provide an interface that reduces the amount of pre-programmed information that sits behind the majority of laptop-based performance by allowing for a great deal of real-time and relatively intuitive performative interaction. Interestingly, the visual results of his demonstration, as well as the visual examples provided in the documentation (pictured right), take the shape of children's drawings: a series of abstract squiggles and shapes. Despite the chaotic appearance of these results, the sounds produced in the demonstration were highly organised and also evidenced a significant skill base in the performer (Zadel).

The final presentation of the session came from Toshio Iwai of the Yamaha Corporation, providing an amusing demonstration of an instrument called the Tenori-On. Although essentially a musical toy, the interface was a sophisticated grid of 16×16 programmable and touch-sensitive LEDs. Once again, the interface is designed to simplify the generation of musical materials, providing an intuitive platform for human machine interaction.

Session 2: Instrument Design
The second session of the third day focused on the enhancement of existing electronic musical instruments and new design concepts for the future. The first paper (Kvifte et al) presented a proposed organology for a coherent definition of electronic instruments in terms of both description and design. This was an essentially banal paper designed to unify the discourse on electronic instrument design under the umbrella of a singular and reductive terminology. Following the organology discussion, Mark Marshall (et al) presented recent work in the area of vibrotactile feedback. The impetus behind this subtle form of force-feedback is to provide the performer of electronic instruments with the feeling of playing an acoustic instrument; for example, sending the vibration normally felt by a clarinet player to the tongue of a WX7 wind-controller player. Although the subtlety in approach and the techniques developed were admirable, the teleology of the research is problematic. One of the inherently interesting things about electronic instruments (whether software or hardware) is that they ARE NOT acoustic instruments. It is this very fact that provides the impetus behind the ability of new electronic instruments to forge new approaches to sound and performance. In light of this, it seems strange in the extreme to dedicate a lot of time and research energy to forcing the 'feel' of an acoustic performance onto the electronic or computer-based performance situation.

Of far greater interest was Rodolphe Koehly et al's presentation of research into paper-based FSRs (force sensing resistors) and latex/fabric traction sensors. The whole DIY aesthetic of this approach was a welcome relief from the preceding hyper-boffin approaches to interactivity. The paper outlined techniques for constructing linear touch potentiometers using discarded videotape, and some of the potential applications of conductive polymers, including the construction of paper-based pressure sensors and tissue-based bend sensors.
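For context, a home-made FSR of this kind is usually read through a simple voltage divider: the sensor's resistance falls as pressure increases, so the measured voltage rises with applied force. The sketch below is a generic illustration of that principle (not Koehly et al's circuit; the supply voltage and fixed resistor value are assumed):

```python
V_IN = 5.0           # assumed supply voltage
R_FIXED = 10_000.0   # assumed fixed divider resistor, ohms

def sensor_resistance(v_out: float) -> float:
    """Recover the FSR's resistance from the measured divider voltage,
    with the FSR sitting between V_IN and the measurement point."""
    return R_FIXED * (V_IN - v_out) / v_out

# e.g. readings as the paper-based sensor is pressed progressively harder:
for v_out in (0.5, 1.5, 3.0, 4.5):
    print(f"v_out={v_out:.1f} V  ->  R_fsr={sensor_resistance(v_out):,.0f} ohms")
```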

The session ended with a presentation on the idea of the 'ad hoc' instrument and its implications for the organisation of sound and instrument. In essence, the ad hoc instrument idea is an extension of Bowers and Archer's idea of the infra-instrument presented at NIME 05, which deals with 'in between' instruments. These infra-instruments are either moving toward becoming instruments (in construction) or moving away from having been instruments (in de-construction), but in this state of flux they can neither be described as instruments nor non-instruments. Although a relatively obvious observation, particularly to anyone with experience or knowledge of the development of experimental musical instruments, it is certainly an idea that should be central to the thinking of a community of developers in the field of interface design, and was a welcome contribution/reminder.

Day 4: Wednesday 7th June
Due to sound checking schedules at Point Éphémère, I was unable to attend the Wednesday sessions.

Posters and Demos:
Nested alongside the paper presentations outlined above were the poster and demo presentations, where developers of new software and hardware interfaces demonstrated their wares. There were also permanently set-up tables advertising the products of companies like EoBody, Cycling '74 and Lemur, among others. A selection of the interfaces and installations is outlined below with a brief descriptor:

1. Visual turntable: An interface that uses a standard turntable monitored by an iSight camera to capture shapes and colours that are sent to Max/MSP and used to control synthesis and playback parameters.

2. Orbaphone: A custom-built dodecahedron-shaped unit designed to produce sound and corresponding light information. The unit works on the principles of sound and light 'radiation' as opposed to 'projection.'

3. Musi-Loom: A room-sized installation where participants manipulate samples and other synthesised sound elements by manipulating the various aspects of an old loom-like device. The sensors were both tactile (you had to touch the loom) and free-form (video tracking etc.), a combination that enhanced the playability of the interface.

4. Slider: An essentially simple strip-sensor interface with some attached scaling parameters to manage the outgoing data. Some neat tricks in Max/MSP to smooth the sample management. A useful tool for the digital DJ, perhaps.

5. Reactable: Probably one of the most impressive interfaces on display at the conference, the Reactable is a remarkable combination of sound and light information. Performers place ice hockey puck-like objects onto the light table surface to instantiate a sound or a process (like a filter); a visual representation of the waveform appears on the tabletop. If a filter module is attached then the visual representation updates to incorporate the effect of the filter on the initial waveform. Interestingly, this incredible digital interface gave rise to performances that sounded like elementary noodlings on an old analog synth (an EMS AKS, for example). A toy sketch of the puck-as-module idea follows below.
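The sketch (my own reconstruction, not Reactable code; the sine oscillator and one-pole filter are stand-ins for the table's modules) shows the core idea: each physical object instantiates a processing module, and a filter puck placed near a generator splices itself into that generator's signal chain.

```python
import math

class OscillatorPuck:
    """A generator puck: produces a sine tone at a fixed frequency."""
    def __init__(self, freq: float):
        self.freq = freq
    def sample(self, t: float) -> float:
        return math.sin(2 * math.pi * self.freq * t)

class FilterPuck:
    """A crude one-pole low-pass standing in for the table's filter module."""
    def __init__(self, coeff: float = 0.05):
        self.coeff, self.state = coeff, 0.0
    def process(self, x: float) -> float:
        self.state += self.coeff * (x - self.state)
        return self.state

osc = OscillatorPuck(freq=220.0)
chain = [FilterPuck()]   # pucks placed near the oscillator join its chain

for n in range(5):
    t = n / 44100.0
    signal = osc.sample(t)
    for module in chain:   # the tabletop re-renders the processed waveform
        signal = module.process(signal)
    print(f"t={t:.6f} s  out={signal:+.4f}")
```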

Concert Schedule: Alongside the papers, posters and installations was a series of concerts held at various venues around Paris. It is beyond the scope of this report to outline the concerts in detail. Suffice it to say that there was a strong Australian contingent, and the performances by The Bent Leather Band, Greg Schiemer and the Garth Paine/Michael Atherton duo were all highlights of the concert series. I presented my laser-based audio-visual materials at Point Éphémère on the evening of Wednesday the 7th. It was well received.

In conclusion, the NIME 06 conference presented a comprehensive appraisal of current and emerging approaches to the idea of the musical instrument interface. The conference was well organised and presented information in a number of formats (papers, posters, demos, installations and concert presentations), allowing for these new approaches to be described, discussed, played with and witnessed. I feel, having attended the conference, that I am far better placed to work on future projects that revolve around interactive technologies and extended musical interfaces. I would like to take this opportunity to thank ANAT for their generous support and commend them on their commitment to innovative technology-based arts practices nationwide.

http://www.robinfox.com.au/
