I.M.G.L.P.

Improvised Musical Gesture Learning Project

The IMGLP project (designed in Max/MSP) is an improvisation and performance system, intended to provide varied responses in reaction to the musical input characteristics of an instrumental performer. The system integrates two performers: the first is an acoustic instrumentalist; the second is a computer performer that responds directly to the live musician's improvisations. The musician's improvised phrases are analysed and performance data captured. The performance data is categorised according to its musical characteristics, and individual musical attributes are stored as a musically indexed phrase.

The categorised phrases are then recalled by the system in accordance with the musical characteristics of the continuing live improvised input. The recalled performance sequence data for each musical attribute is combined, modified and used to control a MIDI-capable instrument played by the computer performer. This facilitates a dynamic computer performer response that is directly derived from, and influenced by, the improvising musician. The system has been designed so that no pre-prepared recorded sequence data is utilised: all sequence data is generated from the musical input of the performer, with sequence selection, combination and transformation derived from improvised musical characteristics.
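
Although the system itself is a Max/MSP patch, the learn-and-recall cycle described above can be sketched in ordinary code. The short Python sketch below is illustrative only: the Phrase fields, the category thresholds and the PhraseStore class are assumptions made for demonstration, not the project's actual data structures.

    from dataclasses import dataclass
    import random

    @dataclass
    class Phrase:
        notes: list              # captured (pitch, velocity, duration) events
        mean_velocity: float     # average MIDI velocity of the phrase
        note_density: float      # notes per second

    def categorise(phrase):
        """Index a phrase by coarse musical characteristics (illustrative thresholds)."""
        dynamics = "loud" if phrase.mean_velocity > 80 else "soft"
        activity = "busy" if phrase.note_density > 4.0 else "sparse"
        return (dynamics, activity)

    class PhraseStore:
        """Stores captured phrases under musical indices and recalls matches."""
        def __init__(self):
            self.bins = {}

        def learn(self, phrase):
            # File the captured phrase under its musical index.
            self.bins.setdefault(categorise(phrase), []).append(phrase)

        def recall(self, live_phrase):
            # Return a stored phrase whose character matches the live input,
            # ready for combination, modification and MIDI output.
            candidates = self.bins.get(categorise(live_phrase))
            return random.choice(candidates) if candidates else None

    # A captured phrase is learned, then recalled by a matching live input:
    store = PhraseStore()
    store.learn(Phrase([(60, 92, 0.1), (64, 95, 0.1)], mean_velocity=93.5, note_density=6.0))
    response = store.recall(Phrase([(72, 90, 0.2)], mean_velocity=90.0, note_density=5.0))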

IMGLP_LearningVersion

Two pieces demonstrating I.M.G.L.P. are:


Klonp 


Klonp is a piece constructed from a number of recorded improvisations. The piece explores and exploits a deliberately mechanistic aesthetic to generate explicitly inhuman and improbable musical sequences. It fuses the minimal improvisations of a human performer (myself) playing a Bösendorfer grand piano with the computer performer playing a multi-sampled Bösendorfer Imperial concert grand piano virtual instrument.

The human performer plays sporadic, brief and rapid arpeggiated patterns, trills and intermittent chord stabs, resulting in short, fast-tempo musical sequences played by the computer performer. Contrast is created in the piece through the computer's swift repetitive patterns and colourful interpolated flourishes.

No pre-prepared sequenced material is used; all of the computer performer's musical characteristics are derived from the musician's improvisations. Paradoxically, the human performer takes a secondary musical role, allowing the computer performer to respond violently with militant musicality.




Hammer & Tongue


Hammer & Tongue is also constructed from a series of recorded improvisations, using a multi-sampled piano instrument played by the computer performer. In this piece the improvising acoustic instrument is a soprano saxophone (performed by Anna Dolphin). The piece contrasts with the aesthetic style explored in Klonp: the musician this time adopts a more lyrical musical style that in turn influences the nature of the computer performer's responses. The piece is episodic in its musical structure, resulting in an evolving and ever-changing musical dialogue between the two performers, with the human performer responding directly to the computer performer and vice versa.

All material heard in the recorded piece is generated from the improvisations of the musician and the resulting sequence output of the computer performer. Occasional overlays of additional soprano saxophone improvisation have been included in sections of the piece for compositional effect.



andy@dysdar.org.uk © 2014