For feedback-augmented alto clarinet
and real-time computer processing
Stelios Manousakis: Composition, programming, clarinet, electronics
The piece Palpebla Resonoj #1 is an open composition for feedback-augmented clarinet and real-time processing, taking its final form in performance. It explores hybrid acoustics through the resonances and dynamic behavior of a sonic chimera: a self-designed, self-constructed (and self-repaired) electro-acoustic alto clarinet, created from a discarded, 60+ year-old instrument.
The clarinet is the central element of a compound hybrid sound synthesis system involving acoustic, electric and digital sound paths, all interfering and interacting with one another. Its body serves simultaneously as an acoustic instrument, an interactive acoustic filter, a resonator and feedback chamber, a synthesis controller, and a sound diffuser.
The project forms part of long-term research in digital musical instrument design, sensing, haptics, acoustics, and non-standard digital sound synthesis and processing, and has gone through several iterations. Palpebla Resonoj #1 features a sensor-less version of the instrument, using audio analysis to control and modulate electronic processes purely through the sound generated inside the instrument.
Listen / Watch
Below is a video with excerpts from the piece's premiere at Studio Loos in The Hague (25 April 2013):
About the instrument / technical information
Acoustically, the feedback system is built around an electret microphone placed inside a lathe-fabricated aluminum extension fitted between the instrument’s mouthpiece and neck. The microphone is especially sensitive to the mouth’s resonance, tongue movement, and sounds produced by the performer. To close the loop, a speaker is attached to the bell of the instrument with a laser-cut acrylic harness.
At sufficient volume, a feedback loop emerges inside the instrument’s body. It can be controlled manually by changing the acoustic properties of the clarinet as a resonant tube: by opening and closing finger holes, blocking the top end of the clarinet with the mouth, or using the mouth cavity as a resonant chamber. The instrument can be played without a mouthpiece, which gives the mouth more filtering control; on the other hand, the reed allows very fine sonic manipulation through control of its vibration - and it also allows the instrument to be played conventionally, of course. The feedback loop can also be controlled electrically: by changing the amount of preamplification, the voltage of the preamp’s power source (when drained, the preamp produces non-linearities), the amount of amplification, and the EQ.
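The basic dynamics of such a loop can be sketched numerically. The following is a deliberately minimal toy model, not the instrument's actual signal path: the tube is reduced to a pure delay, the loop gain (set in practice by fingering, mouth position, and preamplification) to a single multiplier, and the preamp's non-linearity to a tanh soft clip. All names are illustrative.

```python
import numpy as np

def larsen_loop(loop_gain, delay=100, n=8000):
    """Toy feedback loop: each pass through the 'tube' delays the signal by
    `delay` samples and scales it by `loop_gain`; tanh stands in for the
    preamp's soft saturation, which caps the amplitude."""
    y = np.zeros(n)
    y[0] = 1e-3  # tiny excitation to seed the loop
    for i in range(delay, n):
        y[i] = np.tanh(loop_gain * y[i - delay])
    return y

# Below unity loop gain the oscillation dies away; above it, the loop
# self-sustains at an amplitude set by the saturation:
decaying = larsen_loop(0.8)
sustaining = larsen_loop(2.0)
```

Opening a finger hole or backing off the preamp corresponds to dropping the loop gain below one, which is why the performer can start and kill the feedback by purely acoustic means.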
The piece begins without the computer’s involvement, using purely acoustic and electric means for generating and manipulating sound. As the piece progresses, the output of the microphone is also sent to a computer running a custom-made SuperCollider modular instrument for further processing. This instrument makes wide use of a spectral processing library I am developing, centered around convolution-based FIR filtering, with kernels generated on-the-fly from the clarinet’s sound. The resulting sound is routed to the PA, but it can also be mixed, EQed and routed to the embedded speaker, making the processed sound part of the feedback loop inside the clarinet (see figure above). In this way, the processed sound can be controlled by the same acoustic means as the acoustic feedback (i.e. finger holes, mouth, etc.) and mixed acoustically with the clarinet’s sound inside its body. This also provides strong vibrotactile feedback through the instrument: on the fingers, lips, tongue, and inside the mouth.
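The core idea of on-the-fly kernels can be illustrated outside SuperCollider. This NumPy sketch (a stand-in for the actual library, with invented function names) grabs a short grain of live input, windows and normalizes it into an FIR kernel, and convolves the signal with it, so the output inherits the grain's spectral envelope:

```python
import numpy as np

def kernel_from_grain(grain, length=256):
    """Turn a short grain of live input into an FIR kernel:
    truncate, window, and normalize to roughly unity energy."""
    k = grain[:length] * np.hanning(min(len(grain), length))
    norm = np.sqrt(np.sum(k ** 2))
    return k / norm if norm > 0 else k

def spectral_imprint(signal, grain):
    """Convolve the signal with a kernel derived from the grain
    (FFT-based fast convolution, full length: len(signal) + len(k) - 1)."""
    k = kernel_from_grain(grain)
    n = len(signal) + len(k) - 1
    return np.fft.irfft(np.fft.rfft(signal, n) * np.fft.rfft(k, n), n)

# Illustrative use: white noise filtered through a decaying 440 Hz grain
# (a crude stand-in for a captured clarinet tone) comes out pitched:
rng = np.random.default_rng(0)
noise = rng.standard_normal(4096)
grain = np.sin(2 * np.pi * 440 / 48000 * np.arange(256)) * np.exp(-np.arange(256) / 64)
out = spectral_imprint(noise, grain)
```

In performance, replacing the kernel every few frames with fresh input is what makes the filtering respond to the clarinet itself rather than to a fixed preset.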
Additional control for this performance comes from a number of audio analyzers tracking different input features. These are patched in one-to-many, many-to-one, and many-to-many mappings specifically tuned for each process, producing both quantitative changes (e.g. modulating a parameter) and qualitative ones (e.g. switching to a different filtering profile). A switch pedal advances the electronics sequentially through precomposed scenes.
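One way to picture such mappings is as a weight matrix from analyzed features to synthesis parameters. The feature and parameter names below are invented for illustration (the real mappings are SuperCollider patches tuned per process); the matrix realizes the quantitative, many-to-many side, and a simple threshold realizes a qualitative state switch:

```python
import numpy as np

# Hypothetical analyzed features and synthesis parameters:
features = ["amplitude", "brightness", "noisiness"]
params = ["filter_mix", "kernel_length", "feedback_gain"]

# Many-to-many mapping: each parameter is a weighted blend of several
# features (rows: parameters, columns: features).
W = np.array([
    [0.8, 0.2, 0.0],  # filter_mix: mostly amplitude, a little brightness
    [0.0, 0.5, 0.5],  # kernel_length: brightness and noisiness together
    [0.6, 0.0, 0.4],  # feedback_gain: amplitude plus noisiness
])

def map_features(x, threshold=0.7):
    """Quantitative mapping: continuous parameter values from the blend W @ x.
    Qualitative mapping: jump to a different state when noisiness is high."""
    values = dict(zip(params, W @ x))
    state = "noisy_profile" if x[2] > threshold else "tonal_profile"
    return values, state

vals, state = map_features(np.array([0.5, 0.3, 0.9]))
```

A one-to-many mapping is simply a column of W with several non-zero entries; many-to-one, a row. Tuning these weights per process is what keeps each electronic layer responsive in its own way.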
I designed, developed and fabricated the instrument at, and with support from, the Center for Digital Arts and Experimental Media (DXARTS), University of Washington (Seattle, USA). Many thanks to Richard Karpen, Juan Pampin, Blake Hannaford, James Coupe, James Hughes, Ivan Arteaga, Meghan Trainor, and Nicolás Varchausky for their aid in this project in the form of tough challenges, constructive feedback, or technical help and ideas.