afferent data engages the camera as a means of mediating the imperceptible, automatic movements of the human body, transforming them into a generative instrument for executing a real-time sound environment. Even in intended stillness, the lens of the camera captures another reality: the subtle movements the body makes to remain upright; the expansion and contraction of breathing.
We digitize this data stream and feed it into a custom software instrument. The software maps the stream onto a sound-generation process that fragments and re-synthesizes recorded audio samples of the performer’s breath. Additionally, the software “moves” those samples around within an 8-channel immersive speaker array that surrounds the viewers. The audience sees a performer in complete stillness, yet hears a complex soundscape of reconstituted breath: the result of sight mediated through the watchful eye of the digital camera.
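The signal chain described above, in which camera-derived motion data drives granular re-synthesis of breath recordings across an eight-channel array, might be sketched as follows. This is a minimal illustration, not the actual instrument: the frame-difference motion measure, grain sizes, and round-robin speaker panning are all assumptions introduced here.

```python
import numpy as np

SR = 48000          # audio sample rate (assumption)
N_CHANNELS = 8      # speaker count, per the 8-channel array in the text

def motion_energy(prev_frame, frame):
    """Mean absolute pixel difference between consecutive camera frames,
    a simple stand-in for the body's subtle movement."""
    return float(np.mean(np.abs(frame.astype(float) - prev_frame.astype(float))))

def granulate(breath, energy, grain_len=2400, n_grains=16, seed=0):
    """Fragment a recorded breath sample into windowed grains; higher
    motion energy scatters the grain start points more widely.
    Returns an (N_CHANNELS, samples) multichannel buffer."""
    rng = np.random.default_rng(seed)
    # map motion energy (illustratively capped at 10) to how far into
    # the recording grains may begin
    spread = int(min(1.0, energy / 10.0) * (len(breath) - grain_len))
    starts = rng.integers(0, max(1, spread + 1), size=n_grains)
    window = np.hanning(grain_len)
    multi = np.zeros((N_CHANNELS, grain_len * n_grains))
    for i, s in enumerate(starts):
        grain = breath[s:s + grain_len] * window
        pos = i * grain_len
        # "move" each grain to the next speaker, circling the array
        multi[i % N_CHANNELS, pos:pos + grain_len] += grain
    return multi
```

A sine tone can stand in for the breath recording when trying the sketch: two camera frames yield a motion-energy value, which in turn controls how the grains are drawn from the sample and distributed around the eight channels.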