Sound meets image: freedom of expression in texture description

Paper presented at Human Vision and Electronic Imaging XVII, 23-26 January 2012, Burlingame, CA

In this exploratory study, sound was used as a means for expressing perceptual attributes of visual textures. Eight participants used a physical interactive interface coupled to a frequency-modulation synthesizer to create sounds that matched visual textures. They were then asked to describe the similarities and dissimilarities between sound and image, which resulted in 130 unique dimensions. A hierarchical cluster analysis of synthesizer use yielded three clusters, each corresponding to what appear to be mutually exclusive vocabularies. This may eventually help uncover multimodal perceptual dimensions. Follow-up research is planned, including semantic distance analysis and rating experiments.
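For readers unfamiliar with the analysis step, the following is a minimal sketch of an agglomerative (hierarchical) cluster analysis of synthesizer settings, assuming each matched sound is represented as a row of parameter values. The data, array shape, linkage method, and SciPy-based implementation are illustrative assumptions, not the pipeline used in the paper.

```python
# Minimal sketch: hierarchical clustering of synthesizer settings.
# All data below is hypothetical; only the general technique matches the study.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

# Hypothetical data: one row per matched sound, one column per synthesizer parameter.
rng = np.random.default_rng(0)
synth_settings = rng.random((40, 12))

# Agglomerative clustering (Ward linkage on Euclidean distances).
Z = linkage(synth_settings, method="ward")

# Cut the dendrogram into three clusters, the number reported in the study.
labels = fcluster(Z, t=3, criterion="maxclust")
print(labels)
```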

The interface consisted of a number of colored glasses, each corresponding to a section of the Ableton Live ‘Operator’ FM synthesizer. A USB camera and the reacTIVision platform were used to track the glasses, and MaxMSP was used to control the Operator parameters. The video below demonstrates the use of this interface.

Sound meets image: demonstration of the physical interactive interface
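To illustrate the tracking-to-parameter mapping described above, here is a minimal sketch that listens for reacTIVision's TUIO messages, which are sent over OSC (UDP port 3333 by default). The original setup used MaxMSP to drive the Operator synthesizer; the python-osc library and the specific mapping below are illustrative substitutes, not the authors' implementation.

```python
# Minimal sketch: receive reacTIVision TUIO object messages and map a tracked
# glass's position to a parameter value. Assumes reacTIVision's default OSC
# output on UDP port 3333; the mapping to an "FM amount" value is hypothetical.
from pythonosc import dispatcher, osc_server

def on_2dobj(address, *args):
    # TUIO 1.1 object messages begin with a command string: "alive", "set" or "fseq".
    if args and args[0] == "set":
        session_id, fiducial_id, x, y = args[1], args[2], args[3], args[4]
        # Map the normalized x position (0..1) to a 0..127 parameter value.
        fm_amount = int(round(x * 127))
        print(f"glass {fiducial_id}: x={x:.2f} y={y:.2f} -> FM amount {fm_amount}")

disp = dispatcher.Dispatcher()
disp.map("/tuio/2Dobj", on_2dobj)

server = osc_server.BlockingOSCUDPServer(("0.0.0.0", 3333), disp)
server.serve_forever()
```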