Audiovisual Parameter Mapping in Music Visualizations

With the advent of affordable personal computers whose processors and software could manipulate moving images in real time, sound and image, previously separate in terms of media technology, could be linked through the algorithmic translation of auditory into visual parameters.
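The core of such a mapping is simple: an auditory parameter is measured per block of samples and rescaled onto a visual parameter. A minimal sketch, assuming a hypothetical mapping of RMS loudness onto pixel brightness (both the function names and the linear scaling are illustrative choices, not a fixed standard):

```python
import math

def rms(samples):
    """Root-mean-square amplitude of one block of audio samples."""
    return math.sqrt(sum(s * s for s in samples) / len(samples))

def amplitude_to_brightness(samples, max_amp=1.0):
    """Map RMS loudness linearly onto a 0-255 brightness value."""
    level = min(rms(samples) / max_amp, 1.0)  # clamp to full scale
    return round(level * 255)

# One 512-sample block of a full-scale 440 Hz sine at 44.1 kHz
block = [math.sin(2 * math.pi * 440 * n / 44100) for n in range(512)]
print(amplitude_to_brightness(block))
```

A real-time system would repeat this per audio buffer and feed the result to a rendering loop; any other pair of parameters (pitch to hue, onset to motion) follows the same measure-and-rescale pattern.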

The way to this development was paved by the growing spread of electronic music, in the course of which the computer established itself as an instrument. Since the music was already created digitally, the available applications invited experimentation with linking digital sounds and images.

Initial attempts with early programs translated analog video techniques into digital form and manipulated image parameters. Soon, however, programs emerged that incorporated generative processes and gave rise to an independent digital aesthetic. Sound could now be analyzed and decomposed into frequency bands, providing the data input for image-generating systems. Today, with direct programmability, a variety of complex generation methods has emerged that continuously expands the spectrum of expressive means and allows their elaboration.
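The band decomposition mentioned above can be sketched as follows. This is a deliberately naive DFT for illustration only (a real-time visualizer would use an FFT library); the block size, band count, and function name are assumptions:

```python
import cmath
import math

def band_energies(samples, num_bands=4):
    """Split the magnitude spectrum of a sample block into equal-width bands.

    Each band's summed magnitude can then drive one visual parameter
    (e.g. bass -> size, treble -> color).
    """
    n = len(samples)
    half = n // 2  # only the non-redundant half of the spectrum
    mags = []
    for k in range(half):
        # Naive DFT bin k; fine for a sketch, too slow for real time
        s = sum(samples[t] * cmath.exp(-2j * math.pi * k * t / n)
                for t in range(n))
        mags.append(abs(s))
    width = half // num_bands
    return [sum(mags[b * width:(b + 1) * width]) for b in range(num_bands)]

# Low-frequency test tone: its energy should land in the first band
n = 64
tone = [math.sin(2 * math.pi * 2 * t / n) for t in range(n)]
print(band_energies(tone))
```

Running this on a low-frequency sine concentrates nearly all energy in the first band, which is exactly the kind of per-band value an image-generating system consumes as input data.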

The initial application areas of digital music visualizations were club visuals and live performances at electronic music concerts. Today they have fanned out to events of all kinds. With the breakthrough of LED technology, more and more public space is being claimed: numerous buildings and billboards offer playable surfaces and are used for artistic interventions. In addition, contemporary artists increasingly produce multimedia works on DVD or as installations.