Here is one of my experiments on video tracking to control sound processing. In this experiment, the main strategy is to use the difference in RGB values between successive frames to control various sound-synthesis processes and audio file playback. Spikes in this data, and the density of those spikes, are also used to trigger different changes in the sound.
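Below is a minimal sketch of the frame-differencing idea in Python, assuming OpenCV for camera input and python-osc for sending control data to the synthesis environment (the post does not specify the tools used; the OSC addresses, port, and threshold values are illustrative, not the actual patch).

```python
import cv2
import numpy as np
from pythonosc import udp_client

# Hypothetical OSC destination for the sound-synthesis patch
# (e.g. a Max/MSP or SuperCollider listener); adjust as needed.
OSC_HOST, OSC_PORT = "127.0.0.1", 9000
SPIKE_THRESHOLD = 12.0   # illustrative value; tune for camera and lighting

client = udp_client.SimpleUDPClient(OSC_HOST, OSC_PORT)
cap = cv2.VideoCapture(0)

ok, prev = cap.read()
recent_spikes = []       # timestamps of recent spikes, for density estimation

while ok:
    ok, frame = cap.read()
    if not ok:
        break

    # Mean absolute RGB difference between successive frames:
    # the main continuous control signal.
    diff = cv2.absdiff(frame, prev).astype(np.float32)
    motion = float(diff.mean())
    prev = frame

    # A spike is any frame whose difference exceeds the threshold.
    now = cv2.getTickCount() / cv2.getTickFrequency()
    if motion > SPIKE_THRESHOLD:
        recent_spikes.append(now)

    # Spike density: number of spikes within the last two seconds.
    recent_spikes = [t for t in recent_spikes if now - t < 2.0]
    density = len(recent_spikes)

    # Send both values to the synthesis engine over OSC.
    client.send_message("/video/motion", motion)
    client.send_message("/video/spike_density", density)

cap.release()
```

The continuous motion value can be mapped to synthesis parameters (filter cutoff, playback speed), while the spike and density values are suited to triggering discrete changes such as switching audio files or processing modes.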

Chien-Wen Cheng, D.M.A. in Music Composition from the University of North Texas (2007), is an associate professor at National Taipei University of Technology, specializing in interactive computer music. He has won awards such as Best Background Music (Moonwhite Film Festival, 2018), NTSO prizes (2011, 2013), and first prize in the 2007 Voices of Change Competition. His works have been featured at ICMC and SEAMUS and in international competitions.