In the work I have done over the last year it has become increasingly obvious to me that, in interactive music (music with human input and computer output), there are two necessarily interconnected issues that need to be addressed with care. (1) The extraction of information from the input is perhaps the most obvious problem to solve and also the matter most commonly discussed. I have already made the point that the extracted information must be logically connected to the intended output of the interaction. (2) The mapping of the extracted, and hopefully valid, information onto musical means of expression.
Let us compare interactive music with the word processor: if the input is performed on a keyboard, the mapping between input and output is linear and, at the most basic level of understanding, uncomplicated. Even though the input undergoes many transformations before the typed character is printed on the screen, the model is simple - one discrete event is mapped onto another discrete event. However, if the input comes from the voice by way of speech-to-text translation, the system is vastly more complex. Here, a continuous signal is interpreted and transformed into discrete characters. In interactive music, one continuous signal (the input) is sampled into discrete events, which are then output as another continuous signal (the output).
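This continuous-to-discrete-to-continuous chain can be made concrete with a minimal sketch. The example below is purely illustrative, not any particular system's implementation: it assumes the performer's input has already been sampled into a list of amplitude values, extracts discrete events by detecting upward threshold crossings (a crude onset detector), and then maps each event back onto a continuous output signal as a decaying amplitude envelope. The function names and the threshold value are my own hypothetical choices.

```python
import math

def extract_events(signal, threshold=0.8):
    """Detect discrete events in a sampled continuous signal:
    the sample indices where the signal crosses the threshold upward."""
    events = []
    above = False
    for i, x in enumerate(signal):
        if x >= threshold and not above:
            events.append(i)
            above = True
        elif x < threshold:
            above = False
    return events

def render_output(events, length, decay=0.9):
    """Map each discrete event back onto a continuous output signal:
    an exponentially decaying envelope is started at each event."""
    out = [0.0] * length
    for e in events:
        amp = 1.0
        for i in range(e, length):
            out[i] = max(out[i], amp)  # overlapping envelopes: keep the louder
            amp *= decay
    return out

# A toy "performance": a continuous input with two swells (peaks at i=10, 30).
signal = [0.5 * (1 - math.cos(2 * math.pi * i / 20)) for i in range(40)]
events = extract_events(signal)           # discrete events: [8, 28]
output = render_output(events, len(signal))  # another continuous signal
```

Even in this toy form the two issues named above are visible: `extract_events` decides what information is pulled from the input, and `render_output` decides how it is mapped onto the sounding result.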
To develop this discussion further, the reasons for wanting interaction must be unfolded. I am interested in transmitting the expressive intent of the performer to the computer part, since I believe it is the lack of real-time input to the computer part that makes it troublesome to integrate in certain types of music. With this as one of the objectives, the only way to determine whether the interaction has produced the intended effect is to ask whether there is a perceptible connection between the input (the music as played by the performer) and the output (the sounds as played by the computer). This brings up another question that needs to be asked: what is a perceptible connection?