In the saxophone quartet Persipicio I use real-time analysis of what the saxophones play to generate and manipulate musical material and sounds. When I composed the piece in 2002 and 2003, I assumed that the fact that the analysis was done on the interpretation of the score, rather than on the score itself, was interesting enough in its own right, both analytically and on the level of perception. However, as I revise the electronic part of the piece I am trying to address the question of significance: why does a certain input give rise to a certain kind of output, and what does this relation mean for the composition as a whole and for the events themselves?
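To make the idea of analysing the performance rather than the score concrete, here is a minimal sketch of one common real-time analysis technique, autocorrelation-based pitch tracking on a single audio frame. This is purely illustrative: the actual program for Persipicio is not described here, and the function name, frame size, and frequency range are my own assumptions, not details of the piece.

```python
import math

SR = 44100  # sample rate in Hz (assumed)

def autocorrelation_pitch(frame, sr=SR, fmin=100.0, fmax=1000.0):
    """Estimate the fundamental frequency of a mono frame by autocorrelation.

    A toy stand-in for the kind of pitch tracking a live-electronics
    analysis stage might perform; real systems are far more robust and
    must also cope with noise, multiphonics, and attack transients.
    """
    n = len(frame)
    lag_min = int(sr / fmax)        # shortest period considered
    lag_max = int(sr / fmin)        # longest period considered
    best_lag, best_corr = 0, 0.0
    for lag in range(lag_min, min(lag_max, n - 1)):
        corr = sum(frame[i] * frame[i + lag] for i in range(n - lag))
        if corr > best_corr:
            best_corr, best_lag = corr, lag
    return sr / best_lag if best_lag else 0.0

# Simulate one analysis frame: a steady tone at 440 Hz.
frame = [math.sin(2 * math.pi * 440.0 * t / SR) for t in range(2048)]
pitch = autocorrelation_pitch(frame)
```

The gap between this sketch and a human listener is exactly the point raised below: such a tracker hears only a periodicity estimate per frame, where a performer hears timbre, phrasing, and intent.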

Another, more general question is: what are the limitations of the computer as an interactive performer, and how should these limitations be dealt with in relation to the computer's output and the whole? If we regard the analysis stage of the program written for this piece as the ears of a virtual performer, how do we deal with the restrictions these 'ears' have compared to human perception? Should I give up trying to implement or compose certain gestures and accept the restrictions, or should I find ways to circumvent them? And if I choose the latter, how will that affect the real-time aspect of the composition?
