
December 22, 2007

timbreMap program testing

The timbreMap program is part of my PhD project and is designed to organize timbral features of its audio input in its 2D output space. It uses the JetNet implementation of Artificial Neural Networks by Lönnblad et al., in particular the Kohonen feature map. The Kohonen net is a self-organizing feature map (trained without supervision) widely used in speech recognition. In the timbreMap program the network is fed a Bark scale transform of the input. In the screenshots below the output, the winning node, is represented by the black dot in the center window. There is no pre-conceived mapping of input to output: although similar inputs will result in correspondingly similar outputs, the trained weights may differ between training runs, so two differently trained networks can respond to the same sound in different areas of the output map.
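
For readers unfamiliar with Kohonen maps, the basic mechanism is simple: each node in a 2D grid holds a weight vector, the node whose weights lie closest to the current input "wins", and the winner and its neighbors are nudged toward that input. The following is a minimal illustrative sketch in Python, not the project's actual code (the program uses JetNet's implementation); the grid size, learning rate and neighborhood radius below are arbitrary placeholders.

    import numpy as np

    class KohonenMap:
        """Minimal 2D self-organizing feature map (illustration only)."""

        def __init__(self, width, height, input_dim, seed=0):
            rng = np.random.default_rng(seed)
            # One weight vector per grid node, initialized randomly.
            self.weights = rng.uniform(0, 1, (width, height, input_dim))

        def winner(self, x):
            # The winning node is the one whose weights are closest
            # to the input vector.
            d = np.linalg.norm(self.weights - x, axis=2)
            return np.unravel_index(np.argmin(d), d.shape)

        def train_step(self, x, lr=0.1, radius=2.0):
            wi, wj = self.winner(x)
            w, h, _ = self.weights.shape
            # Gaussian neighborhood: the winner and nearby nodes are
            # pulled toward the input, distant nodes barely move.
            ii, jj = np.meshgrid(np.arange(w), np.arange(h), indexing="ij")
            dist2 = (ii - wi) ** 2 + (jj - wj) ** 2
            g = np.exp(-dist2 / (2 * radius ** 2))[..., None]
            self.weights += lr * g * (x - self.weights)
            return wi, wj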

In the following screen capture we can observe the program while it attempts to organize its weights in response to three sine wave oscillators, crossfaded and tuned to three different frequencies. Thanks to the simplicity of the input, the network organizes itself fairly quickly and optimizes its responses so that the winning node travels along the borders of the output map. Once the map is trained, the network responds with the same output regardless of the order or speed of its input.
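
As a rough analogue of that experiment, the map sketched above could be trained on synthetic spectra of three tones. The fake_bark_frame function here is a hypothetical stand-in for the Bark transform the program actually uses, and the frequencies, band count and training length are invented for illustration.

    import numpy as np

    # Hypothetical stand-in for the Bark transform: a coarse band
    # spectrum of a sine tone at the given frequency.
    def fake_bark_frame(freq, n_bands=24, sr=44100, n_fft=1024):
        t = np.arange(n_fft) / sr
        spectrum = np.abs(np.fft.rfft(np.sin(2 * np.pi * freq * t)))
        bands = np.array_split(spectrum, n_bands)
        frame = np.array([band.mean() for band in bands])
        return frame / frame.max()

    som = KohonenMap(8, 8, input_dim=24)    # the class sketched above
    for epoch in range(200):
        for freq in (220.0, 440.0, 880.0):  # the three "oscillators"
            som.train_step(fake_bark_frame(freq))

    # Once trained, each tone wins the same node regardless of the
    # order or speed in which the tones are presented.
    for freq in (880.0, 220.0, 440.0):
        print(freq, som.winner(fake_bark_frame(freq)))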

[Embedded video: screen capture of the training session]

In the next example the network has been trained on six different saxophone samples: two ordinarily played notes, two "growled" notes and two multiphonics. What we see in the screen capture is the response of an already trained network. Though the output is noisier than in the previous example, there is a clear pattern to the responses. About halfway through, I add a simple synthesizer with a pitch tracker (using Miller Puckette's fiddle object in PD). The synthesis algorithm is a simple implementation of Phase Aligned Formant synthesis taken from the PD documentation (Chapter 3, F12). I then map the X axis of the network output to the formant center frequency of the synthesis, and the Y axis to the index parameter.
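
The mapping itself amounts to scaling the winning node's grid coordinates into the two parameter ranges. The sketch below shows the idea in Python, for illustration only; the grid size and parameter ranges are invented here, chosen merely to suggest what "reasonable bounds" might look like.

    def map_node_to_paf(node, grid=(8, 8),
                        cf_range=(200.0, 3000.0),   # formant center frequency, Hz
                        index_range=(0.0, 4.0)):    # index parameter
        # Normalize the winning node's grid coordinates to 0..1, then
        # scale linearly into the two parameter ranges. The ranges are
        # placeholders, not the values used in the actual patch.
        x = node[0] / (grid[0] - 1)
        y = node[1] / (grid[1] - 1)
        center_freq = cf_range[0] + x * (cf_range[1] - cf_range[0])
        index = index_range[0] + y * (index_range[1] - index_range[0])
        return center_freq, index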

[Embedded video: screen capture of the trained network driving the synthesizer]

The mapping was done more or less arbitrarily, merely making sure the parameters would stay within reasonable ranges. Though the mapping is less successful on the multiphonics and the noisy growls, it makes perfect sense on the ordinary notes. This seems to imply that when properties of the input control aspects of the output that belong to the same class of events, the details of the mapping matter less. For the noisy input, however, what we perceive as one sound (a growl or a multiphonic) becomes, in the synthesis, an oscillation between two different sounds. Here more care is needed in the mapping, or a "smearing" of the data to counteract the "jumpiness" of the output; a sketch of one possible smoothing follows.
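
A minimal sketch of such a smearing, assuming the map output arrives as a stream of winning-node coordinates: an exponential moving average that trades responsiveness for stability. The smoothing factor is an arbitrary placeholder.

    import numpy as np

    class SmoothedOutput:
        """Exponential moving average over winning-node coordinates."""

        def __init__(self, alpha=0.1):
            self.alpha = alpha    # smaller alpha = heavier smearing
            self.state = None

        def update(self, node):
            v = np.asarray(node, dtype=float)
            # The first frame initializes the state; later frames move
            # it a fraction alpha toward the new winner, so a winner
            # that jumps between two regions drives the synthesis with
            # an intermediate, slowly moving coordinate instead of an
            # oscillation.
            if self.state is None:
                self.state = v
            else:
                self.state = (1 - self.alpha) * self.state + self.alpha * v
            return tuple(self.state)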

Posted by henrikfr at 11:00 PM | Comments (2)

December 17, 2007

Seminar Dec. 11

[Edited and updated on Dec 19]

Yesterday I defended a part of my dissertation (at what is here called a 75% seminar). The opponent was British composer and academic Simon Emmerson, whose new book Living Electronic Music [Emmerson, 2007] seems really interesting. It's difficult for me at this point to get a sense of what really happened, I simply don't have the perspective, but I have a feeling it went well. Simon brought up a few issues that are really important for me to think more closely about, though he primarily spoke about the musical works and not about the text. This was actually quite unexpected. The concept of artistic research, or of doing a PhD in composition, is very new in Scandinavia (in the UK it has been possible for 25 years), so we don't have an established form for the defense, nor for the content. The problem was that it hadn't occurred to me that one could look at the project from the point of view of the compositions alone. In other words, I wasn't prepared for the kind of questions I was getting, nor for the discussions that followed from them.

The seminar brought some issues to my attention that I need to consider for the final defense in May, but it also provided the solution to a problem I've had difficulties with: the role of the Integra project in my PhD. The issues Simon brought up were (as I remember them now) mainly concerned with the notation of the music, and I feel that most of his concerns would be resolved by documenting my pieces, improvised as well as notated, using the Integra database and the libIntegra backend.

Finally, to more clearly contextualize all of my artistic activities (improvising, composing, sound design, programming, performing, etc.) I should write more explicitly about these roles and their relation to my artistic practice. I should examine the composer/performer split of "the musician" that Wishart writes about in his book On Sonic Art [Wishart, 1985]. In the seminar I am afraid I came across as a control freak who wouldn't let anyone perform my music without my own participation, which is not true at all. It is true, however, that I'm less interested in this aspect of music making (writing music for others to perform independently of my active participation), because music to me is listening. Writing a score for someone to perform without me hearing it is less interesting to me than working with a performer: writing a score and performing it together (in this context my 'performing' may be limited to sound design or even just listening). Now, and this is the important lesson that I learned, this does not mean that the descriptive/prescriptive aspects of the scores are unimportant. In particular, given the way the instrumental parts in some of my pieces are notated, I can see that it is confusing if the electronic part lacks detail, as it often does, and this is an inconsistency that I need to resolve.

Posted by henrikfr at 05:40 PM | Comments (0)
