Wednesday February 29th, 2012

Nick Collins

3.00pm Computer Science Auditorium (CSG-01)

Machine listening and learning for musical systems

Musical artificial intelligences are playing an important role in new composition and performance systems. Critical to enhanced capabilities for such machine musicians will be listening facilities modeling human audition, and machine learning able to match the minimum 10,000 hours, or ten years, of intensive practice typical of expert human musicians. Future musical agents will cope across multiple rehearsals and concert tours, or gather multiple commissions, potentially working over long musical lifetimes; they may be virtuoso performers and composers attributed in their own right, or powerful musical companions and assistants to human musicians.

In this seminar we’ll meet a number of projects related to these themes. The concert system LL will be introduced, an experiment in listening and learning applied in works for drummer and computer, and electric violin and computer. Autocousmatic will be presented, an algorithmic composer for electroacoustic music which incorporates machine listening in its critic module. Large corpus content analysis work in music information retrieval shows great promise when adapted to concert systems and automated composers, and the SuperCollider library SCMIR will be demonstrated, alongside a new realtime polyphonic pitch tracker.
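The tracker presented in the seminar is realtime and polyphonic; as a far simpler, hypothetical illustration of the kind of machine listening involved, a monophonic pitch estimate can be sketched with autocorrelation in Python. This is not the presented system, and every name below is invented for the sketch:

```python
import math

SR = 44100  # assumed sample rate in Hz

def estimate_pitch(samples, fmin=80.0, fmax=1000.0):
    """Monophonic pitch estimate via autocorrelation: search candidate
    lags and pick the one where the signal best matches a shifted copy
    of itself. The true polyphonic problem is much harder than this."""
    lo = int(SR / fmax)                      # shortest lag to try
    hi = int(SR / fmin)                      # longest lag to try
    energy = sum(s * s for s in samples) or 1e-12
    best_lag, best_score = lo, -1.0
    for lag in range(lo, min(hi, len(samples) // 2)):
        score = sum(samples[i] * samples[i + lag]
                    for i in range(len(samples) - lag)) / energy
        if score > best_score:
            best_lag, best_score = lag, score
    return SR / best_lag                     # lag in samples -> Hz

# Sanity check on a synthetic 220 Hz sine tone.
tone = [math.sin(2 * math.pi * 220.0 * n / SR) for n in range(2048)]
```

A real concert system would run such an analysis continuously on short overlapping windows of the live input, feeding the estimates to the learning components.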


Nick Collins is a composer, performer and researcher in the field of computer music. He lectures at the University of Sussex, running the music informatics degree programmes and research group. Research interests include machine listening, interactive and generative music, and audiovisual performance. He co-edited the Cambridge Companion to Electronic Music (Cambridge University Press 2007) and The SuperCollider Book (MIT Press, 2011) and wrote the Introduction to Computer Music (Wiley 2009). iPhone apps include RISCy, TOPLAPapp, Concat, BBCut and PhotoNoise for iPad. Sometimes, he writes in the third person about himself, but is trying to give it up. Further details, including publications, music, code and more, are available from


Wednesday March 14th, 2012

Andy Farnell

3.00pm Computer Science Auditorium (CSG-01)

Andy will present a lecture and workshop in which students can understand and create sound effects starting from nothing.

Approaching sound as a process, rather than as data, is the essence of Procedural Sound, which has applications in video games, film, animation, and other media in which sound is part of an interactive process: a living sound effect that runs as computer code, changing in real time according to unpredictable events. We will use the Pure Data (Pd) language to construct sound objects, which are more flexible and useful than recordings. Participants should come with Pure Data (Extended version) already installed on their laptops, a fully charged battery, and a set of headphones.
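Pd itself is a graphical patching language, but the process-versus-data idea can be roughly illustrated outside Pd as well. In the minimal Python sketch below (not workshop material; the function names and parameters are invented for the example), the sound effect exists only as code, and every trigger can regenerate it with different parameters instead of replaying a fixed recording:

```python
import math
import struct
import wave

SR = 44100  # sample rate in Hz

def zap(duration=0.3, f_start=1800.0, f_end=200.0, amp=0.8):
    """Procedurally synthesize a falling-pitch 'zap'. The sound is a
    process (this function), not stored data, so each trigger can differ."""
    n = int(SR * duration)
    samples = []
    phase = 0.0
    for i in range(n):
        t = i / n
        freq = f_start * (f_end / f_start) ** t   # exponential sweep down
        phase += 2 * math.pi * freq / SR
        env = (1.0 - t) ** 2                      # fast decay envelope
        samples.append(amp * env * math.sin(phase))
    return samples

def write_wav(path, samples):
    """Write a mono 16-bit WAV file using only the standard library."""
    with wave.open(path, "wb") as w:
        w.setnchannels(1)
        w.setsampwidth(2)
        w.setframerate(SR)
        frames = b"".join(
            struct.pack("<h", int(max(-1.0, min(1.0, s)) * 32767))
            for s in samples)
        w.writeframes(frames)

# Each call produces a fresh variant: parameters, not a recording.
write_wav("zap_high.wav", zap(f_start=2500.0))
write_wav("zap_low.wav", zap(f_start=900.0, duration=0.5))
```

In the workshop the same idea is realised with Pd signal objects patched together graphically, driven live by incoming events rather than rendered to files.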

Andy Farnell is a computer scientist from the UK specialising in audio DSP and synthesis. A pioneer of Procedural Audio and the author of the MIT Press textbook “Designing Sound”, Andy is a visiting professor at several European institutions and a consultant to game and audio technology companies. He is also an enthusiastic advocate and hacker of free and open source software, who believes in educational opportunities and access to enabling tools and knowledge for all.


Wednesday March 28th, 2012

Domenico Sciajno

3.00pm Computer Science Auditorium (CSG-01)

During his talk Sciajno will explore the actual limits and the hidden potential of present-day electronic sound production and composition, in that peculiar area where the borders between the academic and non-academic scenes are weakest. In spite of the technical limitations that characterized the early production of electronic composition, the music created had an inner multilayered structure that reflected the articulation adopted by instrumental composers. With the technical development that now enables a musician to store in a laptop all the electronic instruments that in the fifties and sixties occupied entire laboratories, we witness an increase in complexity that affects mainly the morphology of single streams of sound rather than structures and forms. These issues, and others connected to them, will be reflected on during the lecture, accompanied by presentations of his investigations, compositions and mixed-media works.

Born in Torino, Italy, in 1965, and now based in Palermo, Sicily, Domenico Sciajno studied Instrumental and Electronic Composition and double bass at the Royal Conservatory of Den Haag in Holland. His interest in improvisation, combined with the influences of an academic education, led his research toward the creative possibilities of interaction between acoustic instruments, indeterminacy factors and their live processing by electronic devices or computers.

Since 1992 he has appeared at major international music and media arts festivals as an interpreter, improviser and composer. His work is documented on independent labels of experimental and electronic music worldwide. The wide spectrum of his experience brings him close to performance work in which he uses texts and electronics in combination with a choreographic use of the stage space and the projection of visuals he makes himself. He has also developed interactive sound installations for art galleries and exhibitions.

Since 2003 he has taught Electronic Music at the Conservatory of Music Scontrino in Sicily. In the 2004 edition of the Prix Ars Electronica, his work OUR UR, a collaboration with Alvin Curran, was given an honorary mention. In September 2008 his multichannel installation Phadora was premiered during the opening of the 2008 edition of Ars Electronica.

More at