Wednesday March 28th, 2012
3.00pm Computer Science Auditorium (CSG-01)
During his talk, Sciajno will explore the actual limits and the hidden potential of present-day electronic sound production and composition, in that peculiar area where the borders between the academic and non-academic scenes are weaker. In spite of the technical limitations that characterized the early production of electronic composition, the music created had an inner multilayered structure that reflected the articulation adopted by instrumental composers. With the technical developments that enable musicians to store in a laptop all the electronic instruments that in the fifties and sixties occupied entire laboratories, we see an increase in complexity that affects mainly the morphology of single streams of sound rather than structures and forms. The lecture will reflect on this issue and others connected to it, accompanied by presentations of his investigations, compositions and mixed-media works.
Born in Torino, Italy, in 1965, and now based in Palermo, Sicily, Domenico Sciajno studied Instrumental and Electronic Composition and double bass at the Royal Conservatory of Den Haag in Holland. His interest in improvisation, combined with the influences of an academic education, brought his research to the creative possibilities of interaction between acoustic instruments, indeterminacy factors and their live processing by electronic devices or computers.
Since 1992 he has been present at major international music and media arts festivals as an interpreter, improviser and composer. His work is documented by independent labels of experimental and electronic music worldwide. The wide spectrum of his experience brings him very close to the concept of performance, where he uses texts and electronics in combination with a choreographic use of the stage space and the projection of visuals he makes himself. He has developed interactive sound installations for art galleries and exhibitions.
Since 2003 he has taught Electronic Music at the Conservatory of Music Scontrino in Sicily. In the 2004 edition of Prix Ars Electronica, his work OUR UR, in collaboration with Alvin Curran, was given an honorary mention. In September 2008 his multichannel installation Phadora was premiered during the opening of the 2008 edition of Ars Electronica.
More at www.sciajno.net
Wednesday March 21st 2012
2.30p.m. Computer Science Auditorium (CSG-01)
TOE (Theory of Everything) is a live electronic work by ‘As Good as it Gets’, a duo consisting of Lars Bröndum and Sten-Olof Hellström.
Syntjuntan, Ann Rosén and Lise-Lotte Norelius + possible student collaborators.
Tuesday March 20th 2012, 10.30a.m. Music Technology Lab (CSG-13)
Wednesday March 21st 2012, 10.00a.m. Music Technology Lab (CSG-13)
Lars Bröndum, PhD, is a composer, performer of live-electronics, theorist and guitarist. His music has been performed in Sweden, Japan, Scotland, Lithuania, Latvia, England, USA, Spain and Mexico and has been broadcast on radio and web radio in Germany, England, Sweden and USA. His music often explores the interaction between acoustic and electronic instruments and lives on the border between written music and improvisation. The compositions are structured around cyclical processes, irregular ostinatos, fragmented gestures and microtonal clusters. Lars performs live using an analog modular system, a Theremin, effect pedals and a laptop with Max/MSP. Bröndum often composes for (and plays with) the ensembles ReSurge, ExSurge and Spiral Cycle. Other projects that he is involved in are the “Ensemble SFW” and a duo with Sten-Olof Hellström. He has received composer grants from Konstnärsnämnden (Swedish Art Council), STIM (Swedish Performing Rights Society) and awards from FST (Swedish Composers’ Society). Bröndum is the founder of the independent record label MuArk.
Sten-Olof Hellström has been active as a professional composer since 1984 and gained a Master of Music in composition at the University of East Anglia, England, in 1990. He has been employed as a researcher and composer at the Centre for User Oriented IT Design (CID), Royal Institute of Technology (KTH), since 1997. As a researcher Sten-Olof has mainly worked in the area of Human-Computer Interaction, where he has been part of several major EU-funded long-term research projects such as eRENA and Shape. He is also very active in the field of sonification (representing data with sound). One example of current work is the construction and development of a computer interface for the visually impaired. Sten-Olof’s main occupation and profession is as a composer working with electro-acoustic music. His music has been performed and broadcast around the world and he is also active as a performer playing live electro-acoustic music on his own and with others such as Ann Rosén, John Bowers, and Simon Vincent. Sten-Olof is also part of the performance group Zapruda Trio based in England.
Tuesday 20th March 2012
2.00p.m. I-Media Lab (CS2-46)
Syntjuntan is an ensemble of female composers, musicians and instrument builders.
The founders started Syntjuntan to meet women’s curiosity about technology and electronics, to encourage them to build instruments, and in other ways to facilitate their own experimentation. Syntjuntan wants to help women gain better self-esteem, so that they take the space needed to realise their ideas, develop their music and meet the audience. In Syntjuntan they create their own synths and get over any fear of the technology. It should also familiarise them with the experimental nature of the music through listening and discussion, or perhaps even by performing a new work for an audience.
Syntjuntan was founded in 2009 by Ida Lundén, Lise-Lotte Norelius and Ann Rosén.
For more details, see http://syntjuntan.se/
Wednesday March 14th, 2012
2.30pm Computer Science Auditorium (CSG-01)
Andy will be presenting a lecture and workshop where students can understand and create sound effects starting from nothing.
Approaching sound as a process, rather than as data, is the essence of procedural sound, which has applications in video games, film, animation, and any media in which sound is part of an interactive process: a living sound effect that runs as computer code, changing in real time according to unpredictable events. We will use the Pure Data (Pd) language to construct sound objects, which are more flexible and useful than recordings. Participants should come with Pure Data (the Extended version) already installed on their laptops, a fully charged battery and a set of headphones.
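As a small taste of the idea (a sketch of my own, not from the workshop materials, and in Python rather than Pd), a percussive “hit” can be generated procedurally: the samples are computed from parameters at the moment of triggering, so every hit can sound slightly different rather than replaying one fixed recording.

```python
import math
import random

def hit(sr=44100, dur=0.3, cutoff=0.2, seed=None):
    """Procedurally generate a percussive 'hit': white noise passed
    through a one-pole low-pass filter and shaped by an exponential
    decay envelope. Different seeds give different-sounding hits."""
    rng = random.Random(seed)
    n = int(sr * dur)
    out, lp = [], 0.0
    for i in range(n):
        noise = rng.uniform(-1.0, 1.0)
        lp += cutoff * (noise - lp)     # one-pole low-pass filter
        env = math.exp(-8.0 * i / n)    # exponential decay envelope
        out.append(lp * env)
    return out

samples = hit(seed=1)
print(len(samples))  # prints 13230 (0.3 s at 44.1 kHz)
```

Varying `cutoff`, `dur` or the envelope rate in response to game events is exactly the kind of parameterisation a Pd patch would expose.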
Andy Farnell is a computer scientist from the UK specialising in audio DSP and synthesis. A pioneer of procedural audio and author of the MIT Press textbook “Designing Sound”, Andy is a visiting professor at several European institutions and a consultant to game and audio technology companies. He is also an enthusiastic advocate and hacker of free and open-source software, who believes in educational opportunities and access to enabling tools and knowledge for all.
Wednesday March 7th, 2012
3.00pm Computer Science Auditorium (CSG-01)
Alex will give a talk on laptop orchestras that will include technical details on how to create a ‘lork’ along with examples of different compositions.
The Dublin Laptop Orchestra (DubLork) aims to bring some theatricality and ‘physical presence’ into electronic music performance by creating software instruments that require movement and skill from performers and encourage interaction and improvisation. We consist of performers (currently six to eight) on laptops, each with their own hemispherical speaker placed next to them, allowing a direct relation between the performer and the location of their sound. A wireless network is also used for syncing the laptops, allowing the orchestra to build up intricate rhythms and textures that go beyond anything physical musicians could perform.
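The announcement doesn’t describe DubLork’s actual protocol, but the core idea of syncing laptops over a wireless network can be sketched as one “conductor” machine sending beat messages over UDP that each player schedules against (the message format and roles here are illustrative assumptions, not DubLork’s implementation):

```python
import json
import socket
import time

def send_beat(sock, addr, beat, bpm):
    """Conductor side: send the current beat number, tempo and a timestamp."""
    msg = json.dumps({"beat": beat, "bpm": bpm, "t": time.time()})
    sock.sendto(msg.encode(), addr)

def recv_beat(sock):
    """Player side: receive one beat message and decode it."""
    data, _ = sock.recvfrom(1024)
    return json.loads(data.decode())

# Demo on loopback; a real orchestra would send on the wireless LAN.
rx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
rx.bind(("127.0.0.1", 0))            # OS picks a free port
tx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
send_beat(tx, rx.getsockname(), beat=4, bpm=120)
msg = recv_beat(rx)
print(msg["beat"], msg["bpm"])       # prints: 4 120
rx.close()
tx.close()
```

With a shared beat count and tempo, each laptop can place its own events on the grid, which is how a group can lock into rhythms tighter than unaided human timing.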
This seminar will look at the steps taken to set up DubLork (i.e. how you might do the same if for some reason you wanted to…), technical issues (how you actually create sounds and talk over networks, etc.), and ways to write for this type of group (no right answers here, but I can show examples of things that have worked well in the past). A couple of demonstrations of pieces will be given, and the hemispherical speakers and infamous golf controllers will be available for people to mess around with if things go that way.
www.dublinlaptoporchestra.com
Alex Dowling Biography:
I make music for real and imaginary instruments. I’m particularly interested in traditional music (mostly Irish and Scandinavian) and keep looking for ways to create something that is truly original but rooted in these traditions. This year I’ll have stuff performed by the Crash Ensemble, Orkest de Ereprijs, and Sideband (offshoot of the Princeton Laptop Orchestra) among others. At the start of 2011 I co-founded the ‘Dublin Laptop Orchestra’ with Dan Trueman thanks to funding from the Irish Arts Council. I also co-created the audio-visual installation ‘Bodysnatcher’ for the Biorhythm exhibition at the Science Gallery, Dublin that subsequently featured at Oxegen and Electric Picnic music festivals. This installation was exhibited at the Eyebeam Gallery in New York during the World Science Festival 2011. Later this year I’ll be starting a PhD in Composition at Princeton University.

Wednesday February 29th, 2012
3.00pm Computer Science Auditorium (CSG-01)
Machine listening and learning for musical systems
Musical artificial intelligences are playing an important role in new composition and performance systems. Critical to enhanced capabilities for such machine musicians will be listening facilities modeling human audition, and machine learning able to match the minimum of 10,000 hours, or ten years, of intensive practice of expert human musicians. Future musical agents will cope across multiple rehearsals and concert tours, or gather multiple commissions, potentially working over long musical lifetimes; they may be virtuoso performers and composers attributed in their own right, or powerful musical companions and assistants to human musicians.
In this seminar we’ll meet a number of projects related to these themes. The concert system LL will be introduced, an experiment in listening and learning applied in works for drummer and computer, and for electric violin and computer. Autocousmatic will be presented, an algorithmic composer for electroacoustic music which incorporates machine listening in its critic module. Large-corpus content-analysis work in music information retrieval shows great promise when adapted to concert systems and automated composers, and the SuperCollider library SCMIR will be demonstrated, alongside a new realtime polyphonic pitch tracker.
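The abstract doesn’t explain how such trackers work, but the simplest monophonic ancestor of a polyphonic pitch tracker can be sketched with autocorrelation: find the lag at which the signal best matches a shifted copy of itself, and read the pitch off as sample rate over lag. This is a toy illustration in Python, not the SCMIR implementation:

```python
import math

def detect_pitch(signal, sr=44100, fmin=80.0, fmax=1000.0):
    """Monophonic pitch estimate by autocorrelation: search lags in the
    plausible period range and return sr / best_lag in Hz."""
    lag_min = int(sr / fmax)
    lag_max = int(sr / fmin)
    best_lag, best_score = lag_min, float("-inf")
    for lag in range(lag_min, lag_max + 1):
        score = sum(signal[i] * signal[i + lag]
                    for i in range(len(signal) - lag))
        if score > best_score:
            best_score, best_lag = score, lag
    return sr / best_lag

# 220 Hz sine test tone
sr = 8000
tone = [math.sin(2 * math.pi * 220 * i / sr) for i in range(2048)]
print(round(detect_pitch(tone, sr=sr)))  # prints 222 (close to 220 Hz;
                                         # off by integer-lag resolution)
```

Real systems refine this with windowing, interpolation between lags, and (for polyphony) iterative cancellation or spectral models, but the matching-against-a-shifted-self intuition carries over.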
biog:
Nick Collins is a composer, performer and researcher in the field of computer music. He lectures at the University of Sussex, running the music informatics degree programmes and research group. Research interests include machine listening, interactive and generative music, and audiovisual performance. He co-edited the Cambridge Companion to Electronic Music (Cambridge University Press 2007) and The SuperCollider Book (MIT Press, 2011) and wrote the Introduction to Computer Music (Wiley 2009). iPhone apps include RISCy, TOPLAPapp, Concat, BBCut and PhotoNoise for iPad. Sometimes, he writes in the third person about himself, but is trying to give it up. Further details, including publications, music, code and more, are available from http://www.cogs.susx.ac.uk/users/nc81/index.html