compositions, performances and development


Generating Reaper Project Files (2014-*)

coding project to use as a compositional tool

Medium: software

In 2014, a friend brought to my attention that the DAW software Reaper uses a plain 'flat-text' format for its project files. He also sent me a way to exploit this fact: a small script written in Python, enabling the project file to be parameterized.

It was clear that this method would offer ways to generate rough sound material. In that period I was quite intrigued by the so-called 'strange attractors', specifically the Lorenz Attractor, so I started to build upon the given script, using the Lorenz Attractor to control event triggers, lengths, pitch, form and loudness. A way to handle different sets of parameters was soon needed, so a method to read JSON files was implemented. This made it possible to define a data set with threshold values, the audio files to be included, an influence on the morphology by specifying fades, and more.
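The principle of the tool can be sketched as follows: a Lorenz integration derives event data, a JSON-style parameter set filters it, and the result is emitted as plain text in the style of a Reaper project chunk. This is a minimal illustration, not the original script; the real .RPP chunk fields are more elaborate, and the parameter names (`threshold`, `audiofile`) are assumptions.

```python
def lorenz_events(n, dt=0.01, s=10.0, r=28.0, b=8.0 / 3.0):
    """Integrate the Lorenz system (simple Euler step) and derive
    event positions, lengths and volumes from its state."""
    x, y, z = 0.1, 0.0, 0.0
    events, t = [], 0.0
    for _ in range(n):
        dx, dy, dz = s * (y - x), x * (r - z) - y, x * y - b * z
        x, y, z = x + dx * dt, y + dy * dt, z + dz * dt
        speed = (dx * dx + dy * dy + dz * dz) ** 0.5
        t += 1.0 / (1.0 + speed * 0.05)  # faster attractor motion -> denser events
        events.append({"pos": t,
                       "len": 0.05 + abs(z) * 0.01,
                       "vol": min(1.0, abs(x) / 20)})
    return events

def write_rpp(events, params):
    """Emit a simplified flat-text chunk in the style of a Reaper .RPP file.
    Real project files carry many more fields; this only shows the principle
    of generating the format as plain text."""
    lines = ["<REAPER_PROJECT 0.1", "  <TRACK", '    NAME "lorenz"']
    for ev in events:
        if ev["vol"] < params["threshold"]:  # JSON-defined threshold filters events
            continue
        lines += ["    <ITEM",
                  f"      POSITION {ev['pos']:.6f}",
                  f"      LENGTH {ev['len']:.6f}",
                  f"      VOLPAN {ev['vol']:.3f} 0",
                  "      <SOURCE WAVE",
                  f"        FILE \"{params['audiofile']}\"",
                  "      >",
                  "    >"]
    lines += ["  >", ">"]
    return "\n".join(lines)

# The parameter dict would normally come from json.load() on a settings file.
params = {"threshold": 0.2, "audiofile": "grain.wav"}
project = write_rpp(lorenz_events(200), params)
```

Because every run from the same starting point and parameter file yields the same text, the process stays reproducible while remaining easy to vary.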

Apart from that, it also allowed for micro-editing, making hundreds of edits on a millisecond scale, while offering the same possibilities on a macro-scale.

These abilities made it possible, in a reproducible way, to combine series of events sharing the same time structure but with different content, to combine multiple tracks with different content into micro-aggregates, and to play these back over different combinations of loudspeakers. This technique was used both in the short antipodean rim and in the longer Tensegrity.

Neither piece has a stable time structure, as the Lorenz Attractor, a never-repeating three-dimensional system, is always in an accelerating or decelerating state. Hence, a next design cycle of the tool could implement an arbitrary event engine, allowing for a more flexible way to generate and control time.


CLASHES&CROSSFADES - metropolic moments (2018)

Performance for a dancer, a musician, a local video streaming network and the audience's smartphones

Orchestration: live-electronics, baritone/tenor sax
Duration: ca 45 minutes
Performed: Addis Ababa, Ethiopia, Walcheturm Zürich, Dampfzentrale Bern, Kunsthaus Aussersihl Zürich, Helmhaus Zürich

Melaku Belay combines elements of traditional dance with observations of everyday life on the street. Jeroen Visser's performance moves between the strong physicality of the saxophone and the fleeting qualities of electronic music. Both work at the crossfades of past and future: traditional and contemporary dance, traditional acoustic instruments and new digital technologies. Melaku and Jeroen each take part in the other's discipline: the dancer makes music, the musician takes part in the choreography. While doing so, they move among the audience and react to the circumstances of each venue.

In addition, they work with video material, which they can distribute through the space using mobile projectors. Video and audio material is also streamed live to the spectators via a local WiFi network. This material can be received in a browser on any smartphone.

Delays and temporal shifts in reception on the various devices create a diffuse cloud of video and audio material in the midst of the audience.
Real events, projections and digital transmissions reflect the difficulty of managing focus with which the audience is constantly confronted. The images drift apart, and the spectators must decide where to direct their attention. The whole picture can hardly be perceived. Delayed signals, overlaps, distortions and image errors thereby question our perception of reality.

A recording of the performance at the Walcheturm Zürich is accessible (with the password: fano) here: CLASHES&CROSSFADES - metropolic moments
A recording of the intro of our premiere at the AVAF Video Festival in Addis Ababa, made with the help of the metal workers of the Mercato, can be found here: CLASHES_Fendika_INTRO


Limbic Limbo (2017)

composition for percussion, smartphones and Cloudspeaker

Orchestration: percussion, live electronics
Duration: ca 12 minutes
Performed: Premiere at MA-concert

This composition for drums, mobile devices and Cloudspeaker opposes the unorganized to the organized. Written for Vincent Glanzmann, it accentuates one factor or the other, which is countered and amplified by the spatialization methods used.

The inherently exact time structure of drum playing is stretched by confronting the performer's output with unorganized delays on the one hand, and organized delays and transformations by the Cloudspeaker on the other. These two ways of spatializing integrate smartphones and other mobile devices, and use the ICST development Cloudspeaker (Visser-Vogtenhuber), a set of mobile intelligent wireless loudspeakers with a built-in SuperCollider environment. The work is constructed in such a way that the drummer can be in conversation with his own output through the playback by mobile phones and the Cloudspeaker.

The unpredictability of the output, temporal in nature when spatialized over mobile devices and timbral when spatialized over the Cloudspeaker, prevents a foreseeable outcome of this conversation while supplying musical input for the performer at the same time.

The use of live streaming over mobile devices creates a sonotope, a sound space based on real sound sources instead of virtualized ones, as previously shown in the project die Neukoms. The sound produced by the smartphones is embedded amongst the listeners, instead of the listeners being immersed by a conventional surrounding loudspeaker setup. The Cloudspeaker works in the same way, but its organization is timbral rather than temporal.

To view a recording of the piece, which gives only a vague impression of the impact of the smartphones and the Cloudspeaker, please follow this link: Limbic Limbo


Some people walk in the rain, others just get wet (2016)

composition for piano, tenor saxophone, vibraphone and live electronics

Orchestration: piano, tenor saxophone, vibraphone, live electronics

Duration: 15‘-18‘

Performed: FoA, Zürich, Walcheturm Zürich


This work in progress, SPWITROJGW for short, was commissioned by the Ensemble Werktag for their series of concerts held under the name „Funkloch on Air“.

The three players have a set of pitches, which changes for each of the five parts of the piece. The tempo of the individual parts is given, as are pitches and dynamics. The rhythmical structures are given in a score, to be chosen from at the player's discretion. Synchronization between the players is achieved via a SuperCollider patch, which prescribes the tempo and decides on length. The performers can influence the length of the parts through the triggers they have available, which fire both samples and a counter, but the length is restricted by a maximum. A part thus ends on a whichever-comes-first principle: either the maximum number of events for that part is reached, or its time limit in bars runs out. The samples are organized in classes, and the choice within a class is randomized, so the player does not know in advance what they will be combining with.
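The whichever-comes-first rule for ending a part can be reduced to a few lines. The sketch below is a hypothetical Python rendering of that logic (the actual patch is in SuperCollider, and the limits of 24 events and 16 bars are illustrative, not taken from the score):

```python
def part_finished(event_count, elapsed_bars, max_events, max_bars):
    """A part ends as soon as EITHER limit is reached, whichever comes first."""
    return event_count >= max_events or elapsed_bars >= max_bars

# Each trigger fires a sample and advances the counter;
# the patch checks the rule after every event.
count, bars = 0, 0.0
while not part_finished(count, bars, max_events=24, max_bars=16):
    count += 1    # performer presses a trigger -> sample + counter
    bars += 0.75  # time advances at the prescribed tempo
```

In this run the bar limit fires first, so the performers influenced the density but not the final duration, mirroring how the maximum restricts the players' control.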

SPWITROJGW was made around a pentatonic scale used by the Ethiopian saxophonist Getatchew Mekuria (1935-2016), to whom this piece is dedicated. He used this scale especially for his rendering of the Ethiopian war chant, the „Shellela“. In orchestra pitch G the scale consists of the notes G-B-C-D#-F#, a variation of the Ethiopian Bati scale with an augmented fifth. The movement in the piece was defined by a 12-tone scale which degrades gradually to end up at this pentatonic scale.

The live electronics have been implemented to become as natural a part of the sound source of the acoustic instruments as possible, using a transducer for the piano and a talkbox for the saxophone, so that no external speakers are used in the room.

A video recording of the piece can be found here: SPWITROJGW

Tensegrity (2016)

electronic composition

Source: computer program
Medium: fixed media, 8 channel
Duration: 15‘
Performed: ZHdK, Zürich

A cube, eight points in space. Or twelve planes of four points. „Tensegrity“ plays with all these points by treating the planes which connect them, or parts of them, as a spatial grid. No panning is used in this work. The sounds themselves are points in space which by their similarity, volume and synchronicity form swarms, with temporal and spatial organization. However, they can transform from points to lines, and build layers of long held constructs. All these aggregates connect and intersect with each other on the perceptive layer, defining polyphonic spatial structures which are held together by the strings of time.

The structure of the work, as well as its sounding content, is entirely based on the Lorenz Attractor, the non-linear, non-periodic mathematical system developed by Edward Lorenz as a weather model. The model, which is perpetually accelerating or decelerating, gives the impulses for micro-editing, and produces an ever-gliding tempo scale forming, reforming, and dissolving the monolithic structures it holds together.

Controlling the Lorenz Attractor with starting parameters and time values influences the overall tempo, pitch, and the morphology of the sounding events. The form is constructed out of a random sequence produced by this method.
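This kind of control can be sketched as a mapping from successive attractor states to musical values. The Python below is a hypothetical simplification: the mapping ranges (tempo offset, pitch base, fade scaling) are illustrative assumptions, not the parameters of the piece.

```python
def lorenz_stream(x, y, z, dt, n, s=10.0, r=28.0, b=8.0 / 3.0):
    """Map successive Lorenz states to musical control values.
    The starting point (x, y, z) and the time step dt act as the
    'starting parameters and time values': changing them reshapes
    the tempo, pitch, and morphology of the whole sequence."""
    out = []
    for _ in range(n):
        dx, dy, dz = s * (y - x), x * (r - z) - y, x * y - b * z
        x, y, z = x + dx * dt, y + dy * dt, z + dz * dt
        speed = (dx * dx + dy * dy + dz * dz) ** 0.5
        out.append({
            "tempo": 40 + speed * 0.5,                 # gliding tempo follows acceleration
            "pitch": 36 + (z % 48),                    # pitch drawn from the z coordinate
            "fade": min(1.0, 10.0 / (1.0 + speed)),    # morphology: faster motion, shorter fades
        })
    return out
```

Because the system is deterministic but highly sensitive, two slightly different starting points yield sequences that sound unrelated yet share the same statistical character, which is what makes the form feel like a random sequence while remaining reproducible.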

As with its predecessor, „antipodean rim“, no visual direction is proposed for the work. Here too, listeners are encouraged to face a different direction than their neighbours while listening to the composition.

Here is a binaurally encoded version, so please listen to it with headphones: Tensegrity Binaural

Hu_Manhu_Wo_Man (2015)

transdisciplinary performance

Orchestration: computer music, RGB lighting
Choreography: four dancers
Medium: fixed media, 8 channel
Duration: 60‘
Performed: Zürich, ZHdK

HU_MANHU_WO_MAN is a transdisciplinary performance made by Magnhild Fossum (choreography), Jeroen Visser (composition), and Hans Leidescher and Eloisa Avila (scenography) in the Z+-Showroom framework. It builds on the gender theme, abstracting it by emphasizing an artificial difference between two groups. The visitors are divided into a non-hearing and a non-seeing group at a meeting point. From there they are led through the ZHdK building via an unusual route, to enter the performance space together.

In the space, casual seating bags are available for all. Both groups can smell the fresh grass lying on the floor. A light, widely spread restaurant ambience is heard. The lighting is very dim and discloses little of the space. Then, simultaneously, a program unfolds for both groups. The seeing group experiences a low-light, minimalistic visual experience with a choreography of dancers wrapped in gold tissue, moving amorphously. The hearing group is stimulated by small abstract sound events which can come from anywhere in the room.

Over the course of an hour, the sonic and visual experiences grow more intense; they take up more space, and the two environments become more and more intertwined. The handicaps are taken away, and eventually the participants become performers. They end up dancing in a humongous, colourful, breathing dome of tissue, to a groovy electronic pulse.


A Place I‘ve Never Been (2015)

soundtrack for animated movie by Adrian Flury

Orchestration: computer music 
Medium: fixed media, 8 channel
Duration: 4‘
Performed: 53 worldwide festivals

This composition was made for Adrian Flury's experimental animated movie. The movie explores overloaded online photographic archives by means of a specific tourist site. Using single-frame editing, it examines hidden traces and textures, to extract new meaning from the reigning photographic redundancy.

The musical composition forms the global structure on which the animated movie is based. It follows a serial concept derived from a pentatonic tuning inspired by the Greek lyre. This concept was used to develop microstructures, constructed vertically from the ratios of the pentatonic. The macrostructure is based on these ratios too, and supplies the structure for the timeline. Although they share some synchronization points, the audio and the visuals follow their own pathways, contributing to an equally balanced audio-visual experience.

The animation can be found here (please use password: apinb): A place I've never been

12&12 (2012)

24 hours field recording / sound installation

Source: field recordings
Medium: fixed media, stereo
Duration: 24 hours
Performed: Alliance Éthio-Française, Addis Ababa, Ethiopia
Goethe Institute, Addis Ababa, Ethiopia
Radio Lora, Zürich
Radio ON, Berlin

12 & 12 is a sound installation comprising 24 sound stations, each playing one hour of real-time sound clips. Together these one-hour recordings form an overview of one day in Addis Ababa represented in sound, depicting subjects such as sport, leisure, trading, work, food, and festivities. The sound stations are documented with the place and time of the recording to identify their origin. Connection with visual information is avoided as much as possible, allowing individual images to form in the head of the listener.

The individual sounds are made available through loudspeakers or headphones depending on the chosen material, produced as much as possible with equipment and objects available in Addis Ababa.

As hearing is one of the very few non-contact senses that works while asleep, there is a subconscious registration of sound in the human mind, filtering out what is not needed and letting through what is useful for communication and physical reaction. Acoustic impressions are often thought of as not useful (or even annoying), but with the right focus and placement they can even be experienced as having definite musical qualities. In this exposition of sound one can experience some of the rhythm, tempo, and colour of Addis Ababa during one day.

The title 12 & 12 refers to the daytime starting at 6.00am and the nighttime at 6.00pm in Ethiopian time, as the times of sunrise and sunset hardly change. The 12 daytime hours were on display at the Goethe Institute, and the Alliance Éthio-Française showed the nightly hours.

12&12 was adapted as a 24-hour integral radio play for Radio Lora in Zürich in 2013, and for Radio ON in Berlin in 2017.

The Spanning (2007)

electronic composition

Adaptation: 2013
Source: analog modular synthesizer
Medium: fixed media, 8 channel Ambisonics
Duration: 13'40"
Performed: VPRO radio Netherlands, various festivals, Zürich

The Spanning was made in 2007 during an artist-in-residence invitation from Worm in Rotterdam, NL, in their CEM studio. This studio is equipped only with analog synthesis systems.
The work was premiered at Worm (Nov 2007) in a special adaptation for the '192 Loudspeakers', a wave field synthesis system developed to enable holosonic sound projection. It had its radio premiere on Dutch national radio VPRO in Dec 2008, and an adaptation for Ambisonics was made at the ZHdK in 2013.

The Spanning is a work which exploits irregularities in rhythm, tuning and timbre. In a search to combine the artefacts of analog equipment (temperature influencing tuning, bad contacts leading to distortion, etc.) with the irregularities of human physicality, a tactile long-string instrument was fabricated.

The different sequences expose the spreading of reverberating overtones coming from the strings, the tiny-accent sequencer parts using stereo imaging, the pulsing electronic rhythmical structures, the static state after the turning point at around 5 minutes in the piece, the long, manually tuned revolving canonic glissando, and the long crescendo towards the end of the piece. Patterns were combined in an associative manner.

The Spanning was composed bottom-up, first producing rough material with the general idea in mind. The arrangement and mix were done in three days. Little artefacts such as clicks of the sequencer were not edited out, but kept as character.

For the acoustic string parts, long unwound piano strings of different lengths (3-8 meters) were used, which were sounded by pulling them by hand. These were led over a piezo pickup and recorded. The recorded sound was used straight, or transposed down one or two octaves.

Please have a listen with headphones to the binaural version here: The Spanning Binaural