CARNEGIE MELLON UNIVERSITY PORTFOLIO
WORK SAMPLE 1:
LUSCINIA
for orchestra and live electronics (13′)
winner of the 2017 La Jolla Symphony Nee Commission
winner of a 2018 ASCAP Morton Gould Young Composer Award
program notes
“luscinia” is the genus portion of the scientific name for the common nightingale, Luscinia megarhynchos. Nightingales are small birds found primarily throughout Europe and Asia, and are known for their highly varied song, which is often sung at night. They have been referenced throughout literature, music, and visual art for centuries, though perhaps one of the nightingale’s most well-known appearances is in the tale of Philomel, found in Ovid’s Metamorphoses.
Ovid writes of a young woman who is assaulted by her brother-in-law, Tereus, who then cuts out her tongue to prevent her from identifying him as the perpetrator. Unable to speak, she weaves a tapestry depicting her assault and sends it to her sister Procne, who hatches a plan to exact revenge. After discovering this plan, Tereus chases Procne and Philomel into the forest, where they escape by being turned into birds – Procne into a swallow, and Philomel into a nightingale. For many artists, the nightingale’s song has often had melancholy connotations, presumably due in some part to Ovid’s story; however, in a somewhat cruelly ironic twist, modern ornithologists have found that it is usually only the male nightingale that actually sings (as is the case with many species of birds).
This piece incorporates live electronic processing, which involves both the generation of new sounds in response to the orchestra and live modification of what the orchestra is playing. This allows for the seamless integration of the acoustic and electronic elements of the piece, and in some cases, they may be indistinguishable. One of the most important aspects of the processing of the orchestra allows for the production of vocal sounds using the spectral profiles of the music that the orchestra is playing. In this way, the orchestra is able to give voice to those who have historically been silenced.
In fact, luscinia is, most of all, a meditation on silence (albeit not a peaceful, pastoral one). Anyone who has paid attention to the news as of late knows that we are currently experiencing a watershed moment with respect to societal conversations surrounding sexual assault. Though I began work on this piece many months prior to the Harvey Weinstein investigation (and the many others that have followed), I hope that someday soon, situations such as the impetus for this piece will no longer be commonplace. While many composers hope that their music stays relevant long after its premiere, I can say with certainty that I sincerely hope that this piece does not. It is time for change, and it is time for action.
I am immensely grateful to all of the people who contributed their stories to the electronic component of this piece, and to Maestro Schick and the orchestra for their trust and adventurousness in bringing it off of the page. I am also grateful to the Nee family for supporting this commission (and emerging composers in general), and for their belief in the importance of the creation of new music.
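For readers curious about the processing described above, the sketch below illustrates one form of spectral cross-synthesis (imposing a voice's smoothed spectral envelope on the orchestra's signal) in Python. It is a minimal offline illustration under stated assumptions, not the actual concert patch, which runs live: the file names and smoothing width are hypothetical, and it assumes numpy, scipy, librosa, and soundfile.

```python
# Minimal illustration of spectral cross-synthesis: the orchestra's spectrum
# is filtered through a smoothed (formant-like) envelope taken from a recorded
# voice, so the orchestral sound takes on vocal qualities.
# File names are hypothetical placeholders; the real piece processes the
# orchestra live rather than from files.
import numpy as np
import librosa
import soundfile as sf
from scipy.ndimage import uniform_filter1d

N_FFT, HOP = 2048, 512

orch, sr = librosa.load("orchestra.wav", sr=None, mono=True)   # placeholder file
voice, _ = librosa.load("voice.wav", sr=sr, mono=True)         # placeholder file

O = librosa.stft(orch, n_fft=N_FFT, hop_length=HOP)
V = librosa.stft(voice, n_fft=N_FFT, hop_length=HOP)
frames = min(O.shape[1], V.shape[1])

# Smooth the voice's magnitude spectrum along frequency to get a rough
# formant envelope, then impose it on the orchestra's magnitudes while
# keeping the orchestra's phase.
envelope = uniform_filter1d(np.abs(V[:, :frames]), size=24, axis=0)
hybrid = np.abs(O[:, :frames]) * envelope * np.exp(1j * np.angle(O[:, :frames]))

out = librosa.istft(hybrid, hop_length=HOP)
sf.write("orchestra_voiced.wav", out / (np.max(np.abs(out)) + 1e-9), sr)
```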
WORK SAMPLE 2:
UTTERANCES, WANDERINGS
for flexible string instrumentation (open duration)
commissioned by the VIVO Music Festival
>>>score<<<
>>>stochastic practice version of dynamic score system<<<
note: the following video contains three different iterations of the piece, which took place over the course of a single concert.
utterances, wanderings was commissioned to accompany the works of Italian composer Giacinto Scelsi, whose compositional process often embraced improvisation, collaboration, and new technologies. He would often record his improvisations on the ondiola and then transcribe (or have trusted friends and collaborators transcribe) those recordings, arranging them for different combinations of acoustic instruments. Drawing inspiration from this process, utterances, wanderings uses a custom AI model to place a detailed analysis of Scelsi’s music in conversation with an analysis of my newly composed piece in real time during the performance. The software I’ve written “listens” to the performers and adapts, dynamically presenting them with new modules of my score to use as starting points for improvisations in a way that is (theoretically) characteristic of some of the musical decisions at play in Scelsi’s music. The audience is later invited to join in the improvisation as well, creating an immersive soundscape together using a custom app built on the same model, as well as recordings made during the performance. In a time when AI is increasingly playing an exploitative role in music-making, this piece aims to demonstrate that when used thoughtfully, it can be a tool for collaboration and co-creation. It aims to prove that as the world becomes more fractured (in part due to these new technologies), artists and musicians can act as agents of repair and unity.
TECHNICAL NOTES
An LSTM was trained on rehearsal recordings of the Scelsi pieces on the program, using a combination of MFCCs and chroma features averaged over a temporal window (tuned during the rehearsal process) as input, and my annotations of the next module of my piece as labels. In this way, it acts as a predictive algorithm based on timbre and pitch. The MFCC and chroma calculations and the resulting score-module selections are handled with a combination of Python and Max and communicated to the players’ tablets via Node.js and WebSockets. In order for performers to practice and gain fluency with the interface and materials, I created a stochastic version of the algorithm (without live analysis). That can be accessed here.
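The sketch below outlines the feature-extraction and prediction stages described above in Python. It is a minimal sketch rather than the performance code: it assumes librosa and TensorFlow/Keras, and the window length, sequence length, and number of score modules are illustrative placeholders (the real values were tuned in rehearsal).

```python
# Minimal sketch (not the performance code): averaged MFCC + chroma windows
# feeding a small LSTM that predicts the next score module.
# Window length, sequence length, and module count are placeholders.
import numpy as np
import librosa
import tensorflow as tf

N_MODULES = 12        # hypothetical number of score modules
WINDOW_FRAMES = 43    # frames per averaging window (tuned in rehearsal)
SEQ_LEN = 8           # number of averaged windows per training example

def windowed_features(path, sr=22050, hop=512, window=WINDOW_FRAMES):
    """Extract MFCC + chroma frames and average them over fixed windows."""
    y, sr = librosa.load(path, sr=sr)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=20, hop_length=hop)
    chroma = librosa.feature.chroma_stft(y=y, sr=sr, hop_length=hop)
    feats = np.vstack([mfcc, chroma])                      # (32, n_frames)
    n_windows = feats.shape[1] // window
    feats = feats[:, :n_windows * window]
    # average each group of `window` frames into one feature vector
    return feats.reshape(feats.shape[0], n_windows, window).mean(axis=2).T

def build_model(n_features=32):
    """Small LSTM mapping a sequence of feature windows to the next module."""
    model = tf.keras.Sequential([
        tf.keras.layers.Input(shape=(SEQ_LEN, n_features)),
        tf.keras.layers.LSTM(64),
        tf.keras.layers.Dense(N_MODULES, activation="softmax"),
    ])
    model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
    return model

# Training pairs sequences of feature windows from rehearsal recordings with
# annotated next-module labels, e.g.:
#   X: (n_examples, SEQ_LEN, 32) feature sequences
#   y: (n_examples,) integer module indices
#   model = build_model(); model.fit(X, y, epochs=20, batch_size=16)
```

In performance, the same feature pipeline runs on live input, and the predicted module index is what gets communicated to the tablets over WebSockets.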
Additionally, I created a very simple web app for audience participation during the post-concert talk-back and reception, where audience members could improvise and “remix” recordings from the performance, continuing the chain of collaboration and co-creation integral to Scelsi’s work. That is available here. I unfortunately do not have any high-quality documentation of the post-concert interaction period. I designed the audioreactive visuals in response to Scelsi’s personal signature emblem (a circle above a line roughly the length of the circle’s diameter), which he viewed as a less egocentric approach to the attribution of his work.
WORK SAMPLE 3:
EXCISION NO. 2: THEY DIDN’T KNOW WE WERE SEEDS
for viola and live electronics (10′)
written for Kurt Rohde
commissioned by the Barlow Endowment for Music Composition at Brigham Young University
>>>score<<<
program notes
“what didn’t you do to bury me
but you forgot that I was a seed”
-Dinos Christianopoulos, The Body and the Wormwood (1978)
excision no. 2 takes the concept of a seed and the roots it grows as a point of departure. Using both sonic representations of roots pushing through soil and spectral processes that metaphorically represent growth, it creates a performance system that ultimately questions the performer’s embodied relationship to their instrument by means of a transducer strapped to the back of the viola. Due to the fragile and awkward nature of some of the performance techniques involved (in particular, bowing beyond the bout and close to the scroll), the physical vibrations from the transducer are strong enough to affect the performer’s control of the bow, leading to unintentional sonic outputs and ultimately questioning the performer’s agency within the cybernetic feedback loop.
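As a rough illustration of the kind of feedback loop described above, the sketch below routes live input through a simple spectral process and out to a separate channel that could drive a transducer mounted on the instrument. It is a minimal sketch, not the piece’s actual processing: it assumes the python-sounddevice library and a two-output interface with the transducer on the second channel, and the “growth” process shown is a deliberately crude stand-in.

```python
# Minimal sketch of a viola-to-transducer feedback chain (not the actual
# electronics for the piece). Channel 1 carries the dry signal; channel 2
# carries a slowly accumulating spectral "growth" signal intended for a
# transducer strapped to the back of the instrument.
import numpy as np
import sounddevice as sd

BLOCK = 2048
prev_mag = np.zeros(BLOCK // 2 + 1)   # running spectral magnitude "memory"

def callback(indata, outdata, frames, time, status):
    global prev_mag
    x = indata[:, 0]
    spec = np.fft.rfft(x * np.hanning(len(x)))
    # crude "growth": magnitudes accumulate and decay slowly over time
    prev_mag = np.maximum(np.abs(spec), 0.97 * prev_mag)
    y = np.fft.irfft(prev_mag * np.exp(1j * np.angle(spec)), n=len(x))
    outdata[:, 0] = x          # dry viola signal (e.g., to the house)
    outdata[:, 1] = 0.5 * y    # processed signal to the transducer channel

with sd.Stream(channels=(1, 2), blocksize=BLOCK, callback=callback):
    sd.sleep(int(10 * 60 * 1000))   # run for roughly the piece's duration
```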
WORK SAMPLE 4:
SUBSUMPTION, NO. 1
site-specific interactive sound, light, and film installation
designed in collaboration with particle physicist and filmmaker James Beacham
(all code, interaction design, sound design, lighting design, and actual installation done by me; film by James)
program notes
A collaboration between creative technologist Tina Tallon and particle physicist and filmmaker James Beacham, Subsumption, No. 1 is a site-specific interactive sound, light, and video installation which examines different registers in which we engage with hidden physical, psychological, and social structures and the emergent behaviors that result from these interactions. The viewer is complicit in the construction of their experience, unique to each participant, subject to rules not fully legible—and perhaps unknowable—but inescapable. The viewer’s movements through the cryptoporticus mold field recordings, images, and data from the Large Hadron Collider at CERN into new visual and sonic patterns, questioning the biases that lead us to classify stimuli as noise or signal in an effort to elucidate the underlying structures that give rise to our experiences of space and time. Subsumption, No. 1 asks what we owe ourselves – and more importantly, what we owe each other – in protecting spaces of potentiality for all of the stories that have yet to be told, both on human and cosmic timescales. Ultimately, it asks us to consider our roles in constructing and maintaining unjust hegemonies, and to imagine what alternatives may exist if only we refuse to accept the status quo and insist upon continuing to search.
technical notes
Installed in the cryptoporticus of the American Academy in Rome during January and February 2022, the installation comprises a network of 3500 individually addressable LEDs woven into over 3500 m² of transparent nylon webbing encompassing the space, a film projected onto a raised, horizontal screen, 8 channels of live audio spread along the corridor, and a network of ultrasonic proximity sensors that control the spatialization and behavior of the audio and lights.
The webbing into which the lights are woven is designed to be as transparent as possible when the overhead lights in the space are on, with the nuances of the structure only becoming apparent in the dark as visitors activate the lights that articulate the space. The piece first opened to the public during the AAR’s Winter Open Studios exhibition, but was left running for the following month for data collection.
The drone is created via a feedback loop in the segment of ancient Roman aqueduct that runs underneath the American Academy’s cryptoporticus; the loop is activated by vibrations from the footsteps of the people above. The air column in the aqueduct in turn vibrates the screen of the projection box above the access shaft through which the film is projected, giving visitors a haptic experience of the physical and electronic processes at play in producing the sound.
A network of ultrasonic motion sensors along the length of the cryptoporticus controls both the spatialization of the lights and the audio, allowing the interaction both to follow visitors around the space and to attempt to entice them to move to other locations. Data related to timing and location are collected and used to train a simple machine learning algorithm, so that the ways in which the system follows visitors and attempts to guide them through the space constantly evolve. (Thankfully, because of COVID restrictions, very few people could enter the space at a time, which led to a high degree of granularity and personalization of the experience.) Each light is mapped to a pixel in the film, and through the interaction design, the visitor is able to “play” the video into the LED network.
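To give a rough sense of how the sensor data drives the lights and audio, the sketch below maps ultrasonic distance readings to a single visitor position along the corridor, sends that position to the audio engine, and computes a corresponding pool of LED brightness values. It is a minimal sketch under stated assumptions, not the installation code: the serial message format, sensor count, OSC address, and LED mapping are hypothetical, and it assumes pyserial and python-osc.

```python
# Minimal sketch (not the installation code): ultrasonic distance readings
# are reduced to a normalized visitor position, which is sent to the audio
# engine over OSC and used to light a gaussian "pool" of LEDs around the
# visitor. Serial format, OSC address, and constants are hypothetical.
import math
import serial
from pythonosc.udp_client import SimpleUDPClient

N_SENSORS = 16
N_LEDS = 3500
osc = SimpleUDPClient("127.0.0.1", 9000)      # audio engine (hypothetical port)
bus = serial.Serial("/dev/ttyUSB0", 115200)   # sensor microcontroller (hypothetical)

def led_frame(center, width=0.08):
    """Brightness (0-255) per LED, pooled around the normalized position."""
    return [int(255 * math.exp(-((i / N_LEDS - center) / width) ** 2))
            for i in range(N_LEDS)]

while True:
    # assumed format: one line per reading, comma-separated distances in cm
    line = bus.readline().decode(errors="ignore").strip()
    if not line:
        continue
    distances = [float(v) for v in line.split(",")[:N_SENSORS]]
    # the sensor reporting the smallest distance marks the visitor's location
    nearest = min(range(len(distances)), key=lambda i: distances[i])
    position = nearest / (N_SENSORS - 1)        # 0.0 .. 1.0 along the corridor
    osc.send_message("/visitor/position", position)   # drives spatialization
    frame = led_frame(position)       # would be pushed to the LED controller
```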
(click on any of the photos below to see components in more detail.)
WORK SAMPLE 5:
YAMMER
interactive audio installation
presented at NeurIPS 2025 Creative AI Track
>>>camera-ready NeurIPS paper<<<
note: because the conference had not occurred at the time of application submission, I do not yet have a longer-form video showing the actual installation and interaction with anyone other than myself (though the audio here is the actual output of the system). The video below is merely the preview and brief technical overview that was sent to populate the video compilation of all works running throughout the conference. The audioreactive visuals are not an integral part of the work; they were created because NeurIPS requests visual media for all artworks, regardless of the actual medium.
program notes
yammer is an interactive audio installation and performance environment that questions the ambiguities and limitations inherent in attempts to describe and represent music and other complex human expressive sonic events using the commonplace ontologies found in audio classification systems and large language models. Live audio produced by visitors to the installation is classified using YAMNet (which is trained on Google’s AudioSet, a corpus scraped from YouTube), and an immersive soundscape is created by combining the live audio input with playback and processing of AudioSet clips belonging to the same putative audio event classes, often to humorous and nonsensical ends. Ultimately, yammer entreats those engaging with the installation to question not only the datasets used in audio classification, but also the datasets underlying many of the other models with which they may engage on a daily basis. Additionally, it questions the artistic utility of text-to-sound and text-to-music models, and the role of embodied cognition in musical artificial intelligence.
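The sketch below shows the core classification step in Python: a chunk of live audio is run through YAMNet and the top AudioSet class is returned, which the installation would then use to select clips of the same class for playback. It is a minimal sketch rather than the installation code; it assumes tensorflow, tensorflow_hub, and python-sounddevice, and the clip-library lookup is left as a placeholder.

```python
# Minimal sketch (not the installation code): classify a short chunk of live
# audio with YAMNet, then report the top AudioSet class so matching clips can
# be pulled into the soundscape.
import csv
import numpy as np
import sounddevice as sd
import tensorflow_hub as hub

SR = 16000                                  # YAMNet expects 16 kHz mono audio
yamnet = hub.load("https://tfhub.dev/google/yamnet/1")
class_map = yamnet.class_map_path().numpy().decode()
with open(class_map) as f:
    class_names = [row["display_name"] for row in csv.DictReader(f)]

def classify_live_chunk(seconds=3.0):
    """Record a chunk from the room and return YAMNet's top class name."""
    audio = sd.rec(int(seconds * SR), samplerate=SR, channels=1, dtype="float32")
    sd.wait()
    scores, _embeddings, _spectrogram = yamnet(audio.flatten())
    mean_scores = np.mean(scores.numpy(), axis=0)   # average over time frames
    return class_names[int(np.argmax(mean_scores))]

label = classify_live_chunk()
print("YAMNet heard:", label)
# The installation would then pull AudioSet clips tagged with `label` from a
# local library and mix them (with processing) back into the soundscape.
```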
