Example Projects
Here are some examples of projects completed in the past! Note that this does not include work completed during previous employment, in my undergraduate degree, or contributing to my PhD thesis.
🥁 🔉 Electronic Snare Drum (ongoing) Long-running project to create a highly precise electronic snare drum, with the dynamic range and responsiveness of an acoustic drum, and without triggering/sampling. Email for further information.
Techniques: Bayesian Optimisation (custom, Python), Differential Digital Signal Processing (PyTorch), DSP (C++)
- Built using Bela, an ultra-low latency embedded platform (C++).
- Uses simple but high-resolution sensors (piezo, dynamic microphone), and fits on any snare drum with a mesh batter head.
- Learns to transform the (muted) performance signals into the sound of an acoustic snare drum (supervised learning).
- Optimises physical models using Bayesian optimisation, and via backpropagation through DDSP components.
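The model-fitting step above can be illustrated with a toy Bayesian optimisation loop. This is a minimal sketch, not the project's optimiser: the GP kernel, the lower-confidence-bound acquisition, and the stand-in `spectral_loss` objective (matching a single hypothetical decay parameter against a target of 0.37) are all illustrative assumptions.

```python
import numpy as np

def rbf_kernel(a, b, lengthscale=0.2):
    """Squared-exponential covariance between 1-D input arrays."""
    d = a[:, None] - b[None, :]
    return np.exp(-0.5 * (d / lengthscale) ** 2)

def gp_posterior(X, y, X_query, jitter=1e-6):
    """GP posterior mean and variance at the query points."""
    K = rbf_kernel(X, X) + jitter * np.eye(len(X))
    Ks = rbf_kernel(X, X_query)
    alpha = np.linalg.solve(K, y)
    v = np.linalg.solve(K, Ks)
    mean = Ks.T @ alpha
    var = 1.0 - np.sum(Ks * v, axis=0)  # prior variance is 1 for this kernel
    return mean, np.maximum(var, 1e-12)

def spectral_loss(decay):
    """Hypothetical stand-in objective: distance of one model
    parameter from an (unknown to the optimiser) target value."""
    return (decay - 0.37) ** 2

rng = np.random.default_rng(0)
X = rng.uniform(0.0, 1.0, size=3)            # initial random evaluations
y = np.array([spectral_loss(x) for x in X])
grid = np.linspace(0.0, 1.0, 201)

for _ in range(15):
    mean, var = gp_posterior(X, y, grid)
    lcb = mean - 2.0 * np.sqrt(var)          # lower-confidence-bound acquisition
    x_next = grid[np.argmin(lcb)]            # most promising parameter to try next
    X = np.append(X, x_next)
    y = np.append(y, spectral_loss(x_next))

best = X[np.argmin(y)]                       # best parameter found so far
```

The loop first explores (high posterior variance pulls the LCB down far from data), then exploits the surrogate's minimum; in the project the expensive objective would be a perceptual/spectral comparison against the acoustic drum rather than this toy quadratic.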
🎻 🔉 Investigating Unsupervised Time-Domain Representation Learning for Digital Instruments (2019) Placement project with Bela as part of Media & Arts Technology Doctoral Training Scheme, supervised by Dr. Andrew McPherson. [pdf]
Techniques: Deep Learning (PyTorch, Python)
- VRAE-ST model developed for encoding and re-synthesis of time-domain signals of bowed string vibrations (compression ratio 100:3, at 11.025kHz).
- Variational recurrent autoencoder (VRAE) enhanced with a sequence transformer (ST) to remove distributional shifts in the form of magnitude and temporal perturbations; the ST learned transformations resembling downsampling and amplitude normalisation, applied before encoding and after decoding.
- Demonstrated that an RNN can learn sequences of latent encodings (trajectories) in the VRAE-ST, for synthesis of complex (and novel) timbres.
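For intuition, here are hand-coded analogues of the transformations the ST learned (amplitude normalisation and decimation-style downsampling). In the project these were learned end-to-end; only the 11.025 kHz sample rate is taken from the description above, and the test tone is invented for illustration.

```python
import numpy as np

def normalise_amplitude(x, eps=1e-8):
    # scale a window of samples to unit peak amplitude
    return x / (np.max(np.abs(x)) + eps)

def downsample(x, factor):
    # naive decimation: keep every `factor`-th sample (no anti-alias filter)
    return x[::factor]

sr = 11025                                  # sample rate used in the project
t = np.arange(sr) / sr                      # one second of audio
x = 0.3 * np.sin(2 * np.pi * 196.0 * t)     # quiet sinusoid standing in for a string signal
y = downsample(normalise_amplitude(x), 4)   # normalised, then reduced to sr/4
```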
🥁 🔉 Osci-rhythm (2019) Web-app that tracks metrical frequencies of a MIDI stream in real time, using a Gradient Frequency Neural Network, in order to sequence melodies. [video, code]
Techniques: Oscillating Neural Networks (PyTorch), MIDI Processing (mido, Python), Visualisation (p5.js), Synthesis (Tone.js)
- Dynamical neural network (non-linear oscillator bank) which synchronises to rhythmic frequencies in a MIDI stream.
- Network state used to sequence melodies in real time, using a Javascript front-end.
- User can snapshot the current melody, reset it, or add an additional bank of oscillators to layer different melodies.
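The synchronisation behaviour can be illustrated with a single adaptive phase oscillator entraining to a pulse train. This is a much-simplified stand-in for a Gradient Frequency Neural Network (one oscillator, event-driven updates rather than a continuous-time oscillator bank); the beat frequency, detuning, and coupling gains are illustrative assumptions.

```python
import numpy as np

# A pulse train at 120 BPM (2 Hz); the oscillator starts detuned at 1.7 Hz.
beat_hz = 2.0
period = 1.0 / beat_hz
freq, phase = 1.7, 0.0
k_phase, k_freq = 0.5, 0.08        # illustrative coupling gains

for _ in range(200):                               # 200 incoming pulses
    phase = (phase + freq * period) % 1.0          # free-run until the next pulse
    err = np.sin(2 * np.pi * phase)                # phase error observed at the pulse
    phase = (phase - k_phase * err / (2 * np.pi)) % 1.0  # nudge phase toward the pulse
    freq -= k_freq * err                           # adapt frequency toward the beat
```

After a brief transient the oscillator locks, with `freq` converging to the beat frequency and the phase error at each pulse shrinking to zero; the network state (here, phase and frequency) is what drives the melody sequencer in the app.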
🥁 🎼 AttrMidiMe (2021) Extends Magenta’s MidiMe with attribute regularisation for further-personalised models, in order to generate drum rhythms with variable complexity! [listen, code]
Techniques: Deep Learning (Tensorflow.js, Javascript/Typescript)
- Additional loss term in the personalised VAE (learned on top of MusicVAE) ensures a latent dimension is regularised according to attributes salient to a listener, e.g. syncopation score (rhythmic complexity).
- Given ~5-10 drum rhythms with varying syncopation, can generate novel rhythms with user-specified levels of complexity.
Open source contributions:
- (2020) bibo, command line reference manager: added auto-complete for file commands.
- (2019) resonators, resonant filter bank synthesis on Bela: added Python bindings.
Other unlisted projects include:
- (2017) Real-time MIDI analysis and improvisation suggestions for electronic drums, using Python (custom Sequitur algorithm, mido, threading).
- (2018) Sonic Breadcrumbs at the Abbey Road Hackathon: audio-augmented-reality interactive experiences, using Javascript (node, p5.js, Chirp).
- (2019) Graph-based analysis of music lyrics from several sources, with data acquired through scraping and communication with LyricFind, using Python (networkx, sklearn, swagger).
- (2019) Automated generation of novel poems using content from The Poetry Society, built with Python (custom Markov models, scipy, flask), Java (Processing for Android), and Arduino.
Page design by Ankit Sultana