#MusicBricks
TAKING MUSIC TECH IDEAS TO MARKET
#MusicBricks toolkit

Gesture Sensors for Music Performance
The R-IoT sensor module embeds a 9-axis sensor with 3 accelerometers, 3 gyroscopes and 3 magnetometers, all 16-bit. It provides 3D acceleration, 3-axis angular velocity and absolute orientation at a frame rate of 200 Hz over WiFi. The core of the board is a Texas Instruments WiFi module with a 32-bit ARM Cortex processor that executes the program and handles the network stack. It is compatible with TI’s Code Composer and with Energia, a port of the Arduino environment for TI processors. The sensor module is complemented by a series of Max/MSP analysis modules, based on the MuBu & Co Max library, that facilitate its use. This collection of analysis tools covers filtering and analysis, computing scalar intensity from accelerometer or gyroscope data, kick detection, and detection of motion patterns such as “freefall”, spinning, shaking and slow motion. Further motion recognition tools are available in the MuBu & Co library.
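
Outside Max/MSP, the sensor frames can also be consumed by any OSC-capable environment, since the board streams its data over WiFi. The following minimal Python sketch listens for incoming messages with the python-osc package; the UDP port and the address pattern are assumptions and need to be matched to the actual firmware configuration.

    # Minimal sketch: receive R-IoT sensor frames as OSC messages over WiFi.
    # The port and address pattern below are assumptions; adjust them to match
    # the module's firmware configuration.
    from pythonosc.dispatcher import Dispatcher
    from pythonosc.osc_server import BlockingOSCUDPServer

    def handle_frame(address, *values):
        # values would carry accelerometer, gyroscope and magnetometer readings
        print(address, values)

    dispatcher = Dispatcher()
    dispatcher.map("/riot/*", handle_frame)   # hypothetical OSC address pattern

    server = BlockingOSCUDPServer(("0.0.0.0", 8888), dispatcher)  # port is an example
    server.serve_forever()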

Melody Extraction
This module includes a number of pitch tracking and melody transcription algorithms implemented in the Essentia library. Applications include visualization of the predominant melody, pitch tracking, tuning rating, and source separation.
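
As an illustration, the predominant melody can be extracted offline with Essentia’s Python bindings; the algorithm name below (PredominantPitchMelodia) refers to current Essentia releases, and the file name and parameters are only examples.

    # Sketch: extract the predominant melody f0 contour with Essentia's Python bindings.
    import essentia.standard as es

    audio = es.MonoLoader(filename='song.wav', sampleRate=44100)()  # example file

    # MELODIA-style predominant melody extraction
    extractor = es.PredominantPitchMelodia(frameSize=2048, hopSize=128)
    pitch_values, pitch_confidence = extractor(audio)

    # One f0 value (Hz) per hop; 0 Hz marks unvoiced frames.
    print(len(pitch_values), 'pitch frames')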

Real-time Onset Description
This module detects onsets in real time and provides a number of audio descriptors. It is part of essentiaRT~, a real-time subset of Essentia (MTG’s open-source C++ library for audio analysis and audio-based music information retrieval) implemented as an external for Pd and Max/MSP. As such, the current version does not yet include all of Essentia’s algorithms, but it offers a set of features to slice audio and provide on-the-fly descriptors for real-time classification. A number of extractors analyse instantaneous features such as the onset strength, the spectral centroid and the MFCCs over a fixed-size window of 2048 points after an onset is reported. Furthermore, essentiaRT~ can perform estimations on larger time frames of user-defined length and report finer descriptions in terms of noisiness, f0, temporal centroid and loudness.
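
For comparison, a similar analysis can be run offline with Essentia’s Python bindings. This is only a sketch of the underlying algorithms, not the Pd/Max external itself, and the file name is an example.

    # Sketch: offline onset detection and per-onset descriptors with Essentia (Python).
    import essentia.standard as es

    audio = es.MonoLoader(filename='loop.wav')()       # example file, 44.1 kHz mono

    onset_times, onset_rate = es.OnsetRate()(audio)    # onset times in seconds

    window = es.Windowing(type='hann')
    spectrum = es.Spectrum()
    mfcc = es.MFCC()
    centroid = es.Centroid(range=22050)

    for t in onset_times:
        start = int(t * 44100)
        frame = audio[start:start + 2048]              # fixed-size 2048-point window
        if len(frame) < 2048:
            break
        spec = spectrum(window(frame))
        bands, coeffs = mfcc(spec)
        print(t, centroid(spec), coeffs[:3])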

Sonarflow
A slick UI for browsing music by zooming into a colourful world of bubbles representing genres, artists or moods, which lets users discover new music online from various sources. It is available for iOS and Android, with APIs connecting to 7digital, last.fm, YouTube, Spotify and more. A demo app is in the Google Play Store. See sonarflow.com and the source code on github.com/spectralmind.

Synaesthesia
A colour detection tool for triggering and controlling audio. Based on OpenCV, it includes a colour detection engine and an object tracking algorithm to control music and sound using plain colours or coloured objects. Using the camera, it can trigger musical events. It includes an example app with a dedicated GUI and is available for OS X and iOS. Find it on GitHub: https://github.com/stromatolite/Synaesthesia
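
The general idea can be sketched in a few lines of OpenCV (shown here in Python rather than the OS X/iOS code, so purely illustrative): threshold the camera image in HSV space for a chosen colour and fire an event when enough matching pixels are present. The colour range and pixel threshold below are arbitrary examples.

    # Sketch of colour-triggered events with OpenCV (not the Synaesthesia codebase).
    import cv2
    import numpy as np

    cap = cv2.VideoCapture(0)                 # default camera
    lower = np.array([100, 150, 50])          # example HSV range for a blue object
    upper = np.array([130, 255, 255])

    while True:
        ok, frame = cap.read()
        if not ok:
            break
        hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
        mask = cv2.inRange(hsv, lower, upper)
        if cv2.countNonZero(mask) > 5000:     # enough matching pixels: trigger an event
            print('trigger')                  # e.g. send a MIDI note or OSC message here
        if cv2.waitKey(1) & 0xFF == ord('q'):
            break

    cap.release()
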
Freesound API
Freesound (www.freesound.org) is a state-of-the-art online collaborative audio database that contains over 200K Creative Commons licensed sound samples. All these sounds are annotated with user-provided free-form tags and textual descriptions that enable text-based retrieval. Content-based audio features are also extracted from the sound samples to provide sound similarity search. Users can browse, search and retrieve information about the sounds, find sounds similar to a given target (based on content analysis), retrieve automatically extracted features from audio files, and perform advanced queries combining content analysis features and other metadata (tags, etc.). The Freesound API brick provides access to a RESTful API with API clients in Python, JavaScript and Objective-C.
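
As a minimal example of the text search endpoint (APIv2), the following request lists matching sounds; an API token from freesound.org is required, and the query and requested fields are only examples.

    # Sketch: text search against the Freesound APIv2 with plain HTTP requests.
    import requests

    API_KEY = 'YOUR_API_KEY'                  # obtain a token from freesound.org
    resp = requests.get(
        'https://freesound.org/apiv2/search/text/',
        params={'query': 'rain', 'fields': 'id,name,tags', 'token': API_KEY},
    )
    for sound in resp.json()['results']:
        print(sound['id'], sound['name'])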

Rhythm and Timbre Analysis
This is a library that takes audio data as input and analyzes the spectral, rhythmic and timbral information in the audio to describe its acoustic content. The rhythmic and timbral features it captures can be stored or processed directly to compute acoustic similarity between two audio segments, find similar-sounding songs (or song segments), create playlists of music of a certain style, detect the genre of a song, make music recommendations and much more. Depending on the needs, a range of audio features is available: Rhythm Patterns, Rhythm Histograms (i.e. a rough BPM peak histogram), Spectrum Descriptors and more. The library is available for Python, Matlab and Java.
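
Once feature vectors have been extracted for two audio segments, acoustic similarity reduces to a vector distance. The sketch below uses random placeholder vectors in place of the library’s Rhythm Pattern output and a simple cosine similarity.

    # Sketch: similarity between two (placeholder) rhythm/timbre feature vectors.
    import numpy as np

    features_a = np.random.rand(1440)         # stand-in for song A's Rhythm Pattern vector
    features_b = np.random.rand(1440)         # stand-in for song B's Rhythm Pattern vector

    similarity = np.dot(features_a, features_b) / (
        np.linalg.norm(features_a) * np.linalg.norm(features_b))
    print('cosine similarity:', similarity)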

MusicBricksTranscriber
(Melody & Bass Transcription + Beat & Key & Tempo Estimation)
The MusicBricksTranscriber provided by Fraunhofer IDMT is an executable that transcribes the main melody and bass line of a given audio file. The beat times, the key, and the average tempo are also estimated. The results can be provided as MIDI, MusicXML, or plain XML files. In addition, a Python wrapper is included to further process the analysis results.
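
One simple way to work with the transcription afterwards is to read the resulting MIDI file in Python; the snippet below uses the generic mido package rather than the bundled wrapper, and the file name is an example.

    # Sketch: list the transcribed note events from the resulting MIDI file.
    import mido

    midi = mido.MidiFile('transcription.mid')     # example output file
    for track in midi.tracks:
        for msg in track:
            if msg.type == 'note_on' and msg.velocity > 0:
                print(msg.note, msg.velocity, msg.time)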

POF
POF = Pd + openFrameworks: openFrameworks externals for Pure Data, providing OpenGL multithreaded rendering and advanced multitouch event management. A recent addition to #MusicBricks by Antoine Rousseau, in collaboration with Matt Black of Ninja Tune, it makes building cross-platform Pd music apps much easier. It also feeds the Mobile Orchestra through SyncJams. Find it on GitHub.

Musimap
Musimap’s algorithm applies fifty-five weighted variables to each music unit (e.g. tracks, genres, labels) so as to model the world’s discography as a multi-layered system of cross-matched influences, based on a musicological, lexicological and socio-psychological approach. The granular, proprietary database includes over 3B data points and 2B relations, and will soon cover 50M songs. Its neural music network is the result of a unique combination of in-depth human curation and the latest AI technologies to engineer a multi-layered system.

Real-time Pitch Detection
The real-time pitch detection estimates the predominant melody notes (monophonic) or multiple notes (polyphonic) from consecutive blocks of audio samples. This makes it possible to transcribe the currently played or sung note pitches from a recorded instrument or vocal performance. The monophonic version also estimates the exact fundamental frequency values. Typical applications are music games and music learning applications. Fraunhofer IDMT provides a C++ library as well as sample projects that show how to integrate the functionality.
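
The Fraunhofer library itself is a closed C++ component, so the following is only a rough NumPy sketch of the general technique for the monophonic case: estimate the fundamental frequency of one block of samples from its autocorrelation.

    # Sketch: monophonic f0 estimation on one audio block via autocorrelation.
    import numpy as np

    def estimate_f0(block, sample_rate, f_min=80.0, f_max=1000.0):
        block = block - np.mean(block)
        corr = np.correlate(block, block, mode='full')[len(block) - 1:]
        lag_min = int(sample_rate / f_max)
        lag_max = int(sample_rate / f_min)
        lag = lag_min + np.argmax(corr[lag_min:lag_max])
        return sample_rate / lag

    # Example: a 440 Hz sine in a 2048-sample block at 44.1 kHz
    sr = 44100
    t = np.arange(2048) / sr
    print(estimate_f0(np.sin(2 * np.pi * 440.0 * t), sr))   # close to 440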

Search by Sound Music Similarity
The Search by Sound online system builds on the Rhythm and Timbre Analysis (see above) and can be used via a REST Web API (the SMINT API) to upload songs and to find and match acoustically similar songs in terms of rhythm and timbre, without the need to install anything or run the analysis yourself. It can be used with your own custom music dataset or with the readily available content from freemusicarchive.org, which has already been pre-analyzed for rhythm and timbre, to find music in that archive matching a particular rhythm or timbre.

Goatify
The Goatify tool provided by Fraunhofer IDMT is an executable that automatically replaces the main melody in a song with a given sample. To do so, the main melody is extracted and removed from the song. The sample is then placed and pitched according to the melody notes in the song; for proper pitching, the pitch of the sample itself is extracted beforehand. The tool is delivered with free sound samples (goat, etc.) from www.freesound.org for direct use.

SyncJams
A recent addition to #MusicBricks, SyncJams is an open-source standard for wireless synchronisation between music apps and for communicating key/scale between players in a ‘mobile orchestra’, authored by Chris McCormick in collaboration with Matt Black of Ninja Tune. It is also defined as a “zero-configuration network-synchronised metronome and state dictionary for music applications”. Currently Pure Data and Python are supported. Find it on GitHub.

Real-time Pitch-Shifting and Time-Stretching
The real-time pitch shifting library allows users to change the pitch of audio material while keeping the tempo. It also enables changing the tempo without changing the pitch. Typical applications are music games and music learning applications, as well as real-time performances. Fraunhofer IDMT provides a C++ library as well as sample projects that show how to integrate the functionality.
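
The same two effects can be reproduced offline with librosa as a point of reference (this is not the Fraunhofer real-time library, just an illustration of the operations); the input file and parameter values are examples.

    # Sketch: offline pitch shifting and time stretching with librosa.
    import librosa
    import soundfile as sf

    y, sr = librosa.load('input.wav', sr=None)

    shifted = librosa.effects.pitch_shift(y, sr=sr, n_steps=3)   # up 3 semitones, same tempo
    stretched = librosa.effects.time_stretch(y, rate=1.25)       # 25% faster, same pitch

    sf.write('shifted.wav', shifted, sr)
    sf.write('stretched.wav', stretched, sr)
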
A demo created as a collaboration between IRCAM and UPF to demonstrate synergies between their #MusicBricks tools. Watch the video
#MusicBricks blog
#MusicBricks incubated team gets European funding
I expect you know someone who has some programming skills. Perhaps someone with a bit of experience with EEG brain-computer interfaces. I imagine that person would probably enjoy living for a while in Portugal, doing some neurotechnology work in the beautiful, sunny city of Porto with some brilliant members of the Music Tech Fest family.
SoundCloud teams up with #MusicBricks
#MusicBricks is exactly the sort of open and collaborative music creation SoundCloud is about. It opens up new options for makers and musicians to be creative and we’re excited to support it and see what amazing creations are born.
Michela Magas: Innovation Luminary
In a gala award ceremony in Amsterdam, festival founder Michela Magas was presented with one of the highest awards for Innovation in Europe.
New tools in the #MusicBricks toolkit!
We’re really excited to welcome two new members of the #MusicBricks family: Musimap and Synaesthesia.
Ben Heck Show and #MusicBricks
In this week’s episode of The Ben Heck Show, Ben and the team interviewed, judged and, in some cases, even joined #MusicBricks teams.
#MTFBerlin: Announcing…
Ben Heck, host of popular YouTube channel The Ben Heck Show joins us with his crew at #MTFBerlin to hack, invent, share a bit of maker knowledge and film a special episode of the show surrounded by our 50 hackers in the element14 Hack Camp.
#MTFBerlin: Announcing…
Mercury Prize nominee ESKA will not only appear on the main stage at #MTFBerlin… her plan is to form collaborations, invent new types of musical instruments and experiment in the giant creative laboratory that is Music Tech Fest.
Let’s get down to business
#MTFBerlin is nicely tucked between The Great Escape and Midem. Since you’re in Europe anyway…
The festival of (exceptional) music ideas
Hi-Note uses head motion tracking and nuanced breath control to open up new possibilities for both disabled and non-disabled musicians.
Meet Terry – our new Producer
We’re incredibly proud and phenomenally lucky to be able to officially welcome Terry to the MusicTechFest family. She’s not just a new member of the team – she represents bold new ambitions, exciting new directions and an expanding network of artists and music technology to bring into the MTF community.
#MTFBerlin at Funkhaus: 27-29 May 2016
Berlin is Europe’s capital of music tech – and MusicTechFest will once again bring the entire music technology ecosystem together under one roof: artists and scientists; academia and industry; makers, inventors, performers, composers and visionaries.
The #MusicBricks toolkit so far has completely exceeded our expectations…
By now, you’ll have heard about the #MusicBricks toolkit that puts groundbreaking music technology in the hands of musicians, hackers and creative developers, and supports the most promising ideas to commercial prototype. We’re going to be showcasing the successes from that project at our next big event: #MTFBerlin in May 2016.
#MusicBricks is a collaboration between Stromatolite, Sigma-Orionis, Ircam-Centre Pompidou, Music Technology Group, Vienna University of Technology, and Fraunhofer IDMT
For more information, join our newsletter
#MusicBricks has received funding from the European Union’s Horizon 2020 research and innovation programme under grant agreement 644871