Birdbots

Conceptual project description

What would happen if we activated some self-learning robots in an environment where they are exposed to various stimuli (sound, human presence, light, etc.)?
The idea is to observe the evolution of their communication language. Instead of using any kind of digital network, they will communicate through the ability to synthesize and analyze human-audible sound. Human interaction will then consist not only of mere presence in the same space, but also of the analysis and incorporation of sounds generated by the audience, which in some cases could trigger a chain reaction among the robots.
The presentation of this installation will look a bit like a multichannel audio installation, where every robot has and manages its own speaker but is in constant dependency on the sounds created spontaneously by the surrounding robots and on the presence and sounds of eventual visitors. There will be different states depending on the time of day, and every entity will have its own unique character, determined by parameters such as "shyness", "randomness", "dominance", "empathy", etc. (a sketch of how such parameters might drive behaviour follows below).
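A minimal sketch of the idea, not the actual implementation: the parameter names come from the description above, but the thresholds, the Character class and the should_respond function are illustrative assumptions of my own.

```python
# Sketch: per-robot "character" parameters and a simple rule for deciding
# whether the robot answers a sound it just heard. All thresholds and
# function names are illustrative assumptions, not the final design.
import random
from dataclasses import dataclass

@dataclass
class Character:
    shyness: float      # 0.0 (bold) .. 1.0 (rarely answers)
    randomness: float   # probability of answering regardless of the input
    dominance: float    # tendency to answer loudly / at length
    empathy: float      # tendency to imitate the heard sound

def should_respond(character: Character, incoming_loudness: float) -> bool:
    """Decide whether this robot answers the sound it just heard."""
    # A shy robot needs a louder stimulus before it reacts.
    if incoming_loudness > character.shyness:
        return True
    # Occasionally answer anyway, driven by the randomness parameter.
    return random.random() < character.randomness
```

In such a scheme, "dominance" and "empathy" would shape *how* the robot answers (volume, length, imitation of the heard material) rather than *whether* it answers.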

General technical description

NTT is a network of low-power autonomous computers. They will be presented hanging from a nylon structure, and each one has a small loudspeaker and two binaural microphones that provide, apart from high-quality stereo audio input, a relative position of incoming sounds obtained by comparing the amplitude of the two audio signals. The idea is not to compete with speech recognition or speech synthesis; it is more like asking: "what would happen if you gave a brain, and the ability to develop, to a modern audio synthesizer able to generate thousands of tones?"

My particular and experimental approach to sound analysis is to analyze multiple characteristics of the sound simultaneously and in real time, among them frequency analysis, chord analysis, analysis of percussive (consonant-like) elements, audio envelope analysis (volume curve), and silence analysis: the frequency and duration of relative silences (I expect some residue due to the natural reverberation of any room). All this data will be simplified and collected in a big array that grows gradually. At the same time, this array will be used to compare incoming sounds with memorized sounds in order to detect and react to known sounds, as sketched below.
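A minimal sketch of the two mechanisms described above, under assumptions of my own: (1) estimating the relative direction of a sound by comparing the amplitude of the left and right microphone signals, and (2) reducing each sound to a small feature vector, storing it in a growing array, and matching new sounds against the memorized ones. The specific features, the distance metric and the matching threshold are illustrative, not the final design.

```python
import numpy as np

def relative_direction(left: np.ndarray, right: np.ndarray) -> float:
    """Return a value in [-1, 1]: -1 = fully left, +1 = fully right,
    estimated only from the amplitude difference of the two channels."""
    l_rms = np.sqrt(np.mean(left ** 2))
    r_rms = np.sqrt(np.mean(right ** 2))
    return (r_rms - l_rms) / (r_rms + l_rms + 1e-12)

def features(mono: np.ndarray, sample_rate: int) -> np.ndarray:
    """Reduce a short sound to a simplified feature vector (assumed feature set)."""
    spectrum = np.abs(np.fft.rfft(mono))
    freqs = np.fft.rfftfreq(len(mono), d=1.0 / sample_rate)
    centroid = np.sum(freqs * spectrum) / (np.sum(spectrum) + 1e-12)  # brightness
    rms = np.sqrt(np.mean(mono ** 2))                                 # loudness
    envelope_peak = np.argmax(np.abs(mono)) / len(mono)               # attack position
    silence_ratio = np.mean(np.abs(mono) < 0.01)                      # relative silence
    return np.array([centroid, rms, envelope_peak, silence_ratio])

memory: list[np.ndarray] = []   # the gradually growing array of known sounds

def remember_and_match(sound: np.ndarray, sample_rate: int, threshold: float = 0.1):
    """Store the new sound's features and report the closest known sound, if any."""
    f = features(sound, sample_rate)
    best = None
    if memory:
        distances = [np.linalg.norm(f - m) for m in memory]
        i = int(np.argmin(distances))
        if distances[i] < threshold:
            best = i          # index of a recognized, previously heard sound
    memory.append(f)
    return best
```

Chord and percussive-element analysis would add further dimensions to the same feature vector; the matching step stays the same regardless of how many features are collected.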

Technical requirements

Raspberry Pi computers, sound cards, mini speakers, Arduinos, electret microphones, nylon string to hang the bots, RGB LEDs, power cable extensions, 5 V power supplies.


Ntt: a platform for sonic interaction.

These portable sound boxes, equipped with speakers, sensors, wireless, Bluetooth, microphones, etc., are actually a perfect playground for creating multichannel generative audio installations / performances and all kinds of sonification events.

Examples:
-- Interactive audio guide
-- Adding "voice" to plants through Bluetooth moisture sensors
-- Instrument tuner, acoustic / electronic
-- Sonic games
-- Ear trainer
-- Automatic / algorithmic radio
-- Time / space intercom
-- Audio storage
