I am currently using Max/MSP to create an interactive system between lights and sound.
I am using Philips Hue lighting, which I have hooked up to Max/MSP, and I now want to trigger an increase in brightness/saturation on the input of a note from a MIDI instrument. Does anyone have any ideas how this might be accomplished?
I have built this.
I used the shell object, and then fed an array of parameters into it via a JavaScript file that talks to the Hue API. There is a lag time of about 1/6 of a second between commands.
JavaScript file:
inlets = 1;
outlets = 1;

// bridge address and API username
var bridge = "192.168.0.100";
var hash = "newdeveloper";

// default light-state values
var bulb = 1;
var brt = 200;
var satn = 250;
var hcolor = 10000;

// a list of [bulb, hue, brightness, saturation, transition time]
// builds the PUT request for the bridge
function list(bulb, hcolor, brt, satn, tran) {
    execute('PUT', 'http://' + bridge + '/api/' + hash + '/lights/' + bulb + '/state',
        '"{\\\"on\\\":true,\\\"hue\\\":' + hcolor + ',\\\"bri\\\":' + brt + ',\\\"sat\\\":' + satn + ',\\\"transitiontime\\\":' + tran + '}"');
}

// sends the assembled curl command out to the [shell] object
function execute($method, $url, $message) {
    outlet(0, "curl --request", $method, "--data", $message, $url);
}
To control Philips Hue you need to issue calls to a RESTful, HTTP-based API (see http://www.developers.meethue.com/documentation/core-concepts), using the [jweb] or [maxweb] objects: https://cycling74.com/forums/topic/making-rest-call-from-max-6-and-saving-the-return/
Generally, however, to control lights you use DMX, the standard protocol for professional lighting control. Here is a somewhat lengthy post on the topic: https://cycling74.com/forums/topic/controlling-video-and-lighting-with-max/ (scroll down to my post from APRIL 11, 2014 | 3:42 AM).
How to change the bri/sat of your lights is explained at the following link (registration/login required):
http://www.developers.meethue.com/documentation/lights-api#16_set_light_state
You will need to know the IP address of your Hue bridge, which is explained here: http://www.developers.meethue.com/documentation/getting-started, and a valid username.
Also bear in mind the performance limitations. As a general rule you can send up to 10 lightstate commands per second. I would recommend having a 100ms gap between each one, to prevent flooding the bridge (and losing commands).
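For reference, the raw request behind each light-state change looks something like this (the placeholders <bridge-ip> and <username> stand for your own bridge address and API username; transitiontime is in steps of 100 ms):
curl --request PUT \
     --data '{"on":true, "bri":254, "sat":254, "transitiontime":1}' \
     http://<bridge-ip>/api/<username>/lights/1/state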
Are you interested in finding out details of how to map this data from a MIDI input to the Philips Hue lights within Max? Or are you already familiar with Max?
Using Tommy b's JavaScript (which you could put into a [js] object), you could, for example, scale the MIDI messages you want to use with the midiin and borax objects and map them to the outputs you want with the scale object. Karlheinz Essl's RTC-lib is a good place to start with algorithmic composition if you want to transform the data at all: http://www.essl.at/software.html
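As a rough, untested sketch of that mapping inside a [js] object (bulb, hue, saturation and transition time are fixed example values here), an incoming MIDI velocity gets scaled to a Hue brightness and sent out as the list Tommy b's script expects:
inlets = 1;
outlets = 1;

// fixed example values; adjust to taste
var bulb = 1;
var hcolor = 10000;
var satn = 250;
var tran = 1; // transition time, in steps of 100 ms

// an int arriving in the inlet is treated as a MIDI velocity (0-127)
function msg_int(velocity) {
    var brt = Math.round(velocity / 127 * 254);  // scale 0-127 to 0-254
    outlet(0, bulb, hcolor, brt, satn, tran);    // [bulb, hue, bri, sat, transitiontime]
}
You would feed this from notein/borax and connect its output to the Hue script above.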
+1 for DMX light control via Max. There are lots of good max-to-dmx tutorials and USB-DMX hardware is getting pretty cheap. However, as someone who previously believed in dragging a bunch of computer equipment on stage just to control a light or two with an instrument, I'd recommend researching and purchasing a simple one channel "color organ" circuit kit (e.g., Velleman MK 110). Controlling a 120/240V light bulb via audio is easier than you might think; a computer for this type of application is usually overkill. Keep it simple and good luck!
I have a simple script that uses music21 to process the notes in a MIDI file:
import music21
score = music21.converter.parse('www.vgmusic.com/music/console/nintendo/nes/zanac1a.mid')
for i in score.flat.notes:
    print(i.offset, i.quarterLength, i.pitch.midi)
Is there a way to also obtain a note's voicing / midi program using a flat score? Any pointers would be appreciated!
MIDI channels and programs are stored on Instrument instances, so use getContextByClass(instrument.Instrument) to find the closest Instrument in the stream, and then access its .midiProgram.
Be careful:
.midiChannel and .midiProgram are 0-indexed, so MIDI channel 10 will be 9 in music21, etc. (we're discussing changing this behavior in the next release)
Some information might be missing if you're not running the bleeding edge version (we merged a patch yesterday on this topic), so I advise pulling from git: pip install git+https://github.com/cuthbertLab/music21
.flat is going to kill you, though, if the file is multitrack. If you follow my advice you'll just get the last instrument on every track. 90% of the time people doing .flat actually want .recurse().
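A minimal sketch of that suggestion (untested, and assuming a local copy of the MIDI file rather than the URL used above):
from music21 import converter, instrument

score = converter.parse('zanac1a.mid')  # local copy assumed

for n in score.recurse().notes:
    # find the closest Instrument in the surrounding stream context
    inst = n.getContextByClass(instrument.Instrument)
    program = inst.midiProgram if inst is not None else None
    channel = inst.midiChannel if inst is not None else None
    # n.pitches works for both Note and Chord objects
    print(n.offset, n.quarterLength, [p.midi for p in n.pitches], program, channel)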
I know this forum dislikes "open" questions like this; nevertheless I'd like somebody to help untie the knot in my head, much appreciated.
The goal is simple:
read a stereo 32-bit 44100 S/s I2S signal from 2 Adafruit SPH0645 mics
create a WAV header and store the data on an SD card
I've been at this for a few days now and I know that this will be much more complicated than I originally thought. Main reason: signal quality. As in most tutorials on this subject, the simplest "hello world" for these mics is a looped polling for I2S samples: poll, fill buffer, output via serial or write to the SD card. This returns a choppy, noisy, sped-up version of the real audio. The filling of the internal DMA buffers can be seen as constant, but the rest is mostly chaos, so
how do I sync these DMA buffers with the rest of my code?
From experience with the STM32 HAL I'd imagine some register which can be set to throw an interrupt whenever a buffer is full, or an event which can be sent between tasks via queues. Examples on this subject either poll in a main loop, in mono, at an abysmal sample rate and bit depth, or use pages of overkill code and never address what it does ("just copy and it works", not good). Does the ESP32 Arduino framework provide some way to do this properly? The Espressif documentation isn't something to look forward to, since some of their I2S interface functions don't even work (if you are researching this topic as well, you too might have noticed that i2s_read only returns zeros). Just a hint in the right direction would help; I'm writing my own code anyway. Interrupts? Events? Timers? Polling for full buffers? Only you might know.
have a good one, thx
Thanks to https://github.com/atomic14/ I now have an answer for a syncing method which works very well. This method has been tried by https://esp32.com/viewtopic.php?t=12546, who also didn't fully understand what was going on: the Espressif I2S interface offers a flag, stored in an event, which is triggered every time one of the specified DMA buffers has received a full set of data, ergo, is full. It looks like this:
while (<your condition>) {
    i2s_event_t evt;
    if (xQueueReceive(<your queue>, &evt, portMAX_DELAY) == pdPASS) {
        if (evt.type == I2S_EVENT_RX_DONE) {
            size_t bytesRead = 0;
            do {
                // read data via i2s_read or i2s_read_bytes
            } while (bytesRead > 0);
        }
    }
}
No data is stored in this queue, but rather a flag, which can then be used to synchronize DMA filling and further buffering/calculating/sending of the read data.
HOWEVER, this only works if you install the I2S driver with a specific setup. Instead of using
i2s_driver_install(I2S_NUM_0, &i2s_config, 0, NULL);
in your setup, you activate event delivery by passing a queue handle and a queue length:
i2s_driver_install(I2S_NUM_0, &i2s_config, 4, &<your queue>);
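Putting the pieces together, a rough sketch of the setup might look like this (untested; pin numbers, sample format and buffer sizes are example values for the SPH0645, adjust to your wiring):
#include <driver/i2s.h>
#include <freertos/FreeRTOS.h>
#include <freertos/queue.h>

static QueueHandle_t i2sQueue;   // receives i2s_event_t, e.g. I2S_EVENT_RX_DONE

void setupI2S() {
    i2s_config_t i2s_config = {
        .mode = (i2s_mode_t)(I2S_MODE_MASTER | I2S_MODE_RX),
        .sample_rate = 44100,
        .bits_per_sample = I2S_BITS_PER_SAMPLE_32BIT,
        .channel_format = I2S_CHANNEL_FMT_RIGHT_LEFT,
        .communication_format = I2S_COMM_FORMAT_I2S,
        .intr_alloc_flags = ESP_INTR_FLAG_LEVEL1,
        .dma_buf_count = 4,      // number of DMA buffers
        .dma_buf_len = 1024,     // samples per DMA buffer
        .use_apll = false
    };

    // 4 = length of the event queue; the driver creates the queue
    // and writes its handle into i2sQueue
    i2s_driver_install(I2S_NUM_0, &i2s_config, 4, &i2sQueue);

    i2s_pin_config_t pin_config = {
        .bck_io_num = 32,                   // SPH0645 BCLK (example pin)
        .ws_io_num = 25,                    // SPH0645 LRCL (example pin)
        .data_out_num = I2S_PIN_NO_CHANGE,  // no TX
        .data_in_num = 33                   // SPH0645 DOUT (example pin)
    };
    i2s_set_pin(I2S_NUM_0, &pin_config);
}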
Hope this helps getting started; it sure did help me.
I'm trying to filter a signal and then analyse the values of the filtered signal using Tone.js / the Web Audio API.
I'm expecting to get values of the filtered signal, but I only get -Infinity, meaning that my connections between the nodes are wrong. I've made a small fiddle demonstrating this; however, in my use case I do not want to send this node to the destination of the context - I only want to analyse it, not hear it.
osc.connect(filter)
filter.connect(gainNode)
gainNode.connect(meter)
console.log(meter.getLevel())
I guess you tested the code in Chrome, because there is a problem with Chrome which causes it to not process anything until it is connected to the destination. When using Tone.js that means you need to call .toMaster() at the end of your chain. I updated your fiddle to make it work: https://jsfiddle.net/8f7abzoL/.
In Firefox calling .toMaster() is not necessary therefore the following works in Firefox as well: https://jsfiddle.net/yrjgfdtz/.
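In code, the working chain is something along these lines (an untested sketch using the older Tone.js API with .toMaster(), as in the fiddles; node settings are example values):
const osc = new Tone.Oscillator(440, 'sine').start();
const filter = new Tone.Filter(800, 'lowpass');
const gainNode = new Tone.Gain(1);
const meter = new Tone.Meter();

osc.connect(filter);
filter.connect(gainNode);
gainNode.connect(meter);
meter.toMaster();   // Chrome only processes the graph once it reaches the destination

// poll the meter once the context is running
setInterval(() => console.log(meter.getLevel()), 100);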
After some digging I've found out that I need to have a ScriptProcessorNode - which is apparently no longer recommended - so I'm looking into AudioWorkletNode instead.
Related: Terminology: "live-dvr" in mpeg-dash streaming
I'm a little bit confused about the MPEG-DASH standard and a use case. I would like to know if there's a way to specify, in MPEG-DASH manifests for a "live-dvr" setup, the amount of time available for seeking back in players.
That is, for example, if a "live-dvr" stream has 30 minutes of media available for replay, what would be a standard way to specify this in the manifest?
I know I can configure a given player for a desired behaviour. My question is not about players but about the manifests.
I don't fully understand yet if this use case is formally addressed in the standard or not (see the related link). I'm guessing a relation between #timeShiftBufferDepth and #presentationTimeOffset should work, but I'm confused regarding how it should manage "past time" instead of terms like "length" or "duration".
Thanks in advance.
Yes - you are on the right lines.
The MPEG DASH implementation guidelines provide this formula (my bolding):
The CheckTime is defined on the MPD-documented media time axis; when the client’s playback time reaches CheckTime - MPD#minBufferTime it should fetch a new MPD.
Then, the Media Segment list is further restricted by the CheckTime together with the MPD attribute MPD#timeShiftBufferDepth such that only Media Segments for which the sum of the start time of the Media Segment and the Period start time falls in the interval [NOW - MPD#timeShiftBufferDepth - #duration, min(CheckTime, NOW)] are included.
The full guidelines are available at:
http://mpeg.chiariglione.org/standards/mpeg-dash/implementation-guidelines/text-isoiec-dtr-23009-3-2nd-edition-dash
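As a concrete illustration, a live-profile manifest advertising a 30-minute DVR window carries timeShiftBufferDepth at the MPD level, roughly like this (hypothetical fragment; all attribute values are examples):
<MPD xmlns="urn:mpeg:dash:schema:mpd:2011"
     type="dynamic"
     profiles="urn:mpeg:dash:profile:isoff-live:2011"
     availabilityStartTime="2019-01-01T00:00:00Z"
     minimumUpdatePeriod="PT10S"
     minBufferTime="PT4S"
     timeShiftBufferDepth="PT30M">
  <!-- Periods / AdaptationSets / Representations omitted -->
</MPD>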
I recorded my voice in MATLAB. Now I want to convert that audio into strings, i.e. written sentences, in MATLAB. Is there a way to convert audio into text?
I'm pretty sure MATLAB does not have native speech-to-text functionality.
A quick Google search turned up at least one project integrating speech-to-text into MATLAB.
http://www.ee.ic.ac.uk/hp/staff/dmb/voicebox/voicebox.html
Other software that can translate recorded speech into text includes Microsoft's SAPI (built into Windows Vista and Windows 7, and available as a download for Windows XP) and CMU's Sphinx project. Nuance Dragon NaturallySpeaking is an option, but it is comparatively expensive. It's not obvious to me how these could be integrated into MATLAB, though.
You can achieve somewhat limited mileage using the built-in Windows Speech API. It depends on your operating system, etc., and you need to follow similar principles to the API documentation:
http://msdn.microsoft.com/en-us/library/ms723627(v=vs.85).aspx
Using MATLAB's ActiveX server (http://www.mathworks.co.uk/help/matlab/ref/actxserver.html), you first need to declare a speech recogniser engine:
RC = actxserver('SAPI.SpSharedRecoContext'); %connect to speech engine
Then set up various callback functions for each state of the recogniser:
RC.registerevent({'Recognition' @CallbackFunction; 'Hypothesis' @CallbackFunction; 'FalseRecognition' @CallbackFunction})
The contents of the callback function should be along these lines:
function CallbackFunction(varargin)
    global word                             % recognised text, shared via a global
    result = varargin{length(varargin)-2};  % the event's result object
    word = result.PhraseInfo.GetText;       % extract the recognised phrase
end
Then finally switch the recogniser on:
RC.Recognizer.State = 'SRSActive';
You would need to reference the documentation for which callback functions are called and when.
You will also need to set up a grammar dictionary to get meaningful results, as the engine will otherwise attempt to recognise any word.