I have a Yamaha MIDI guitar that turns certain lights on and off when I play a MIDI file encoded using the XG MIDI standard. I am trying to determine the MIDI event that causes this so that I can programmatically send the same event without using a MIDI file (the same way I can send a Note On (144) or Note Off (128) command).
However, while I have been able to locate a copy of the MIDI protocol, I have not been able to locate the XG MIDI protocol. Is there a way, beyond trying to send all possible commands to the device until I locate the appropriate command, to determine what the MIDI event is that is causing the lights to change state? Or is there somewhere that I can get a copy of the XG MIDI protocol?
The Yamaha manuals for their products detail the information you are looking for. The XG commands are device specific. Some XG commands give direct access to the device memory, and my manual for the MU2000 tone generator warns that "you can damage the unit by sending incorrect data".
Two things:
XG is a semantic extension of the MIDI protocol. It doesn't change anything in the structure of the MIDI file. The only difference is that if you use an XG-compatible instrument to record, say, changes to the filter resonance, the same effect will be reproduced on any other XG instrument. But at the MIDI protocol level, you will still see a plain CC (Control Change) message, #71 (IIRC).
The MIDI protocol is very extensible and leaves a lot of room for manufacturers. Not only can you use CC messages, but also Registered Parameter Numbers (RPNs) and NRPNs (Non-Registered ones). On top of that you have System Exclusive (SysEx) messages, and I would bet that an appropriately crafted SysEx message could change the lights on the guitar. Try to get the so-called "Data List" for your instrument; it should include all the information about the MIDI messages your guitar sends and receives. A short sketch of both message types follows below.
Wikipedia: "In 1999, the official GM [General MIDI] standard was updated to include more controllers, patches, RPNs and SysEx messages, in an attempt to reconcile the conflicting and proprietary Roland GS and Yamaha XG additions." This was called General MIDI 2.
I recommend looking into what Java (javax.sound.midi) has to offer (C# seems to be lacking a solid MIDI library). Read up on MetaMessage, ShortMessage, SysexMessage, and Patch. From what I understand, special system messages are sent through SysexMessage (the lighting data might be here).
If you need some sample code, look at Java Sound Resources.
Other links I found:
Working with XG SYSEX on the Yamaha QY70
Win32API::MIDI::SysEX::Yamaha
For a managed .NET MIDI library, look for the C# MIDI Toolkit on codeproject.com.
I'm using the CodeProject MIDI Toolkit by Leslie Sanford to communicate with the guitar.
http://www.codeproject.com/KB/audio-video/MIDIToolkit.aspx
Everything you need to know about the guitar's communications is in the manual, on a single page near the back.
Here is a video of an editor I built - it features full communications with the guitar.
YouTube Video of Guitar Program
Ultimately, you'll need to find that information from the manufacturer. It's likely a sysex message, although it could also be a controller.
Walking through all the controllers is pretty simple in software, so you could try that if you wanted. But the chances of stumbling upon the right SysEx message by accident or exhaustive search are astronomically small.
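A minimal sketch of that controller sweep, assuming the Python mido library and a placeholder port name:

```python
# Send every CC number with a high value, pausing so you can watch the
# guitar for a reaction, then reset it back to zero.
import time
import mido

with mido.open_output('Your Yamaha Port Name') as port:
    for cc in range(128):
        print('trying controller', cc)
        port.send(mido.Message('control_change', channel=0, control=cc, value=127))
        time.sleep(0.5)
        port.send(mido.Message('control_change', channel=0, control=cc, value=0))
```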
Dig through the back of your manuals. It might be in there. If not, google for the sysex for your device. Otherwise you'll need to ask Yamaha for the info.
Related
I am trying to perform Time Difference of Arrival in real-time using the PS3 Eye. Since it has a built-in 4 microphone array, I've successfully rearranged the array into a square array and cross-correlated the signals using MATLAB to obtain a relatively accurate TDOA algorithm. However, so far I've been recording the signal, saving the files (4 individual files for each microphone in the array), and then feeding those files into MATLAB to read after-the-fact.
My problem is: MATLAB doesn't recognize the PS3 Eye's microphones separately; it only recognizes the device as a whole. So far, Audacity is one of the few programs that actually handles the separate channels well, but I am inexperienced with the program and don't know its real-time capabilities. Does anyone have suggestions as to how I can perform real-time signal analysis in this manner? If using something other than the PS3 Eye would work better, then I am open to suggestions. Thanks.
I know very little about MATLAB or the PS3 Eye, but various hardware microphones allow you to capture a single audio stream containing multiple (typically 2) channels. The audio data will come to you in frames, each frame containing one sample for each channel.
I'm not really sure what you mean by "recognizes as a whole", but I assume you mean MATLAB is mixing the channels so that the device only produces one usable channel. If you can capture the channels to file, and they all originate from the same device (i.e. hardware clock), you should be fine except that this solution is not "realtime".
There is a similar discussion on Sound Exchange which ends up suggesting the Microcone. There are a variety of other products, from microphone arrays to digital mixers for analog mic sources, also, but your question seems to be mainly about how to get the data with software.
In short, make sure you are seeing a single device with multiple channels. This will ensure each channel uses the same hardware clock and will prevent drift issues.
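As a rough illustration of reading one multi-channel device in real time, here is a sketch using the python-sounddevice library (my assumption, not something mentioned above); whether the Eye actually shows up as a single 4-channel device depends on its driver and platform.

```python
# Open one 4-channel input stream and process each block as it arrives.
import numpy as np
import sounddevice as sd

def callback(indata, frames, time_info, status):
    # indata has shape (frames, channels): each row is one frame with one
    # sample per microphone, all captured on the same hardware clock.
    rms = np.sqrt(np.mean(indata ** 2, axis=0))
    print('per-channel RMS:', rms)

with sd.InputStream(channels=4, samplerate=16000, callback=callback):
    sd.sleep(5000)  # capture for 5 seconds
```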
This is just a wild guess, as I don't know much about MATLAB's real-time input options.
Maybe try Reaper (http://www.reaper.fm/). It has great multitrack capabilities and you can extend it (I think the scripting language is Python). It has nice documentation and third-party contributions, plus OSC and ReWire support. So you could consider routing the audio into Reaper, doing some data normalization there in Python, and then routing the data to MATLAB.
Or you could use Pure Data, which is open source and very open, with lots of building blocks that you could probably patch together.
HTH
BTW, I am in no way affiliated with Reaper or PD.
EDIT: you might also want to consider SuperCollider (http://supercollider.github.io/) or ChucK (http://chuck.cs.princeton.edu/).
Here's a lead, but I haven't been able to test it, yet.
On Windows, you can record a single 4-track Ogg audio file from the Eye with Audacity (using the WASAPI driver selection).
As of 23 Jul 2014, the pa-wavplay MEX (32-bit and 64-bit builds) supports WASAPI. You will have to rebuild the PortAudio library to select the WASAPI interface as described here, and then you can get all four tracks in MATLAB (on Windows).
Sadly, if you're not on Windows, I don't have any suggestions. Adjusting the PortAudio build might help, but I only know that WASAPI works with the Eye.
I want my students to use Enchanting, a derivative of Scratch, to program Mindstorms NXT robots to drive a pre-programmed course, follow a line, and avoid obstacles (two-state, five-state, and proportional line following). Is Enchanting developed enough for middle school students to program these behaviors?
I'm the lead developer on Enchanting, and the answer is: Yes, definitely.
The video demoing Enchanting 0.0.4 shows how to make a proportional line follower (and you could extend it to use a PID controller, if you wish). If you download the latest version, 0.2.2, it includes a sample showing a two-state line follower (and you can see a video and download code here). You, or, with some instruction and playing around, a middle-schooler, can readily create a program with n states, and, especially if you follow a behaviour-oriented approach, you can avoid obstacles at the same time.
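For reference, the proportional idea itself fits in a few lines. This is a plain-Python sketch, not Enchanting code, and read_light / set_motors are made-up placeholders for whatever your robot API provides.

```python
# Proportional line following: steer in proportion to how far the light
# reading is from the target value on the edge of the line.
TARGET = 50      # light reading on the edge of the line (tune for your mat)
KP = 0.8         # proportional gain -- tune by experiment

def follow_line_step(read_light, set_motors, base_speed=40):
    error = read_light() - TARGET          # how far off the edge we are
    turn = KP * error                      # steering proportional to the error
    set_motors(base_speed + turn, base_speed - turn)

# quick check with a fake sensor stuck at 60 and a print-only motor function
follow_line_step(lambda: 60, lambda left, right: print('motors:', left, right))
```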
As far as I know, yes and no.
What Scratch does with its sensor board, Lego WeDo, the S4A (Scratch for Arduino) version, and, I believe, the NXT is basically use its remote sensor protocol: it exchanges messages on TCP port 42001.
A client written to interface that port with an external system allows messages and sensor data to be exchanged. Scratch can pick up sensor state and pass info to actuators roughly every 75 ms, according to the S4A discussion.
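For illustration, a minimal sketch of that message exchange in Python, assuming Scratch 1.4 is running locally with remote sensors enabled; the sensor and broadcast names are made up.

```python
# Scratch's remote sensor protocol: each message sent to port 42001 is a
# 4-byte big-endian length followed by the message text, e.g. a
# sensor-update or a broadcast.
import socket
import struct

def send_scratch(sock, message):
    data = message.encode('utf-8')
    sock.sendall(struct.pack('>I', len(data)) + data)

with socket.create_connection(('127.0.0.1', 42001)) as sock:
    send_scratch(sock, 'sensor-update "distance" 42')
    send_scratch(sock, 'broadcast "obstacle"')
```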
But that isn't the same as programming the controller. We control the system remotely, which is already very nice, but we're not downloading a program to the controller (the NXT brick) that the robot can use to act independently when it is disconnected.
Have you looked into 12Blocks (http://12blocks.com/)? I have been using it for the Propeller and it's great, and it has an NXT option (which I have not tested).
It's an old post, but I'll answer anyway.
Enchanting looks interesting, and seems to still be an active project.
I would actually take the original Scratch (1.4), as it is more familiar and reliable.
It's easy to interface hardware with Scratch using the remote sensor protocol. I use a simple serial interface (over a USB adapter) which provides 3 digital inputs and 3 digital outputs. With that, it's possible to implement projects such as traffic lights and light/water/heat sensors, using only LEDs, resistors, reed contacts, photo-transistors, switches, PTSs.
The cost is less than $5.
For motor-based projects like factory belts, elevators, etc., not much more is required: a battery and a couple of transistors, relays, or a motor driver.
I've built a software. I want to control it via a MIDI controller (e.g. keyboard). How do I get the MIDI data from the MIDI port to my software using e.g. ALSA? I'm using Linux.
ttymidi will act as an ALSA device and write its output to, or read its input from, a file. The intended use is for that file to be a serial device, but /dev/stdout also works -- the output can then be piped to your program. If you want to code the device handling yourself, the ttymidi code is probably simpler than, say, TiMidity, so you can use it as an example.
Use asoundlib. This gives you MIDI events as structured objects rather than binary data.
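If you would rather prototype at a higher level first, here is a sketch using the Python mido library instead of asoundlib directly (on Linux its python-rtmidi backend talks to the ALSA sequencer); the port name is a placeholder.

```python
# List the available MIDI inputs, open one, and react to incoming events.
import mido

print('available inputs:', mido.get_input_names())

with mido.open_input('Your Controller Name') as port:
    for msg in port:                      # blocks, yielding events as they arrive
        if msg.type in ('note_on', 'note_off', 'control_change'):
            print(msg)                    # map these to actions in your software
```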
I'm familiar with computer vision (well, I know of it), one application of which is image recognition, such as optical character recognition, I believe. However, something that I am more interested in is "computer listening", which I have just learned falls under digital signal processing.
The thing that interests me the most about signal processing is the potential application in music. I remember a while ago I saw a preview of an application (Sorry, forgot the name) which could listen to a recording of someone playing a guitar, and automatically graph it out across a time-line with the actual notes/chords that were played. Using the program, the user was able to move these around and even edit them. Now, obviously this is a lot more complicated, but does it involve the same thing? Signal Processing? I am also interested in possible applications in music visualizers and intelligent lighting systems.
My understanding is that doing this processing on a compressed audio format such as MP3 won't yield the same results as MIDI, which contains separate tracks (maybe I misunderstood). Would an uncompressed format such as PCM do better than MP3? I don't know anything about sound processing; that's just what I'm inferring from what I've read so far.
I have already seen this question, which has great answers and links that cover a lot of my questions. However, most of the links I've found are theoretical, which I'm sure is all interesting and definitely worth a read given my interest in the subject, but I wanted to know if there are any existing libraries which can facilitate this, or articles on this subject geared towards computer science/programming, with perhaps example code. Even open source sound/music visualizers or any other open source sound-processing code would be great.
Sorry if I didn't make any sense. Like I said, I don't know what I'm talking about.
The thing that interests me the most about signal processing is the potential application in music. I remember a while ago I saw a preview of an application (Sorry, forgot the name)
Maybe Cubase?
which could listen to a recording of someone playing a guitar, and automatically graph it out across a time-line with the actual notes/chords that were played
Deeply simplified: when you play a note you produce a periodic wave with a given frequency. There's a mathematical trick (the discrete Fourier transform, DFT) that converts the wave into its spectrum, which, instead of presenting intensity against time, shows it against frequency. For example, a perfect A note from a tuning fork would produce an oscillating wave at 440 Hz. In the time domain this appears as a sinusoidal wave; in the frequency domain, it appears as a single, narrow spike centered at 440 Hz.
Now, when you play a guitar you don't produce perfect sinusoidal waves. Hitting an A will produce the fundamental frequency, 440 Hz, but also a lot of additional frequencies (e.g. 880 Hz, one octave higher, plus many other higher and lower frequencies), due to the physics of the vibrating string, the material and shape of the guitar, etc. These additional frequencies are called harmonics, and they mix with the fundamental to produce "the sound of the guitar" (what in musical jargon is called timbre). A different instrument (say, a piano) will have a different mix of harmonics with the fundamental, producing a different timbre.
What DSP programs do is perform a DFT on the incoming signal. With additional tricks, they find the fundamental and the harmonics, and from what they find they infer the note you played. This must happen fast if you want to detect the note while playing live and trigger special effects. For example, you could hit an A on the guitar, the DSP understands it's an A and replaces it with the A from a piano, so from the speakers you get the sound of a piano.
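As a tiny illustration of the tuning-fork example, assuming NumPy:

```python
# Synthesise one second of a 440 Hz "A", take the DFT (NumPy's FFT),
# and read off the strongest frequency.
import numpy as np

sample_rate = 44100
t = np.arange(sample_rate) / sample_rate
signal = np.sin(2 * np.pi * 440 * t)            # ideal tuning-fork A

spectrum = np.abs(np.fft.rfft(signal))
freqs = np.fft.rfftfreq(len(signal), d=1 / sample_rate)
print('strongest frequency:', freqs[np.argmax(spectrum)], 'Hz')   # ~440.0
```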
Using the program, the user was able to move these around and even edit them. Now, obviously this is a lot more complicated, but does it involve the same thing? Signal Processing? I am also interested in possible applications in music visualizers and intelligent lighting systems.
Yes. Once you are in the frequency domain, things get much easier. For example, you could light up one light according to the voice frequencies, and another light with the bass drum.
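A sketch of that band-per-light idea, again assuming NumPy; the band boundaries are arbitrary choices.

```python
# Split the spectrum of one audio block into rough bass / voice / treble
# bands and return the energy in each, e.g. to drive one light per band.
import numpy as np

def band_levels(block, sample_rate=44100):
    spectrum = np.abs(np.fft.rfft(block))
    freqs = np.fft.rfftfreq(len(block), d=1 / sample_rate)
    bands = {'bass': (20, 250), 'voice': (250, 4000), 'treble': (4000, 16000)}
    return {name: spectrum[(freqs >= lo) & (freqs < hi)].sum()
            for name, (lo, hi) in bands.items()}

# e.g. switch a light on when band_levels(block)['bass'] crosses a threshold
```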
My understanding is that doing this processing on a compressed audio format such as MP3 won't yield the same results as MIDI, which contains separate tracks (maybe I misunderstood).
They are two different things. MP3 is a compressed format for a sound wave. Basically it takes the waveform that drives the speakers and compresses it. The idea is the same: DFT, then removal of stuff that is unlikely to be heard (for example, a high pitch that comes right after a high-intensity sound is less likely to be heard, so it gets removed).
MIDI, on the other hand, is a scroll of events (you know, like the player pianos of the Old West, with the rolling paper scroll). The file contains no audio. It contains instead directions for a MIDI player to perform specific notes at specific times with specific instruments. The quality of the instrument bank is (among other things) what distinguishes a bad MIDI player (which sounds like a child's toy) from a good one (which sounds realistic, in particular for pianos and violins; for wind instruments I have yet to hear a realistic one).
It follows that going from MIDI to MP3 just means rendering the file through a MIDI player. Going the other way around is a different story altogether, and much more complex, and that is where DSP comes into play, as you said.
It's like boiling a fish tank: you get fish soup. But getting from the fish soup back to the fish tank is much harder.
Would an uncompressed format such as PCM do better than MP3?
PCM is a technique for converting an analog signal into a digital one, so strictly speaking there is no "PCM format" as such (a raw file is the closest thing, containing basically nothing but the bare sample data). If you are asking whether an uncompressed WAV file (which contains PCM data) is better than MP3, then yes, but the question is how much that difference really matters to the human ear, and how much postprocessing you have to perform on the data.
know if there are any existing libraries which can facilitate this, or articles pertaining to this subject that geared towards Computer Science/Programming, with perhaps example code. Even open source sound/music visualizers or any other open source sound processing code would be great.
If you like Python, take a look at this page.
Sorry if I didn't make any sense. Like I said, I don't know what I'm talking about.
Neither do I, but I toyed a bit with it.
My understanding is that doing this processing on a compressed audio format such as MP3 wont yield the same results as MIDI which contains separate tracks (Maybe I misunderstood).
MIDI essentially stores instrument information and musical notes, plus other parameters (volume, pitch bend, vibrato, attack rate, etc.).
Not really digital signal processing.
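To make that concrete, here is a small sketch (assuming the Python mido library) that writes a one-note MIDI file; notice that it stores timed events, not audio.

```python
# Build a MIDI file with a single track that plays A4 on a piano.
import mido

mid = mido.MidiFile()
track = mido.MidiTrack()
mid.tracks.append(track)

track.append(mido.Message('program_change', program=0, time=0))         # piano
track.append(mido.Message('note_on', note=69, velocity=64, time=0))     # A4 starts
track.append(mido.Message('note_off', note=69, velocity=64, time=480))  # ends one beat later
mid.save('a4.mid')
```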
Would an uncompressed format such as PCM do better than MP3?
Maybe somewhat; it depends on the application. MP3 reduces the precision of frequencies that humans are not sensitive to. If you want to do visualisations then MP3 is probably fine.
But if you want to, say, determine what sort of instrument is playing in a recording, then there could be useful information hidden in the frequencies that humans are not sensitive to.
I think The Scientist and Engineer's Guide to Digital Signal Processing is a great reference for programmers. Chapter 8 explains the discrete Fourier transform (used in MP3 processing and a lot of other places to separate out the component frequencies of a wave).
I used it to help make a graphical program that let you draw a wave with the mouse, then applied the DFT, and let you select how many frequencies to include. It was a great exercise.
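A rough sketch of that same exercise with NumPy, using a square wave as a stand-in for a hand-drawn one:

```python
# Keep only the k strongest frequency components of a waveform and
# reconstruct it with the inverse DFT.
import numpy as np

def keep_top_k(signal, k):
    spectrum = np.fft.rfft(signal)
    keep = np.argsort(np.abs(spectrum))[-k:]        # indices of the k biggest bins
    filtered = np.zeros_like(spectrum)
    filtered[keep] = spectrum[keep]
    return np.fft.irfft(filtered, n=len(signal))

t = np.linspace(0, 1, 1000, endpoint=False)
square_wave = np.sign(np.sin(2 * np.pi * 5 * t))    # crude "hand-drawn" wave
approx = keep_top_k(square_wave, 3)                  # smoother, 3-component version
```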
I remember a while ago I saw a preview of an application (Sorry, forgot the name) which could listen to a recording of someone playing a guitar, and automatically graph it out across a time-line with the actual notes/chords that were played.
You might also be thinking of Melodyne: http://www.celemony.com/cms/
Though VariAudio in newer versions of Cubase is pretty similar. :)
I think you need to define exactly what you are looking for and what you are trying to do.
If you want to learn about DSP, MIDI or PCM then there is plenty of information on Wikipedia and references.
There are a myriad of audio-manipulation applications available. What you've described in your question is what takes place in every digital recording studio (which these days accounts for almost all studios) every single day.
If you are intending to perform some DSP against, say, a guitar sound, then you would ideally have a recording of the guitar by itself (rather than a mixed-down track containing drums or vocals). It should be quite obvious that you will get better results analysing an isolated signal than one containing significant levels of "noise". So yes, a multitrack recording would be preferable to "an MP3".
A typical MP3 contains left and right channels (tracks), so it technically is multitrack. When music is recorded (professionally, at least), different signals are recorded onto different tracks, precisely so that they can be edited and processed discretely at a later time.
What, then, do you want to do with the sounds?
As other answers have pointed out, this does not relate to MIDI at all.
I've been playing around with the SDK recently, and I had an idea to just build a personal autotuner (because I am just as awesome as T-Pain).
Intro aside, I wanted to attach a high-quality microphone to the headset jack, have my audio processed in a callback, and then have it copied to the output buffer. This has several implications:
When my audio-in is being routed through the built-in microphone, I need to be able to process this input, and send it once my input has stopped (this works).
When my audio-in is being routed through the microphone-in input from the headset jack, I want the output to be sent immediately.
Routing, however, doesn't seem to work properly when using AudioSession modes and overrides, which technically should allow you to reroute output to the iPhone speakers, no matter where the input is coming from. This is documented to work, but in practice, doesn't really work.
Remote IO, however, is not documented at all. Anyone with experience using Remote IO audio units, can you give me a reasonable high-level overview on how to do this properly? I have been using the aurioTouch example code, but I am running into errors where I get error codes like -50 and -10863, none of which are documented.
Thanks in advance.
The aurioTouch example implements RemoteIO play-through: it simply calls AudioUnitRender in the output render callback. You could modify the samples before passing them on.
NB: this trick does not seem to work if you port the code to OS X-style Core Audio. There, 99% of the time, you need to create two AUHALs (RemoteIO-alikes) and pass the samples between them.