Creating music visualizer [closed] - visualization

Closed. This question needs to be more focused. It is not currently accepting answers.
Closed 6 years ago.
So how does someone create a music visualizer? I've looked on Google but I haven't really found anything that talks about the actual programming; mostly just links to plug-ins or visualizing applications.
I use iTunes but I realize that I need Xcode to program for that (I'm currently deployed in Iraq and can't download that large of a file). So right now I'm just interested in learning "the theory" behind it, like processing the frequencies and whatever else is required.

As a visualizer plays a song file, it reads the audio data in very short time slices (usually less than 20 milliseconds). The visualizer does a Fourier transform on each slice, extracting the frequency components, and updates the visual display using the frequency information.
How the visual display is updated in response to the frequency info is up to the programmer. Generally, the graphics methods have to be extremely fast and lightweight in order to update the visuals in time with the music (and not bog down the PC). In the early days (and still), visualizers often modified the color palette in Windows directly to achieve some pretty cool effects.
One characteristic of frequency-component-based visualizers is that they don't often seem to respond to the "beats" of music (like percussion hits, for example) very well. More interesting and responsive visualizers can be written that combine the frequency-domain information with an awareness of "spikes" in the audio that often correspond to percussion hits.
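For concreteness, here is a minimal sketch of the approach described above, in Python with NumPy (the function and variable names are my own, chosen purely for illustration): slice the audio into short windows, FFT each slice, and flag "spikes" by watching how much the spectral energy jumps between consecutive slices.

import numpy as np

def analyze(samples, sample_rate, slice_ms=20):
    # Length of one analysis slice, roughly 20 ms by default.
    slice_len = int(sample_rate * slice_ms / 1000)
    prev_mag = None
    for start in range(0, len(samples) - slice_len, slice_len):
        window = samples[start:start + slice_len] * np.hanning(slice_len)
        mag = np.abs(np.fft.rfft(window))                    # frequency components of this slice
        freqs = np.fft.rfftfreq(slice_len, 1.0 / sample_rate)
        # Spectral flux: total increase in magnitude since the previous slice;
        # a large value often lines up with a percussion hit.
        flux = 0.0 if prev_mag is None else float(np.sum(np.maximum(mag - prev_mag, 0.0)))
        prev_mag = mag
        yield freqs, mag, flux                               # hand these to the drawing code

The drawing code would then map mag (and the flux spikes) onto whatever visual parameters it likes, once per slice.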

For creating BeatHarness ( http://www.beatharness.com ) I 'simply' used an FFT to get the audio spectrum, then used some filtering and edge/onset detectors.
About the Fast Fourier Transform:
http://en.wikipedia.org/wiki/Fast_Fourier_transform
If you're comfortable with the math you might want to read Paul Bourke's page:
http://local.wasp.uwa.edu.au/~pbourke/miscellaneous/dft/
(Paul Bourke is a name you want to google anyway, he has a lot of information about topics you either want to know right now or probably in the next 2 years ;))
If you want to read about beat/tempo detection, google for Masataka Goto; he's written some interesting papers about it.
Edit:
His homepage: http://staff.aist.go.jp/m.goto/
Interesting read: http://staff.aist.go.jp/m.goto/PROJ/bts.html
Once you have some values for, say, bass, midtones, treble and volume (left and right), it's all up to your imagination what to do with them.
Display a picture and multiply its size by the bass, for example - you'll get a picture that zooms in on the beat, etc.
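As a small illustration of that last point (Python/NumPy sketch; the names and band limits below are my own assumptions), here is one way to turn a slice's FFT magnitudes into bass/mid/treble values and a zoom factor:

import numpy as np

def band_energy(freqs, mag, lo, hi):
    band = (freqs >= lo) & (freqs < hi)                 # pick the FFT bins inside this band
    return float(np.mean(mag[band])) if np.any(band) else 0.0

def visual_params(freqs, mag):
    bass   = band_energy(freqs, mag, 20, 250)
    mids   = band_energy(freqs, mag, 250, 4000)
    treble = band_energy(freqs, mag, 4000, 16000)
    total  = bass + mids + treble + 1e-9
    zoom   = 1.0 + 0.5 * bass / total                   # more bass -> picture zooms in on the beat
    return bass, mids, treble, zoom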

Typically, you take a certain amount of the audio data, run a frequency analysis over it, and use that data to modify some graphic that's being displayed over and over. The obvious way to do the frequency analysis is with an FFT, but simple tone detection can work just as well, with a lower computational overhead.
So, for example, you write a routine that continually draws a series of shapes arranged in a circle. You then use the dominant frequencies to determine the color of the circles, and use the volume to set the size.
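A rough sketch of that circle idea (Python/NumPy, purely illustrative): take the strongest FFT bin for the colour and the slice's RMS level for the size.

import numpy as np

def circle_params(freqs, mag, slice_samples):
    dominant_hz = freqs[int(np.argmax(mag))]           # strongest frequency in this slice
    hue = min(dominant_hz / 5000.0, 1.0)               # map 0-5 kHz onto a 0..1 hue value
    rms = float(np.sqrt(np.mean(slice_samples ** 2)))  # loudness of the slice
    radius = 10 + 200 * rms                            # louder slice -> bigger circle
    return hue, radius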

There are a variety of ways of processing the audio data, the simplest of which is just to display it as a rapidly changing waveform, and then apply some graphical effect to that. Similarly, things like the volume can be calculated (and passed as a parameter to some graphics routine) without doing a Fast Fourier Transform to get frequencies: just calculate the average amplitude of the signal.
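For example, the "average amplitude" measure can be as simple as this (Python/NumPy sketch; note that a plain mean of the raw samples is close to zero, so use the mean absolute value or the RMS):

import numpy as np

def volume(samples):
    return float(np.mean(np.abs(samples)))   # or np.sqrt(np.mean(samples ** 2)) for RMS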
Converting the data to the frequency domain using an FFT or otherwise allows more sophisticated effects, including things like spectrograms. It's deceptively tricky, though, to detect even quite 'obvious' things like the timing of drum beats or the pitch of notes directly from the FFT output.
Reliable beat-detection and tone-detection are hard problems, especially in real time. I'm no expert, but this page runs through some simple example algorithms and their results.

Devise an algorithm to draw something interesting on the screen given a set of variables.
Devise a way to convert an audio stream into a set of variables, analysing things such as beats per minute, different frequency ranges, tone, etc.
Plug the variables into your algorithm and watch it draw.
A simple visualization would be one that changes the colour of the screen every time the music goes over a certain frequency threshold, or that just writes the BPM onto the screen, or that just displays an oscilloscope.
Check out this Wikipedia article.

As suggested by @Pragmaticyankee, Processing is indeed an interesting way to visualize your music. You could load your music into Ableton Live and use an EQ to filter out the high, middle and low frequencies. You could then use a VST envelope-following plugin to convert the audio envelopes into MIDI CC messages, such as Gatefish by Mokafix Audio (works on Windows) or PizMidi's midiAudioToCC plugin (works on Mac). You can then send these MIDI CC messages to a light-emitting hardware tool that supports MIDI, for instance Percussa AudioCubes. You could use a cube for every frequency band you want to display and assign a color to each cube. Have a look at this post:
http://www.percussa.com/2012/08/18/how-do-i-generate-rgb-light-effects-using-audio-signals-featured-question/

We have recently added DirectSound-based audio data input routines to the LightningChart data visualization library. The LightningChart SDK is a set of components for Visual Studio .NET (WPF and WinForms); you may find it useful.
With the AudioInput component, you can get real-time waveform data samples from the sound device. You can play the sound from any source, like Spotify, WinAmp or a CD/DVD player, or use the mic-in connector.
With the SpectrumCalculator component, you can get the power spectrum (FFT conversion), which is handy in many visualizations.
With the LightningChartUltimate component you can visualize data in many different forms, like waveform graphs, bar graphs, heatmaps, spectrograms, 3D spectrograms, 3D lines etc., and they can be combined. All rendering takes place through Direct3D acceleration.
Our own examples in the SDK have a scientific approach, without much of an entertainment aspect, but it can definitely be used for awesome entertainment visualizations too.
We also have a configurable SignalGenerator (sweeps, multi-channel configurations, sine, square, triangle, and noise waveforms, real-time WAV streaming) and DirectX audio output components for sending wave data out to speakers or line-out.
[I'm CTO of LightningChart components, doing this stuff just because I like it :-) ]

Related

Scala streaming peak detection with reactive events

I am trying to work out the best way to structure an application that in essence is a peak detection program. In my line of work I have been given charge of developing a system that essentially is looking at pulses in a stream of data and doing calculations on the peak data.
At the moment the software is implemented in LabVIEW. I'm sure many of you on here would understand why I'd love to see the end of that environment. I would like to redesign this in Scala (and possibly use Play if I was to make it use a web frontend) but I am not sure how best to approach the initial peak-detection component.
I've seen many tutorials for peak detection in various languages and I understand many of the algorithms from a theoretical perspective. What I am not sure about is how I would approach this in the most idiomatic Scala/Play way.
Obviously I don't expect someone to write the code for me but I would really appreciate any pointers as to the direction I should take that makes the most sense. Since I cannot be too specific on the use case I'll try to give an overview of what I'm trying to do below:
Interfacing with data acquisition hardware to send out control voltages and read back "streams" of data.
I should be able to work the hardware side out, but is there a specific structure that would be best for the returned stream? I don't necessarily know ahead of time how much data I'll be reading so a stream that can be buffered and chunked would probably be appropriate.
Scan through the stream to find peaks and measure their height and trigger an event.
Peaks are usually about 20 samples wide or so but that depends on sample rate so I don't want to hard-code anything like that. I assume a sliding window would be necessary so peaks don't get "cut off" on the edge of a buffer. As a peak arrives I need to record and act on it. I think reactive streams and so on may be appropriate but I'm not sure. I will be making live graphs etc with the data so however it is done I need a way to send an event immediately on a successful detection.
The streams can be quite long and are at high sample-rates (minimum of 250ksamples per second) so I'd prefer not to have to buffer the entire stream to memory. The only information that needs to be permanent is the peak voltage data. I will need a way to visualise the raw stream for calibration purposes but I imagine that should be pretty simple.
The full application is much more complex and I'll need to do some initial filtering of noise and drift but I believe I should be able to work that out once I know what kind of implementation I should build on.
I've tried to look into Play's Iteratees and such but they are a little hard to follow. If they are an appropriate fit then I'm happy to work on learning them but since I'm not sure if that is the best way to approach the problem I'd love to know where I should look.
Reactive frameworks and the like certainly look interesting and I can see how I could really easily build the rest of the application around them but I'm just not sure how best to implement a streaming peak detection function on top of them beyond something simple like triggering when a value is over a threshold (as mentioned previously a "peak" can be quite wide and the signal is noisy).
Any advice would be greatly appreciated!
This is not a solution to this question but I'm writing this as an answer because of space/formatting limitations in the comments section.
Since you are exploring options I would suggest the following:
Assuming you have a large enough buffer to keep a window of data in memory (W = t × w), you can calculate the peak for the buffer using your existing algorithm. Next you can collect the next few samples of data in a delta buffer (d), a much smaller window. The delta buffer is the size of your increment. Assuming this is time-series data, you can easily create the new sliding window by removing the first d × t values from buffer W and appending the d values to it. This is how Spark Streaming implements the reduceByWindow function on a DStream. An Iteratee can also help here.
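The window-plus-delta idea is language-agnostic; here is a small sketch of it in Python (window_len, delta_len and detect_peaks are placeholders for your own sizes and your existing algorithm), just to make the buffering concrete:

from collections import deque

def sliding_peaks(sample_stream, window_len, delta_len, detect_peaks):
    window = deque(maxlen=window_len)        # W: the in-memory window; old samples fall off the front
    since_last = 0
    for sample in sample_stream:             # sample_stream can be any iterator over incoming samples
        window.append(sample)
        since_last += 1
        if len(window) == window_len and since_last >= delta_len:
            since_last = 0                   # slide forward by one delta buffer
            for peak in detect_peaks(list(window)):   # run your existing peak detector on the window
                yield peak                   # emit an event per detected peak

In Scala you could express the same thing with something like a sliding/buffering stage in a reactive-streams library, or an Iteratee that folds over incoming chunks.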
If your system is distributed, then you can use stream processing systems (Storm, Spark Streaming) to get better latency and throughput at the cost of distributing the system.
If you are really resource-constrained and can live with approximate results that are bounded, I would suggest you look at implementing a combination of probabilistic data structures such as count-min sketch, HyperLogLog and Bloom filters.

Sound Waves Simulation in 3D Environment

Hi: I want to do a sound wave simulation that includes wave propagation, absorption and reflection in 3D space.
I did some searching and found this question on Stack Overflow, but it talks about electromagnetic waves, not sound waves.
I know I can reimplement the FDTD method for sound waves, but what about the sources - do they act like electromagnetic waves? Are there any resources to start with?
Thanks in advance.
Hope this can give you some input...
As far as I know, in EM simulations obstacles (and thus terrain) are not considered at all. With sound you have to consider reflection, diffraction, etc.
There are different standards for calculating the noise originating from different sources (I'll list the European ones, the ones I know of):
Traffic: NMPB (NMPB-Routes-96) is THE standard; all the noise calculations have to be done with that one (at least in my country). Results aren't very good. A "new" algorithm is SonRoad (I think it uses inverse ray tracing); from my tests it works great.
Trains: Schall 03
Industry: ISO 9613
A list of all the models used in CadnaA (a professional software package), so you can google them all: http://www.datakustik.com/en/products/cadnaa/modeling-and-calculation/calculation-standards/
Another professional package is SoundPlan; somewhere on the web there is a free "SoundPlan-ReferenceManual.pdf", 800 pages with the mathematical description of the implemented algorithms... I haven't had any luck with Google today, though.
An easy way to do this is to use the SoundPlan software. Multiple sound propagation methods such as ISO 9613-2, CONCAWE and Nord2000 are implemented. It has basic 3D visualization with sound pressure level contours.

Processing accelerometer data

I would like to know if there are some libraries/algorithms/techniques that help to extract the user context (walking/standing) from accelerometer data (extracted from any smartphone)?
For example, I would collect accelerometer data every 5 seconds for a definite period of time and then identify the user context (ex. for the first 5 minutes, the user was walking, then the user was standing for a minute, and then he continued walking for another 3 minutes).
Thank you very much in advance :)
Check out the new activity recognition APIs:
http://developer.android.com/google/play-services/location.html
It's still a research topic; please look at this paper, which discusses the algorithm:
http://www.enggjournals.com/ijcse/doc/IJCSE12-04-05-266.pdf
I don't know of any such library.
It is a very time-consuming task to write such a library. Basically, you would build a database of the "user contexts" that you wish to recognize.
Then you collect data and compare it to those in the database. As for how to compare, see Store orientation to an array - and compare; the same holds for the accelerometer.
Walking/running data is analogous to heart-rate data in a lot of ways. In terms of getting the noise filtered and getting smooth peaks, look into noise filtering and peak detection algorithms. The following is used to obtain heart-rate information for heart patients and should be a good starting point: http://www.docstoc.com/docs/22491202/Pan-Tompkins-algorithm-algorithm-to-detect-QRS-complex-in-ECG
Think about how you want to filter out the noise and detect peaks; the filters will obviously depend on the raw data you gather, but it's good to have a general idea of what kind of filtering you'd want to do. Then think about what needs to be done once you have filtered data. In your case, think about how you would design an algorithm to find out when the data indicates activity (like walking or running) and when it shows the user being stationary. This is a fairly challenging problem to solve once you consider the dynamics of the device itself (how it's positioned when the user is walking or running) and the fact that there are very few (if any) benchmarked algorithms that do this with raw smartphone data.
Start with determining the appropriate algorithms, and then tackle the complexities (mentioned above) one by one.
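To make that concrete, here is a very rough sketch (Python/NumPy; the window size and variance threshold are guesses that would need tuning on real data) of the filter-then-classify idea: smooth the acceleration magnitude and call a window "walking" when its variance is high, "standing" when it is low.

import numpy as np

def classify_windows(ax, ay, az, window=50, threshold=0.5):
    mag = np.sqrt(np.asarray(ax) ** 2 + np.asarray(ay) ** 2 + np.asarray(az) ** 2)
    smooth = np.convolve(mag, np.ones(5) / 5, mode="same")   # simple moving-average noise filter
    labels = []
    for start in range(0, len(smooth) - window, window):
        var = np.var(smooth[start:start + window])           # movement makes the magnitude fluctuate
        labels.append("walking" if var > threshold else "standing")
    return labels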

How can I Compare 2 Audio Files Programmatically?

I want to compare 2 audio files programmatically.
For example: I have a sound file in my iPhone app, and then I record another one. I want to check whether the existing sound matches the recorded sound or not (similar to voice recognition).
How can I accomplish this?
Have a server do the audio fingerprinting computation, which is not suitable for a mobile device anyway. Your mobile app then uploads the files to the server and gets the analysis result back for display. So the programming language you implement it in doesn't matter much. Following are a few audio fingerprinting implementations:
Java: http://www.redcode.nl/blog/2010/06/creating-shazam-in-java/
VC++: http://code.google.com/p/musicip-libofa/
C#: https://web.archive.org/web/20190128062416/https://www.codeproject.com/Articles/206507/Duplicates-detector-via-audio-fingerprinting
I know the question has been asked a long time ago, but a clear answer could help someone else.
The libraries from Echoprint (website: echoprint.me/start) will help you solve the following problems:
De-duplicate a big collection
Identify (Track, Artist ...) a song on a hard drive or on a server
Run an Echoprint server with your data
Identify a song on an iOS device
PS: For more music-oriented features, you can check the list of APIs here.
If you want to implement fingerprinting yourself, you should read the docs listed as references here, and probably have a look at musicip-libofa on Google Code.
Hope this will help ;)
Apply a bandpass filter to reduce noise
Normalize for amplitude
Calculate the cross-correlation
It can be fairly CPU-intensive.
The DSP details are in the well-known text:
Digital Signal Processing by Alan V. Oppenheim and Ronald W. Schafer
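A minimal sketch of those three steps (Python with NumPy/SciPy; the filter band and the scoring at the end are my own illustrative choices, not a definitive recipe):

import numpy as np
from scipy.signal import butter, lfilter, correlate

def similarity(a, b, sample_rate, lo=300.0, hi=3400.0):
    nyq = sample_rate / 2.0
    bb, ba = butter(4, [lo / nyq, hi / nyq], btype="band")   # band-pass filter to reduce noise
    a_f, b_f = lfilter(bb, ba, a), lfilter(bb, ba, b)
    a_n = a_f / (np.max(np.abs(a_f)) + 1e-12)                # normalize for amplitude
    b_n = b_f / (np.max(np.abs(b_f)) + 1e-12)
    xcorr = correlate(a_n, b_n, mode="full")                 # cross-correlation over all time shifts
    return float(np.max(np.abs(xcorr)) / min(len(a_n), len(b_n)))   # crude similarity score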
I think you could also try selecting a few-second sample from both audio tracks, normalising them in amplitude, reducing the noise with a band-pass filter, and then using a correlator.
For instance, you could take a 5-second sample of one of the two and slide it over the second one, computing a cross-correlation for every time shift. (Be careful: if you take too small a segment you may get high correlation where it isn't expected, and you will suffer side effects due to the cropping of the signal in the cross-correlation.)
Afterwards you can collect an array with all the results of the cross-correlation and get the index of the maximum.
You should then experimentally set up a threshold to decide when you assume the two segments to be the same. This will change depending on the quality of the audio tracks you are comparing.
I implemented a correlator to receive and distinguish preambles in wireless communication. My script is actually done in MATLAB. If you are interested I can try to find the common part and send it to you.
It would be too long a piece of code to paste here in the forum. If you want it, just let me know and I will send it to you as soon as possible.
cheers

Separation of singing voice from music [closed]

Closed. This question needs to be more focused. It is not currently accepting answers.
Closed 4 years ago.
I want to know how to perform "spectral change detection" for the classification of vocal & non-vocal segments of a song. We need to find the spectral changes from a spectrogram. Any elaborate information about this, particularly involving MATLAB?
Separating out distinct signals from audio is a very active area of research, and it is a very hard problem. This is often called Blind Signal Separation in the literature. (There is some MATLAB demo code at the previous link.)
Of course, if you know that there is vocal in the music, you can use one of the many vocal separation algorithms.
As others have noted, solving this problem using only raw spectrum analysis is a dauntingly hard problem, and you're unlikely to find a good solution to it. At best, you might be able to extract some of the vocals and a few extra crossover frequencies from the mix.
However, if you can be more specific about the nature of the audio material you are working with here, you might be able to get a little bit further.
In the worst case, your material would be normal MP3s of regular songs - i.e., a full band plus vocalist. I have a feeling that this is the case you are probably looking at, given the nature of your question.
In the best case, you have access to the multitrack studio recordings and have at least a full mixdown and an instrumental track, in which case you could extract the vocal frequencies from the mix. You would do this by generating an impulse response from one of the tracks and applying it to the other.
In the middle case, you are dealing with simple music to which you could apply some sort of algorithm tuned to the parameters of the music. For instance, if you are dealing with electronic music, you can use the stereo width of the track to your advantage to eliminate all mono elements (i.e., basslines and kicks), extract the vocals plus other panned instruments, and then apply some type of filtering and spectrum analysis from there.
In short, if you are planning on making an all-purpose algorithm to generate clean acapella cuts from arbitrary source material, you're probably biting off more than you can chew here. If you can specifically limit your source material, then you have a number of algorithms at your disposal depending on the nature of those sources.
This is hard. If you can do this reliably you will be an accomplished computer scientist. The most promising method I read about used the lyrics to generate a voice only track for comparison. Again, if you can do this and write a paper about it you will be famous (amongst computer scientists). Plus you could make a lot of money by automatically generating timings for karaoke.
If you just want to decide whether a block of music is clean a-capella or has an instrumental background, you could probably do that by comparing the bandwidth of the signal to the bandwidth of a typical human singer. You could also check the fundamental frequency, which can only be in a pretty limited range for human voices.
Still, it probably won't be easy. However, hearing aids do this all the time, so it is clearly doable. (Though they typically search for speech, not singing)
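A rough sketch of the bandwidth/fundamental check suggested above (Python/NumPy; the 5 kHz energy cut-off, the 10% threshold and the 80-1100 Hz pitch range are assumptions that would need tuning):

import numpy as np

def looks_like_vocals_only(block, sample_rate):
    mag = np.abs(np.fft.rfft(block * np.hanning(len(block))))
    freqs = np.fft.rfftfreq(len(block), 1.0 / sample_rate)
    total = np.sum(mag) + 1e-12
    high_energy = np.sum(mag[freqs > 5000]) / total        # singing has little energy this high up
    f0 = freqs[int(np.argmax(mag[1:])) + 1]                # crude fundamental estimate (skip the DC bin)
    return high_energy < 0.1 and 80.0 <= f0 <= 1100.0      # roughly the human vocal pitch range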
First sync the instrumental with the original: make sure they are the same length and bit rate, start and end at exactly the same time, and convert them to .wav.
Then do something like:
[I, fs] = wavread('instrumental.wav');   % instrumental-only mix
[N, fs] = wavread('normal.wav');         % full mix with vocals
A = N - I;                               % subtract (phase-cancel) the instrumental; you may have to play around with the sign and with aligning/scaling the tracks
wavwrite(A, fs, 'acapella.wav');
that should do it.. a little linear algebra goes a long way.