I want to take some audio signal, most likely in stereo, and apply a transfer function to it via convolution. I have seen examples here of how to apply a transfer function obtained with tfest from two signals, but the tfest data is the same size as the original audio.
I have attempted to navigate MATLAB and become familiar with its interface and syntax by watching the lone Lynda video on MATLAB basics. I have prior programming experience with C# and feel comfortable in Visual Studio, but MATLAB is new to me.
The transfer function was obtained previously and currently lives in Excel. The data is in octave bands (63 Hz, 125 Hz, 250 Hz, ..., 8 kHz) and will be extrapolated to the spectrum of the input signal (20 Hz - 20 kHz). It will take the form (f1, -x1), (f2, -x2), ..., (fn, -xn), with each data point in the sampled audio having a matching value in the transfer function.
The function is constant over time. Essentially, I am simulating what something would sound like after passing through a partition.
My thought process tells me this will follow: read the input audio, transform to the frequency domain, apply the transfer function, transform back into the time domain, and write the result as a WAV.
How would I go about doing this? I understand I did not provide any code, and for this I am sorry. Any resources on the topic are most appreciated. I do not expect a turn-key solution, just some guidance so I can find my way to a correct method.
I would do it like this:
function [ out_sgn ] = TransformSignal( in_wav_file, out_wav_file )
% Read input signal (wavread also returns the sample rate FS and bit depth N)
[in_sgn, FS, N] = wavread(in_wav_file);
% If the audio file has multiple channels, select one channel
in_chn = in_sgn(:, 1);
% Transform to the frequency domain
% You could use a smaller FFT length, but it would cost you quality when
% converting back
fft_in_sgn = fft(in_chn, length(in_chn));
% Apply your transfer function here (SomeFunction is a placeholder);
% keep the complex spectrum so that phase is preserved for the inverse FFT
fft_out_sgn = SomeFunction(fft_in_sgn);
% Transform back to the time domain; real() drops the tiny imaginary
% residue left by numerical error
out_sgn = real(ifft(fft_out_sgn));
wavwrite(out_sgn, FS, N, out_wav_file);
end
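SomeFunction above is just a placeholder. A minimal sketch of one way to fill it in, interpolating the octave-band values from your question onto the FFT bin frequencies (the band values below are made up, and the linear-in-dB interpolation is just an assumption), could be:

% Hypothetical helper, not part of the original answer: apply an
% octave-band transfer function (in dB) to a complex FFT vector.
function fft_out = ApplyOctaveBandTF(fft_in, FS)
% Placeholder band centres and attenuation values - replace with your Excel data
bands = [63 125 250 500 1000 2000 4000 8000];   % octave-band centres (Hz)
tf_db = [-10 -12 -15 -18 -22 -27 -33 -40];      % made-up attenuation per band (dB)
n = length(fft_in);
f = (0:n-1).' * FS / n;                         % frequency of each FFT bin
f_fold = min(f, FS - f);                        % mirror so negative-frequency bins get the same gain
gain_db = interp1(bands, tf_db, f_fold, 'linear', 'extrap');
fft_out = fft_in .* 10.^(gain_db / 20);         % per-bin gain, phase preserved
end

You would then call fft_out_sgn = ApplyOctaveBandTF(fft_in_sgn, FS); in place of SomeFunction.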
Hope it helps!
I want to generate my own samples of a Kick, Clap, Snare and Hi-Hat sounds in MATLAB based on a sample I have in .WAV format.
Right now it does not sound correct at all, and I was wondering whether my code does not make sense, or whether I am missing some sound theory.
Here is my code right now.
[y,fs]=audioread('cp01.wav');
Length_audio=length(y);
df=fs/Length_audio;
frequency_audio=-fs/2:df:fs/2-df;
frequency_audio = frequency_audio/(fs/2); % Normalize the frequency
figure
FFT_audio_in=fftshift(fft(y))/length(fft(y));
plot(frequency_audio,abs(FFT_audio_in));
The original plot of y.
My FFT of y
I am using the findpeaks() function to find the peaks of the FFT with amplitude greater than 0.001.
[pk, loc] = findpeaks(abs(FFT_audio_in), 'MinPeakHeight', 0.001);
I then find the corresponding normalized frequencies from the frequency vector (the positive half) and the corresponding peaks.
loc = frequency_audio(loc);
loc = loc(length(loc)/2+1:length(loc));
pk = pk(length(pk)/2+1:length(pk));
So the one-sided, normalized FFT looks like this.
Since it looks like the FFT, I think I should be able to recreate the sound by summing up sinusoids with the correct amplitudes and frequencies. Since the clap sound has 21166 data points, I use this for the for loop.
for i = 1:21166
    clap(i) = 0;
    for j = 1:length(loc)
        clap(i) = clap(i) + pk(j)*sin(loc(j)*i);
    end
end
But this results in the following sound, which is nowhere near the original sound.
What should I do differently?
You are taking the FFT of the entire time-period of the sample, and then generating stationary sinewaves for the whole duration. This means that the temporal signature of the drum is gone. And the temporal signature is the most characteristic of percussive unvoiced instruments.
Since this is so critical, I suggest you start there first instead of with the frequency content.
The temporal signature can be approximated by the envelope of the signal. MATLAB has a convenient function for this called envelope. Use that to extract the envelope of your sample.
Then generate some white-noise and multiply the noise by the envelope to re-create a very simple version of your percussion instrument. You should hear a clear difference between Kick, Clap, Snare and Hi-Hat, though it won't sound the same as the original.
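For example, a minimal sketch of that envelope-times-noise idea, assuming you have the Signal Processing Toolbox and reusing the cp01.wav file name from your question (the window length is an arbitrary choice):

% Sketch only: shape white noise with the sample's amplitude envelope
[y, fs] = audioread('cp01.wav');      % file name taken from the question
env = envelope(y(:,1), 256, 'rms');   % smoothed RMS envelope; 256 is a guessed window length
noise = randn(size(env));             % white noise of the same length
approx = noise .* env;                % impose the temporal signature on the noise
approx = approx / max(abs(approx));   % normalize to avoid clipping
soundsc(approx, fs);                  % listen to the rough re-creation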
Once this is working, you can attempt to incorporate frequency information. I recommend taking the STFT to get a spectrogram of the sound, so you can see how the frequency spectrum changes over time.
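A quick way to look at that, again assuming the Signal Processing Toolbox (the window and FFT lengths below are arbitrary):

% Sketch only: plot a spectrogram to see how the spectrum evolves over time
[y, fs] = audioread('cp01.wav');
spectrogram(y(:,1), hamming(256), 128, 512, fs, 'yaxis');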
I'm using dsp.AudioRecorder to get real-time microphone input. The sound input is a series of sinusoids with different frequencies ranging from 500 to 2000 Hz. Each one sounds for a second.
I'd like to know, in real time, the frequency of the current sine, and also to distinguish between two sines of the same frequency played one after the other. This is why I use dsp.AudioRecorder.
This is what my code looks like now:
Microphone = dsp.AudioRecorder;
tic;
while(toc<30)
audio = step(Microphone);
[x, indexMax] = max(abs(fft(audio(:,1)-mean(audio(:,1)))));
indexMax
end
All that indexMax shows is numbers ranging from around 25 to 40. There's clearly an operation left out in order to recover the original frequency in [500; 2000].
I've tried also to apply dsp.FFT() directly to audio but it tells me:
Error using dsp.FFT/pvParse
Invalid property/value pair arguments.
If there's any other way to perform real-time FFT on the dsp.AudioRecorder output I'd really like to know. Or if you just see a way to complete what I've done here, that would be great too.
To approximately estimate what frequency goes with what index, you need to know the sample rate (Fs) of the data sent to the FFT, and the length (N) of the FFT:
f ≈ index * Fs / N
That's the operation you've left out.
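Applied to your loop, a rough sketch (assuming the recorder's default settings, and remembering that MATLAB indices start at 1, so the DC bin is index 1) could look like this:

% Sketch only: convert the FFT peak index into a frequency in Hz
Microphone = dsp.AudioRecorder;
tic;
while toc < 30
audio = step(Microphone);
frame = audio(:,1) - mean(audio(:,1));   % remove the DC offset
[~, indexMax] = max(abs(fft(frame)));
Fs = Microphone.SampleRate;              % sample rate of the recorded data
N = length(frame);                       % FFT length (one frame)
freqHz = (indexMax - 1) * Fs / N         % displayed each iteration
end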
How can I display coefficients of audio signal when plotting an audio file in Matlab?
I am fairly new to Matlab so this could be a stupid question. I have searched for similar things but haven't come across anything similar.
First of all you need to read the sound. Assuming you have it stored in WAV format, you can use, for example, [X, fs] = wavread('sound_name.wav');. fs would be your sample rate and X would be a matrix of samples, [number of samples] x [number of channels]. By default it will read the sound as doubles, but that can be changed. See help wavread for details.
Then you can display the raw waveform by simply plotting it: plot(X);. Or if you need the spectrum of the sound, you can window the signal and then apply the FFT. In this case the voicebox toolbox would be useful: F = enframe(X, hamming(win_len), fix(win_len/2)); sp = rfft(F.'); imagesc(10*log(abs(sp)));
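Laid out as a script, a minimal sketch of that, assuming voicebox is on your path and picking an arbitrary window length:

% Sketch only: windowed spectrum display using voicebox (enframe, rfft)
[X, fs] = wavread('sound_name.wav');   % audioread in newer MATLAB releases
win_len = 512;                         % arbitrary window length in samples
F = enframe(X(:,1), hamming(win_len), fix(win_len/2));  % overlapping windowed frames
sp = rfft(F.');                        % one-sided FFT of each frame
imagesc(10*log(abs(sp)));              % log-magnitude display, as in the snippet above
axis xy; xlabel('Frame'); ylabel('Frequency bin');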
There are also lots of handy functions in Matlab signal processing toolbox.
I'm trying to make a MATLAB program that converts input 128-bit data using quadrature amplitude modulation (QAM, function qammod):
M = 16;
x = randint( 5000, 1, M);
y = modulate( modem.qammod(M), x);
But when I try to play the modulated signal using the sound(y) command, it does not allow me to do so.
I tried to make it work by doing real(y). It can be played, but data was lost. How do I make this data heard by a human while keeping its data?
I think it is possible, because in the old days people accessed the Internet over a phone line, on which digital data was converted to a sound signal.
Instead of using only in-phase component with real(y), you can use abs(y) which gives the magnitude of in-phase and quadrature components.
But I would assign 16 distinct frequencies, one to each of the 16 symbols, and perform something similar to FM (frequency modulation).
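A rough sketch of that tone-per-symbol idea (essentially a simple 16-FSK scheme; the sample rate, symbol duration, and tone frequencies below are arbitrary choices, not taken from your code):

% Sketch only: map each of the 16 symbol values to its own audible tone
M = 16;
fs = 8000;                              % audio sample rate (Hz)
Tsym = 0.05;                            % 50 ms per symbol
t = (0:round(fs*Tsym)-1)'/fs;           % time axis for one symbol
freqs = linspace(500, 2000, M);         % one tone frequency per symbol value
x = randi([0 M-1], 100, 1);             % random symbol stream (randi replaces randint)
sig = zeros(length(t)*length(x), 1);
for k = 1:length(x)
tone = sin(2*pi*freqs(x(k)+1)*t);       % tone for this symbol
sig((k-1)*length(t)+1 : k*length(t)) = tone;
end
sound(sig, fs);                         % audible, and the symbols can be recovered from the tone frequencies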
I am reading a .wav file in MATLAB. Then I play the read file at a specified sampling frequency of 44100 Hz. But when I try to play a file sampled at a low sampling frequency, it gets played as if in fast-forward mode, and that's because the sampling frequency at which I am playing is higher than the one at which the file was sampled.
So my question is: how can I find the sampling frequency of a file I read using wavread() in MATLAB? I tried converting the read signal to the frequency spectrum and looking at the magnitude of the fft() output, but it didn't work.
Any suggestions?
Observe that wavread can return sampling frequency Fs as follows:
[y, Fs] = wavread(filename)
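Then you can play the file back at its native rate, for example:

[y, Fs] = wavread(filename);   % audioread(filename) in newer MATLAB releases
sound(y, Fs);                  % play back at the file's own sample rate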
First off, you can find the sample frequency (here in Python, using scipy) with this function:
from scipy.io import wavfile

def read_samplepoints(file_name):
    sampFreq, snd1 = wavfile.read(file_name)   # sample rate and raw samples
    samp_points = len(snd1)                    # number of sample points
    data_type = snd1.dtype
    return samp_points, data_type, sampFreq
Call read_samplepoints(file_name) from your module in the interpreter; the last value in the returned tuple is the sample frequency.
To enhance the bass of your song, you can use a low-frequency filter to boost only the lower frequencies and keep the higher ones. However, this will change the frequency balance of your file, which you may not want. Another way is to take your file into Audacity (or a similar program), go to the effects section, and adjust the bass and treble levels (similar to the equalizer in iTunes). Those are two options, and there may be a few more, but try those to begin with and see where they lead you.
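If you want to try the filter idea in MATLAB, here is a rough sketch; the cutoff, mix gain, and file name are arbitrary, and it assumes the Signal Processing Toolbox:

% Sketch only: boost the bass by low-pass filtering and mixing back in
[y, Fs] = audioread('song.wav');          % 'song.wav' is a placeholder name
[b, a] = butter(4, 200/(Fs/2), 'low');    % 4th-order low-pass at ~200 Hz
low = filter(b, a, y);                    % isolate the low frequencies
boosted = y + 0.5*low;                    % mix some extra bass back in
boosted = boosted / max(abs(boosted(:))); % normalize to avoid clipping
sound(boosted, Fs);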