I'm trying to make a MATLAB program that converts 128-bit input data using quadrature amplitude modulation (QAM, the qammod function):
M = 16;
x = randint(5000, 1, M);
y = modulate(modem.qammod(M), x);
But when I try to play the modulated signal with the sound(y) command, MATLAB does not allow me to, because y is complex.
I tried to make it work with real(y). That can be played, but the quadrature half of the data is lost. How do I make this signal audible to a human while preserving the data?
I think it must be possible, because in the old days people accessed the Internet over a phone line, where digital data was converted into an audible signal.
Instead of using only the in-phase component with real(y), you can use abs(y), which gives the combined magnitude of the in-phase and quadrature components.
But I would rather assign 16 distinct frequencies, one to each of the 16 symbols, and do something similar to FM (frequency modulation), as sketched below.
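A minimal sketch of that tone-per-symbol idea (the sample rate, tone spacing, and symbol duration are arbitrary choices, and randi replaces the older randint):
M  = 16;
Fs = 8000;                            % audio sample rate in Hz
Ts = 0.05;                            % 50 ms per symbol
t  = (0:1/Fs:Ts-1/Fs)';               % time samples for one symbol
f  = 500 + (0:M-1)*100;               % 16 tones: 500, 600, ..., 2000 Hz
x  = randi([0 M-1], 200, 1);          % random symbols
y  = zeros(length(t)*length(x), 1);
for k = 1:length(x)
    y((k-1)*length(t)+1 : k*length(t)) = sin(2*pi*f(x(k)+1)*t);
end
sound(y, Fs);                         % audible, and each symbol maps to a unique tone
To recover a symbol you measure which of the 16 tones dominates each 50 ms slot, so no information is thrown away.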
I want to generate my own samples of Kick, Clap, Snare and Hi-Hat sounds in MATLAB, based on a sample I have in .WAV format.
Right now it does not sound correct at all, and I am wondering whether my code makes sense, or whether I am missing some sound theory.
Here is my code right now.
[y,fs]=audioread('cp01.wav');
Length_audio=length(y);
df=fs/Length_audio;
frequency_audio=-fs/2:df:fs/2-df;
frequency_audio = frequency_audio/(fs/2); % Normalize the frequency
figure
FFT_audio_in=fftshift(fft(y))/length(fft(y));
plot(frequency_audio,abs(FFT_audio_in));
The original plot of y.
My FFT of y
I am using the findpeaks() function to find the peaks of the FFT with amplitude greater than 0.001.
[pk, loc] = findpeaks(abs(FFT_audio_in), 'MinPeakHeight', 0.001);
I then keep the corresponding normalized frequencies from frequency_audio (the positive ones) and the corresponding peaks.
loc = frequency_audio(loc);
loc = loc(length(loc)/2+1:length(loc));
pk = pk(length(pk)/2+1:length(pk));
So the one sided, normalized FFT looks like this.
Since it looks like the FFT, I think I should be able to recreate the sound by summing up sinusoids with the correct amplitude and frequency. Since the clap sound had 21166 data points I use this for the for loop.
clap = zeros(1, 21166);
for i = 1:21166
    for j = 1:length(loc)
        clap(i) = clap(i) + pk(j)*sin(loc(j)*i);
    end
end
But this results in the following sound, which is nowhere near the original sound.
What should I do differently?
You are taking the FFT of the entire time span of the sample, and then generating stationary sine waves for the whole duration. This means that the temporal signature of the drum is gone, and the temporal signature is the most characteristic feature of percussive, unvoiced instruments.
Since this is so critical, I suggest you start there first instead of with the frequency content.
The temporal signature can be approximated by the envelope of the signal. MATLAB has a convenient function for this called envelope. Use that to extract the envelope of your sample.
Then generate some white-noise and multiply the noise by the envelope to re-create a very simple version of your percussion instrument. You should hear a clear difference between Kick, Clap, Snare and Hi-Hat, though it won't sound the same as the original.
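A minimal sketch of that idea, assuming the Signal Processing Toolbox envelope function and an arbitrary 256-sample RMS window:
[y, fs] = audioread('cp01.wav');        % file name taken from the question
env = envelope(y(:,1), 256, 'rms');     % smoothed amplitude envelope
synth = randn(size(env)) .* env;        % white noise shaped by the envelope
synth = synth / max(abs(synth));        % normalize to avoid clipping
soundsc(synth, fs);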
Once this is working, you can attempt to incorporate frequency information. I recommend taking the STFT to get a spectrogram of the sound, so you can see how the frequency spectrum changes over time.
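For that step, spectrogram from the Signal Processing Toolbox is a one-liner (the window length, overlap, and FFT size here are arbitrary choices):
spectrogram(y(:,1), 256, 192, 512, fs, 'yaxis');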
I'm using dsp.AudioRecorder to get real-time microphone input. The input is a series of sinusoids with different frequencies ranging from 500 to 2000 Hz, each sounding for one second.
I'd like to know, in real time, the frequency of the current sine, and also to distinguish between two sines of the same frequency played one after the other. This is why I use dsp.AudioRecorder.
This is what my code looks like now:
Microphone = dsp.AudioRecorder;
tic;
while(toc<30)
audio = step(Microphone);
[x, indexMax] = max(abs(fft(audio(:,1)-mean(audio(:,1)))));
indexMax
end
All indexMax shows are numbers ranging from around 25 to 40. There is clearly an operation missing in order to retrieve the original frequency in [500, 2000] Hz.
I've also tried to apply dsp.FFT() directly to audio, but it tells me:
Error using dsp.FFT/pvParse
Invalid property/value pair arguments.
If there's any other way to perform a real-time FFT on the dsp.AudioRecorder output, I'd really like to know. Or if you see a way to complete what I've done here, that would be great too.
To approximately estimate what frequency goes with what index, you need to know the sample rate (Fs) of the data sent to the FFT, and the length (N) of the FFT:
f ~= index * Fs / N
That's the operation you've left out.
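Applied to your loop, that might look like the sketch below. It reads the sample rate from the recorder object and subtracts 1 from the index because MATLAB indexing starts at 1:
Microphone = dsp.AudioRecorder;
Fs = Microphone.SampleRate;              % 44100 Hz by default
tic;
while (toc < 30)
    audio = step(Microphone);
    x = audio(:,1) - mean(audio(:,1));
    N = length(x);
    [~, indexMax] = max(abs(fft(x)));
    freqHz = (indexMax - 1) * Fs / N     % displayed, as indexMax was in your loop
end
With the default frame length this only gives a resolution of a few tens of hertz, so collect more samples per FFT if you need finer estimates.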
I want to take some audio signal, most likely in stereo, and apply some transfer function to it with the convolution function. I have seen examples here on how to apply transfer functions after obtaining a tfest from two signals, but the tfest data is the same size as the original audio.
I have attempted to navigate MATLAB and become familiar with its interface and syntax by watching the lone Lynda video on MATLAB basics. I have prior programming experience with C# and feel comfortable in Visual Studio, but MATLAB is new to me.
The transfer function was previously obtained and is currently in Excel. The data is in octave bands (63 Hz, 125 Hz, 250 Hz, ..., 8 kHz) and will be extrapolated to the spectrum of the input signal (20 Hz - 20 kHz). It will take the form (f1, -x1), (f2, -x2), ..., (fn, -xn), with each data point in the sampled audio having a matching value in the transfer function.
The function is constant over time. Essentially, I am simulating what something would sound like after passing through a partition.
My thought process tells me this will follow: input audio, transform to frequency domain, apply transfer function, transform back into time domain, and write as WAV.
How would I go about doing this? I understand I did not provide any code, and for this I am sorry. Any resources on the topic are most appreciated. I do not expect a turn-key solution, just some guidance so I can find my way to a correct method.
I would do it like this:
function TransformSignal( in_wav_file, out_wav_file )
% Read input signal
[in_sgn, FS, N] = wavread(in_wav_file);
% If the audio file has multiple channels, select one channel
in_chn = in_sgn(:, 1);
% Transform to frequency domain
% You could use a smaller FFT length but it would cost you quality when
% converting back
fft_in_sgn = fft(in_chn, length(in_chn));
% Apply your transfer function here (SomeFunction is a placeholder)
fft_out_sgn = SomeFunction(fft_in_sgn);
% Transform back to the time domain and write the result
out_sgn = real(ifft(fft_out_sgn));
wavwrite(out_sgn, FS, N, out_wav_file);
end
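The part this question really asks about is what SomeFunction should do. A possible sketch, which could be called in place of SomeFunction with the extra arguments passed in, assuming the octave-band data from Excel has been imported as vectors f_oct (center frequencies in Hz) and gain_db (levels in dB); both names are hypothetical:
function fft_out = ApplyTransferFunction(fft_in, f_oct, gain_db, Fs)
% Interpolate the octave-band transfer function onto the FFT bins and
% apply it as a per-bin gain. fft_in must be the full two-sided FFT.
N = length(fft_in);
f = (0:N-1)' * Fs / N;                        % frequency of each FFT bin
half = floor(N/2) + 1;                        % positive frequencies incl. DC
g_db = interp1(f_oct, gain_db, f(1:half), 'linear', 'extrap');
g = 10.^(g_db / 20);                          % dB -> linear amplitude
if mod(N, 2) == 0
    g_full = [g; flipud(g(2:end-1))];         % mirror, Nyquist bin not repeated
else
    g_full = [g; flipud(g(2:end))];           % mirror for odd-length FFT
end
fft_out = fft_in .* g_full;                   % conjugate-symmetric gain
end
Because the gain is applied symmetrically to the positive and negative frequencies, the ifft above gives back an essentially real signal.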
Hope it helps!
I'm attempting to add noise to a speech signal (a .wav file) in MATLAB using the following method:
load handel.mat;
hfile= 'noisy.wav';
y = wavread('daveno.wav');
y = y + randn(size(y)) * (1/100);
wavwrite(y, Fs, hfile);
nsamples=Fs;
This adds the noise; however, the spoken word can no longer be heard and only the noise remains. Do I need to multiply by a larger number, or could anyone suggest a way to fix this problem?
The issue is that you are writing the file at the wrong sampling frequency. Find the correct sampling frequency (i.e. the value for Fs) using the second output of wavread:
[y, Fs] = wavread('daveno.wav')
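For example, keeping the file names from the question, the corrected snippet might look like this:
[y, Fs] = wavread('daveno.wav');        % Fs now matches the speech file
y = y + randn(size(y)) * (1/100);
wavwrite(y, Fs, 'noisy.wav');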
I am new to BCI. I have a Mindset EEG device from Neurosky, and I record the raw data values coming from the device in a CSV file. I can read and extract the data from the CSV into MATLAB and apply an FFT. I now need to extract certain frequency bands (Alpha, Beta, Theta, Gamma) from the FFT.
Where Delta = 1-3 Hz
Theta= 4-7 Hz
Alpha = 8-12 Hz
Beta = 13-30 Hz
Gamma = 31-40 Hz
This is what I did so far:
f = (0:N-1)*(Fs/N);
plot(rawDouble);
title ('Raw Signal');
p = abs(fft(rawDouble));
figure, plot(f, p);
title('Magnitude of FFT of Raw Signal');
Can anyone tell me how to extract those particular frequency ranges from the signal? Thank you very much!
For convenient analysis of EEG data with MATLAB, you might consider using the EEGLAB toolbox (http://sccn.ucsd.edu/eeglab/) or the FieldTrip toolbox (http://fieldtrip.fcdonders.nl/start).
Both toolboxes come with good tutorials:
http://sccn.ucsd.edu/eeglab/eeglabtut.html
http://fieldtrip.fcdonders.nl/tutorial
You may find it easier to start off with MATLAB's periodogram function, rather than trying to use the FFT directly. This takes care of windowing the data for you and various other implementation details.
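For example, a sketch using periodogram and the band limits listed in the question (it assumes rawDouble and Fs from your code, and integrates the PSD over each band):
[pxx, f] = periodogram(rawDouble, [], [], Fs);      % one-sided PSD estimate
bands = struct('delta', [1 3], 'theta', [4 7], 'alpha', [8 12], ...
               'beta', [13 30], 'gamma', [31 40]);
names = fieldnames(bands);
for k = 1:numel(names)
    range = bands.(names{k});
    idx = f >= range(1) & f <= range(2);
    fprintf('%s power: %g\n', names{k}, trapz(f(idx), pxx(idx)));
end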
I think the easiest way is to filter your signal in those ranges after you load your data.
E.g.
band = [30 100];
eeglocal.lowpass(band(2)).highpass(band(1));
Then you can select the time range you want to process.
That should be all you need.
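If you prefer standard Signal Processing Toolbox filters over an EEG-specific object, a bandpass filter does the same job; here Fs is the sampling rate of rawDouble and the filter order of 4 is an arbitrary choice:
band = [8 12];                                   % e.g. the Alpha band in Hz
[b, a] = butter(4, band / (Fs/2), 'bandpass');   % normalized cutoff frequencies
alphaSignal = filtfilt(b, a, rawDouble);         % zero-phase, band-limited signal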