I'm trying to save (as .WAV) and play pure 20 kHz tones (both short and long bursts), but what I see in the visualizer is not what I expect.
The attached image shows the inspection of the WAV file: I send 20 kHz, but there is additional noise below it.
I also use a speaker rated for 20 kHz, and when I transmit the 20 kHz tone I hear clicks that I don't want to hear (maybe because of this noise, or because I'm generating the sine signal incorrectly).
The spectrogram attached to the post is of the pure WAV file into which I write the sine signals, before transmission (via the special speaker).
At lower frequencies I still see this noise.
I want 20 kHz on purpose; I know I can't hear this frequency, which is why I use a special microphone and speaker (plus a spectrogram) to inspect what I transmit.
I just don't understand why I get this noise below the frequency I transmit. Maybe it's because I'm generating the sine signal like this?
freq=20000;
time=3000;
Fs=48000;
timevec = (0:1/Fs:time-(1/Fs));
short = sin(2*pi*freq*timevec(1:7000));
long = sin(2*pi*freq*timevec(1:20000));
%%
%.
%.
%Here I put the signals into an array, to send them in the order that I want
%.
%.
%%
pp = audioplayer(array,Fs);
play(pp);
filename = 'sound.wav';
audiowrite(filename,array,Fs);
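One thing I wondered about is whether the clicks come from the bursts starting and stopping abruptly; would adding a short fade-in/fade-out on each burst help? A minimal sketch of what I mean (the 5 ms ramp length is just a guess):
% Sketch: same tone burst, but with a short raised-cosine fade at each end
% so the signal starts and stops at zero amplitude (5 ms ramp is a guess).
freq = 20000;
Fs   = 48000;
nShort = 7000;                                 % samples in the short burst
t = (0:nShort-1)/Fs;
burst = sin(2*pi*freq*t);
nRamp = round(0.005*Fs);                       % 5 ms fade-in / fade-out
ramp  = 0.5*(1 - cos(pi*(0:nRamp-1)/nRamp));   % raised-cosine ramp from 0 to ~1
burst(1:nRamp)         = burst(1:nRamp) .* ramp;
burst(end-nRamp+1:end) = burst(end-nRamp+1:end) .* fliplr(ramp);
audiowrite('sound_faded.wav', burst, Fs);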
I want to implement time-frequency masking of an audio signal.
First, I use the MATLAB function S = spectrogram(x,window,noverlap,nfft) to extract the STFT of the noise-plus-target signal (read from a WAV file). Then I force some coefficients of the STFT (the S variable) to zero, based on a threshold. But after doing the ISTFT I get complex values, not the real values I expect for an audio signal.
Can anyone explain where the problem comes from, and what the accepted solution to this kind of problem is?
Note:
If I were doing an FFT and then manipulating the spectrum, I would make sure the spectrum kept the symmetry needed for the time-domain signal to be real, but how do I keep that property in the STFT plane?
Are you using the MATLAB function spectrogram() or stft()?
I think you should use stft(), because then you can use istft() to go back to the time domain.
Also, whatever processing you do in the time-frequency domain, you should apply the same processing to both the positive and the negative frequencies, so that each STFT frame stays conjugate-symmetric.
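A minimal sketch of that idea (the window length, overlap, and threshold below are arbitrary example values, and 'noisy.wav' is a placeholder file name):
% Sketch: magnitude-threshold masking with stft/istft.
[x, fs] = audioread('noisy.wav');
x = x(:,1);                                % use one channel
win = hann(1024,'periodic');
hop = 512;                                 % 50% overlap
[S,f,t] = stft(x, fs, 'Window', win, 'OverlapLength', numel(win)-hop, ...
               'FFTLength', numel(win));   % two-sided (centered) STFT by default
% Build the mask from the magnitude only.  Because |S| is symmetric in
% frequency for a real input, the same mask is applied to positive and
% negative frequencies, so the masked STFT stays conjugate-symmetric.
threshold = 0.01*max(abs(S(:)));
S(abs(S) < threshold) = 0;
y = istft(S, fs, 'Window', win, 'OverlapLength', numel(win)-hop, ...
          'FFTLength', numel(win));
y = real(y);                               % discard tiny numerical imaginary residue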
I generate the sound using MATLAB and then play it over the computer's speaker; meanwhile I record the sound using an iPhone, and finally I send the 'record.wav' file back to the computer to analyze it. But I found that the amplitude of the low-frequency component is much lower than that of the high-frequency one.
The sound generation code looks like A*sin(2*pi*697*(0:N-1)/44100) + A*sin(2*pi*1209*(0:N-1)/44100) if I want to generate the dial tone for the digit 1, where N is the number of samples I want to generate and 44100 is the sampling frequency.
Then I use the FFT to analyze the frequency content of the sound and plot the output. Although I get the correct frequencies, the two amplitudes look different, which puzzles me a lot.
So, what happened? Why are the two amplitudes different?
[temp,fs] = audioread('record.wav');
[P1,f] = fft_recorder(temp,fs);
function [P1,f] = fft_recorder(array,fs)
    % Assumes a single-channel (mono) recording with an even number of samples.
    N  = length(array);             % number of samples
    Y  = fft(array);
    P2 = abs(Y/N);                  % two-sided amplitude spectrum
    P1 = P2(1:N/2+1);               % keep the one-sided half
    P1(2:end-1) = 2*P1(2:end-1);    % double everything except DC and Nyquist
    f  = fs*(0:(N/2))/N;            % frequency axis in Hz
end
The speaker outputting the sound, and the room you are in (due to multipath reflections and resonances), most likely do not have a flat frequency response over that frequency range. Any mechanical resonances due to contact with the speaker or the iPhone will also change the received audio level by different amounts at different frequencies. (The iPhone's microphone may be closer to having a flat frequency response, but not perfectly.) So some frequencies will be recorded as stronger than others, even with your variable A set to a constant.
Try testing one frequency at a time over your desired frequency range, and measure the response of your channel. The frequency response curve might even change by a large amount when you move the speaker, the microphone, or other large objects in the room.
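A rough sketch of how such a sweep could be scripted (the test frequencies, duration, amplitude, and file names are placeholders): generate one tone per file, play and re-record each one the same way, then compare the FFT peak level at each test frequency.
% Sketch: step through single test tones to estimate the channel's frequency response.
fs    = 44100;
tones = 200:200:4000;                 % test frequencies in Hz (example values)
dur   = 1;                            % seconds per tone
t     = (0:round(dur*fs)-1)/fs;
A     = 0.5;
for k = 1:numel(tones)
    x = A*sin(2*pi*tones(k)*t);
    audiowrite(sprintf('test_%dHz.wav', tones(k)), x, fs);
end
% After playing and re-recording each file (e.g. 'rec_200Hz.wav', ...),
% measure the received level at each test frequency:
level = zeros(size(tones));
for k = 1:numel(tones)
    [y, fsRec] = audioread(sprintf('rec_%dHz.wav', tones(k)));  % recorded copy
    N = size(y,1);
    Y = abs(fft(y(:,1)))/N;
    fAxis = fsRec*(0:N-1)/N;
    [~, idx] = min(abs(fAxis - tones(k)));    % nearest FFT bin to the test tone
    level(k) = Y(idx);
end
plot(tones, 20*log10(level)); xlabel('Frequency (Hz)'); ylabel('Level (dB)');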
I'm doing a project studying rats that squeak in the ultrasonic range (20 kHz to 100 kHz), using MATLAB and sound files.
I have (or will be getting) a couple of .wav recordings of these rats, and besides general analysis of the waveforms, I also want to convert these ultrasonic signals (outside our hearing range) into the human audible range (20 Hz to 20 kHz).
Could I get some advice on how to do this conversion (via MATLAB programming, not by using equipment)?
Looking into this, I've found names such as:
-frequency division
-heterodyning
-envelope detection
-time expansion
but looking into these, it seems they are either explained in terms of what the equipment (bat detectors) does, or they sound incredibly similar to each other, e.g. frequency division and time expansion both involve dividing the incoming signal by 10.
Since I am looking into what seems to be unfamiliar turf, it would be great to find multiple ways to convert the signal (to my knowledge, each of the names above has its own positive and negative traits).
Your question is a signal processing question more than a Matlab question, which isn't really what Stack Overflow is about, so you might get some negative votes.
There are indeed a number of methods of changing the frequency of audio (or any signals):
1) Slow it Down: The least disruptive to the signal is simply to slow down the audio. If you are looking to have rat signals up to 100 kHz, you'll need to sample the audio at 200 kHz or greater. Once you have your recording, simply re-save the wav file telling it that the sample rate is 44.1 kHz (or whatever). This will play it more slowly, but all the frequencies will now be audible (unlike the single side band demodulation discussed below). This is definitely the place you should start...it's the easiest and will sound the best.
fs = 200e3;                                            %your original sample rate
myAudio = audioread('myFile.wav');                     %your original audio
fs = 44.1e3;                                           %simply declare that you want a lower sample rate
audiowrite('myFile_44kHz.wav',myAudio,fs,'BitsPerSample',16);  %save it out at the new rate
2) Single-Side Band: Use the demod command to "demodulate" the signal to lower its frequency. There are a number of demodulation methods available with this command. I'd use "single side band (suppressed carrier)" because that is how the rat itself (and humans) create sound. To do the demodulation, you'll have to assume a "carrier frequency", as if it were a radio signal. If the lowest frequency of a rat squeak is 20 kHz, you can assume a carrier of 20 kHz. This operation will shift all of your audio down by 20 kHz. As a result, the squeak that was originally 20-100 kHz will now be 0-80 kHz. So you won't hear the whole thing, but you'll hear part of it.
fs = 200e3;                                   %your original sample rate
myAudio = audioread('myFile.wav');            %your original audio
[b,a] = butter(2,20e3/(fs/2),'high');         %define highpass filter
myAudio = filtfilt(b,a,myAudio);              %remove the low frequencies
myAudio = demod(myAudio,20e3,fs,'amssb');     %shift it down 20 kHz
audiowrite('myWave_shifted.wav',myAudio,fs,'BitsPerSample',16);  %save it out
3) Phase Vocoder (or other Pitch Shifting): To hear the whole 20-100 kHz range (which is 80 kHz of bandwidth, four times the 20 kHz bandwidth of human hearing), you've got to go to more extreme methods. These methods will make the audio sound bizarre, but you can give them a try. There are several algorithms; look up "phase vocoder". Or use one of the audio processing software packages like Audacity, Raven, etc.
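If you have the Audio Toolbox, its shiftPitch function is one ready-made way to try this; a rough sketch (the two-octave shift and file names are my own example choices, and note that a fixed pitch shift compresses the band by a ratio rather than mapping 20-100 kHz exactly onto the audible range):
% Sketch: pitch-shift the ultrasonic recording down by two octaves (a factor
% of 4), so roughly 20-100 kHz maps to about 5-25 kHz.  Requires the Audio Toolbox.
[x, fs] = audioread('ratCall_200kHz.wav');   % placeholder file name
y = shiftPitch(x, -24);                      % -24 semitones = two octaves down
audiowrite('ratCall_shifted.wav', y, fs);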
I am trying to measure the duration (in samples) between a start tone (3800 Hz) and a finish tone (high amplitude but unknown frequency). The two tones are randomly positioned in a .wav file (roughly 10 seconds long). Do I need to identify the sample number of the last sample of the first tone and the first sample of the last tone, and if so, how can I do this?
The .wav file contains a fundamental frequency and some noise, as well as the start and finish tones. I have a pre-recording of the start tone with the fundamental frequency and noise in the background; can I use a correlation function to detect it?
Some of the noise exceeds 3800 Hz (instantaneously), so methods that use threshold values for detecting the tones don't work very well. However, can I use the fact that a tone event is longer in duration than any noise event (because it is made deliberately by pressing and releasing a button) to detect the tone?
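To make the duration idea concrete, here is the kind of thing I have in mind (the band edges, smoothing length, threshold, and minimum duration are all guesses for illustration): band-pass around 3800 Hz, take the envelope, and only accept stretches where the envelope stays high for longer than any plausible noise burst.
% Sketch: detect the 3800 Hz start tone by requiring the band-limited
% envelope to stay above a threshold for a minimum duration.
[x, fs] = audioread('recording.wav');                % placeholder file name
x = x(:,1);
[b,a] = butter(4, [3700 3900]/(fs/2), 'bandpass');   % narrow band around 3800 Hz
env = abs(hilbert(filtfilt(b,a,x)));                 % envelope of the band-passed signal
env = movmean(env, round(0.01*fs));                  % 10 ms smoothing
active = env > 0.5*max(env);                         % crude threshold
minLen = round(0.1*fs);                              % require at least 100 ms of tone
d = diff([0; active; 0]);
starts = find(d == 1);
stops  = find(d == -1) - 1;
keep   = (stops - starts + 1) >= minLen;             % discard short noise bursts
startToneEnd = stops(keep);                          % last sample(s) of detected tone(s)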
I'm looking at the output of an electroencephalogram (EEG) sensor. This data is displayed on screen in raw form at about 200 Hz. I read that in the old days it was possible to hook such an output up to a speaker and hear the waveform instead of seeing it. So I'm interested in whether it is possible to replicate this experiment with a modern iPhone. How can I take a waveform that is displayed as a graph and package it so it can be played through an iPhone's speaker live? In other words, I'm looking to stream EEG data through some sort of audio player and need to know how to create audio packets from this data on the fly.
Here's the raw waveform, it is displayed at 200 data points per second (200Hz)
After I clean up and process the waveform, I'm interested in how far it deviates from the average of the waveform. In this case, I think this could be played as the increasing/decreasing amplitude of a sine wave, which may be easier.
Thank you for your input
Here's a good tutorial on generating a sine tone for output through CoreAudio:
http://www.cocoawithlove.com/2010/10/ios-tone-generator-introduction-to.html
The RenderProc is the bit of code you're twiddling with; in the example they use an NSSlider to change the frequency, and you just need to feed it with your signal data instead.
One of the ideas I had for playing sound in response to the signal amplitude change is to divide the amplitude into a set of discrete bands of values (for example 0-10, 10-20, 20-30, etc.) and then assign a sound to each band. Then, using Audio Services or system sounds, it might be possible to loop a unique sound fragment for each band.
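Before wiring anything into CoreAudio, the amplitude-mapping idea from the question can be prototyped offline; a minimal MATLAB sketch, assuming a vector eeg of raw 200 Hz samples (the 440 Hz carrier and the scaling are placeholders, not anything prescribed):
% Sketch: map the EEG's deviation from its mean onto the amplitude of an
% audible sine carrier.
% eeg: vector of raw EEG samples at 200 Hz, read however your device provides them
eeg = eeg(:).';                      % force a row vector
fsEEG   = 200;                       % EEG sample rate
fsAudio = 44100;                     % audio sample rate
fc      = 440;                       % carrier frequency (Hz), example value
dev = abs(eeg - mean(eeg));          % deviation from the average
dev = dev / max(dev);                % normalize to 0..1
% Upsample the 200 Hz deviation to an audio-rate amplitude envelope.
tEEG   = (0:numel(dev)-1)/fsEEG;
tAudio = 0:1/fsAudio:tEEG(end);
env    = interp1(tEEG, dev, tAudio, 'linear');
y = env .* sin(2*pi*fc*tAudio);      % amplitude-modulated carrier
soundsc(y, fsAudio);                 % listen to the result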