I am using NSTimer to update the peak power on the iPhone. From monitoring, it does not update very fast. I need to update the peak power at a high frequency, on the order of 100 microseconds (100 µs). I also tried usleep(100) to update every 100 µs, but it is still very slow. Can someone point out how I can achieve this? I am planning to use this code to measure distance. Thank you.
You capture the audio (record, input, or file), access its samples from the PCM CBR (uncompressed, fixed sampling rate) stream, and read the samples in the range you are interested in. Given such a short interval, you will only have to analyze a small number of samples (2-5, depending on the sampling rate). You may need to interpolate to improve accuracy with so few samples.
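As a minimal sketch of this in standard-library Python: at 44.1 kHz, a 100 µs window covers only about 4 samples, so the raw maximum can be refined with parabolic interpolation. The function name, the window values, and the choice of parabolic interpolation are mine for illustration, not from any iOS API.

```python
def peak_amplitude(samples):
    """Estimate peak amplitude from a short PCM sample window.

    Finds the largest-magnitude sample, then refines it with a
    parabola fitted through that sample and its two neighbors.
    """
    i = max(range(len(samples)), key=lambda k: abs(samples[k]))
    if 0 < i < len(samples) - 1:
        y0, y1, y2 = abs(samples[i - 1]), abs(samples[i]), abs(samples[i + 1])
        denom = y0 - 2 * y1 + y2
        if denom != 0:
            # Offset of the true peak relative to sample i, in samples.
            offset = 0.5 * (y0 - y2) / denom
            return y1 - 0.25 * (y0 - y2) * offset
    return abs(samples[i])

window = [0.2, 0.9, 0.4]  # ~3 consecutive samples from the PCM stream
print(round(peak_amplitude(window), 4))
```

The interpolation matters precisely because the window is so short: with only a few samples, the true peak almost never lands exactly on a sample instant.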
I am currently collecting data from a power meter, a Keysight N7744A to be exact. The issue is that, roughly every 5 minutes over the course of an hour of data collection, the data fluctuates by over 20%. My goal is to take a single measurement and be able to guarantee that it is within 5% (>0.25 dB) of the true value, which can be obtained by averaging over a 5-minute period. However, that would hurt performance too much: a single measurement takes 400 ms to collect.
Any thoughts on how I can cancel out this low-frequency but high-amplitude noise? Thanks!
I have attached the data in case I couldn't explain myself clearly. It has 10k data points collected over more than an hour, where each measurement takes ~400 ms. data.dat
Perhaps a high-pass filter would work.
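A minimal sketch of that idea in Python, assuming the simplest possible high-pass step: subtract a centered moving average whose window is short compared with the ~5-minute drift cycle but long compared with one 400 ms reading. The window length and synthetic drift data below are made up for illustration.

```python
def highpass(readings, window):
    """Remove slow drift by subtracting a centered moving average."""
    out = []
    for i in range(len(readings)):
        lo = max(0, i - window // 2)
        hi = min(len(readings), i + window // 2 + 1)
        baseline = sum(readings[lo:hi]) / (hi - lo)
        out.append(readings[i] - baseline)
    return out

# A purely linear drift is removed almost exactly in the interior,
# where the averaging window is full and symmetric.
drift = [10.0 + 0.01 * i for i in range(100)]
flat = highpass(drift, 25)
print(max(abs(v) for v in flat[20:80]))  # near zero
```

Note this only removes the slow component; any genuinely fast signal riding on top survives, which is the point of a high-pass filter here.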
I have data from an accelerometer and made a graph of acceleration (y-axis) versus time (x-axis). The sensor's sampling rate is around 100 samples per second, but the samples are not equally spaced in time (for example, the timestamps go 10.046, 10.047, 10.163, etc.); the pace is not constant, and there is no closed-form function for the signal I get. I need to find the frequency content of the signal and make a graph of frequency (Hz, x-axis) versus acceleration (y-axis), but I don't know which FFT code suits my case.
Any help would be greatly appreciated
For an FFT to work, you will need to reconstruct your signal at a regular interval. There are two ways you can do this:
Interpolate the data you already have to make an accurate guess at where the signal would be at each regular interval. However, the resulting FFT may contain significant inaccuracies.
OR
Adjust the device reading from the accelerometer to incorporate an accurate timer, such that results are always transmitted at regular intervals. This is what I would recommend.
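The first option can be sketched in standard-library Python. The 5 Hz test tone, the ~100-sample nominal rate, and the plain DFT below are illustrative stand-ins for the real accelerometer data and a proper FFT library.

```python
import cmath
import math
import random

def resample(ts, ys, n, t0, t1):
    """Linearly interpolate irregular (ts, ys) onto n even times in [t0, t1)."""
    out, j = [], 0
    for k in range(n):
        t = t0 + (t1 - t0) * k / n
        while j + 1 < len(ts) - 1 and ts[j + 1] <= t:
            j += 1
        frac = (t - ts[j]) / (ts[j + 1] - ts[j])
        out.append(ys[j] + frac * (ys[j + 1] - ys[j]))
    return out

def dominant_hz(samples, rate):
    """Return the frequency of the largest-magnitude DFT bin (excluding DC)."""
    n = len(samples)
    mags = [abs(sum(samples[t] * cmath.exp(-2j * math.pi * f * t / n)
                    for t in range(n))) for f in range(1, n // 2)]
    return (mags.index(max(mags)) + 1) * rate / n

# Irregularly timed readings of a 5 Hz sine over ~1 second.
random.seed(1)
ts = sorted(random.uniform(0.0, 1.0) for _ in range(200))
ys = [math.sin(2 * math.pi * 5.0 * t) for t in ts]

grid = resample(ts, ys, 100, ts[0], ts[-1])
rate = 100 / (ts[-1] - ts[0])
print(dominant_hz(grid, rate))  # close to 5 Hz
```

In practice you would replace the naive DFT with an FFT routine, but the structure (resample, then transform) is the same.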
My team and I are planning to build an external accessory for iOS that will sample ultrasonic sound at 256 kHz. That is a lot, and I am wondering whether iOS vDSP can do the conversion from the time domain to the frequency domain for 256,000 samples/sec, or whether we need a hardware-based solution for the FFT.
Sample projects from Apple such as aurioTouch are very helpful, but I couldn't find one that deals with sampling rates higher than professional audio sampling frequencies. I need help figuring out the following:
Can vDSP FFTs process 256,000 samples/second? If not, any other creative ways to do the same aside from doing the conversion in the hardware?
The closest discussion I found related to this is
How many FFTs per second can I do on my smartphone? (for performing voice recognition)
A 256 kHz data rate is less than 6 times faster than standard 44100 Hz audio. And float FFTs of real-time audio data using the vDSP/Accelerate framework use only in the neighborhood of 1% or less of one CPU on recent iOS devices.
The FFT computation time will be a tiny portion of the time available.
Source: I wrote the vDSP FFTs.
Why not see how the devices handle upsampled signals, starting with aurioTouch?
If you need it faster, you should measure the speeds of an integer based FFT implementation.
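A back-of-envelope version of that claim, with assumed numbers: a 1024-point FFT with 50% overlap, and ~2 µs per FFT, which is a figure I am assuming for illustration rather than a measured vDSP benchmark.

```python
sample_rate = 256_000   # samples per second from the accessory
fft_size = 1024         # points per FFT (assumed)
hop = fft_size // 2     # 50% overlap (assumed)

# How many FFTs per second the input rate actually demands.
ffts_per_second = sample_rate / hop
print(ffts_per_second)  # 500.0

# With an assumed ~2 µs per 1024-point FFT, the CPU budget is tiny.
fft_time_s = 2e-6
cpu_fraction = ffts_per_second * fft_time_s
print(f"{cpu_fraction:.2%}")  # about 0.10% of one core
```

Even if the assumed per-FFT time were off by an order of magnitude, the FFT load would still be a small fraction of one core, which is consistent with the answer above.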
I'm looking for some suggestions on how to compress time-series data in MATLAB.
I have some data sets of pupil size, gathered over 1 second with 25,000 points per trial (I'm still not sure whether it is proper to call the data a 'time series'). What I'd like to do now is compare them with other data, and I need to compress the number of points to about 10,000 or fewer while minimizing loss of information. Are there any ways to do this?
I've tried to search for how to do this, but all I could find was how to smooth the data or how to compress digital images, which was either already done or not useful to me.
• The data sets simply consist of pupil diameter changing over time. For each trial, 25,000 points of data were gathered during 1 sec, which means 1 point denotes the pupil diameter measured every 0.04 ms. What I want to do is adjust this data to 0.1 ms/point; however, I'm not sure whether I can apply techniques like the FFT in this case, because this is the first time I have handled this kind of data. I appreciate your advice again.
A standard data compression technique with time-series data is to take the fast Fourier transform and represent your data with only a subset of the frequency amplitudes (calculate the power spectrum). You can compare data using these frequency amplitudes, though to lose the least information you would want to keep the frequencies with the largest amplitudes -- but then it becomes tricky to compare the data... Here is the standard MATLAB tutorial on FFT. Some other possibilities include:
- ARMA models
- Wavelets
Check out this paper on the "SAX" method, a modern approach for time-series compression -- it also discusses classic time-series compression techniques.
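A sketch of the keep-the-largest-coefficients idea, in standard-library Python rather than MATLAB (in MATLAB, fft/ifft play the same roles). The 64-point toy signal and keep=4 are illustrative values; real 25,000-point data would use a proper FFT routine.

```python
import cmath
import math

def dft(x):
    n = len(x)
    return [sum(x[t] * cmath.exp(-2j * math.pi * k * t / n)
                for t in range(n)) for k in range(n)]

def idft(X):
    n = len(X)
    return [sum(X[k] * cmath.exp(2j * math.pi * k * t / n)
                for k in range(n)).real / n for t in range(n)]

def compress(x, keep):
    """Zero all but the `keep` largest-magnitude DFT coefficients."""
    X = dft(x)
    order = sorted(range(len(X)), key=lambda k: -abs(X[k]))
    kept = set(order[:keep])
    return [X[k] if k in kept else 0j for k in range(len(X))]

# A signal whose energy lives in only 4 DFT bins compresses losslessly.
n = 64
signal = [math.sin(2 * math.pi * 3 * t / n) + 0.3 * math.cos(2 * math.pi * 7 * t / n)
          for t in range(n)]
approx = idft(compress(signal, 4))
err = max(abs(a - b) for a, b in zip(signal, approx))
print(err)  # tiny: only numerical round-off
```

For real pupil data the spectrum will not be exactly sparse, so the reconstruction error grows as you keep fewer coefficients; the trade-off between compression ratio and error is exactly the one the answer above describes.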
I am new to iPhone development. I am doing research on voice recording on the iPhone. I have downloaded the SpeakHere sample program from Apple. I want to determine the frequency of my voice as recorded on the iPhone. Please guide me. Thanks.
In the context of processing human speech, there's really no such thing as "the" frequency.
The signal will be a mix of many different frequencies, so it might be more fruitful to think in terms of a spectrum, rather than a single frequency. Even if you're talking about
a sustained musical note with a fixed pitch, there will be plenty of overtones and harmonics present, in addition to the fundamental frequency of the note. And for actual speech,
the frequency spectrum will change drastically even within a short clip, due to the different tonal characteristics of vowels and consonants.
With that said, it does make some sense to consider the peak frequency of a voice recording.
You could calculate the Fast Fourier Transform of your voice clip, then find the frequency
bin with the largest response. You may also be interested in the concept of a spectrogram, which represents how the audio spectrum of a signal varies over time.
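The FFT-then-largest-bin approach can be sketched in standard-library Python, with a plain DFT standing in for any real FFT framework. The 440 Hz test tone and 8 kHz sample rate are made-up values chosen so the tone lands exactly on a bin.

```python
import cmath
import math

def peak_frequency(samples, rate):
    """Return the frequency of the largest-magnitude DFT bin (excluding DC)."""
    n = len(samples)
    mags = [abs(sum(samples[t] * cmath.exp(-2j * math.pi * k * t / n)
                    for t in range(n)))
            for k in range(1, n // 2)]
    return (mags.index(max(mags)) + 1) * rate / n

rate = 8000
# 400 samples of a 440 Hz sine = exactly 22 cycles, so bin 22 dominates.
tone = [math.sin(2 * math.pi * 440 * t / rate) for t in range(400)]
print(peak_frequency(tone, rate))  # 440.0
```

For real speech, as the answer notes, the peak bin is only one summary of a rich, time-varying spectrum, and windowing plus a spectrogram view is usually more informative.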
Use Audacity. Take a small recording of typical speech, and cut it down to one wavelength, from one peak to another peak. Subtract the two times, and divide 1 by that number and you'll get the frequency of your wave in Hz.
Example:
In my audio clip, my waveform runs from 0.0760 to 0.0803 seconds.
0.0803-0.0760 = 0.0043
1/0.0043 = 232.558 Hz, my typical speech frequency
This might give you a good basis for creating an analyzer. You'd need to detect the peaks, measure the time between successive peaks of the wave, and average the results.
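The peak-to-peak timing recipe can be sketched in Python as follows. The synthetic tone reuses the 232.558 Hz figure from the worked example purely as a test input; real speech would need smoothing before peak-picking like this is reliable.

```python
import math

def estimate_hz(samples, rate):
    """Average the spacing between successive local maxima, then invert."""
    peaks = [i for i in range(1, len(samples) - 1)
             if samples[i - 1] < samples[i] >= samples[i + 1]]
    if len(peaks) < 2:
        return None
    # Average period in seconds across all detected peak-to-peak gaps.
    avg_period = (peaks[-1] - peaks[0]) / (len(peaks) - 1) / rate
    return 1.0 / avg_period

rate = 44100
# 0.1 s of a 232.558 Hz sine, mirroring the Audacity example above.
wave = [math.sin(2 * math.pi * 232.558 * t / rate) for t in range(4410)]
print(round(estimate_hz(wave, rate), 1))  # close to 232.6
```

Averaging over many peak pairs, as the answer suggests, reduces the ±1 sample quantization error that a single peak-to-peak measurement (like the Audacity one) carries.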
You'll need to use Apple's Accelerate framework to take an FFT of the relevant audio. The FFT converts the audio from the time domain to the frequency domain, and Accelerate's FFT is fast enough to let you do frequency analysis in real time.