Explanation of the details of an accelerometer data sheet - accelerometer

I'm working with the Samsung Galaxy's accelerometer, the Bosch SMB380.
This is the data sheet: http://www.bosch-sensortec.com/content/language1/downloads/SMB380_Flyer_Rev1.3.pdf
Can someone explain the technical data on the right to me?
I'm especially interested in the noise and precision data.
Thx

Noise density is 0.5 mg/√Hz, which tells you how much RMS noise to expect for a given measurement bandwidth: multiply by the square root of the bandwidth. E.g. if you filter your signal with a 100 Hz low-pass filter you can expect 0.5 × √100 = 5 mg of noise.
As for sensitivity, there are 3 programmable ranges. The most sensitive range is ±2 g. The output is 10 bits (signed, presumably), so the 4 g span maps onto 1024 codes, which gives about 4 mg per bit, i.e. the smallest signal you can detect at a level of ±1 bit is roughly 4 mg.
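To make the arithmetic concrete, here is a small Python sketch of both calculations (the 0.5 mg/√Hz noise density and the ±2 g / 10-bit figures are the ones quoted above, not read independently from the datasheet):

```python
import math

noise_density_mg = 0.5       # mg/sqrt(Hz), from the answer above
bandwidth_hz = 100.0         # low-pass filter bandwidth

# RMS noise grows with the square root of the measurement bandwidth
rms_noise_mg = noise_density_mg * math.sqrt(bandwidth_hz)

# Resolution: +/-2 g range digitized with 10 bits -> 4 g span over 1024 codes
range_g = 2.0
bits = 10
lsb_mg = (2 * range_g * 1000) / (2 ** bits)  # mg per count

print(rms_noise_mg)  # 5.0 mg
print(lsb_mg)        # 3.90625 mg, commonly rounded to 4 mg
```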

You can use a Butterworth filter in MATLAB; the result should be quite clean, especially with these parameters: [B,A] = butter(2, 0.05)
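For reference, a roughly equivalent SciPy sketch (the 1 kHz sampling rate and the synthetic 2 Hz motion signal are made up for illustration; scipy.signal.butter uses the same Nyquist-normalized cutoff convention as MATLAB's butter):

```python
import numpy as np
from scipy.signal import butter, filtfilt

# SciPy equivalent of the MATLAB call [B, A] = butter(2, 0.05):
# a 2nd-order Butterworth low-pass with cutoff at 0.05 x Nyquist.
b, a = butter(2, 0.05)

# Example: smooth a noisy accelerometer-like trace (synthetic data)
t = np.arange(0, 1, 1e-3)                       # 1 kHz sampling
signal = np.sin(2 * np.pi * 2 * t)              # slow 2 Hz motion
noisy = signal + 0.1 * np.random.randn(t.size)  # additive sensor noise
smoothed = filtfilt(b, a, noisy)                # zero-phase filtering
```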

Related

what is the purpose of the frequency domain analysis

I had assumed that this was going to output a set of frequency buckets that I could use to do pitch detection (like aubio pitch), but that doesn't seem to be what it does. I fired up the voice-omatic app and, using its frequency display, played various notes through my mic. Bars appear, all at the left-hand end, with almost no distinction between high and low notes. I upped the FFT size just to see if that changed anything; it seems not. I found a pitchdetect JS project and saw that it used an analyser. Aha, I thought, here I will find the correct usage. But the meat of the code doesn't use the frequency domain output; it feeds time domain data into its own algorithm. So to solve my problem I will use that library, but I am still curious what the frequency domain data represents.
It does exactly what you think it does, but the specific implementation on that site is not great for seeing that.
The values are mapped differently than is necessary for seeing changes in pitch across the vocal range, because the range is mapped linearly, not logarithmically. If you were to map each frequency bin in a logarithmic way you would get a far more useful diagram. Right now the right-hand 3/4 of the visualizer shows roughly 5000 Hz to 20000 Hz, which contains very little energy in an audio signal compared to 0 to 5000 Hz. The root frequency of the human voice (and most instruments) mostly occupies 100 to 1000 Hz. There are harmonics above that, but at ever-lower amplitudes the higher you go.
I've tweaked the code to tell you the peak frequency and the size of each bucket. Use this app.js: https://puu.sh/Izcws/ded6aae55b.js - you can use a tone generator on your phone to see how accurate it is. https://puu.sh/Izcx3/d5a9d74764.png
The way the code works: it calculates how big each bucket is, as described in my answer, with var bucketSize = 1 / (analyser.fftSize / analyser.context.sampleRate); (and adds two spans to show the data). Then, while drawing the bars, it tracks which bar is the biggest and multiplies the bucket size by that bucket's index to get the peak frequency (which it writes into el2). You can play with the fftSize and see why a small value will not work at all for determining whether you are playing A2 (110 Hz) or A#2 (116.5 Hz).
From https://stackoverflow.com/a/43369065/13801789:
I believe I understand what you mean exactly. The problem is not with your code, it is with the FFT underlying getByteFrequencyData. The core problem is that musical notes are logarithmically spaced while the FFT frequency bins are linearly spaced.
Notes are logarithmically spaced: the difference between consecutive low notes, say A2 (110 Hz) and A#2 (116.5 Hz), is 6.5 Hz, while the difference between the same two notes an octave higher, A3 (220 Hz) and A#3 (233.1 Hz), is 13.1 Hz.
FFT bins are linearly spaced: say we're working with 44100 samples per second and the FFT takes a window of 1024 samples. It first multiplies the window with a wave whose period is 1024 samples (let's call it wave1); that period is 1024/44100 = 0.0232 seconds, i.e. 43.07 Hz, and it puts the resulting amplitude in the first bin. Then it multiplies the window with a wave of frequency wave1 × 2 = 86.13 Hz, then wave1 × 3 = 129.2 Hz. So the spacing between analyzed frequencies is linear; it's always the same 43.07 Hz, unlike the spacing between musical notes, which changes.
This is why close low notes get bundled into the same bin while close high notes are separated. This is the FFT's frequency-resolution problem. It can be mitigated by taking windows larger than 1024 samples, but that is a trade-off against time resolution.
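The bin-spacing argument above is easy to check numerically. This short Python sketch (note names and frequencies as quoted above) shows a low semitone pair sharing a bin while a higher pair separates:

```python
fs = 44100
fft_size = 1024
bin_hz = fs / fft_size            # ~43.07 Hz per bin

def bin_index(freq_hz):
    """Which FFT bin a given frequency falls into (nearest-bin approximation)."""
    return round(freq_hz / bin_hz)

# Low semitone pair: A2 and A#2 land in the SAME bin...
print(bin_index(110.0), bin_index(116.5))   # 3 3
# ...while the pair three octaves up (A5 vs A#5) is clearly separated
print(bin_index(880.0), bin_index(932.3))   # 20 22
```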

Clocking Issue in FPGA with MATLAB HDL Coder

So I am using Simulink to generate a series of upsampling filters. My input is a 44.1 kHz sine wave and the output is a sine wave at 11.2 MHz. For this I use a chain of four FIR interpolation filters from Simulink: the first upsamples by 32 and the rest each upsample by 2.
The problem is with Fmax (the highest frequency at which the circuit can be clocked). I get an Fmax which is really low, below 50 MHz. I did some optimizations and got it up this far, but I want to raise it more. If anyone can help, I can attach the Simulink file I have.
I am using MATLAB HDL Coder and Altera Quartus II for synthesis.
First of all, I do not understand why you would upsample by 32 and then three times by 2. You should analyze the slowest path.
If the addition is a bottleneck, that would be in the 32× upsampling stage, and spreading the factor more evenly (e.g. 8, 8, 4 for the same overall factor of 256) would be better. However, it all depends on the implementation, which I can't guess from here.
I would advise having a look at the FIR filters themselves. Reducing the length of the FIR stages will increase your speed at the cost of reduced SNR, which may or may not be tolerable. You could pick one with a very short impulse response.
You could also reduce the number of bits used to represent the samples. This will again decrease the SNR, but it consumes less logic and is likely to be faster.
You should also consider whether or not to use hard multiplier blocks, if available in the technology you are targeting.
Otherwise, have a look at parallel FIR filter implementations. Though I bet you'll have to implement that one yourself.
And of course, as you pointed out yourself, realistic constraints are required.
Good luck.
Thanks for the answer. Yes, I need the four stages of upsampling because of my project requirements: my input sampling frequency varies and my output should always be 11.2 MHz, so I need those four different stages to generate the output for the different input rates.
I optimized the FIR filters by using pipeline registers and reduced the number of multipliers in the 32× upsampler by using the partly-serial architecture.
I guess the problem was that I was not using an SDC file, as needed for timing analysis by Altera. Now that I have configured a simple SDC file, I get a positive slack value and a restricted Fmax of 24.5 MHz. As my output needs to be 11.2 MHz, I guess this is good enough.
If you have more suggestions, please let me know. I did not quite understand the point about the SNR.

1024 pt fft on a large set of data points

I have a signal that may not be periodic. We take about 15 seconds' worth of samples (at a 10 kHz sampling rate) and we need to compute the FFT of that signal to get its frequency content.
The problem is that we are implementing this FFT on an embedded system (DSP) which provides a library FFT of max. 1024 length. That is, it takes in 1024 data points as input and provides a 1024 point output.
What is the correct way of obtaining an FFT on the full 150000 point input?
You could run the FFT on each 1024-point block and average the results to get an average power spectrum on the lower-resolution 1024-point frequency axis (512 bins from 0 to the Nyquist frequency, fs/2, so about 10 Hz resolution for your 10 kHz sampling). You should average the magnitudes of the component FFTs (i.e., sqrt(re^2+im^2)); otherwise the average will be sensitive to the drifting phase within each subwindow, which depends on the precise frequency of the sinusoid.
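A minimal numpy sketch of this magnitude-averaging scheme (the 1 kHz test tone is an arbitrary example, not from the question):

```python
import numpy as np

fs = 10_000          # 10 kHz sampling
n_fft = 1024         # the DSP library's FFT length
x = np.sin(2 * np.pi * 1000 * np.arange(150_000) / fs)  # 15 s test tone

# Split into 1024-point blocks, FFT each, and average the MAGNITUDES
n_blocks = len(x) // n_fft
blocks = x[:n_blocks * n_fft].reshape(n_blocks, n_fft)
mags = np.abs(np.fft.rfft(blocks, axis=1))   # sqrt(re^2 + im^2) per bin
avg_spectrum = mags.mean(axis=0)

freqs = np.fft.rfftfreq(n_fft, d=1 / fs)     # ~9.77 Hz per bin
peak = freqs[np.argmax(avg_spectrum)]
print(peak)   # close to 1000 Hz
```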
If you think the periodic component may be at a low frequency, such that it will show up in a 15 sec sample but not complete any cycles in a 1024/10k ~ 100ms sample (i.e., below 10 Hz or so), you could downsample your input. You could try something as crude as averaging every 100 points to get a somewhat-distorted signal at 100 Hz sampling rate, then pack 10.24 sec worth into your 1024 pt sequence to pass to the FFT.
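And a sketch of the crude average-of-100 decimation (the 2 Hz component is hypothetical, standing in for a slow periodic signal):

```python
import numpy as np

fs = 10_000
n = 150_000
x = np.sin(2 * np.pi * 2.0 * np.arange(n) / fs)   # hypothetical 2 Hz component

# Crude decimation: average every 100 samples -> 100 Hz sampling rate
decim = x.reshape(-1, 100).mean(axis=1)

# 1024 points now span 10.24 s, so the bins are only ~0.098 Hz wide
spec = np.abs(np.fft.rfft(decim[:1024]))
freqs = np.fft.rfftfreq(1024, d=1 / 100)
peak_hz = freqs[np.argmax(spec[1:]) + 1]          # skip the DC bin
print(peak_hz)   # close to 2 Hz
```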
You could combine these two approaches by using a smaller downsampling factor and then do the magnitude-averaging of successive windows.
I'm confused why the system provides an FFT only up to 1024 points - is there something about the memory that makes it harder to access larger blocks?
Calculating a 128k point FFT using a 1k FFT as a subroutine is possible, but you'd end up recoding a lot of the FFT yourself. Maybe you should forget about the system library and use some other FFT implementation, without the length limitation, that will compile on your target. It may not incorporate all the optimizations of the system-provided one, but you're likely to lose a lot of that advantage when you embed it within the custom code needed to use the partial outputs of the multiple shorter FFTs to produce the long FFT.
Probably the quickest way to do the hybrid FFT (1024 points using the library, plus added code to combine the results into a 128k-point FFT) would be to take an existing full FFT routine (a radix-2, decimation-in-time (DIT) routine, for instance) and modify it to use the system library for what would have been the first 10 stages. Those stages amount to calculating 128 individual 1024-point FFTs on different subsets of the original signal (not, unfortunately, successive windows, but the partial-bit-reversed subsets); the remaining 7 stages of butterflies then operate on those partial outputs. You'd want a pretty solid understanding of how the DIT FFT works to implement this.
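As a sanity check on the idea, here is one DIT combine stage in numpy: building a length-2N FFT from the FFTs of the even- and odd-indexed subsequences. The hybrid scheme described above is this same butterfly applied 7 more times on top of the 128 library-provided 1024-point FFTs:

```python
import numpy as np

def dit_combine(fft_even, fft_odd):
    """Combine the FFTs of the even- and odd-indexed samples of a sequence
    into the FFT of the full sequence (one radix-2 DIT butterfly stage)."""
    n = 2 * len(fft_even)
    twiddle = np.exp(-2j * np.pi * np.arange(n // 2) / n)
    return np.concatenate([fft_even + twiddle * fft_odd,
                           fft_even - twiddle * fft_odd])

x = np.random.randn(2048)
full = dit_combine(np.fft.fft(x[0::2]), np.fft.fft(x[1::2]))
assert np.allclose(full, np.fft.fft(x))   # matches a direct 2048-point FFT
```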

fft artificial defects due to finite sampling frequency

I use Matlab to calculate the FFT of a time series. The signal has an unknown fundamental frequency (~80 MHz in this case) together with several higher-order harmonics (1st through 20th). However, due to the finite sampling frequency (500 MHz in this case), I always get mixing products from the higher-order harmonics (7th through 20th), e.g. the 7th with a peak at abs(2*500-80*7)=440 MHz, the 8th at 360 MHz, and the 13th with a peak at abs(13*80-2*500)=40 MHz. Does anyone know how to get rid of these artificial mixing frequencies? One possible way would be to increase the sampling frequency to a sufficiently large value. However, my data set has a fixed number of points and a fixed time range, so the sampling frequency is actually determined by the properties of the data set. Any solutions to this problem?
(I have an image for this problem but I don't have enough reputation to post it. Sorry for any inconvenience in understanding the question.)
You are hitting on a fundamental property of sampling: when you sample data at a fixed frequency fs, you cannot tell the difference between two signals with the same amplitude but different frequencies, where one has f1 = fs/2 - d and the other has f2 = fs/2 + d. This effect is frequently used to advantage, for example in mixers, but at other times it's an inconvenience.
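This is easy to demonstrate numerically. In this sketch the frequencies are the ones from the question (treated as MHz): the 7th harmonic at 560, its mirror image at 440 (fs/2 + 190), and the folded frequency at 60 (fs/2 - 190) all produce numerically identical sample values, so they are indistinguishable after sampling:

```python
import numpy as np

fs = 500.0                 # sampling rate (MHz, matching the question)
t = np.arange(1024) / fs

x_560 = np.cos(2 * np.pi * 560 * t)   # 7th harmonic of the ~80 MHz fundamental
x_440 = np.cos(2 * np.pi * 440 * t)   # the peak observed in the question
x_60 = np.cos(2 * np.pi * 60 * t)     # the folded baseband alias

print(np.allclose(x_560, x_60), np.allclose(x_440, x_60))  # True True
```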
Unless you are looking for this mixing effect (done, for example, at the digital receiver in a modern MRI scanner), you need to apply a "brick wall filter" with a cutoff frequency of fs/2. It is not uncommon to have filters with a roll-off of 24 dB / octave or higher - in other words, they let "everything through" below the cutoff, and "stop everything" above it.
Data acquisition vendors will often supply filtering solutions with their ADC boards for exactly this reason.
Long way to say: "That's how digitization works". But it's true - that is how digitization works.
Typically, one low-pass filters the signal to below half the sample rate before sampling. Otherwise, after sampling, there is usually no way to separate any aliased high-frequency content (your high-order harmonics) from the useful spectrum below half the sample rate (the Nyquist frequency).
If you don't filter the signal before sampling it, the defect is inherent in the sample vector, not the FFT.

filtering before/after decimating

I deal with a signal sampled at 48000 Hz.
I do not need that much, so I am decimating down to 8000 Hz, i.e. decimation by a factor of 6.
Additionally, I know I need to filter out 50 Hz and its harmonics. I am going to do it with the help of FFT-iFFT procedure, as I really do not know how to design a FIR filter with all its zeros and poles...
My question is:
Shall I filter out the harmonics before decimating, or can I do it afterwards?
Because if I have to filter out the harmonics beforehand, it's going to be insane to remove 24000/50 = 480 harmonics, whereas after decimating the computational load drops considerably (only 4000/50 = 80 harmonics remain below the new Nyquist)!
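For what it's worth, the FFT-iFFT comb removal on the decimated signal might look like this sketch (the 123 Hz test tone, the 1-second record length, and the ±1 Hz notch width are assumptions for illustration):

```python
import numpy as np

fs = 8000                          # rate after decimation
n = 8000                           # assume 1 second of data
t = np.arange(n) / fs

# Synthetic example: a useful 123 Hz component plus 50 Hz hum and harmonics
hum = sum(np.sin(2 * np.pi * 50 * k * t) for k in range(1, 81))
x = np.sin(2 * np.pi * 123 * t) + 0.5 * hum

spec = np.fft.rfft(x)
freqs = np.fft.rfftfreq(n, d=1 / fs)

# Zero every bin within +/-1 Hz of a 50 Hz harmonic: only 80 harmonics
# remain below the new 4 kHz Nyquist
for k in range(1, 81):
    spec[np.abs(freqs - 50 * k) <= 1.0] = 0
cleaned = np.fft.irfft(spec, n)
```

Note that zeroing bins like this implicitly assumes the harmonics sit close to exact bin frequencies; with leakage you would need wider notches or windowing.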