I know that all of these are used to measure sensitivity in accelerometers. It's easy to understand the concept of mV/g, but I couldn't find any useful information about LSB/g and count/g. Is there any relation between them?
Thank you all in advance!
The accelerometer has an analog output in mV. That output is converted by an analog-to-digital converter (ADC), resulting in a number in some range. (E.g., a 12-bit ADC would give a number between 0 and 4095.) LSB/g and count/g give the sensitivity of this digital output: how many steps of the converter's output correspond to one g. LSB (least significant bit) and count are two names for the same unit here.
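To make the relation concrete, here is a minimal sketch, assuming a hypothetical 12-bit ADC with a 3.3 V full-scale input and a 300 mV/g analog sensor (all of these numbers are invented for illustration):

    # Hypothetical numbers: 12-bit ADC, 3.3 V full scale, 300 mV/g sensor.
    adc_bits = 12
    full_scale_mv = 3300.0          # ADC input range in millivolts (assumption)
    sensitivity_mv_per_g = 300.0    # analog sensitivity in mV/g (assumption)

    counts_per_mv = (2 ** adc_bits) / full_scale_mv   # ~1.24 counts per mV
    lsb_per_g = sensitivity_mv_per_g * counts_per_mv  # ~372 LSB/g, i.e. counts/g
    print(lsb_per_g)

So LSB/g (equivalently counts/g) is just mV/g re-expressed in units of the converter's output steps.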
See here for an example.
Is anyone using CrossSim, the crossbar simulator from Sandia National Laboratories? The manual does not explain the input files reset.csv/set.csv used for lookup table generation. I need to know about the rmp values in that file: what is rmp, and how is it calculated?
Or is there any other crossbar-array simulation software that can be used for vector multiplication, etc., mainly for memristor devices?
I am a graduate student studying resistive memory devices for neuromorphic computing.
I am also using the CrossSim simulator (ver. 0.2). Maybe I can help you.
Generally, a memristor is a two-terminal device whose resistance is modulated by applied voltage pulses. If the memristor experiences a voltage higher than its threshold voltage (Vth), its state changes; otherwise, it holds its state.
So we program it with a pulse above Vth and read its state by applying a voltage below Vth.
In the manual there is no specific explanation of what's in the reset.csv/set.csv files. They contain current values acquired experimentally, not calculated values. After the lookup table is generated, those values become conductance values; that's why a read voltage is required in the create_lookup_table.py example, since (conductance) = (current) / (voltage).
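Just to illustrate that conversion, here is a hypothetical Python sketch (the one-current-per-line file layout and the 0.1 V read voltage are my assumptions, not CrossSim's documented format):

    # Hypothetical sketch: turn experimentally measured currents into
    # conductances for a lookup table.
    read_voltage = 0.1  # read voltage in volts, chosen below Vth (assumption)

    with open("set.csv") as f:
        currents = [float(line) for line in f if line.strip()]

    # G = I / V for each measured current
    conductances = [i / read_voltage for i in currents]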
The lookup table exists so that experimental device data can be used for verification when memristors move to hardware. If you just want to simulate algorithmically, you don't need a lookup table; you can do that by adding the following line:
params.numeric_params.update_model = "ANALYTIC"
I hope this is helpful to you. :)
How can OfflineAudioContext.startRendering() output an AudioBuffer with the bit depth of my choice (16 bits or 24 bits)? I know that I can easily set the sample rate of the output with AudioContext.sampleRate, but how do I set the bit depth?
My understanding of audio processing is pretty limited, so perhaps it's not as easy as I think it is.
Edit #1:
Actually, AudioContext.sampleRate is readonly, so if you have an idea on how to set the sample rate of the output, that would be great too.
Edit #2:
I guess the sample rate is inserted after the number of channels in the encoded WAV (in the DataView)
You can't do this directly because WebAudio only works with floating-point values, so you'll have to do it yourself. Basically, take the output from the offline context, multiply every sample by 32768 (for 16 bits) or 8388608 (for 24 bits), and round to an integer. This assumes that the output from the context lies within the range -1 to 1; if not, you'll have to do additional scaling. Finally, you might want to divide the final result by 32768 (or 8388608) to get floating-point numbers back; that depends on what the final application is.
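To make the scaling concrete, here is a minimal sketch of that arithmetic (shown in Python just for the math; the actual Web Audio processing happens in JavaScript):

    # Quantize floating-point samples in [-1, 1) to signed 16-bit integers.
    # The same idea works for 24 bits with a scale of 8388608 (2**23).
    SCALE_16 = 32768  # 2**15

    def quantize(samples, scale=SCALE_16):
        out = []
        for s in samples:
            v = round(s * scale)
            # Clip: +1.0 would otherwise overflow the signed range (max 32767).
            v = max(-scale, min(scale - 1, v))
            out.append(v)
        return out

    print(quantize([0.0, 0.5, -1.0, 0.999]))  # [0, 16384, -32768, 32735]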
For Edit #1, the answer is that when you construct the OfflineAudioContext, you have to specify the sample rate: the constructor takes the number of channels, the buffer length in frames, and the sample rate. Set that last argument to the rate you want. I'm not sure what AudioContext.sampleRate has to do with this.
For Edit #2, there's not enough information to answer since you don't say what DataView is.
In the course of testing an algorithm I computed option prices for random input values using the standard pricing function blsprice implemented in MATLAB's Financial Toolbox.
Surprisingly (at least for me), the function seems to return negative option prices for certain combinations of input values.
As an example take the following:
> [Call,Put]=blsprice(67.6201,170.3190,0.0129,0.80,0.1277)
Call = -7.2942e-15
Put = 100.9502
If I change time to expiration to 0.79 or 0.81, the value becomes non-negative as I would expect.
Did any of you ever experience something similar, and can you come up with a short explanation of why that happens?
I don't know which version of the Financial Toolbox you are using but for me (TB 2007b) it works fine.
When running:
[Call,Put]=blsprice(67.6201,170.3190,0.0129,0.80,0.1277)
I get the following:
Call = 9.3930e-016
Put = 100.9502
So the call value is indeed positive here.
A bit late, but I have come across things like this before. The small negative value can be attributed to numerical rounding and/or truncation error within the routine used to compute the cumulative normal distribution.
As you know, computers are not perfect, and small numerical errors persist in all calculations. In my view, the question one should ask instead is: what is the accuracy of the input parameters being used, and therefore what error tolerance is acceptable for the outputs?
The way I thought about it when I encountered this before: in finance, typical annualized stock return volatility is of the order of 30%, which means mean returns are typically estimated with a standard error of roughly 30%/sqrt(N). That is of the order of +/- 1% assuming two years' worth of daily data (N = 260 x 2 = 520; with much more data you run into the stationarity assumption). On that basis, the answer you got above could be interpreted as zero within the error tolerance.
Also, we typically work to penny/cent accuracy, and again on that basis the answer you had could be interpreted as zero.
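For what it's worth, the effect is easy to reproduce outside the toolbox. Here is a rough Python sketch of the textbook Black-Scholes call formula (not blsprice itself) for the inputs above, with a zero clamp as one pragmatic fix:

    from math import log, sqrt, exp, erf

    def norm_cdf(x):
        # Cumulative normal distribution via the error function
        return 0.5 * (1.0 + erf(x / sqrt(2.0)))

    def bs_call(S, K, r, T, sigma):
        d1 = (log(S / K) + (r + 0.5 * sigma ** 2) * T) / (sigma * sqrt(T))
        d2 = d1 - sigma * sqrt(T)
        return S * norm_cdf(d1) - K * exp(-r * T) * norm_cdf(d2)

    price = bs_call(67.6201, 170.3190, 0.0129, 0.80, 0.1277)
    # Deep out of the money: the true value is essentially 0, and
    # floating-point cancellation can leave a tiny value of either sign.
    print(price, max(price, 0.0))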
Just thought I'd give my 2c. Hope this is helpful in some way if you are still checking for answers!
Information gain = (information before split) - (information after split)
Information gain can be found with the above equation. What I don't understand is what exactly this information gain means. Does it mean how much information is gained or lost by splitting on the given attribute, or something like that?
Link to the answer:
https://stackoverflow.com/a/1859910/740601
Information gain is the reduction in entropy achieved after splitting the data according to an attribute.
IG = Entropy(before split) - Entropy(after split).
See http://en.wikipedia.org/wiki/Information_gain_in_decision_trees
Entropy is a measure of the uncertainty present. By splitting the data, we are trying to reduce the entropy in it and gain information about it.
We want to maximize the information gain by choosing the attribute and split point which reduces the entropy the most.
If entropy = 0, then no further information can be gained from it.
Correctly written, it is:
Information gain = entropy before split - average entropy after split
The difference between entropy and information is the sign: entropy is high if you do not have much information about the data.
The intuition is that of statistical information theory. The rough idea is: how many bits per record do you need to encode the class label assignment? If you have only one class left, you need 0 bits per record. If you have a chaotic data set, you will need 1 bit for every record. And if the class is unbalanced, you could get away with less than that, using a (theoretical!) optimal compression scheme; e.g. by encoding the exceptions only. To match this intuition, you should be using the base 2 logarithm, of course.
A split is considered good, if the branches have lower entropy on average afterwards. Then you have gained information on the class label by splitting the data set. The IG value is the average number of bits of information you gained for predicting the class label.
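As a concrete sketch of that computation (binary labels, base-2 logarithm, plain Python for illustration):

    from math import log2

    def entropy(labels):
        # Shannon entropy in bits of a list of class labels
        n = len(labels)
        return -sum(p * log2(p)
                    for p in (labels.count(c) / n for c in set(labels)))

    def information_gain(parent, children):
        # IG = entropy(parent) - weighted average entropy of the children
        n = len(parent)
        after = sum(len(c) / n * entropy(c) for c in children)
        return entropy(parent) - after

    parent = ['yes', 'yes', 'no', 'no']      # 1 bit of entropy
    split = [['yes', 'yes'], ['no', 'no']]   # each branch is pure
    print(information_gain(parent, split))   # 1.0: a perfect split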
I am simulating a 4-stage digital filter. The stages are CIC and half-band filters, and the OSR (oversampling ratio) is 128. The input is 4 bits and the output is 24 bits, and I am confused about the 24-bit output.
I used MATLAB to generate a 4-bit signed sinusoid input (using the SD tool) and simulated it with ModelSim. The output should also be a sinusoid, but the issue is that the output contains only 4 distinct values.
For a 24-bit output, shouldn't we be able to see up to 2^24 distinct values?
What's the reason for this? Is it due to internal bit width?
I'm not familiar with ModelSim, and I don't understand the filter terminology you used, but: are your filters linear systems? If so, an input at a given frequency will cause an output at the same frequency, though possibly with a different amplitude and phase. If your input signal is a single tone, sampled such that there are four values per cycle, the output will still have four values per cycle. Unless one of the stages performs sample-rate conversion, the system is behaving as expected. As Donnie DeBoer pointed out, the word width of the calculation doesn't matter as long as it can represent the four values of the input.
Again, I am not familiar with the particulars of your system so if one of the stages does indeed perform sample rate conversion, this doesn't apply.
Forgive my lack of filter knowledge, but does one of the filter stages interpolate between the input values? If not, then you're only going to get a maximum of 2^4 output values (based on the input resolution), regardless of your output resolution. Just because you output 24 bits doesn't mean you're going to have 2^24 values... imagine running a digital square wave into a D->A converter. You have all the output resolution in the world, but you still only have 2 values.
It's actually pretty simple:
Even though you have 4 bits of input, your filter coefficients may be more than 4 bits.
Every math stage you do adds bits. If you add two 4-bit values, the answer is a 5-bit number, so that adding 0xF and 0xF doesn't overflow. When you multiply two 4-bit values, you actually need 8 bits of output to hold the answer without the possibility of overflow. By the time all the math is done, your 4-bit input apparently needs 24 bits to hold the maximum possible output.
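A quick sketch to convince yourself of the growth:

    # Bit growth through arithmetic: worst-case 4-bit unsigned values.
    a, b = 0xF, 0xF               # 15 and 15
    print((a + b).bit_length())   # 5 bits: 15 + 15 = 30
    print((a * b).bit_length())   # 8 bits: 15 * 15 = 225
    # Each add or multiply in a filter stage grows the word width like this,
    # which is how a 4-bit input can legitimately need a 24-bit output path.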