I want to convert data (double precision, about 15 significant decimal digits) to another type (quadruple precision, about 34 significant decimal digits). So I used the vpa function like this:
data = sin(2*pi*frequency*time);
quad_data = vpa(data,34);
But the type of the result is sym, not double. And when I checked each cell of the sym data, a 1x1 sym had been created in each cell. I tried to use the fft function on quad_data, but it didn't work. Is there any way to increase the precision of a double from 15 to 34 decimal digits?
The only numeric floating point types that MATLAB currently supports are double, single, and half. Extended precision types can be achieved via the Symbolic Math Toolbox (e.g., vpa) or third-party code (e.g., John D'Errico's File Exchange submission, the HPF High Precision Floating point class). But even then, only a subset of floating point functions will typically be supported. If the function you are trying to use doesn't support the variable type, then you would have to supply your own implementation.
Also, you are not building vpa objects properly in the first place. Typically you would convert the operands to vpa first and then do arithmetic on them. Doing the arithmetic in double precision first, as you are doing with data, and then converting to extended precision vpa just adds garbage digits to the values. E.g., set digits first and then use vpa('pi') to get the full extended precision version of pi as a vpa variable.
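The same principle can be illustrated with Python's built-in decimal module (used here only as a stand-in for vpa, since the effect is language-independent): converting a value that was already computed and rounded in double precision merely promotes its binary rounding error to higher precision.

```python
from decimal import Decimal, getcontext

getcontext().prec = 34  # emulate working with 34 significant digits, like digits(34)

# Converting a value already rounded to double just promotes its binary garbage:
from_double = Decimal(1/3)             # 1/3 computed in double precision first
exact       = Decimal(1) / Decimal(3)  # the division carried out at 34-digit precision

print(from_double)  # correct for ~16 digits, then binary rounding noise
print(exact)        # 34 correct digits
```

The first value agrees with 1/3 for only about 16 digits; everything after that is the exact decimal expansion of the nearest binary64 number, not more digits of 1/3.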
There is a commercial 3rd-party toolbox for this purpose, called the Multiprecision Computing Toolbox for MATLAB.
This tool implements many of the mathematical operations you would expect from double inputs, and according to benchmarks on the website, it's much faster than vpa.
Disclosure: I am not affiliated with the creators of this tool in any way, however I can say that we had a good experience with this tool for one of our lab's projects.
The other suggestion I can give is to do the high-precision arithmetic in another language/environment to which MATLAB provides interfaces (e.g., C, Python, Java), and which has a quad data type implemented.
I'm reading an IMU on an Arduino board with an S-function block in Simulink using double or single data types, though I only need 2 decimal places of precision (i.e. "xyz.ab"). I want to improve performance by changing data types, and I wonder:
is there a way to decrease the precision to 2 decimal places in the S-function block, or by adding/using other conversion blocks/code in Simulink, aside from using the fixed-point tool?
For true fixed point transfer, the Fixed-Point Designer toolbox is the most general answer, as stated in Phil's comment.
However, to avoid toolbox use, you could also devise your own fixed-point integer format and add a block that takes a floating point input and converts it into an integer format (and vice versa on the output).
E.g., if you know the range is -327.68 < var < 327.67, you could just define your value as an int16 divided by 100. In a MATLAB Function block you would then just write
y=int16(u*100.0);
to convert the input to the S-function.
On the output it would be a reversal
y=double(u)/100.0;
(MATLAB Function block code can be avoided by using multiply, divide, and convert blocks.)
However, be mindful of the bits available, and note that the scaling (*, /) operations are done on the floating point side rather than the integer side.
2^(nrOfBits-1)-1 gives the largest value you can represent in a signed type; for the unsigned types uint8/16/32 the maximum is 2^nrOfBits - 1. You then use the scaling to fit the representable integer range onto the floating point range you actually use. The scaled range divided by 2^nrOfBits tells you the resolution (how large the steps are).
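The arithmetic above can be checked with a quick sketch (plain Python here for illustration; the formulas themselves are language-independent). Assuming a signed 16-bit type and a scale factor of 100 counts per unit:

```python
n_bits = 16
scale = 100.0  # counts per engineering unit

max_int = 2**(n_bits - 1) - 1   # 32767, largest signed 16-bit value
min_int = -2**(n_bits - 1)      # -32768

print(max_int / scale)   # ~327.67, upper bound of the representable range
print(min_int / scale)   # ~-327.68, lower bound of the representable range

# Resolution: the scaled range divided by the number of representable codes
resolution = (max_int - min_int + 1) / scale / 2**n_bits
print(resolution)        # ~0.01 units per step
```

This confirms the example earlier: int16 with a scale of 100 covers roughly -327.68 to 327.67 in steps of 0.01.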
You will need to scale the variables correspondingly on the Arduino side as well when you go to an integer interface of this type. (I'm assuming you have access to that code - if not it'd be hard to use any other interface than what is already provided)
Note that a conversion like intXX(doubleVar*scale) truncates toward zero in C-style code (and MATLAB's intXX functions round to nearest, so behavior differs between environments). If you need guaranteed, consistent rounding, include the round function explicitly, e.g.:
int16(round(doubleVar*scale));
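The difference between truncation and rounding is easy to see in any language; here is a minimal Python sketch (int() truncates toward zero, like a C cast):

```python
value = 567.8  # e.g. a sensor reading already multiplied by the scale factor

truncated = int(value)    # truncates toward zero -> 567
rounded   = round(value)  # rounds to nearest     -> 568

print(truncated, rounded)
```

For a value ending in .8, truncation loses almost a whole step; explicit rounding keeps the error to at most half a step.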
You don't need to use a base-10 scale; any scaling and offset can be used, but it's easier to read the numbers manually if you keep to base 10 (i.e., 0.1, 10.0, 100.0, 1000.0, etc.).
As a final note, if the Arduino code interface is floating point (single/double) and can't be changed to an integer type, you will not get any speedup from rounding decimals, since the full floating point value is what will be transferred anyway. Even if you do manage to reduce the data a bit using integers, I suspect this won't give a huge speedup unless you transfer large amounts of data; the interface code will have a comparatively large overhead anyway.
Good luck with your project!
MATLAB by default uses double as the numeric type. I am training a GMM and running out of memory, so I want to change the default numeric type to single (float), which takes half the memory of double. Is it possible?
I know that single(A) converts a double precision array A to single precision, but we need to allocate double precision storage for A first, which runs out of memory. Also, I cannot wrap single() around all my matrix allocations, since various functions in many toolboxes are called which I cannot change manually.
So is there a way that calling zeros(n) will allocate a matrix of floats by default instead of double ?
No, there is currently no way to change the default numeric type to float / single. See these informative posts on MathWorks forums:
http://www.mathworks.com/matlabcentral/answers/8727-single-precision-by-default-lots-of-auxiliary-variables-to-cast
http://www.mathworks.com/matlabcentral/answers/9591-is-there-a-way-to-change-matlab-defaults-so-that-all-workspace-floating-point-values-to-be-stored-i
Also, quoting John D'Errico on the first link I referenced - a formidable and legendary MATLAB expert:
This is not possible in MATLAB. Anyway, it is rarely a good idea to work in single. It is actually slower in many cases anyway. The memory saved is hardly worth it compared to the risk of the loss in precision. If you absolutely must, use single on only the largest arrays.
As such, you should probably consider reformulating your algorithm if you are using so much memory. If you are solving linear systems that are quite large and there are many zero coefficients, consider using sparse to reduce your memory requirements.
Besides which, doing this would be dangerous because there may be functions in other toolboxes that rely on the fact that double type allocation of matrices is assumed and spontaneously changing these to single may have unintended consequences.
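As an aside, the factor-of-two saving is just the element size: double is IEEE 754 binary64 (8 bytes) and single is binary32 (4 bytes). Python's array module stores the same raw IEEE 754 types, so it can illustrate the footprint difference:

```python
from array import array

# IEEE 754 element sizes: 'd' is binary64 (double), 'f' is binary32 (single)
print(array('d').itemsize)  # 8 bytes per element
print(array('f').itemsize)  # 4 bytes per element

# So a 10-million-element vector needs ~80 MB as double but ~40 MB as single.
```

That halving is real, but as the quoted advice notes, it often is not worth the precision risk except on your very largest arrays.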
As @rayryeng said, there's no way in MATLAB to "change the default numeric type" to single. I'm not even entirely sure what that would mean.
However, you asked a specific question as well:
So is there a way that calling zeros(n) will allocate a matrix of floats by default instead of double?
Yes - you can use zeros(n, 'single'). That will give you an array of zeros of type single. zeros(n) is just a shorthand for zeros(n, 'double'), and you can ask for any other numeric type you want as well, such as uint8 or int64. The other array creation functions such as ones, rand, randn, NaN, inf, and eye support similar syntaxes.
Note that operations carried out on arrays of single type may not always return outputs of type single (so you may need to subsequently cast them to single), and they may use intermediate arrays that are not of type single (so you may not always get all the memory advantages you might hope for). For example, many functions in Image Processing Toolbox will accept inputs of type single, but will then internally convert to double in order to carry out the operations. The functions from Statistics Toolbox to fit GM models do appear to accept inputs of type single, but I don't know what they do internally.
I encountered a problem while using MATLAB. I'm doing some computations on OTC instruments (pricing, constructing a discount curve, etc.), first in Excel and then in MATLAB (for comparison). While I'm 100% sure the computations in Excel are good (compared to market data), it seems that MATLAB is producing some differences (e.g. -4.18E-05). The MATLAB algorithm looks fine. I was wondering whether it is because MATLAB is rounding some computations; I heard a little bit about that. I tried to convert the double numbers with the vpa() function, but it doesn't seem to work on doubles the way I expected. Any other ideas?
Excel uses 64 bit double precision floating point numbers compliant with IEEE 754 floating point specification.
The way that Excel treats results like =1/5 and appears to compute them exactly (despite this example not being a dyadic rational) is purely down to formatting. It handles =1/3 + 1/3 + 1/3 similarly. It's quite smart really if you think about it: the implementers of Excel had no real choice given that the average Excel user is not au fait with the finer points of floating point arithmetic and would simply scorn a spreadsheet package that "couldn't even get 1/5 correct".
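The same display-versus-storage distinction exists in every IEEE 754 environment; here is a quick Python sketch of it (the behavior is identical in MATLAB and Excel's underlying arithmetic):

```python
# 1/5 is not representable exactly in binary, but the default display hides that:
x = 1/5
print(x)            # 0.2 -- the shortest string that round-trips to this double
print(f"{x:.20f}")  # reveals that the stored binary value is not exactly 0.2

# 1/3 + 1/3 + 1/3 happens to round back to exactly 1.0 in binary64,
# while the famous 0.1 + 0.2 does not equal 0.3:
print(1/3 + 1/3 + 1/3 == 1.0)
print(0.1 + 0.2 == 0.3)
```

So when Excel shows 0.2 or 1, it is formatting, not exact arithmetic; the binary value underneath carries the usual rounding error.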
That all said, you're very unlucky if you get a difference of -4.18E-05 between the two systems. That's because double precision floating point is accurate to around 15 significant figures. Your algorithms would have to be implemented very poorly indeed for the error terms to bubble up to that magnitude if you're consistently using double precision floating point types.
Most likely (and I too work in finance), the difference will be in the way you're interpolating your discount curve. That's where I would look first if I were you.
Given the value of the error compared to the default format settings, this is almost certainly because of using the default format short and comparing the output on the command line to the real value. For example:
x = 5.4444418
Output:
x =
5.4444
Then:
x-5.4444
Output:
ans =
4.1800e-05
The value stored in x remains at 5.4444418, it is only the measure output to the command line that changes.
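For anyone without MATLAB to hand, the same experiment can be sketched in Python (format short shows roughly 5 significant digits, which %.5g approximates):

```python
x = 5.4444418

# MATLAB's default "format short" prints ~5 significant digits; a Python analog:
print(f"{x:.5g}")      # 5.4444 -- what you would see at the prompt

# The stored value is unchanged; subtracting the displayed value exposes
# the hidden digits:
print(x - 5.4444)      # about 4.18e-05, matching the "mystery" difference
```

The 4.18e-05 discrepancy is exactly the part of the number that the short display format rounds away.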
We can write a simple Rational Number class using two integers representing A/B with B != 0.
If we want to represent an irrational number class (storing and computing), the first thing that came to my mind is to use floating point, which means using the IEEE 754 standard (binary fractions). This is because an irrational number must be approximated.
Is there another way to write irrational number class other than using binary fraction (whether they conserve memory space or not) ?
I studied jsbeuno's solution using Python: Irrational number representation in any programming language?
He's still using the built-in floating point to store.
This is not homework.
Thank you for your time.
By a cardinality argument, there are vastly more irrational numbers than rational ones (and the number of IEEE 754 floating point numbers is finite: at most 2^64 for a 64-bit format).
You can represent numbers with something else than fractions (e.g. logarithmically).
jsbeuno is storing the number as a base and a radix and using those when doing calculations with other irrational numbers; he's only using the float representation for output.
If you want to get fancier, you can define the base and the radix as rational numbers (with two integers) as described above, or make them themselves irrational numbers.
To make something thoroughly useful, though, you'll end up replicating a symbolic math package.
You can always use symbolic math, where items are stored exactly as they are and calculations are deferred until they can be performed with precision above some threshold.
For example, say you performed two operations on a rational number like 2: one to take the square root and then one to square that result. With limited precision, you may get something like:
(√2)²
= 1.414213562²
= 1.999999999
However, storing symbolic math would allow you to store the result of √2 as √2 rather than an approximation of it, then realise that (√x)² is equivalent to x, removing the possibility of error.
Now, that obviously involves a more complicated encoding than simple IEEE 754, but it's not impossible to achieve.
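A toy sketch of the idea in Python (the class and method names here are hypothetical, invented for illustration): store √x symbolically as its exact rational radicand, defer evaluation, and only fall back to floating point on demand.

```python
from fractions import Fraction

class Sqrt:
    """Toy symbolic square root: keeps the radicand exact, defers evaluation."""
    def __init__(self, radicand):
        self.radicand = Fraction(radicand)

    def squared(self):
        # (sqrt(x))**2 == x -- recovered exactly; no rounding ever happens
        return self.radicand

    def approx(self):
        # Only here do we fall back to floating point
        return float(self.radicand) ** 0.5

r = Sqrt(2)
print(r.squared())        # 2 -- exact, unlike squaring 1.414213562
print(r.approx() ** 2)    # close to 2, but carries floating point error
```

A real symbolic package generalizes this to arbitrary expression trees with simplification rules, which is exactly the "replicating a symbolic math package" point made above.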
I tried to assign a very small number to a double value, like so:
double verySmall = 0.000000001;
9 fractional digits. For some reason, when I multiply this value by 10, I get something like 0.000000007. I vaguely remember there being problems with writing small numbers like this in plain text in source code. Do I have to wrap it in some function or directive in order to feed it correctly to the compiler? Or is it fine to type such small numbers in as text?
The problem is with floating point arithmetic, not with writing literals in source code: floating point is not designed to be exact. The best way around it is to avoid the built-in double: use integers only (if possible) with power-of-10 coefficients, sum everything up, and display the final figure after rounding.
Standard floating point numbers are not stored in a perfect format; they're stored in a format that's fairly compact and fairly easy to perform math on. They are imprecise at surprisingly low levels of precision. But fast. More here.
If you're dealing with very small numbers, you'll want to see if Objective-C or Cocoa provides something analogous to the java.math.BigDecimal class in Java, which exists precisely for dealing with numbers where precision is more important than speed. If there isn't one, you may need to port it (the source to BigDecimal is available and fairly straightforward).
EDIT: iKenndac points out the NSDecimalNumber class, which is the analogue for java.math.BigDecimal. No port required.
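Python's standard decimal module plays the same role as java.math.BigDecimal and NSDecimalNumber, and makes the contrast easy to see: a binary double stores the nearest binary64 value to the literal you wrote, while a decimal type stores exactly the base-10 value you wrote.

```python
from decimal import Decimal

# Binary double: the literal 0.000000001 is stored as the nearest binary64
# value, which is not exactly 1e-9 (the extra digits appear when forced out):
print(f"{0.000000001:.25e}")

# Decimal type: the value written is the value stored, and base-10
# arithmetic on it is exact:
tiny = Decimal("0.000000001")
print(tiny * 10 == Decimal("1e-8"))   # exact, no drift
```

Note that the decimal value must be constructed from a string; Decimal(0.000000001) would faithfully capture the already-imprecise binary double instead.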
As usual, you need to read material like this in order to learn more about how floating-point numbers work on computers. You cannot expect to be able to store any arbitrary fraction with perfect results, just as you can't expect to store any arbitrary integer. There are a limited number of bits at the bottom.