Matlab: Obtaining an exact decimal number from a CSV file

I have a CSV file where I have decimal numbers like 1.1, 1.10, 1.100. When I load the file in Matlab using importdata or even textscan, it shows all these numbers the same, as 1.1, discarding the zeros at the end. But they have different meanings for me.
Is there any way to recover them?
Thanks,
Sam

You're saying you mean something different by 1.1 than by 1.100000? Mathematically, they are the same number (I sincerely hope you know this already).
So if your "numbers" don't have the same meaning as numbers in any of the strictly defined mathematical number systems (which MATLAB normally assumes is the case), you should import them as strings (%s) rather than numbers (%d, %f, etc.), and process them as such.
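For example, a minimal textscan sketch, assuming a file called data.csv with comma-separated values (the file name and the single %s column are placeholders; adjust the format string and delimiter to your file):

% Read every field as text so the trailing zeros survive
fid = fopen('data.csv');                         % hypothetical file name
C = textscan(fid, '%s', 'Delimiter', ',');
fclose(fid);
vals = C{1};                                     % e.g. {'1.1'; '1.10'; '1.100'}
% If the extra meaning is the number of decimal places, recover it from the text
% (this assumes every field actually contains a decimal point)
nDecimals = cellfun(@(s) numel(s) - strfind(s, '.'), vals);
% Convert to numeric only where the value itself is needed
nums = str2double(vals);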

Related

kdb - issue with reading floating numbers

I am reading covariance data from flat files. Because the floating-point numbers are not read in full, the resulting covariance matrix does not satisfy the positive semi-definite requirement.
For instance, this is one of the inputs from the raw text: -5.801050672E-01
When I read this into kdb and cast it with F, it results in -0.50810507. When I do this for all the entries and check the covariance, it unfortunately does not satisfy the PSD constraints. The other hack I have been using is to add a small amount of noise via the identity matrix…
Apart from this hack, is there a way to read the above data into a proper floating-point number up to the 9th digit? I tried \P and .Q.f, but these only seem to affect the display.
Thank you
Sorry, this does not seem to be a kdb issue after all. I was exporting the data into different software, and the floating-point precision was lost during that process. Thanks for the pointer.

how to change variable type to accept numbers more than 100 quintillion

I am a beginner in Scratch, working on a simple program that speaks out numbers in multiples of 10; the highest number it will speak using the text-to-speech tool is 100 quintillion. After that it starts speaking in exponential format.
How can I change the datatype of the variable so that it accepts numbers greater than 100 quintillion and won't switch to exponential format?
How can I change the datatype of the variable
Well, first of all, you can't change the datatype of a variable in Scratch
accepts numbers greater than 100 quintillion and won't switch to exponential format
That is simply how Scratch works, and there is no workaround I could find to fix it, but you could try searching the Scratch Forums more deeply.
My first thought would be to use a base64 counting system as opposed to base 10, which humans use.
You are currently storing your number in base 10, where you are limited to the digits 0-9; whenever you use up all of these digits, you must use another character position. However, since Scratch doesn't restrict variables to only numbers, we can make use of non-numerical characters too.
Base 64 uses the capital letters A to Z, then the lowercase letters a to z, then the digits 0-9, and finally the symbols + and /. Luckily, Scratch supports all of these characters!

Read huge binary numbers in Scala

I have 2 huge numbers written to a .txt file in binary format. Both of them have about 800 digits. I want to read them from this file in Scala, but I can't find a suitable type that can hold all the digits. It seems that even BigInt cuts off some of the digits.
The task itself is to add two numbers in binary representation and count the zeroes/ones. I wanted to operate on Strings, so it would be easier to convert from binary to the decimal system.
So I would be grateful for any advice on which type is better to use in Scala for such numbers.

How to tell if two numbers are really different or they are actually the same due to floating point error

For example,
0.168033639538270
and
0.168033639538270
are two double type numbers that are from two different calculations (some further calculations from the eigenvalues of a matrix).
But they are treated as different by MATLAB (by unique or ==). How do I know whether MATLAB treats them as different due to floating-point error (eps = 2.220446049250313e-16), or whether they are actually different (the digits beyond the first 15 are not the same, but MATLAB just will not display them)? Sometimes MATLAB treats two numbers with the same displayed value as the same, but sometimes as different, so I want to know if they are really different.
You can print a formatted version of each number at the required precision using sprintf, and then compare the two strings using strcmp.
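A minimal sketch of that check, plus a tolerance-based comparison (the values of a and b here are placeholders for your two computed results):

a = 0.168033639538270;                 % first computed value (placeholder)
b = 0.168033639538271;                 % second computed value (placeholder)
% Show the stored digits, well beyond what the default display prints
sprintf('%.20f', a)
sprintf('%.20f', b)
% Compare at a chosen precision by formatting both and using strcmp
samePrinted = strcmp(sprintf('%.15g', a), sprintf('%.15g', b));
% Or compare numerically with a tolerance scaled to the values themselves
tol  = 10 * eps(max(abs(a), abs(b)));
sameWithinTol = abs(a - b) <= tol;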

Matlab vs Excel differences in computations

I encountered a problem while using Matlab. I'm doing some computations concerning OTC instruments (pricing, constructing a discount curve, etc.), first in Excel and then in Matlab (for comparison). While I'm 100% sure the computations in Excel are good (compared to market data), it seems that Matlab is producing some differences (i.e. -4.18E-05). The Matlab algorithm looks fine. I was wondering whether it is because Matlab is rounding some computations; I have heard a little bit about that. I'm trying to convert the double numbers using the function vpa(), but it does not seem to work with doubles. Any other ideas?
Excel uses 64-bit double-precision floating-point numbers compliant with the IEEE 754 floating-point specification.
The way that Excel treats results like =1/5 and appears to compute them exactly (despite this example not being a dyadic rational) is purely down to formatting. It handles =1/3 + 1/3 + 1/3 similarly. It's quite smart really if you think about it: the implementers of Excel had no real choice given that the average Excel user is not au fait with the finer points of floating point arithmetic and would simply scorn a spreadsheet package that "couldn't even get 1/5 correct".
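MATLAB stores its doubles the same way; a quick way to see the underlying binary approximation is to print more digits than the default display shows:

fprintf('%.20f\n', 1/5)   % prints 0.20000000000000001110, not exactly 0.2
0.1 + 0.2 == 0.3          % returns 0 (false): the two rounded results differ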
That all said, you're very unlucky if you get a difference of -4.18E-05 between the two systems. That's because double floating point is accurate to around 15 significant figures. Your algorithms would have to be implemented very poorly indeed for the error terms to bubble up to that magnitude if you're consistently using double-precision floating-point types.
Most likely (and I too work in finance), the difference will be in the way you're interpolating your discount curve. That's where I would look first if I were you.
Given the value of the error compared to the default format settings, this is almost certainly because of using the default format short and comparing the output on the command line to the real value.
x = 5.4444418
Output:
x =
    5.4444
Then:
x - 5.4444
Output:
ans =
   4.1800e-05
The value stored in x remains 5.4444418; it is only the value displayed on the command line that changes.
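To see the stored value in full, switch the display format (or print with an explicit format string), for example:

format long
x
Output:
x =
   5.444441800000000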