I have a dialog node in my Watson Assistant that takes a number provided by the user and then provides a response with that number used in a division like so:
<? (1000 * ($Number / 6.00)) ?> Grams
The problem is that this returns a message like the following (assuming the number was 2):
333.3333333333333 Grams
Removing the .00 from the 6.00 results in:
0 Grams
Is there any way I can round the result of the division? Ideally I would like to round it to the nearest multiple of ten but rounding it to 2 decimal places would also be acceptable.
I want to get a final message like so:
333.33 Grams
You could use one of the round() methods provided by java.lang.Math. Watson Assistant allows those methods in its expression processing. Depending on how many fractional digits you need, you may have to "massage" the numbers. There are round() methods for float and double. You could first apply toDouble() to make sure the number is treated as a floating-point value rather than an integer.
Note that there is a discussion on how to perform exact and efficient rounding in Java.
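For illustration, here is the scaling trick in plain Java; this is only a sketch, and in Watson Assistant the equivalent arithmetic would live inside the <? ... ?> expression:

public class RoundDemo {
    public static void main(String[] args) {
        double number = 2.0;                      // stands in for $Number
        double grams = 1000 * (number / 6.00);    // 333.3333333333333

        // Round to 2 decimal places: scale up, round, scale back down.
        double twoDecimals = Math.round(grams * 100.0) / 100.0;
        System.out.println(twoDecimals + " Grams"); // 333.33 Grams

        // Round to the nearest multiple of ten.
        long nearestTen = Math.round(grams / 10.0) * 10;
        System.out.println(nearestTen + " Grams");  // 330 Grams
    }
}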
Related
Consider two numbers like 1 and 0.99; I want to subtract one from the other in C#:
float s = 0.99f - 1f;
Console.WriteLine(s.ToString());
The result is: -0.0099999
What can I do so that the result equals -0.01?
Try this:
decimal d = 0.99m - 1m;
Console.WriteLine(Math.Round(d, 2));
Computers can't represent most fractional numbers exactly in binary floating point. They can only approximate them, which is why you're seeing -0.0099999 instead of the expected -0.01.
For anything that requires exact decimal results, you'd typically use an arbitrary-precision type and round where appropriate. The most common rounding for currency is bankers' rounding, as it doesn't skew results heavily in either direction.
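As a sketch of that idea in Java (C#'s decimal and Math.Round offer the same concepts), BigDecimal gives exact decimal arithmetic and RoundingMode.HALF_EVEN gives bankers' rounding:

import java.math.BigDecimal;
import java.math.RoundingMode;

public class BankersRounding {
    public static void main(String[] args) {
        // Exact decimal arithmetic: no binary representation error here.
        BigDecimal s = new BigDecimal("0.99").subtract(BigDecimal.ONE);
        System.out.println(s); // -0.01

        // Bankers' rounding (round half to even) doesn't skew totals:
        System.out.println(new BigDecimal("2.5").setScale(0, RoundingMode.HALF_EVEN)); // 2
        System.out.println(new BigDecimal("3.5").setScale(0, RoundingMode.HALF_EVEN)); // 4
    }
}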
See also:
What is the best data type to use for money in c#?
http://wiki.c2.com/?BankersRounding
Floating-point numbers are often an approximation. There is a whole field of study about how to use floating-point numbers responsibly in computers, and believe me, it is not trivial!
What most programmers do is live with it and make sure their code is 'robust' against the small deviations you get from using floating point numbers.
Wrong:
if (my_float == -0.01)
Right:
if (my_float >= -0.01001 && my_float <= -0.00999)
(The numbers are just an example; note that the lower bound -0.01001 must come first, since it is the smaller value.)
If you need exact numbers you can, e.g., use integers. You can use rounding, but don't round halfway through your calculations, as you are likely to make the result less reliable. Rounding is normally done when you print the end result. After all, as a good engineer you should know how many digits are significant at the end.
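A minimal Java sketch of that robust-comparison idea (the tolerance 1e-9 is an arbitrary choice; pick one suited to your precision needs):

public class EpsilonCompare {
    static final double EPS = 1e-9; // application-specific tolerance

    static boolean approxEquals(double a, double b) {
        return Math.abs(a - b) <= EPS;
    }

    public static void main(String[] args) {
        double s = 0.99 - 1.0;                       // -0.010000000000000009
        System.out.println(s == -0.01);              // false
        System.out.println(approxEquals(s, -0.01));  // true
    }
}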
I am having a problem with a simple division from two integers. I need it to be as accurate as possible, but for some reason the double type is working strange.
For example, if I execute the following code:
double res = (29970.0/1000.0);
The result is 29.969999999999999, when it should be 29.970.
Any idea why this is happening?
Thanks
Any idea why this is happening?
Because the double representation is finite. For example, the IEEE 754 double-precision format has 52 bits for the fraction, so not all real numbers are covered and some values cannot be perfectly precise. In your case the result is about 10^-15 away from the ideal.
I need it to be as accurate as possible
You shouldn't use doubles, then. In Java, for example, you would use BigDecimal instead (most languages provide a similar facility). double operations are intrinsically inaccurate to some degree. This is due to the internal representation of floating point numbers.
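A quick Java sketch of the difference; new BigDecimal(double) reveals what the double really holds, while the string constructor keeps the decimal value exact:

import java.math.BigDecimal;

public class ExactDivision {
    public static void main(String[] args) {
        double d = 29970.0 / 1000.0;
        // The exact binary value the double stores: roughly 29.96999999999999886...
        System.out.println(new BigDecimal(d));

        // Exact decimal arithmetic instead:
        System.out.println(new BigDecimal("29970").divide(new BigDecimal("1000"))); // 29.97
    }
}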
Floating-point numbers of type float and double are stored in binary format, so most numbers can't have exact decimal values; those values are instead quantized. If you hypothetically had a number type with only 2 fraction bits, you would only be able to represent multiples of the 2^-2 quantum: 0.00, 0.25, 0.50, 0.75, and nothing in between.
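To see this quantization directly, here is a small Java sketch; new BigDecimal(double) prints the exact value the double holds:

import java.math.BigDecimal;

public class Quantized {
    public static void main(String[] args) {
        // 0.1 has no finite binary expansion, so the double stores the
        // nearest representable value instead:
        System.out.println(new BigDecimal(0.1));
        // 0.1000000000000000055511151231257827021181583404541015625
    }
}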
I need it to be as accurate as possible
There is no silver bullet, but if you want only basic arithmetic operations (which map ℚ to ℚ), and you REALLY want exact results, then your best bet is a rational type composed of two unlimited-precision integers (a.k.a. BigInteger, BigInt, etc.). But even then, memory is not infinite, and you must think about it.
For the rest of the question, please read about fixed-size floating-point numbers; there are plenty of good sources.
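A minimal sketch of such a rational type in Java, built on BigInteger (the names are illustrative, not a standard API; fractions are reduced to lowest terms on construction):

import java.math.BigInteger;

final class Rational {
    final BigInteger num, den;

    Rational(BigInteger num, BigInteger den) {
        if (den.signum() == 0) throw new ArithmeticException("zero denominator");
        BigInteger g = num.gcd(den);
        if (den.signum() < 0) g = g.negate(); // keep the denominator positive
        this.num = num.divide(g);
        this.den = den.divide(g);
    }

    Rational add(Rational o) {
        return new Rational(num.multiply(o.den).add(o.num.multiply(den)),
                            den.multiply(o.den));
    }

    Rational multiply(Rational o) {
        return new Rational(num.multiply(o.num), den.multiply(o.den));
    }

    @Override public String toString() { return num + "/" + den; }

    public static void main(String[] args) {
        Rational half  = new Rational(BigInteger.ONE, BigInteger.valueOf(2));
        Rational third = new Rational(BigInteger.ONE, BigInteger.valueOf(3));
        System.out.println(half.add(third));      // 5/6, exactly
        System.out.println(half.multiply(third)); // 1/6, exactly
    }
}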
I have some inputs on my site representing floating-point numbers with up to ten precision digits (in decimal). At some point, in the client-side validation code, I need to compare a couple of those values to see if they are equal or not, and here, as you would expect, the intrinsics of IEEE 754 make that simple check fail with things like (2.0000000000==2.0000000001) = true.
I could break the floating-point number into two longs, one for each side of the dot, and do my comparisons manually, but it looks so ugly!
Is there any decent JavaScript library to handle arbitrary-precision (or at least guaranteed-precision) floating-point numbers?
Thanks in advance!
PS: A GWT based solution has a ++
There is the GWT-MATH library at http://code.google.com/p/gwt-math/.
However, I warn you, it's a GWT JSNI overlay of an automated Java-to-JavaScript conversion of java.math.BigDecimal (actually the old com.ibm.math.BigDecimal).
It works, but speedy it is not. (Nor lean: it will add a good 70k to your project.)
At my workplace, we are working on a simple fixed-point decimal, but nothing is worth releasing yet. :(
Use an arbitrary precision integer library such as silentmatt’s javascript-biginteger, which can store and calculate with integers of any arbitrary size.
Since you want ten decimal places, you’ll need to store the value n as n×10^10. For example, store 1 as 10000000000 (ten zeroes), 1.5 as 15000000000 (nine zeroes), etc. To display the value to the user, simply place a decimal point in front of the tenth-last character (and then cut off any trailing zeroes if you want).
Alternatively you could store a numerator and a denominator as bigintegers, which would then allow you arbitrarily precise fractional values (but beware – fractional values tend to get very big very quickly).
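The scheme is language-agnostic; here is the same idea sketched in Java with BigInteger (non-negative values only, for brevity):

import java.math.BigInteger;

public class ScaledFixedPoint {
    static final int PLACES = 10; // ten decimal places, per the scheme above

    public static void main(String[] args) {
        BigInteger one  = new BigInteger("10000000000"); // 1.0 stored as 1 * 10^10
        BigInteger half = new BigInteger("5000000000");  // 0.5 stored as 0.5 * 10^10

        BigInteger sum = one.add(half);                  // exact: represents 1.5
        System.out.println(format(sum));                 // 1.5000000000
    }

    // Place a decimal point before the tenth-last digit.
    static String format(BigInteger scaled) {
        StringBuilder s = new StringBuilder(scaled.toString());
        while (s.length() <= PLACES) s.insert(0, '0');   // pad values below 1
        s.insert(s.length() - PLACES, '.');
        return s.toString();
    }
}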
I have following code:
float totalSpent;
int intBudget;
float moneyLeft;
totalSpent += Amount;
moneyLeft = intBudget - totalSpent;
And this is how it looks in debugger: http://www.braginski.com/math.tiff
Why is moneyLeft, as calculated by the code above, off by .02 compared to the same expression evaluated in the debugger?
The expression window is correct, yet the code above produces a result that is wrong by .02. It only happens for very large numbers (yet way below the int limit).
thanks
A single-precision float has 23 bits of precision. That means that every calculation is rounded to 23 binary digits. This means that if you have a computation that, say, adds a very small number to a very large number, rounding may result in strange results.
Imagine that you are doing decimal math in scientific notation by hand, under the rule that you may only have four significant figures. Let's say I ask you to write twelve in scientific notation, with four significant figures. Remembering junior high school, you write:
1.200 × 10^1
Now I say compute the square of 12, and then add 0.5. That is easy enough:
1.440 × 10^2 + 0.005 × 10^2 = 1.445 × 10^2
How about twelve cubed plus 0.75:
1.728 × 10^3 + 0.00075 × 10^3 = 1.72875 × 10^3
But remember, I only gave you room for four significant digits, so you must round; then we get:
1.728 × 10^3 + 7.5 × 10^-1 = 1.729 × 10^3
See? The lack of precision can make the computation come out with unexpected results.
In your example, you've got 999999 in a calculation where you're trying to be precise to 0.01. log2(999999) = 19.93 and log2(0.01) = -6.64. The difference is more than 23; therefore you would need more than 23 binary digits to perform this calculation accurately.
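You can watch this happen with single-precision floats in a couple of lines of Java (the effect is the same in the poster's code):

public class FloatGap {
    public static void main(String[] args) {
        float big = 999999.0f;
        // Near 999999 the gap between adjacent floats is 0.0625,
        // so adding 0.01 is rounded away entirely:
        System.out.println(big + 0.01f == big); // true
    }
}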
Because floating-point mathematics rounds off precision by its very nature, it is usually a bad choice for currency computation, where you must be accurate to the last cent. But are you really concerned with fractions of a cent in your application? If not, then why not do away with the decimal point altogether, and simply store cents (instead of dollars) in a 64-bit integer? 2^64¢ is more than the GDP of the entire planet.
Floating point will always produce strange results with money-type calculations.
The golden rule is that floating point is good for things you measure, like litres, yards, light-years, and bushels, but not for things you count, like sheep, beans, and buttons.
Most money calculations are to do with counting pennies, so use integer math and you won't get the strange results. Either use a fixed-decimal arithmetic library (which would probably be overkill on an iPhone) or store your amounts as whole numbers of cents and only convert to dollars and cents on display, as sketched below.
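A tiny Java sketch of that whole-cents approach (the amounts are made up for illustration):

public class CentsMath {
    public static void main(String[] args) {
        // Store money as whole cents in a 64-bit integer:
        long budgetCents = 100_000_00L;  // $100,000.00
        long spentCents  =  99_999_99L;  // $99,999.99

        long leftCents = budgetCents - spentCents; // exactly 1 cent
        System.out.printf("$%d.%02d%n", leftCents / 100, leftCents % 100);
        // prints $0.01, with no floating-point drift
    }
}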
Is it possible to store a fraction like 3/6 in a variable of some sort?
When I try this it only stores the numbers before the /. I know that I can use 2 variables and divide them, but the input is from a single text field. Is this at all possible?
I want to do this because I need to convert the fractions to decimal odds.
A bonus question ;) - Is there an easy way to convert a decimal value to a fraction? Thanks..
Well in short, there is no true way to extract the original fraction out of a decimal.
Example: take 5/10
you will get 0.5
now, 0.5 also translates back to 1/2, 2/4, 3/6, etc.
Your best bet is to store each integer separately, and perform the calculation later on.
The best thing to do is to implement a fraction class (or rational number class). Normally it would take a numerator and denominator and be able to provide a double, and do basic math with other fraction objects. It should also be able to parse and format fractions.
Rational Arithmetic on Rosetta Code looks like something good to start with.
I'm afraid there aren't any easy answers for you on this. For creating the fraction, you'll have to split the text field on the '/', convert the two halves to doubles, and divide them out. As for converting it back to a fraction, you'll have to crack open a math textbook and figure it out. (Even worse, a double is not actually precise: you may think it has 0.1 in it, but it really has 0.09999999999999998726 or something like that, so you'll have to choose a precision and go for it, or write some sort of fraction class that's based on a pair of integers.)
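A sketch of that splitting approach in Java (parseFraction is a hypothetical helper, not a standard API):

public class FractionParse {
    // Split the field on '/' and divide the two halves.
    static double parseFraction(String text) {
        String[] parts = text.trim().split("/");
        double num = Double.parseDouble(parts[0].trim());
        double den = parts.length > 1 ? Double.parseDouble(parts[1].trim()) : 1.0;
        return num / den;
    }

    public static void main(String[] args) {
        System.out.println(parseFraction("3/6")); // 0.5, usable as decimal odds
    }
}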
The method, as has been said, is to store the numerator and denominator, much in the way you can write it on paper.
For C, use the GNU Multiple Precision Arithmetic Library and look for 'rational' in the docs.
Is there an easy way to convert a decimal value to a fraction?
If you limit your decimal values to a certain number of decimal points you could create a lookup table.
0.3333, 1/3
0.6666, 2/3
0.0625, 1/16
0.1250, 1/8
0.2500, 1/4
0.5000, 1/2
0.7500, 3/4
etc...
So if the user inputs 0.5, you pad it with 0's until you get 4 decimal places. You would then use the lookup table to return "1/2". The lookup table should probably be a dictionary of sorts.
It wouldn't be too difficult to do estimating either. For example, if the user entered 0.0624 you could easily select the value in the table closest to that decimal. In this case it would return "1/16."
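A sketch of that table as a dictionary in Java (the entries and the 4-decimal-place choice mirror the list above):

import java.util.LinkedHashMap;
import java.util.Locale;
import java.util.Map;

public class FractionLookup {
    static final Map<String, String> TABLE = new LinkedHashMap<>();
    static {
        TABLE.put("0.3333", "1/3");
        TABLE.put("0.6666", "2/3");
        TABLE.put("0.0625", "1/16");
        TABLE.put("0.1250", "1/8");
        TABLE.put("0.2500", "1/4");
        TABLE.put("0.5000", "1/2");
        TABLE.put("0.7500", "3/4");
    }

    static String toFraction(double value) {
        // Pad/round the input to exactly 4 decimal places, then look it up.
        String key = String.format(Locale.ROOT, "%.4f", value);
        return TABLE.getOrDefault(key, "no match");
    }

    public static void main(String[] args) {
        System.out.println(toFraction(0.5));   // 1/2
        System.out.println(toFraction(0.125)); // 1/8
    }
}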
Don't let typing/entering the finite set of decimal/fraction pairs scare you (it's really not that large, depending on the precision you choose).
If all else fails, perhaps a Google search would reveal a library that does this sort of thing for you.