Let me start by saying that I'm new to Flutter.
I would like to compute 50 / 2.
I tried it in DartPad:
print((50 / 2).toString()); // prints 25
Then I tried a Flutter debug build running on a Pixel 4 emulator:
print((50 / 2).toString()); // prints 25.0
Why is the ".0" returned?
Did I do something wrong?
Is this normal behavior, some kind of type conversion?
How can I get the result without the ".0"?
ps. This is one case, but I'm also asking about more complex situations where, instead of an exact division, I might divide two variables (which may be doubles rather than ints) and/or perform other operations. I've already looked at things like toStringAsPrecision; it works for this single case, but it mangles the string when the value has real decimals.
ps2. The only solution that comes to mind is to replace .toString() with a custom extension method that strips superfluous zeros (including the decimal point when nothing follows it).
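For illustration, something along these lines (a rough sketch; toTrimmedString is just a placeholder name I made up):
extension TrimmedNum on num {
  // Drop superfluous trailing zeros, and the decimal point if nothing is left after it.
  String toTrimmedString() {
    final s = toString();
    if (!s.contains('.')) return s;
    return s.replaceFirst(RegExp(r'\.?0+$'), '');
  }
}

void main() {
  print((50 / 2).toTrimmedString()); // 25
  print((7 / 2).toTrimmedString());  // 3.5
}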
If you want to skip the .0, you should use truncating division: a ~/ b. The answer is explained here: How to do Integer division in Dart?
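For example (note that ~/ returns an int, so it only fits cases where you want the truncated quotient):
print(50 ~/ 2); // 25
print(7 ~/ 2);  // 3 (truncated, not rounded)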
After browsing through ReScript's API, it seems like there is no function that compares 2 strings and returns a boolean. The best option is localeCompare, but it behaves somewhat differently from JS's ==. Why does localeCompare return a float instead of an integer?
You can compare strings using == in ReScript as well. Alternatively, there is String.equal if you need a function restricted specifically to strings. The "native" (non-JS) standard library modules such as String unfortunately seem to have been left out of the ReScript documentation entirely.
localeCompare probably returns a float because it might be possible for it to return non-integer numbers. JavaScript unfortunately has no integer type, which makes it hard to know whether it could return floats, even when it seems obvious that it shouldn't. I've seen several bugs of this kind myself in various bindings.
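A quick sketch of the two boolean options mentioned above (exact module availability may vary with your ReScript version):
let a = "hello"
let b = "hello"
let byOperator = a == b // true : bool
let byFunction = String.equal(a, b) // true : bool, from the undocumented native String module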
I'm using GWT 2.8.2.
When I run the following code in SuperDev mode, it logs 123.456, which is what I expect.
double d = 123.456789;
JsNumber num = Js.cast(d);
console.log(num.toFixed(3));
When I compile to JavaScript and run, it logs 123 (i.e. it does not show the decimal places).
I have tried running the code on Android Chrome, Windows Chrome and Windows Firefox. They all exhibit the same behavior.
Any idea why there is a difference and is there anything I can do about it?
Update: After a bit more digging, I've found that it has to do with the coercion of the integer parameter.
console.log(num.toFixed(3)); // 123 (wrong)
console.log(num.toFixed(3d)); // 123.456 (correct)
It seems that the JsNumber class in Elemental2 has defined the signature as:
public native String toFixed(Object digits);
I think it should be:
public native String toFixed(int digits);
I'm still not sure why it works during SuperDev mode and not when compiled though.
Nice catch! This appears to be a bug in the jsinterop-generator configuration used when generating Elemental2's sources. Since JS doesn't have a way to say that a number is either an integer or a floating point value, the source material that jsinterop-generator works with can't accurately describe what that argument needs to be.
Usually, the fix is to add this to integer_entities.txt (https://github.com/google/elemental2/blob/master/java/elemental2/core/integer_entities.txt), so that the generator knows that this parameter can only be an integer. However, when I made this change, the generator didn't act on the new line, and logged this fact. It turns out that it only makes this change when the parameter is a number of some kind, which Object clearly isn't.
The proper fix, then, is probably to fix the externs that are used to describe what "JsNumber.toFixed" is supposed to take as an argument. The spec says that this can actually take some non-number value and, after converting it to a number, it doesn't even need to be an integer (see https://www.ecma-international.org/ecma-262/5.1/#sec-15.7.4.5 and https://www.ecma-international.org/ecma-262/5.1/#sec-9.3).
So, instead, we need to be sure to pass whatever literal value the Java developer provides to the function, so that it is parsed correctly within JS - this means that the argument needs to be annotated with @DoNotAutobox. Or, we could clarify this to say that the argument can be either Object or Number; toFixed(Object) would still be emitted, but now there would be a numeric version too.
Alternatively, you can work around this as you have done, or by providing a string value of the number of digits you want:
console.log(num.toFixed("3"));
Filed as https://github.com/google/elemental2/issues/129
The problem is that Java automatically wraps the int as an Integer, and GWT ends up transpiling the boxed Integer as a special object in JS (not a number). But if you use a double, the boxed Double is transpiled as a native number by GWT and the problem disappears.
I'm not absolutely sure why this works in super-devmode, but it should not. I think the difference is that SDM maps the native toString to the Java toString, and (even weirder) the native toFixed calls the toString of its argument. In SDM the boxed Integer's toString returns the string representation of the number, which ends up being coerced back to an int, but in production the boxed Integer's toString returns "[object Object]", which is handled as NaN.
There is a special annotation @DoNotAutobox that makes it possible to use primitive integers in JS native APIs. It prevents the integer auto-wrapping, so the int is transpiled to a native number (see the example usage in the Js#coerceToInt method). Elemental2 might add this annotation or change the type to int as you suggest. Please create an issue in the elemental2 repo to fix this (https://github.com/google/elemental2/issues/new).
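Putting the workarounds together, a small sketch (console and Js usage as in the question; the reported outputs are from the discussion above):
double d = 123.456789;
JsNumber num = Js.cast(d);

// int argument: boxed to an Integer, which GWT compiles to a special object, not a JS number
console.log(num.toFixed(3));   // "123" in production
// double argument: a boxed Double does compile to a native JS number
console.log(num.toFixed(3d));  // decimal places preserved
// String argument: JS coerces it to a number per the spec
console.log(num.toFixed("3")); // decimal places preserved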
I was using a getter and setter to store a Double into Core Data as an NSNumber. The conversion was something like this:
var number: Double {
    get {
        return coreDataNumber.doubleValue
    }
    set {
        coreDataNumber = NSNumber(value: newValue)
    }
}
If the syntax is wrong, that has nothing to do with my question; I'm just not at my Mac right now. I eventually came to the conclusion that the only way to maintain accuracy across the conversion was to store the Double as a String. I am fine with using this method, but for my future knowledge: is there a way to prevent a number like 0.003459 from becoming 0.0034589999999999999999 when you retrieve it? This wasn't the only conversion error I found; sometimes it would round when I didn't want it to. I understand this probably has to do with the fact that not all decimal values can be represented exactly in binary. If there is a way to convert without losing accuracy, I would appreciate that knowledge.
The accuracy is much higher than 6 decimal digits.
Using your numbers:
0.003459 - 0.0034589999999999999999 = 1e-22
The problem is the formatting function (or lack thereof).
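A minimal Swift sketch of that point: the Double survives the NSNumber round trip exactly, and a format string controls how many digits you display:
import Foundation

let stored = NSNumber(value: 0.003459)   // the kind of value kept for the Core Data attribute
let retrieved = stored.doubleValue       // round-trips to the identical Double

print(retrieved == 0.003459)             // true
print(String(format: "%.6f", retrieved)) // 0.003459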
Facebook's code changes on Tuesday night have impacted how parseInt works in FBJS. Where I previously used it to convert decimal numbers to straight integers, now it always returns undefined.
For example:
return parseInt(decimalnum);
no longer works. Anyone figured out how we are supposed to round to integers now? Thanks.
Thanks for the report. It's fixed on trunk now; it should be out tomorrow unless there's another push later today.
I suspect that decimalnum is not defined in your function. Try replacing your return with return decimalnum; -- you may still be returning undefined.
parseInt is not for rounding - it actually takes the integer component of a number, or coerces a string to be a number. If you want to round, use Math.round. Depending on your usage, you may find Math.floor or Math.ceil useful.
Math.floor()
Math.ceil()
Math.round()
parseInt()
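For comparison, a small sketch of how these behave on a decimal value (plain JavaScript for illustration; FBJS sandboxes some globals):
var decimalnum = 4.7;

console.log(Math.floor(decimalnum));   // 4 (always rounds down)
console.log(Math.ceil(decimalnum));    // 5 (always rounds up)
console.log(Math.round(decimalnum));   // 5 (rounds to nearest)
console.log(parseInt(decimalnum, 10)); // 4 (stringifies, then parses the leading integer part)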
Did you try parseInt(decimalnum, 10); ?
Suppose I wish to have 2 functions, one that generates a random integer within a given range, and one that generates a random double within a given range.
int GetRandomNumber( int min, int max );
double GetRandomNumber( double min, double max );
Notice that the method names are the same. I'm trying to decide whether to name the functions that or...
int GetRandomInteger( int min, int max );
double GetRandomDouble( double min, double max );
The first option has the benefit of the user not having to worry about which one they are calling. They can just call GetRandomNumber with integers or doubles and get a result.
The second option is more explicit in the names, but it reveals unneeded information to the caller.
I know this is petty, but I care about petty things.
Edit: How would C++ behave regarding implicit conversion?
Example:
GetRandomNumber( 1, 1 );
These arguments could be implicitly converted to call the double version of GetRandomNumber. Obviously I don't want that to occur. Will C++ use the int version rather than the double version?
I prefer your second example; it is explicit and leaves no ambiguity in interpretation. It is better to err on the side of being explicit in method names to clearly illuminate the purpose and function of that member.
The only downside to your approach is that you have coupled the name of the method to the return type, which is not ideal in the event that you want to change the return type of one of these methods. However, in that case it would be better to add a new method and not break compatibility in your API anyway.
I prefer the second version. I like overloading a function when ultimately the two functions do the same thing; in this case they return different types, so they're not quite the same thing. Another possibility, if your language supports it, is to define it as a generic/template method, like:
T GetRandom<T>(T min, T max);
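In C++ terms, a sketch of that generic approach (the body is illustrative, uses <random>, and needs C++17 for if constexpr):
#include <random>
#include <type_traits>

// One template that picks an integer or a real distribution based on T.
template <typename T>
T GetRandom(T min, T max) {
    static std::mt19937 gen{std::random_device{}()};
    if constexpr (std::is_integral_v<T>) {
        return std::uniform_int_distribution<T>{min, max}(gen);
    } else {
        return std::uniform_real_distribution<T>{min, max}(gen);
    }
}
Note that GetRandom(1, 10) deduces T = int and GetRandom(0.0, 1.0) deduces T = double, while GetRandom(1, 5.0) fails to compile because T cannot be deduced consistently.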
A function name should tell what the function does; I do not see a point in cramming the types into the names. So definitely go for overloading - that's what it is for.
I prefer the overloaded method. Look at the Math.Min() method in .NET. It has 11 overloads: int, double, byte, etc.
I usually prefer the first example because it doesn't pollute the namespace. For example, when calling it, if I pass ints, I expect to get back an int; if I pass in doubles, I probably expect to get back a double. The compiler will give you an error if you write:
//this causes an error
double d = GetRandomNumber(1,10);
so it's not really a big issue. And you can always cast the arguments if you need an int but have doubles for input...
In some languages you cannot vary the return type of overloaded functions, so this would require the second example.
Assuming C++, the second also avoids problems with ambiguity. If you said:
GetRandomNumber( 1, 5.0 );
which one should be used? In fact, it is a compilation error.
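A short sketch of both cases (declarations only), which also answers the edit in the question:
int    GetRandomNumber(int min, int max);
double GetRandomNumber(double min, double max);

int main() {
    GetRandomNumber(1, 1);     // calls the int overload: both arguments are exact matches
    GetRandomNumber(1.0, 5.0); // calls the double overload
    GetRandomNumber(1, 5.0);   // error: ambiguous, each overload would need one implicit conversion
}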
I think the ideal solution would be
Int32.GetRandom(int min, int max)
Double.GetRandom(double min, double max)
but, alas, static extension methods are not possible (yet?).
The .NET Framework seems to favor the first option (System.Math class):
public static decimal Abs(decimal value)
public static int Abs(int value)
Like Andrew, I would personally favor the second option to avoid ambiguity, although I do think this is a matter of taste.