Which method is used in Kotlin's Double.toInt(), rounding or truncation?

On the official API doc, it says:
Returns the value of this number as an Int, which may involve rounding or truncation.
I want truncation, but I'm not sure. Can anyone explain the exact meaning of "may involve rounding or truncation"?
P.S.: In my unit test, (1.7).toInt() was 1, which suggests truncation.

The KDoc of Double.toInt() is simply inherited from Number.toInt(), and for that, the exact meaning is that each concrete Number implementation defines how it is converted to an Int.
In Kotlin, Double operations follow the IEEE 754 standard, and the semantics of the Double.toInt() conversion are the same as those of casting double to int in Java, i.e. normal numbers are rounded toward zero, dropping the fractional part:
println(1.1.toInt()) // 1
println(1.7.toInt()) // 1
println(-2.3.toInt()) // -2
println(-2.9.toInt()) // -2

First of all, this documentation is copied straight from Java's documentation.
As far as I know it only truncates the fractional part, e.g. 3.14 becomes 3, 12.345 becomes 12, and 9.999 becomes 9.
Reading this answer and the comments under it suggests that there is no actual rounding: the "rounding" is actually truncation. It differs from Math.floor in that it rounds toward 0 instead of toward negative infinity.
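To make the difference concrete, here is a small sketch contrasting the two behaviours for negative inputs (toInt() truncates toward zero, kotlin.math.floor rounds toward negative infinity):
import kotlin.math.floor

fun main() {
    println(2.9.toInt())     // 2
    println((-2.9).toInt())  // -2  (truncated toward zero)
    println(floor(-2.9))     // -3.0 (rounded toward negative infinity)
}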

Use roundToInt() from kotlin.math if you want actual rounding:
import kotlin.math.roundToInt

fun main() {
    val r = 3.1416
    val c: Int = r.roundToInt()
    println(c) // 3
}

Use the function toInt(), and assign the result to a new variable declared as Int:
val x: Int = variable_name.toInt()
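For example, a minimal runnable version (the names here are just placeholders):
fun main() {
    val price = 9.99
    val x: Int = price.toInt()  // truncates toward zero
    println(x)  // 9
}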

Related

In Dart, how do you set the number of decimals in a double variable? [duplicate]

I want to set a double, let's call it Price, in Dart, so that it always gives me a double of 2 decimal places.
So 2.5 would return 2.50 and 2.50138263 would also return 2.50.
The simplest answer would be double's built-in toStringAsFixed.
In your case
double x = 2.5;
print('${x.toStringAsFixed(2)}');
x = 2.50138263;
print('${x.toStringAsFixed(2)}');
Would both print 2.50. Be aware that this rounds (e.g., 2.519 returns 2.52); it does not use the round-half-even (banker's) algorithm.
I recommend using a NumberFormat from the intl package; the parsing and formatting rules are worth learning since they appear in other languages, such as Java.
import 'package:intl/intl.dart';

double d = 2.519;
String s = NumberFormat.currency().format(d);
print(s); // USD2.52
s = NumberFormat('#.00').format(d);
print(s); // 2.52
Since you are dealing with money, you should probably use NumberFormat.currency, which also adds the currency symbol for the current locale.
Your question is more about how Dart handles the type double. Something like the following might work depending on your use-case:
void main() {
  // 'price' avoids shadowing the built-in type name 'num'
  double price = 2.50138263;
  price = double.parse(price.toStringAsFixed(2));
  print(price); // prints 2.5 -- a double carries no trailing zeros
}
More info about how Dart handles double can be found here.

SCALA: Function for Square root of BigInt

I searched the internet for a function to find the exact square root of a BigInt using the Scala programming language. I didn't find one, but I saw a Java program and converted that function into a Scala version. It works, but I am not sure whether it can handle very large BigInts, and it returns only a BigInt, not a BigDecimal, as the square root. The code does some bit manipulation with hard-coded numbers like shiftRight(5), BigInt("8") and shiftRight(1). I can understand the logic clearly, but not the hard-coding of these shift amounts and the number 8. Maybe these bit-shift functions are not available in Scala, and that's why conversion to Java's BigInteger was needed in a few places. These hard-coded numbers may impact the precision of the result. I just translated the Java code into Scala, copying the exact algorithm. Here is the code I have written in Scala:
def sqt(n: BigInt): BigInt = {
  var a = BigInt(1)             // lower bound of the search
  var b = (n >> 5) + BigInt(8)  // upper bound: n/32 + 8
  while ((b - a) >= 0) {
    val mid: BigInt = (a + b) >> 1
    if (mid * mid - n > 0) b = mid - 1
    else a = mid + 1
  }
  a - 1
}
My points are:
1. Can't we return a BigDecimal instead of a BigInt? How can we do that?
2. How are these hard-coded numbers shiftRight(5), shiftRight(1) and 8 related to the precision of the result?
I tested one number in the Scala REPL: the function sqt gives the exact square root of a squared number, but not of the original number, as shown below:
scala> sqt(BigInt("19928937494873929279191794189"))
res9: BigInt = 141169888768369
scala> res9*res9
res10: scala.math.BigInt = 19928937494873675935734920161
scala> sqt(res10)
res11: BigInt = 141169888768369
scala>
I understand shiftRight(5) means divide by 2^5, i.e. by 32 in decimal, and so on. But why is 8 added after the shift operation? And why exactly 5 shifts as a first guess?
Two of your points are really the same question.
How [do] these bitshifts impact [the] precision of the result?
They don't.
How [are] these hardcoded numbers ... related to precision of the result?
They aren't.
There are many different methods/algorithms for estimating/calculating the square root of a number (as can be seen here). The algorithm you've posted appears to be a pretty straightforward binary search:
1. Pick a number a guaranteed to be smaller than the target (the square root of n).
2. Pick a number b guaranteed to be larger than the target (the square root of n).
3. Calculate mid, the whole-number mid-point between a and b.
4. If mid is larger than the target then move b to mid (-1 because we know mid is too large).
5. If mid is smaller than (or equal to) the target then move a to mid (+1 because we know mid is too small).
6. Repeat steps 3-5 until a is greater than b.
7. Return a-1 as the square root of n rounded down to a whole number.
The bitshift and the hard-coded numbers are used in selecting the initial value of b. But b only has to be greater than the target. We could have just done var b = n. Why all the bother?
It's all about efficiency. The closer b starts to the target, the fewer iterations are needed to find the result. Why add 8 after the shift? Because for a small n like 31, 31>>5 is zero, which is not greater than the target. The author chose (n>>5)+8, but they might just as well have chosen, say, (n>>7)+32, which is also always at least the square root. There are trade-offs.
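If you want to convince yourself that such a starting bound is safe, here is a quick numeric spot-check (sketched in Kotlin here, since this thread spans several languages; it is a sanity check, not a proof):
import kotlin.math.sqrt

fun main() {
    // Check that n/32 + 8 >= floor(sqrt(n)), i.e. that the algorithm's
    // initial upper bound b really is an upper bound for the answer.
    for (n in 1..1_000_000) {
        val target = sqrt(n.toDouble()).toInt()  // floor of the real square root
        check((n shr 5) + 8 >= target) { "bound fails at n=$n" }
    }
    println("bound holds for 1..1_000_000")
}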
Can't we return a BigDecimal instead of BigInt? How can we do that?
Here's one way to do that.
def sqt(n: BigInt): BigDecimal = {
  val d = BigDecimal(n)
  var a = BigDecimal(1.0)
  var b = d
  while (b - a >= 0) {
    val mid = (a + b) / 2
    if (mid * mid - d > 0) b = mid - 0.0001 // mid is too large; adjust down
    else a = mid + 0.0001                   // mid is too small; adjust up
  }
  b
}
There are better algorithms for calculating floating-point square root values. In this case you get better precision by using smaller adjustment values but the efficiency gets much worse.
Can't we return a BigDecimal instead of BigInt? How can we do that?
This makes no sense if you want exact roots: if a BigInt's square root can be represented exactly by a BigDecimal, it can be represented by a BigInt. If you don't want exact roots, you'll need to specify precision and modify the algorithm (and for most cases, Double will be good enough and much much much faster than BigDecimal).
I understand shiftRight(5) means divide by 2^5 ie.by 32 in decimal and so on..but why 8 is added here after shift operation? why exactly 5 shifts? as a first guess?
These aren't the only options. The point is that for every positive n, n/32 + 8 >= sqrt(n) (where sqrt is the mathematical square root). This is easy to verify algebraically, since n/32 - sqrt(n) + 8 = (sqrt(n) - 16)^2 / 32 >= 0 (or just build a graph of the difference). So at the start we know a <= sqrt(n) <= b (unless n == 0, which can be checked separately), and you can verify that this remains true on each step.

Implicit type of constant in swift tutorial

While working through an example from the tutorial, I ran into an issue in the constants and variables topic.
I'd appreciate it if someone could explain this example to me.
When you don't specify a type, a floating-point number literal is inferred to be of type Double.
Double, as its name suggests, has double the precision of Float. So when you do:
let a = 64.1
the actual value in memory is the nearest representable Double, roughly 64.099999999999994. Since Double shows only 16 significant digits, it displays 64.09999999999999, rounding off the trailing digits.
Why does let b: Float = 64.1 show the correct number?
When you specify the type as Float, the precision decreases. Float keeps only about 8 significant digits; the stored value is roughly 64.099998, and rounding it for display gives 64.1.
This has nothing to do with explicitly stating the variable type. Try specifying it to be a Double:
let b: Double = 64.1
It will still show 64.09999999999999.
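You can expose the stored values on any IEEE 754 implementation by printing more digits than the default, shortest representation. A sketch in Kotlin (its Float and Double are the same 32-bit and 64-bit IEEE 754 formats as Swift's Float and Double):
fun main() {
    println(String.format("%.17f", 64.1))   // 64.09999999999999432 (Double)
    println(String.format("%.8f", 64.1f))   // 64.09999847          (Float)
}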

Number Operations and Return Types

I am confused by what is returned when performing number operations in Swift between various types. Consider the following:
var castedFoo = Float(7.0/5.0) // returns 1.39999997...
var specifiedTypeFoo:Float = 7/5.0 //returns 1.39999997...
var foo = (7/5.0) //returns 1.4
What separates the first two from the last one? They are all returning floats, so why is the value from the last one rounded? I understand that the first is cast and the second is explicitly specified to be a Float, but the last one also returns a float value. So what makes the difference here?
According to Swift documentation,
Unless otherwise specified, the default type of a floating-point literal is the Swift standard library type Double, which represents a 64-bit floating-point number.
In other words, the literal 5.0 is of type Double.
Your first two examples force the result type to Float; in your last example the result stays a Double, because the literal 7 is also inferred to be a Double there, and dividing two Doubles yields a Double. Because of that difference in type, the last result has higher precision.
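Here is a sketch of the same idea in Kotlin (whose Float and Double are the same IEEE 754 types): the division carried out at Float precision versus Double precision, with extra digits printed so the difference is visible:
fun main() {
    val asFloat: Float = 7f / 5f     // arithmetic done in 32-bit Float
    val asDouble: Double = 7 / 5.0   // arithmetic done in 64-bit Double
    println(String.format("%.9f", asFloat))    // 1.399999976
    println(String.format("%.17f", asDouble))  // 1.39999999999999991
}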

How does Scala compare two values of different types?

When I check values in the Scala interpreter, like:
scala> 1==1.0000000000000001
res1: Boolean = true
scala> 1==1.000000000000001
res2: Boolean = false
I don't have a clear picture of how the Scala compiler interprets these values as integers, floating points, or doubles, and compares them.
It is not really Scala-related; it is more of an IEEE 754 floating-point arithmetic issue. First of all, when comparing an Int with a Double, Scala converts the Int to a Double (always lossless for a 32-bit Int). The second case is obvious: the values are different.
What happens in the first case is that the Double type is not capable of storing that many significant digits (17 in your case; a 64-bit floating-point number stores only 15-17 significant decimal digits), so the literal is rounded to exactly 1.0 before the comparison. And 1 == 1.
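A sketch of the same comparison in Kotlin, where the literals are the same 64-bit IEEE 754 doubles:
fun main() {
    // 17 significant digits: the nearest double to this literal is
    // exactly 1.0, so the comparison is true.
    println(1.0000000000000001 == 1.0)  // true
    // 16 significant digits: this literal survives as a double
    // distinct from 1.0, so the comparison is false.
    println(1.000000000000001 == 1.0)   // false
}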