Generic Numeric division - Scala

As a general rule, we can take any value of any number type, and divide it by any non-zero value of any number type, and get a reasonable result.
212.7 / 6 // Double = 35.449999999999996
77L / 2.1F // Float = 36.666668
The one exception I've found is that we can't mix a BigInt with a fractional type (Float or Double).
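For instance, the compiler rejects a mixed expression like this outright:
BigInt(77) / 2.1 // does not compile: BigInt./ expects a BigInt, and neither operand converts implicitly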
In the realm of generics, however, there's this interesting distinction between Integral and Fractional types.
// can do this
def divideI[I](a: I, b: I)(implicit ev: Integral[I]) = ev.quot(a,b)
// or this
def divideF[F](a: F, b: F)(implicit ev: Fractional[F]) = ev.div(a,b)
// but not this
def divideN[N](a: N, b: N)(implicit ev: Numeric[N]) = ev.???(a,b)
While I am curious as to why this is, the real question is: Is there some kind of workaround available to sidestep this limitation?

The reason is that integer division and floating-point division are two very different operations, so not all Numerics share a common division operation, even though humans might think of both as "division."
The workaround would be to create four division operations: Integral/Integral, Integral/Fractional, Fractional/Integral, and Fractional/Fractional. Do the calculation in whatever application-specific way you feel is appropriate. When I did this for a calculator I wrote, I kept the result in Integral when possible and cast to Double otherwise, as sketched below.
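A minimal sketch of that idea, dispatching on which kind of evidence is actually at hand (the method name divide and the uniform Double return type are choices of this sketch, not the original calculator code):

def divide[N](a: N, b: N)(implicit ev: Numeric[N]): Double = ev match {
  case i: Integral[N]   => ev.toDouble(i.quot(a, b))       // stay integral, widen at the end
  case f: Fractional[N] => ev.toDouble(f.div(a, b))        // true fractional division
  case _                => ev.toDouble(a) / ev.toDouble(b) // fall back to Double
}

divide(7, 2)     // 3.0 (integral quotient)
divide(7.0, 2.0) // 3.5

Returning Double keeps the signature uniform; keeping the result in Integral when possible, as described above, needs a richer return type (an Either, say).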

My understanding is that these traits describe sets closed under defined operations:
Numeric is closed under plus, minus, times, negate,
Fractional adds div (i.e. plus, minus, times, negate, div),
Integral adds quot and rem (i.e. plus, minus, times, negate, quot, rem).
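That split is easy to verify against the instances themselves (a quick check; the method names below are exactly those defined in scala.math):

val num  = implicitly[Numeric[Int]]
val int  = implicitly[Integral[Int]]
val frac = implicitly[Fractional[Double]]

num.times(num.plus(2, 3), 4)    // 20    -- plus, minus, times, negate live on Numeric
(int.quot(7, 2), int.rem(7, 2)) // (3,1) -- quot and rem only on Integral
frac.div(7.0, 2.0)              // 3.5   -- div only on Fractional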
Why do you want to sidestep it?

Related

SCALA: Function for Square root of BigInt

I searched the internet for a function to find the exact square root of a BigInt using the Scala programming language. I didn't find one, but I saw a Java program and converted that function into a Scala version. It works, but I am not sure whether it can handle very large BigInt values. It also returns only a BigInt as the square root, not a BigDecimal. The code does some bit manipulation with hard-coded numbers like shiftRight(5), BigInt("8") and shiftRight(1). I can understand the logic clearly, but not the hard-coding of these shift amounts and the number 8. Maybe these bit-shift functions are not available in Scala, and that's why the code converts to Java's BigInteger in a few places. These hard-coded numbers may impact the precision of the result. I just changed the Java code into Scala, copying the exact algorithm. Here is the code I have written in Scala:
def sqt(n: BigInt): BigInt = {
  var a = BigInt(1)
  var b = (n >> 5) + BigInt(8)
  while ((b - a) >= 0) {
    val mid: BigInt = (a + b) >> 1
    if (mid * mid - n > 0) b = mid - 1
    else a = mid + 1
  }
  a - 1
}
My points are:
Can't we return a BigDecimal instead of a BigInt? How can we do that?
How are these hardcoded numbers shiftRight(5), shiftRight(1) and 8 related to the precision of the result?
I tested one number in the Scala REPL: the function sqt gives the exact square root of a squared number, but not of the original number, as below:
scala> sqt(BigInt("19928937494873929279191794189"))
res9: BigInt = 141169888768369
scala> res9*res9
res10: scala.math.BigInt = 19928937494873675935734920161
scala> sqt(res10)
res11: BigInt = 141169888768369
I understand that shiftRight(5) means divide by 2^5, i.e. by 32 in decimal, and so on. But why is 8 added after the shift operation? Why exactly 5 shifts, as a first guess?
Your question 1 and question 3 are actually the same question.
How [do] these bitshifts impact [the] precision of the result?
They don't.
How [are] these hardcoded numbers ... related to precision of the result?
They aren't.
There are many different methods/algorithms for estimating/calculating the square root of a number (as can be seen here). The algorithm you've posted appears to be a pretty straightforward binary search:
1. Pick a number a guaranteed to be smaller than the target (the square root of n).
2. Pick a number b guaranteed to be larger than the target (the square root of n).
3. Calculate mid, the whole-number midpoint between a and b.
4. If mid is larger than (or equal to) the target, then move b to mid (-1 because we know it's too large).
5. If mid is smaller than the target, then move a to mid (+1 because we know it's too small).
6. Repeat steps 3-5 until a is no longer less than b.
7. Return a-1 as the square root of n, rounded down to a whole number.
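A worked trace for n = 25 makes the steps concrete: a = 1 and b = (25 >> 5) + 8 = 8; mid = 4 and 16 < 25, so a = 5; mid = 6 and 36 > 25, so b = 5; mid = 5 and 25 is not greater than 25, so a = 6; now a > b, so the loop stops and a - 1 = 5 is returned.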
The bitshifts and hardcoded numbers are used in selecting the initial value of b. But b only has to be greater than the target. We could have just done var b = n. Why all the bother?
It's all about efficiency. The closer b is to the target, the fewer iterations are needed to find the result. Why add 8 after the shift? Because 31>>5 is zero, which is not greater than the target. The author chose (n>>5)+8, but might just as well have chosen, say, (n>>4)+5. There are trade-offs.
Can't we return a BigDecimal instead of BigInt? How can we do that?
Here's one way to do that.
def sqt(n: BigInt): BigDecimal = {
  val d = BigDecimal(n)
  var a = BigDecimal(1.0)
  var b = d
  while (b - a >= 0) {
    val mid = (a + b) / 2
    if (mid * mid - d > 0) b = mid - 0.0001 // adjust down
    else a = mid + 0.0001 // adjust up
  }
  b
}
There are better algorithms for calculating floating-point square root values. In this case you get better precision by using smaller adjustment values but the efficiency gets much worse.
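As one illustration, here is a hedged sketch of Newton's method on BigDecimal (the name sqrtNewton, the Double seed, and the fixed iteration count are assumptions of this sketch; the seed also requires n to fit in a Double):

import java.math.MathContext

def sqrtNewton(n: BigInt, mc: MathContext = MathContext.DECIMAL128): BigDecimal = {
  val d = BigDecimal(n, mc)
  var x = BigDecimal(math.sqrt(d.toDouble), mc) // Double seed: about 15 correct digits
  for (_ <- 1 to 5)                             // each step roughly doubles the correct digits
    x = (x + d / x) / 2                         // Newton step for f(x) = x^2 - d
  x
}

sqrtNewton(BigInt(2) * BigInt(10).pow(20)) // 14142135623.7309504880... = sqrt(2) * 10^10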
Can't we return a BigDecimal instead of BigInt? How can we do that?
This makes no sense if you want exact roots: if a BigInt's square root can be represented exactly by a BigDecimal, it can be represented by a BigInt. If you don't want exact roots, you'll need to specify precision and modify the algorithm (and for most cases, Double will be good enough and much much much faster than BigDecimal).
I understand shiftRight(5) means divide by 2^5 ie.by 32 in decimal and so on..but why 8 is added here after shift operation? why exactly 5 shifts? as a first guess?
These aren't the only options. The point is that for every positive n, n/32 + 8 >= sqrt(n) (where sqrt is the mathematical square root). This is easiest to show by a bit of calculus (or just by building a graph of the difference). So at the start we know a <= sqrt(n) <= b (unless n == 0 which can be checked separately), and you can verify this remains true on each step.
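In fact no calculus is needed: writing x = sqrt(n),
n/32 + 8 - x = (x^2 - 32x + 256)/32 = (x - 16)^2 / 32 >= 0,
with equality only at x = 16, i.e. n = 256.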

Which method is used in Kotlin's Double.toInt(), rounding or truncation?

On the official API doc, it says:
Returns the value of this number as an Int, which may involve rounding or truncation.
I want truncation, but I'm not sure that's what I get. Can anyone explain the exact meaning of "may involve rounding or truncation"?
p.s.: In my unit test, (1.7).toInt() was 1, which might involve truncation.
The KDoc of Double.toInt() is simply inherited from Number.toInt(), and for that, the exact meaning is that each concrete Number implementation defines how it is converted to an Int.
In Kotlin, the Double operations follow the IEEE 754 standard, and the semantics of the Double.toInt() conversion is the same as that of casting double to int in Java, i.e. normal numbers are rounded toward zero, dropping the fractional part:
println(1.1.toInt()) // 1
println(1.7.toInt()) // 1
println(-2.3.toInt()) // -2
println(-2.9.toInt()) // -2
First of all, this documentation is straight up copied from Java's documentation.
As far as I know it only truncates the decimal points, e.g. 3.14 will become 3, 12.345 will become 12, and 9.999 will become 9.
Reading this answer and the comments under it suggests that there is no actual rounding; the "rounding" is actually truncation. It differs from Math.floor in that it rounds toward zero instead of toward negative infinity: for example, (-2.9).toInt() is -2, while floor(-2.9) is -3.0.
If you want rounding instead, use roundToInt() from the Kotlin standard library:
import kotlin.math.roundToInt

fun main() {
    val r = 3.1416
    val c: Int = r.roundToInt()
    println(c) // 3
}
Use the function toInt(), and assign the value to a new variable that is typed as Int:
val x: Int = variable_name.toInt()

How to fix a StackOverflowError in Scala

def factorial(n: Int): Int = {
  if (n == 1) 1
  else n * factorial(n - 1)
}

println(factorial(500000))
When I pass a large value, it throws a StackOverflowError. Can we fix it?
The question seems theoretical, because the factorial of 500000 is a really huge number. The result is so huge it is not representable even as an IEEE Double, and I doubt there is any practical reason to compute it.
Some math calculators (like SpeedCrunch) let you compute the factorial using the gamma function, probably using some approximation for large numbers. The SpeedCrunch result of gamma(500000 + 1) is 1.02280158465190236533 * 10^2632341.
However, if you insist on doing it, this is how it can be done:
Implement factorial using tail recursion instead. See Tail Recursion in Scala: A Simple Example or http://alvinalexander.com/scala/scala-recursion-examples-recursive-programming
Note: you will still get integer arithmetic overflow for large inputs, and the result will be wrong for them. The largest input for which the result still fits in a 32-bit signed Int is 12 (cf. Factorial does not work for all values).
You can avoid this by using Double to compute the result (you will get only an approximate result for large numbers, and Infinity for 500000), or by using BigInt - the calculation will then work for all values, but it will be slower.
The following code should produce the correct result, but it might take very long, and the result will be very long - you might perhaps even get out-of-memory errors. I tried computing the factorial of 50000 with it; it took several seconds, and the resulting number was several pages long.
import scala.annotation.tailrec

def factorial(n: Long): BigInt = {
  @tailrec
  def factorialAccumulator(acc: BigInt, n: Long): BigInt = {
    if (n == 0) acc
    else factorialAccumulator(n * acc, n - 1)
  }
  factorialAccumulator(1, n)
}
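For example (both results are easy to check against a table of factorials):
println(factorial(12)) // 479001600, the largest factorial that still fits in an Int
println(factorial(30)) // 265252859812191058636308480000000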

How does Scala compare two objects of different types?

When I check values inside the Scala interpreter, like:
scala> 1==1.0000000000000001
res1: Boolean = true
scala> 1==1.000000000000001
res2: Boolean = false
Here I am not getting a clear view of how the Scala compiler interprets these values as integers, floats, or doubles (and compares them).
It is not really Scala related; it is more of an IEEE 754 floating-point arithmetic issue. First of all, when comparing an Int with a Double, the Int will be cast to a Double (always safe). The second case is obvious - the values are different.
What happens in the first case is that the Double type is not capable of storing that many significant digits (17 in your case; a 64-bit floating-point number holds roughly 15 to 17 significant decimal digits), so it rounds the value to 1.0. And 1 == 1.
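You can watch the rounding happen before any comparison by entering the literals on their own in the REPL:
scala> 1.0000000000000001
res0: Double = 1.0
scala> 1.000000000000001
res1: Double = 1.000000000000001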

Common practice for dealing with Integer overflows?

What's the common practice to deal with Integer overflows like 999999*999999 (result > Integer.MAX_VALUE) from an Application Development Team point of view?
One could just make BigInt mandatory and prohibit the use of Integer, but is that a good/bad idea?
If it is extremely important that the integer not overflow, you can define your own overflow-catching operations, e.g.:
def +?+(i: Int, j: Int) = {
  val ans = i.toLong + j.toLong
  if (ans < Int.MinValue || ans > Int.MaxValue) {
    throw new ArithmeticException("Int out of bounds")
  }
  ans.toInt
}
You may be able to use the enrich-your-library pattern to turn this into operators; if the JVM manages to do escape analysis properly, you won't get too much of a penalty for it:
class SafePlusInt(i: Int) {
  def +?+(j: Int) = { /* as before, except without the i param */ }
}
implicit def int_can_be_safe(i: Int) = new SafePlusInt(i)
For example:
scala> 1000000000 +?+ 1000000000
res0: Int = 2000000000
scala> 2000000000 +?+ 2000000000
java.lang.ArithmeticException: Int out of bounds
at SafePlusInt.$plus$qmark$plus(<console>:12)
...
If it is not extremely important, then standard unit testing and code reviews and such should catch the problem in the large majority of cases. Using BigInt is possible, but will slow your arithmetic down by a factor of 100 or so, and won't help you when you have to use an existing method that takes an Int.
By far the most common practice regarding integer overflows is that programmers are expected to know that the issue exists, to watch for cases where it might happen, and to make the appropriate checks or rearrange the math so that overflows won't happen - things like doing a * (b / c) rather than (a * b) / c. If the project uses unit tests, they will include cases that try to force overflows to happen.
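A small illustration of that rearrangement (the numbers here are chosen so the exact result fits in an Int but the intermediate product does not):

val a = 100000; val b = 100000; val c = 50000
a * b / c        // 28201: a * b wraps around before the division
a * (b / c)      // 200000: divide first, at the cost of truncating b / c
a.toLong * b / c // 200000: or widen the intermediate to Long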
I have never worked on or seen code from a team that required more than that, so I'm going to say that's good enough for almost all software.
The one embedded application I've seen that actually, honest-to-spaghetti-monster NEEDED to prevent overflows, they did it by proving that overflows weren't possible in each line where it looked like they might happen.
If you're using Scala (and based on the tag I'm assuming you are), one very generic solution is to write your library code against the scala.math.Integral type class:
def naturals[A](implicit f: Integral[A]) =
  Stream.iterate(f.one)(f.plus(_, f.one))
You can also use context bounds and Integral.Implicits for nicer syntax:
import scala.math.Integral.Implicits._
def squares[A: Integral] = naturals.map(n => n * n)
Now you can use these methods with either Int or Long or BigInt as needed, since instances of Integral exist for all of them:
scala> squares[Int].take(10).toList
res0: List[Int] = List(1, 4, 9, 16, 25, 36, 49, 64, 81, 100)
scala> squares[Long].take(10).toList
res1: List[Long] = List(1, 4, 9, 16, 25, 36, 49, 64, 81, 100)
scala> squares[BigInt].take(10).toList
res2: List[BigInt] = List(1, 4, 9, 16, 25, 36, 49, 64, 81, 100)
No need to change the library code: just use Long or BigInt where overflow is a concern and Int otherwise.
You will pay some penalty in terms of performance, but the genericity and the ability to defer the Int-or-BigInt decision may be worth it.
In addition to simple mindfulness, as noted by @mjfgates, there are a couple of practices that I always use when dealing with scaled-decimal (non-floating-point) real-world quantities. This may not be on point for your particular application - apologies in advance if not.
First, if there are multiple units of measure in use, values must always clearly identify what they are. This can be by naming convention, or by using a separate class for each unit of measure. I've always just used names - a suffix on every variable name. In addition to eliminating errors from confusion over the units, it encourages thinking about overflow because the measures are less likely to be thought of as just numbers.
Second, my most frequent source of overflow concern is usually rescaling - converting from one measure to another - when it requires a lot of significant digits. For example, the conversion factor from cm to inches is 0.393700787402. In order to avoid both overflow and loss of significant digits, you need to be careful to multiply and divide in the right order. I haven't done this in a long time, but I believe what you want is something like:
Add to Rational.scala, from The Book:
def rescale(i: Int): Int = {
  (i * (numer / denom)) + (i / denom * (numer % denom))
}
Then you get as results (shortened from a specs2 test):
val InchesToCm = new Rational(1000000000,393700787)
InchesToCm.rescale(393700787) must_== 1000000000
InchesToCm.rescale(1) must_== 2
This doesn't round, or deal with negative scaling factors.
A production implementation may want to factor out numer/denom and numer % denom.
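A hedged sketch of that refactoring (the Rational skeleton below is reconstructed so the snippet is self-contained, and it skips The Book's gcd normalization; only numer, denom and rescale come from the snippet above):

class Rational(val numer: Int, val denom: Int) {
  require(denom != 0)
  private val whole = numer / denom // factored out: computed once per Rational
  private val part  = numer % denom
  def rescale(i: Int): Int = i * whole + (i / denom) * part
}

val InchesToCm = new Rational(1000000000, 393700787)
InchesToCm.rescale(393700787) // 1000000000
InchesToCm.rescale(1)         // 2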