Should this really be prohibited with an exception (and why)?
scala> val r2 = 15 until (10, 0)
java.lang.IllegalArgumentException: requirement failed
scala> new Range(10,15,0)
java.lang.IllegalArgumentException: requirement failed
at scala.Predef$.require(Predef.scala:133)
Should this really be prohibited with an exception (and why)?
Quoting from scaladoc:
The Range class represents integer values in range [start;end) with non-zero step value step. Sort of acts like a sequence also (supports length and contains).
This restriction makes sense. A range with step size zero would always be infinite and would just consist of the lower bound value. While one could argue that infinite ranges are possible (lazy evaluation), the concept of an upper bound would be taken ad absurdum. A range with step 0 is simply not a range, even if it's infinitely long, because the upper bound plays no role.
So if one really wants an infinite stream of a single value, Scala rightfully forces us to be more explicit.
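For example (an illustrative sketch, not from the original answer), Iterator.continually expresses that intent explicitly:
// An explicit, lazily evaluated "infinite repetition of one value".
val repeated = Iterator.continually(15) // endless iterator of 15s
repeated.take(5).toList                 // List(15, 15, 15, 15, 15)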
Related
Question:
Let B = {0, 1}. B^n is the set of binary strings with n bits. Define the set E^n to be the set of binary strings with n bits that have an even number of 1's. Note that zero is an even number, so a string with zero 1's (i.e., a string that is all 0's) has an even number of 1's.
(a)
Show a bijection between B^9 and E^10. Explain why your function is a bijection.
(b)
What is |E^10|?
I'm having trouble finding a function that satisfies the definition and is a bijection. How do I approach solving this problem?
Is it something to do with cases? For example, if a string in B^9 has an even number of 1's, append a zero, and if it has an odd number of 1's, append a one, to obtain a string in E^10?
Thanks!
(a) Every string in E^10 begins with a prefix of length nine which is also a member of B^9. Given that prefix of length nine, the last bit is uniquely determined: it must be 0 (if the prefix is also in E^9) or 1 (if the prefix is not in E^9). Therefore, each element of E^10 is mapped to exactly one element of B^9. Similarly, any element of B^9 can be extended to a unique element of E^10 by appending either a 0 or a 1, whichever yields an even number of 1's. This operation, appending the bit that makes the parity even, maps each element of B^9 to a unique element of E^10. Because these two maps undo each other (they are mutually inverse), we have our bijection.
(b) Because there is a bijection between B^9 and E^10, we know |E^10| = |B^9|. But |B^9| = 2^9, since for each of the nine positions in any string in B^9 we can independently choose one of two values for the bit. Therefore, |E^10| = 2^9 also.
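A small brute-force check of this argument in Scala (illustrative only; the names b9 and toE10 are mine, not part of the question):
// Enumerate B^9, append the bit that makes the number of 1's even, and check
// that the resulting map into E^10 is injective.
val b9 = (0 until 512).map(i => (8 to 0 by -1).map(b => (i >> b) & 1).mkString)
def toE10(s: String): String = s + (if (s.count(_ == '1') % 2 == 0) "0" else "1")
val e10 = b9.map(toE10)
assert(e10.forall(_.count(_ == '1') % 2 == 0)) // every image really has even parity
assert(e10.toSet.size == b9.size)              // distinct prefixes give distinct images
println(e10.size)                              // 512 = 2^9, consistent with |E^10| = 2^9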
I want to sample a numeric range (start, end and increment provided). I'm wondering why the last element sometimes exceeds my upper limit.
E.g.
(9.474 to 49.474 by 1.0).last > 49.474 // gives true
This is because the last element is 49.474000000000004
I know that I could also use until to make an exclusive range, but then I "lose" one element (the range stops at 48.474000000000004).
Is there a way to sample a range with the start and end set exactly to the provided bounds? (Background: the behaviour above gets me into trouble when e.g. doing interpolation with Apache Commons Math, as extrapolation is not allowed.)
What I'm currently doing to prevent this is rounding:
def roundToScale(d: Double, scale: Int) = BigDecimal(d).setScale(scale, BigDecimal.RoundingMode.HALF_EVEN).toDouble
(9.474 to 49.474 by 1.0).map(roundToScale(_: Double, 5))
That's due to double arithmetic: 9.474 + 1.0 * 40 = 49.474000000000004.
You can follow that expression from the source: https://github.com/scala/scala/blob/v2.12.5/src/library/scala/collection/immutable/NumericRange.scala#L56
If you need exact values, you should use integral types such as Int or Long.
You should go with BigDecimal:
(BigDecimal(9.474) to BigDecimal(49.474) by BigDecimal(1.0))
.map(_.toDouble)
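With this approach the endpoints stay exact before the final conversion, so (as a quick check under the same bounds as above) the last element equals the upper bound:
val xs = (BigDecimal(9.474) to BigDecimal(49.474) by BigDecimal(1.0)).map(_.toDouble)
xs.head == 9.474  // true
xs.last == 49.474 // true, no longer 49.474000000000004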
I'm playing around with Scala AnyVal types and am having trouble understanding the following: I convert Long.MaxValue to Double and back to Long. As Long (64 bits) can hold more digits than Double's mantissa (52 bits), I expected this conversion to lose some data, but somehow this is not the case:
Long.MaxValue.equals(Long.MaxValue.toDouble.toLong) // is true
I thought there might be some magic/optimisation in Long.MaxValue.toDouble.toLong such that the conversion does not really happen, so I also tried:
Long.MaxValue.equals("9.223372036854776E18".toDouble.toLong) // is true
If I evaluate the expression "9.223372036854776E18".toDouble.toLong, this gives:
9223372036854775808
This really freaks me out, the last 4 digits seem just to pop up from nowhere!
First of all: as usual with questions like this, there is nothing special about Scala here. All modern languages (that I know of) use IEEE 754 floating point (at least in practice, even if the language specification doesn't require it) and will behave the same way, just with different type and operation names.
Yes, data is lost. If you try e.g. (Long.MaxValue - 1).toDouble.toLong, you'll still get Long.MaxValue back. You can find the largest Long smaller than Long.MaxValue that someDouble.toLong can produce as follows:
scala> Math.nextDown(Long.MaxValue.toDouble).toLong
res0: Long = 9223372036854774784
If I evaluate the expression "9.223372036854776E18".toDouble.toLong, this gives:
9223372036854775808
This really freaks me out, the last 4 digits seem just to pop up from nowhere!
You presumably mean 9223372036854775807. 9.223372036854776E18 is of course actually larger than that: it represents 9223372036854776000. But you'll get the same result if you use any other Double larger than Long.MaxValue as well, e.g. 1E30.toLong.
Just a remark: Double and Long values in Scala are the equivalents of the Java primitive types double and long.
The result of Long.MaxValue.toDouble is in reality bigger than Long.MaxValue; the reason is that the value is rounded. The next conversion, i.e. Long.MaxValue.toDouble.toLong, is then clamped back to the value of Long.MaxValue.
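A minimal sketch illustrating both effects (rounding up to the nearest representable Double, then clamping back on the way to Long):
val d = Long.MaxValue.toDouble            // 9.223372036854776E18, i.e. 2^63: Long.MaxValue rounded up
d.toLong == Long.MaxValue                 // true: toLong clamps values >= 2^63 to Long.MaxValue
(Long.MaxValue - 1).toDouble == d         // true: nearby Longs collapse onto the same Double
BigDecimal(d) > BigDecimal(Long.MaxValue) // true: the Double really is larger than Long.MaxValue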
I was wondering whether it would be possible in Scala to define a type like NegativeNumber. This type would represent a negative number, and it would be checked by the compiler just like Ints, Strings etc.
val x: NegativeNumber = -34
val y: NegativeNumber = 34 // should not compile
Likewise:
val s: ContainsHello = "hello world"
val s: ContainsHello = "foo bar" // this should not compile either
I could use these types just like other types, eg:
def myFunc(x: ContainsHello): Unit = println(s"$x contains hello")
These constrained types could be backed by ordinary types (Int, String).
Is it possible to implement these types (maybe with macros)?
How about custom literals?
val neg = -34n //neg is of type NegativeNumber because of the suffix
val pos = 34n // compile error
Unfortunately, no, this isn't something you could easily check at compile time. Well - at least not if you aren't restricting the operations on your type. If your goal is simply to check that a number literal is negative, you could easily write a macro that checks this property. However, I do not see any benefit in proving that a negative literal is indeed negative.
The problem isn't a limitation of Scala - which has a very strong type system - but the fact that (in a reasonably complex program) you can't statically know every possible state. You can however try to overapproximate the set of all possible states.
Let us consider the example of introducing a type NegativeNumber that only ever represents a negative number. For simplicity, we define only one operation: plus.
Say you only allowed addition of NegativeNumbers to each other; then the type system could be used to guarantee that each NegativeNumber is indeed a negative number. But this seems really restrictive, so a useful design would certainly allow us to add at least a NegativeNumber and a general Int.
What if you had an expression val z: NegativeNumber = plus(x, y) where you don't know the values of x and y statically (maybe they are returned by a function)? How do you know (statically) that z is indeed a negative number?
One approach to the problem would be abstract interpretation, which is run on a representation of your program (source code, abstract syntax tree, ...).
For example, you could define a Lattice on the numbers with the following elements:
Top: all numbers
+: all positive numbers
0: the number 0
-: all negative numbers
Bottom: not a number; only introduced so that each pair of elements has a greatest lower bound
with the ordering Top > (+, 0, -) > Bottom.
Then you'd need to define semantics for your operations. Taking the commutative method plus from our example:
plus(Bottom, something) is always Bottom, as you cannot calculate something using invalid numbers
plus(Top, x), x != Bottom is always Top, because adding an arbitrary number to any number is always an arbitrary number
plus(+, +) is +, because adding two positive numbers will always yield a positive number
plus(-, -) is -, because adding two negative numbers will always yield a negative number
plus(0, x), x != Bottom is x, because 0 is the identity of the addition.
The problem is that plus(-, +) will be Top, because you don't know whether the result is a positive number, a negative number, or zero.
So to be statically safe, you'd have to take the conservative approach and disallow such an operation.
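A minimal Scala sketch of this sign domain and the plus transfer function described above (the names are illustrative, not taken from any library):
sealed trait Sign
case object Top    extends Sign // all numbers
case object Plus   extends Sign // all positive numbers
case object Zero   extends Sign // exactly 0
case object Minus  extends Sign // all negative numbers
case object Bottom extends Sign // not a number

def plus(a: Sign, b: Sign): Sign = (a, b) match {
  case (Bottom, _) | (_, Bottom)     => Bottom // invalid inputs stay invalid
  case (Top, _)    | (_, Top)        => Top    // arbitrary + anything = arbitrary
  case (Zero, x)                     => x      // 0 is the identity of addition
  case (x, Zero)                     => x
  case (Plus, Plus)                  => Plus   // pos + pos = pos
  case (Minus, Minus)                => Minus  // neg + neg = neg
  case (Plus, Minus) | (Minus, Plus) => Top    // sign unknown: overapproximate
}

plus(Minus, Minus) // Minus
plus(Minus, Plus)  // Top - statically we cannot prove the sign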
There are more sophisticated numerical domains but ultimately, they all suffer from the same problem: They represent an overapproximation to the actual program state.
I'd say the problem is similar to integer overflow/underflow: Generally, you don't know statically whether an operation exhibits an overflow - you only know this at runtime.
It could be possible if SIP-23 were implemented, using implicit parameters as a form of refinement types. However, it would be of questionable value, as the Scala compiler and type system are not really well equipped for proving interesting things about, for example, integers. For that it would be much nicer to use a language with dependent types (Idris etc.) or refinement types checked by an SMT solver (LiquidHaskell etc.).
The Scaladoc page for Random ( http://www.scala-lang.org/api/current/scala/util/Random.html ) specifies that nextDouble "Returns the next pseudorandom, uniformly distributed double value between 0.0 and 1.0 from this random number generator's sequence."
Pretty glaringly, it leaves out whether the bounds are inclusive. For instance, are the values 0.0 and 1.0 possible? If so, which one (or both)? For instance, if I am dividing by this number, I'd really want to make sure that 0 is not a returned value. In other cases, I might not want 1.0. Quite obviously, this is hard to determine via testing, as there are about 10^15 values between 0.0 and 1.0.
Which is it? Please note, since not everyone can remember which of "(" and "[" means "inclusive", please just say "includes X".
This is due to the fact that scala.util.Random is a very thin wrapper around java.util.Random (see the source), which is documented in sufficient detail:
The general contract of nextDouble is that one double value, chosen (approximately) uniformly from the range 0.0d (inclusive) to 1.0d (exclusive), is pseudorandomly generated and returned.
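So 0.0 can occur but 1.0 cannot. If 0.0 must be avoided (e.g. as a divisor), one common idiom (a sketch, not something the Scala docs prescribe) is to reflect the interval:
import scala.util.Random
val rng = new Random()
val u  = rng.nextDouble()        // u in [0.0, 1.0): 0.0 possible, 1.0 not
val u2 = 1.0 - rng.nextDouble()  // u2 in (0.0, 1.0]: safe to divide by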