Expressing square in Scala

For some reason (that escapes me), the Scala math library does not have a pow function for integers, only for Doubles.
I need a square function for integers and was wondering what the usual way to do this in Scala might be.
object TestX extends App {
  def pow2(v: Int) = v * v

  //class MyRichInt( val v: Int ) {
  //  def ² : Int = v*v // says: "illegal character" for UTF-8 power-of-two
  //}

  println( pow2(42) )
  //println( 42² )
  println( math.pow(42, 2).toInt )
}
I was surprised to see that Scala rejects the '²' character. Maybe it's treated as a digit? Usually all kinds of exotic Unicode characters are valid in identifiers, and using 42² in code would, indeed, be fancy.
Never mind. Should I shut up and just start using my own pow2 function?

Yes, use your own pow2. If you need higher powers, you probably won't have room in an Int anyway. Consider using BigInt.pow:
scala> BigInt(40).pow(40)
res0: scala.math.BigInt = 12089258196146291747061760000000000000000000000000000000000000000
Of course, if you need not N² but 2^N, just use shifts (1 << k equals 2^k). These work with BigInt also.
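To make the shift trick concrete, here is a minimal sketch (the variable names are mine) showing that a left shift by k multiplies by 2^k, for Int and BigInt alike:

```scala
// 1 << k computes 2^k as long as the result fits in an Int (k < 31).
val small = 1 << 10            // 1024, i.e. 2^10

// BigInt supports << too, so large powers of two need no special casing.
val big = BigInt(1) << 100     // 2^100, no overflow

println(small)
println(big)
println(BigInt(40).pow(40))    // arbitrary-precision exponentiation
```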

Use backticks for Unicode characters, and implicit classes (Scala 2.10) to add operations to arbitrary types:
implicit class PowerInt(i: Int) {
  def `²`: Int = i * i
}
Usage:
3 `²`
Result:
9
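For completeness, a self-contained sketch of the same trick (assuming Scala 2.10+ for implicit classes). Dot syntax also works and sidesteps the postfix-operator feature warning:

```scala
// Enrich Int with a backtick-quoted squaring method.
implicit class PowerInt(i: Int) {
  def `²`: Int = i * i
}

val nine = 3.`²`   // dot syntax: no postfix-operator warning
println(nine)      // 9
```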

Related

How can I cast a string to a generic number using Scala?

I'm trying to convert a generic string to a number using Scala:
object h extends App {
  def castTo[T](s: String): T = {
    s.asInstanceOf[T]
  }

  print(castTo[Int]("20"))
  print(castTo[Double]("20.1"))
}
the data:

name | value
-----|--------
a    | "1"
b    | "2.123"
c    | "abd"
the usecase:
Right now I'm exposing to the user a method for each conversion:
getNameAsDouble, getNameAsInteger, and so forth.
I wish to do getName[T] to save lots of code and make it a bit prettier and easier to read the docs.
So, in case a programmer does:
getName[Int], I want the program to print in this case: 1
getName[Double], I want the program to print in this case: 2.123
In C++ I could use dynamic_cast. Is there a way to do so in Scala?
(I also tried to do it in Java but couldn't find a way.)
p.s.
I've tried something like this, but I wondered if there is a more generic way:
def castTo[T](s: String): T = {
  ...
  case T instance of Integer => s.toInt
  case T instance of Long   => s.toLong
  ...
}
I believe it would be better if you could expand more on your use case.
But this should do what you want:
def readAs[T](str: String)(implicit num: Numeric[T]): Option[T] =
  num.parseString(str)
Which you can test like:
readAs[Int]("10")
// res: Option[Int] = Some(10)
readAs[Double]("10")
// res: Option[Double] = Some(10.0)
readAs[Double]("10.0d")
// res: Option[Double] = Some(10.0)
readAs[Int]("10.0d")
// res: Option[Int] = None
readAs[Int]("blah")
// res: Option[Int] = None
Scala is not JavaScript. Scala is a real programming language with types. Strong types, even. So it treats conversions between types as what they really are: conversions, not "casts". The string will have to be parsed into a number, and if you wrap this parsing in a function, it is utterly wrong to call this conversion a "cast".
And no, you cannot cast a string to a number in C++ either. Not with a dynamic cast, nor with any other kind of cast. You also have to parse it in C++, because C++ is also a real programming language.
As for simplifying your pattern matching expression, you might be able to first parse the string into a double, and then use a generic cast to convert that double into a number of lesser precision, but I do not have a Scala compiler at hand to prove the concept.
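To sketch that idea more safely for pre-2.13 Scala (Numeric.parseString only exists since 2.13): a tiny hand-rolled typeclass, here hypothetically named Read, keeps the parsing type-safe with one instance per target type and no casting at all. This assumes Scala 2.12+ for the SAM lambda syntax:

```scala
import scala.util.Try

// Hypothetical `Read` typeclass: one parser per target type.
trait Read[T] { def read(s: String): Option[T] }

object Read {
  implicit val readInt: Read[Int]       = s => Try(s.toInt).toOption
  implicit val readLong: Read[Long]     = s => Try(s.toLong).toOption
  implicit val readDouble: Read[Double] = s => Try(s.toDouble).toOption
}

def readAs[T](s: String)(implicit r: Read[T]): Option[T] = r.read(s)

println(readAs[Int]("1"))        // Some(1)
println(readAs[Double]("2.123")) // Some(2.123)
println(readAs[Int]("abd"))      // None
```

Adding support for a new type is just one more implicit instance, which is what replaces the pattern match on types in the question.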

Scala - simple design by contract

I'm learning Scala as a personal project as I'm fed up with the verbosity of Java. I like a lot of what I see, but wonder if there's a way to efficiently implement some simple contracts on methods. I'm not (necessarily) after full DbC, but is there a way to:
1. Indicate that a parameter or a class field is REQUIRED, i.e. CANNOT be null. The Option thing seems to indicate cleanly if an OPTIONAL value is present, but I want to specify class invariants (x is required) and also to succinctly specify that a parameter is required. I know I can write "if"s throwing some kind of exception, but I want a language feature for this VERY common use-case. I like my interfaces tight; I dislike defensive programming.
2. Is it possible to define succinct and efficient (runtime performance) ranged types, such as "NonNegativeInt"? I want to say that a parameter is >= 0, or within a range. PASCAL had these types and I found them excellent for communicating intent. That is one of the big drawbacks of C, C++, Java, etc. When I say succinct I mean I want to declare a variable of this type as easily as a normal int, not having to new each and every instance on the heap.
For point (1), Option should indeed be enough. This is because while Scala supports null values, it does so mainly for compatibility with Java. Scala code should not contain null values, and where it does they should be constrained to very localized places and converted to an Option as soon as possible (good Scala code will never let null values propagate).
So in idiomatic scala, if a field or parameter is not of type Option this really means that it is required.
Now, there is also the (experimental and never fully supported as far as I can tell) NotNull trait. See How does the NotNull trait work in 2.8 and does anyone actually use it?
For point (2), Scala 2.10 introduces value classes. With them, you could define your very own class that wraps Int without runtime overhead, and implement its operators as you see fit. The only place where you would have a runtime check is when converting from a normal Int to your NonNegativeInt (throwing an exception if the Int is negative). Note that this check is performed every time you create a new NonNegativeInt, which also means every time you perform an operation, so there is a nonzero runtime impact. But Pascal was in the very same situation (range checks are performed at runtime in Pascal), so I guess you're OK with this.
UPDATE: Here is an example implementation of NonNegativeInt (here renamed to UInt):
object UInt {
  def apply( i: Int ): UInt = {
    require( i >= 0 )
    new UInt( i )
  }
}

class UInt private ( val i: Int ) extends AnyVal {
  override def toString = i.toString
  def +( other: UInt ) = UInt( i + other.i )
  def -( other: UInt ) = UInt( i - other.i )
  def *( other: UInt ) = UInt( i * other.i )
  def /( other: UInt ) = UInt( i / other.i )
  def <( other: UInt ) = i < other.i
  // ... and so on
}
and some example usage in the REPL:
scala> UInt(123)
res40: UInt = 123
scala> UInt(123) * UInt(2)
res41: UInt = 246
scala> UInt(5) - UInt(8)
java.lang.IllegalArgumentException: requirement failed
at scala.Predef$.require(Predef.scala:221)
at UInt$.apply(<console>:15)
...
What is this null of which you speak?
Seriously, bar null at the borders of your system, where it comes into contact with code you did not write. At that boundary you make sure all nullable values are converted to Option.
Likewise, don't use exceptions. As with null, bar them at the gate. Turn them into Either or use ScalaZ Validation.
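As a minimal sketch of "barring exceptions at the gate" (the parsePort name is mine, and this assumes Scala 2.12+ for Try.toEither):

```scala
import scala.util.Try

// Wrap an exception-throwing call at the system boundary and
// expose an Either instead of letting the exception propagate.
def parsePort(s: String): Either[String, Int] =
  Try(s.toInt).toEither.left.map(_ => s"not a valid port: $s")

println(parsePort("8080")) // Right(8080)
println(parsePort("oops")) // Left(not a valid port: oops)
```

Callers then handle the failure case explicitly via map/fold/pattern matching, instead of relying on try/catch.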
As for dependent types (where the type interacts with or depends on specific values or subsets of values such as the natural numbers) it's more work. However, Spire has a Natural type. It might not be exactly what you want since it's arbitrary precision but it does impose the non-negative aspect of the natural numbers.
Addendum
Conversion from a nullable value to Option is trivially accommodated by the Scala standard library itself in the form of the Option factory. To wit:
scala> val s1 = "Stringy goodness"
s1: String = Stringy goodness
scala> val s2: String = null
s2: String = null
scala> val os1 = Option(s1)
os1: Option[String] = Some(Stringy goodness)
scala> val os2 = Option(s2)
os2: Option[String] = None
The Scala standard library comes built-in with exactly these kinds of assertion mechanisms: the assert, assume, require, and ensuring methods. The latter two especially allow you to write preconditions and postconditions in a Design-By-Contract style. A simple example of natural number division:
def divide(x: Int, y: Int): Int = {
  require(x > y, s"$x > $y")
  require(y > 0, s"$y > 0")
  x / y
} ensuring (_ * y == x) // note: this postcondition only holds when y divides x evenly
The require calls throw an IllegalArgumentException if the requirements are not met, and show the interpolated string as the exception's message. The ensuring call throws an exception if the given condition doesn't hold.
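require works just as well for the class invariants mentioned in the question, since requires placed in a class body run as part of the constructor. A small sketch with a hypothetical Account class:

```scala
// Class invariants: these requires execute on every instantiation.
class Account(val owner: String, val balance: Long) {
  require(owner != null && owner.nonEmpty, "owner is required")
  require(balance >= 0, "balance must be non-negative")
}

val ok = new Account("alice", 100)
// new Account("", 100) // throws IllegalArgumentException: requirement failed: owner is required
println(ok.owner)
```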
More details at: https://madusudanan.com/blog/scala-tutorials-part-29-design-by-contract/
There's also a tool that does formal verification on a subset of Scala written in this style: https://github.com/epfl-lara/stainless

Why is implicit transformation of numerical types inconsistent between "for/comprehension" expressions compared to(!) assignment operations?

Why is desugaring and implicit transformation of numerical types inconsistent between "for/comprehension" expressions compared to(!) assignment operations?
I'm sure there are many general perspectives on this but I couldn't figure out a concise and logical explanation for the current behavior. [Ref: "Behavior of Scala for/comprehension..."]
For the sake of correctness, all translations below were generated with the Scala compiler ("scalac -Xprint:typer -e").
For example, during implicit numeric assignment transformation the Destination type is dominant:
Source: var l: Long = 0
Result: val l: Long = 0L
Source: var l: Long = 0.toInt
Result: var l: Long = 0.toInt.toLong
During implicit transformation of "for/comprehension" expressions the Source type is dominant:
Source: for (i: Long <- 0 to 1000000000L) { }
Result: 0.to(1000000000L).foreach(((i: Long) => ()))
Source: for (i <- 0L to 1000000000L) { }
Result: scala.this.Predef.longWrapper(0L).to(1000000000L).foreach[Unit](((i: Long) => ()))
There are two completely different things going on. First, assignment:
val l: Long = 0
We have an Int that is being assigned to a Long. That shouldn't be possible, unless there is an implicit conversion from Int to Long, which we can verify like this:
scala> implicitly[Int => Long]
res1: Int => Long = <function1>
Since there is such a conversion, that conversion is applied.
Next, the for-comprehension:
for (i:Long <- 0 to 1000000000L) { }
This doesn't work because the to method called on Int (actually called on scala.runtime.RichInt, through an implicit conversion) only admits an Int argument, not a Long argument.
The to method called on a Long (RichLong) does admit a Long argument, but there are two reasons why that doesn't apply on the expression above:
To get to RichLong's to method, the Int would have to be first converted into a Long, and then into a RichLong, and Scala does not apply two chained implicit conversions, ever. It can only convert Int to RichInt or to Long, not Int to Long to RichLong.
To apply such conversion, there would have to be some indication a Long was required in first place, and there isn't. The i: Long does not refer to the type of 0 to 1000000000L, whereas l: Long does refer to the type of 0 in the first example.
I'm not sure what you mean by "destination type" and "source type", but I don't see any problem.
If you have an Int, you can assign it to a Long. That's fine because the range of Int is a subset of the range of Long. You can't pass a Long when you expect an Int because Long has a larger range and, thus, might hold invalid Int values. And the bit about to methods is answered in your other question.
Going through your cases:
var l:Long = 0 // fine because Int up-casts to Long
var l:Long = 0.toInt // fine because Int up-casts to Long
for (i:Long <- 0 to 1000000000L) { } // bad because...
0 to 1000000000L // bad because RichInt.to doesn't accept a Long argument
for (i <- 0L to 1000000000L) { } // fine because...
0L to 1000000000L // fine because RichLong.to accepts a Long argument, and produces a Range of Longs
The cases in your for examples have nothing to do with the for. It only has to do with the fact that you are calling the to method. An explanation:
0 to 1000000000L is syntactic sugar for 0.to(1000000000L), since to is just a method. When you call to on an Int, the implicit conversion to RichInt happens, so you're really calling (new scala.runtime.RichInt(0)).to(1000000000L). But since RichInt's to method only accepts an Int argument, passing in a Long (i.e., 1000000000L) is illegal, since Long cannot be cast to Int (it might contain values that are out of Int's range).
0L to 1000000000L is syntactic sugar, again, for 0L.to(1000000000L). But now, since the to method is being called on a Long, the implicit conversion is to RichLong: (new scala.runtime.RichLong(0L)).to(1000000000L). And since RichLong's to method accepts a Long as a parameter, everything is fine, because that's what you're giving it.
EDIT based on your comments:
It seems like your confusion here is derived from your belief that = and to should work the same way. They do not, and should not. The assignment operator, =, is a very special keyword in Scala (and any language) whereas to is not a keyword at all -- it's just a method that happens to be found on both RichInt and RichLong.
That said, there is still no inconsistency. Here's why:
Scala allows you to automatically cast up, but not down. The reason for this is very simple: if a B is a kind of A, then a B can be substituted for an A without fear. The opposite is not true.
Let's assume you have two classes:
class A
class B extends A
val a: A = null
val b: B = null
So think about assignments:
val x: A = b // fine, since B is a subtype of A, so b *is* an A
val x: B = a // error, since As aren't necessarily Bs
Now let's look at function calls:
def f(x: A) {} // only accept As
f(b) // fine, because a B *is* an A
def g(x: B) {} // only accept Bs
g(a) // error, because As aren't necessarily Bs
So you can see, for both assignment and function calls, you can substitute a subtype for its supertype. If you think of Int as a kind of Long (with a more limited range), it is perfectly analogous. Now, since methods are pretty much just functions sitting on a class, we would expect the behavior to be the same regarding arguments.
So think about what you are asking for when you say that RichInt.to(Int) should be able to accept a Long. This would mean that Scala would perform some automatic and safe conversion from Long to Int, which doesn't make sense.
FINALLY: if your real issue is that you just think that RichInt should have a method to(Long), then, well, I guess that's something to complain to the language designers about. But they'd probably just tell you to use .toLong and get on with your life.
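For the record, the .toLong fix looks like this; widening the receiver yourself sidesteps the chained-conversion problem entirely (a small sketch with a short range):

```scala
// Explicitly widen the Int receiver: now RichLong.to(Long) applies directly,
// with only a single implicit conversion (Long => RichLong) needed.
val r = 0.toLong to 5L
println(r.toList) // all elements are Longs
```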

How to write binary literals in Scala?

Scala has direct support for using hex and octal numbers:
scala> 01267 + 0100
res1: Int = 759
scala> 0x12AF + 0x100
res2: Int = 5039
but how do you express an integer as a binary number in Scala?
If performance is not an issue, you can use a String and convert it to an integer.
val x = Integer.parseInt("01010101", 2)
Binary numbers aren't supported directly in part because you can easily convert from hexadecimal to binary and vice versa. To make your code clearer, you can put the binary number in a comment.
val x = 0x55 //01010101
In 2.10 you can create a string interpolator for that, e.g. it's possible to write b"0010" to mean 2. Using macros you can get rid of associated runtime overhead and do the conversion at compile-time. Take a look at Jason Zaugg's macrocosm to see it in action:
scala> b"101010"
res4: Int = 42
scala> b"102"
<console>:11: error: exception during macro expansion: invalid binary literal
b"102"
^
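If you don't want the macro dependency, a plain runtime interpolator is only a few lines; the difference is that invalid digits fail at runtime with a NumberFormatException instead of at compile time. A sketch:

```scala
// Runtime (non-macro) version of the b"..." interpolator.
implicit class BinaryInterpolator(sc: StringContext) {
  def b(args: Any*): Int = Integer.parseInt(sc.s(args: _*), 2)
}

println(b"101010") // 42
// b"102" would throw java.lang.NumberFormatException at runtime
```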
Using the new "implicit class" and "value class" mechanisms in 2.10, you can write something like this to add convenience methods without the overhead of object creation:
implicit class IntToBase( val digits: String ) extends AnyVal {
  def base(b: Int) = Integer.parseInt( digits, b )
  def b = base(2)
  def o = base(8)
  def x = base(16)
}
That allows you to do things like
"555".o // 365 decimal
and no IntToBase object is ever actually created.
You would need to be careful if you're converting from an integer that "looks like" binary, as @agilesteel suggests. For example 0101.b would try to convert 65 decimal to binary (the initial 0 signifying octal), whereas 101.b would try to convert 101 decimal to binary. It only really makes sense to convert from a String, for which there is Integer.parseInt, or from a number to its binary String representation, for which there is Integer.toString(x, 2).
I can't think of too many use-cases for programmatic binary literals. That said, they've made it to Java 7 as a number with prefix 0b, so I'd be surprised if they didn't appear in Scala soon. Java seems to have done fine without them for 15 years though.
If you are planning on using it a lot you can simulate the behavior with an implicit conversion.
object Extensions {
  implicit def conversion(x: Int) = new BinaryInt(x)

  class BinaryInt(x: Int) {
    def b = {
      // Conversion code like Integer.parseInt
      // as Kim suggested
    }
  }
}
Now you can do stuff like
import Extensions._
val x = 0101.b
// or
val x = 5.b
You have to decide for yourself, which direction the conversion should go.
If you want to get a string of the binary representation of an Int you can call 4.toBinaryString. Padding is more difficult. You'll have to do something like 4.toBinaryString.reverse.padTo(8, '0').reverse (note the Char '0': padding with the String "0" would produce a Seq[Any] instead of a String).
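A small runnable sketch of that padding, with the Char-vs-String pitfall avoided (the helper names are mine); the String.format variant is one common alternative:

```scala
// Left-pad a binary string to 8 digits.
val viaPadTo  = 4.toBinaryString.reverse.padTo(8, '0').reverse // padTo with a Char keeps it a String
val viaFormat = String.format("%8s", 4.toBinaryString).replace(' ', '0')

println(viaPadTo)  // 00000100
println(viaFormat) // 00000100
```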
def _0b(row: Int): Int = {
  row.toString
    .reverse
    .split("")
    .zipWithIndex
    .map(x => (x._1.toInt, x._2))
    .filter(_._1 == 1)
    .map(x => Math.pow(2, x._2).toInt)
    .sum
}

_0b(10011001) // 153
Though it is limited to small inputs: the "binary" digits themselves have to fit into an Int literal, which allows at most ten digits.

Any reason for having "val capacity : Int" instead of "val Int capacity" in Scala

I am reading about Scala and I am wondering why
val capacity : Int
instead of
val Int capacity.
Is there any reason why this choice was made? If not, it does not seem to me like a good choice to move away from the Java way of declaring it. It would have made the transition from Java to Scala easier (not by much, but a little bit).
Because the majority of the time you can leave off the Int part. Scala has a much neater system of type inference than Java does.
I think I read a statement by Martin Odersky himself somewhere saying that this decision was also made in order to improve readability. This is certainly the case, e.g. compare
val Double number = ...
val Array[(Seq[String], Map[Seq[String], Double])] something = ...
val String pattern = ...
with
val number : Double = ...
val something : Array[(Seq[String], Map[Seq[String], Double])] = ...
val pattern : String = ...
Most of the time you need to find names of references/methods fast (visually), not types.
x : T is the standard notation for types in logic and many programming languages. C and its descendants, with Java among them, deviate from this. But the type notation of C is really awful (try to write down the type of some moderately complicated higher-order function like map).
Also, with this notation it is easy to leave out the type (as Wysawyg has already written), or to add a type inside an expression.
In Programming in Scala it says the technical reason for this syntax is that it makes type inference easier.
Here's an example of Wysawyg's statement:
val capacity = 2
But you typically might not do this with just a val:
trait Foo {
  def capacity = 2 // Allow child classes to override and decide the value later
}

// Force instances of Bar to set the value
class Bar( override val capacity : Int ) extends Foo

// Have Bat determine it on the fly
trait Bat extends Foo {
  def onlyAThird : Int
  override def capacity = onlyAThird * 3
}
(I tried to insert this as a comment, but alas, no formatting.)
I think Daniel thought of something like this:
val a = 7
val b: Int = 8
var x = "Foo"
var y: String = "Bar"
def sum (a: Int, b: Int) = a + b
def mul (a: Int, b: Int): Int = a * b
The type can often be inferred.