In Scala, def defines a function, but I don't understand the code below.
Ex.
def v = 10
What is v's definition? Is v a variable, a function, or something else?
It's a function that always returns 10. In Java, the equivalent would be
public int v() { return 10; }
This might seem pointless, but the difference is real, and sometimes importantly useful. For example, suppose I define a trait like this:
trait Wrench {
  val size = 14 // millimeters, the default, most common size
}
If I need a different size wrench, I can refine the trait:
val bigWrench = new Wrench {
  override val size = 21
}
But what if I want an adjustable wrench?
// mutable! not thread safe!
class AdjustableWrench extends Wrench {
  var adjustment = 0
  override val size = 14 + (3 * adjustment) // oops!
  def adjust(turns: Int): Unit = {
    adjustment += turns
  }
}
This won't work! size will always be 14, because the overriding val is evaluated exactly once, when the object is constructed and adjustment is still 0.
If I had defined my trait originally as
trait Wrench {
  def size = 14 // millimeters, the default, most common size
}
I'd be able to define bigWrench exactly as I did above, because a val can override a def. But now I can write a functional adjustable wrench too:
// mutable! not thread safe!
class AdjustableWrench extends Wrench {
  var adjustment = 0
  override def size = 14 + (3 * adjustment) // this works
  def adjust(turns: Int): Unit = {
    adjustment += turns
  }
}
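A quick usage sketch, for illustration (the numbers follow directly from the definitions above):

val w = new AdjustableWrench
w.size      // 14
w.adjust(2)
w.size      // 20, because size is recomputed on every access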
By originally defining size as a def rather than a val in the base trait, even though it looked dumb, I preserved the flexibility to override it with either a def or a val. It's quite common to define a base trait with very simple defaults, but where implementations might want to do something more complicated. So statements like
def v = 10
are not at all rare.
To get your head around the difference a bit more, compare these two:
def vDef = {
  println("vDef")
  10
}
and
val vVal = {
  println("vVal")
  10
}
Both vDef and vVal will evaluate to 10 whenever you access them. But each time you access vDef, you will see the side effect, a printout of "vDef". No matter how many times you access vVal, you will see "vVal" printed out just once, when the val is initialized.
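A quick sketch of that difference in practice:

val vVal = { println("vVal"); 10 } // prints "vVal" here, exactly once
vVal                               // 10, nothing printed
def vDef = { println("vDef"); 10 } // prints nothing yet
vDef                               // prints "vDef", then yields 10
vDef                               // prints "vDef" again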
v is a function that always returns 10.
Equivalent code in Java would be:
public int v() {
  return 10;
}
Also see the 2nd chapter of the "Programming in Scala" book.
I want to check whether a specific id is contained in an Enumeration, so I wrote this contains function:
object Enum extends Enumeration {
  type Enum = Value
  val A = Value(2, "A")

  def contains(value: Int): Boolean = {
    Enum.values.map(_.id).contains(value)
  }
}
But the time cost is unexpectedly high when the id is a big number, such as one with more than eight digits:
val A = Value(222222222, "A")
Then the contains function costs over 1000 ms per call.
I also noticed that the first call always costs hundreds of milliseconds, whether the id is big or small.
I can't figure out why.
First, let's talk about the cost of Enum.values.
See here: https://github.com/scala/scala/blob/0b47dc2f28c997aed86d6f615da00f48913dd46c/src/library/scala/Enumeration.scala#L83
The implementation is essentially setting up a mutable map. Once it is set up, it is re-used.
The cost for big numbers in your Value comes from the fact that, internally, the Scala library uses a BitSet.
See here: https://github.com/scala/scala/blob/0b47dc2f28c997aed86d6f615da00f48913dd46c/src/library/scala/Enumeration.scala#L245
So for larger numbers, the BitSet will be bigger. That only happens when you call Enum.values.
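To get a rough feel for the numbers: an array-backed bit set whose largest element is 222222222 needs roughly 222222222 / 64 ≈ 3.5 million 64-bit words, i.e. on the order of 28 MB, even if it only holds a handful of ids, and building and traversing it touches all of those words. That is roughly where the hundreds of milliseconds go.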
Depending on your specific use case, you can choose between Enumeration and case objects:
Case objects vs Enumerations in Scala
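For comparison, a minimal case-object sketch (the names MyEnum, A and B are made up here); the lookup goes through an ordinary Set[Int], so a large id value costs nothing extra:

sealed trait MyEnum { def id: Int }

object MyEnum {
  case object A extends MyEnum { val id = 222222222 }
  case object B extends MyEnum { val id = 1 }

  private val ids: Set[Int] = Set[MyEnum](A, B).map(_.id)

  def contains(value: Int): Boolean = ids.contains(value)
}

MyEnum.contains(222222222) // true, in constant time regardless of the id's size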
It sure looks like the mechanics of Enumeration don't handle large ints well in that position. The Scaladocs for the class don't say anything about this, but they don't advertise using Enumeration.Value the way you do either. They say, e.g., val A = Value, where you say val A = Value(2000, "A").
If you want to keep your contains method as you have it, why don't you cache Enum.values.map(_.id)? That is much faster.
object mult extends App {
  object Enum extends Enumeration {
    type Enum = Value
    val A1 = Value(1, "A")
    val A2 = Value(2, "A")
    val A222 = Enum.Value(222222222, "A")

    def contains(value: Int): Boolean = {
      Enum.values.map(_.id).contains(value)
    }

    val cache = Enum.values.map(_.id)

    def contains2(value: Int): Boolean = {
      cache.contains(value)
    }
  }

  def clockit(desc: String, f: => Unit) = {
    val start = System.currentTimeMillis
    f
    val end = System.currentTimeMillis
    println(s"$desc ${end - start}")
  }

  clockit("initialize Enum ", Enum.A1)
  clockit("contains 2 ", Enum.contains(2))
  clockit("contains 222222222 ", Enum.contains(222222222))
  clockit("contains 222222222 ", Enum.contains(222222222))
  clockit("contains2 2 ", Enum.contains2(2))
  clockit("contains2 222222222", Enum.contains2(222222222))
}
Often I have functions like this:
{
  val x = foo
  bar(x)
  x
}
For example, bar is often something like Log.debug.
Is there a shorter, idiomatic way to do this? For example, a built-in function like
def act[A](value: A, f: A => Any): A = { f(value); value }
so that I could write just act(foo, bar _).
I'm not sure if I understood the question correctly, but if I did, then I often use this method, taken from the Spray toolkit:
def make[A, B](obj: A)(f: A => B): A = { f(obj); obj }
Then you can write things like the following:
utils.make(new JobDataMap()) { map =>
  map.put("phone", m.phone)
  map.put("medicine", m.medicine.name)
  map.put("code", utils.genCode)
}
Using your act function as written seems perfectly idiomatic to me. I don't know of a built-in way to do it, but I'd just throw this kind of thing in a "commons" or "utils" project that I use everywhere.
If the bar function is usually the same (e.g. Log.debug) then you could also make a specific wrapper function for that. For instance:
def withDebug[A](prefix: String)(value: A)(implicit logger: Logger): A = {
  logger.debug(prefix + value)
  value
}
which you can then use as follows:
implicit val loggerI = logger

def actExample() {
  // original method
  val c = act(2 + 2, logger.debug)

  // a little cleaner?
  val d = withDebug("The sum is: ") {
    2 + 2
  }
}
Or for even more syntactic sugar:
object Tap {
  implicit def toTap[A](value: A): Tap[A] = new Tap(value)
}

class Tap[A](value: A) {
  def tap(f: A => Any): A = {
    f(value)
    value
  }

  def report(prefix: String)(implicit logger: Logger): A = {
    logger.debug(prefix + value)
    value
  }
}
object TapExample extends Logging {
  import Tap._
  implicit val loggerI = logger

  val c = 2 + 2 tap { x => logger.debug("The sum is: " + x) }
  val d = 2 + 2 report "The sum is: "

  assert(d == 4)
}
Where tap takes an arbitrary function, and report just wraps a logger. Of course you could add whatever other commonly used taps you like to the Tap class.
Note that Scala already includes a syntactically heavyweight version:
foo match { case x => bar(x); x }
but creating the shorter version (tap in Ruby; I'd suggest using the same name) can have advantages.
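(Side note for readers on newer Scala: since 2.13 the standard library ships exactly this pattern in scala.util.chaining, as tap and pipe, so no hand-rolled version is needed there. A minimal sketch:)

import scala.util.chaining._

val d = (2 + 2).tap(n => println("The sum is: " + n)) // prints, then d == 4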
Disclaimer: Before someone says it: yes, I know it's bad style and not encouraged. I'm just doing this to play with Scala and try to learn more about how the type inference system works and how to tweak control flow. I don't intend to use this code in practice.
So: suppose I'm in a rather lengthy function, with lots of successive checks at the beginning which, if they fail, are all supposed to cause the function to return some other value (not throw), and otherwise return the normal value. I cannot use return in the body of a Function. But can I simulate it, a bit like break is simulated in scala.util.control.Breaks?
I have come up with this:
import scala.util.control.ControlThrowable

object TestMain {
  case class EarlyReturnThrowable[T](val thrower: EarlyReturn[T], val value: T) extends ControlThrowable

  class EarlyReturn[T] {
    def earlyReturn(value: T): Nothing = throw new EarlyReturnThrowable[T](this, value)
  }

  def withEarlyReturn[U](work: EarlyReturn[U] => U): U = {
    val myThrower = new EarlyReturn[U]
    try work(myThrower)
    catch {
      case EarlyReturnThrowable(`myThrower`, value) => value.asInstanceOf[U]
    }
  }
  def main(args: Array[String]) {
    val g = withEarlyReturn[Int] { block =>
      if (!someCondition)
        block.earlyReturn(4)

      val foo = precomputeSomething
      if (!someOtherCondition(foo))
        block.earlyReturn(5)

      val bar = normalize(foo)
      if (!checkBar(bar))
        block.earlyReturn(6)

      val baz = bazify(bar)
      if (!baz.isOK)
        block.earlyReturn(7)

      // now the actual, interesting part of the computation happens here
      // and I would like to keep it non-nested as it is here
      foo + bar + baz + 42 // just a dummy here, but in practice this is longer
    }
    println(g)
  }
}
My checks here are obviously dummies, but the main point is that I'd like to avoid something like this, where the actually interesting code ends up being way too nested for my taste:
if (!someCondition) 4 else {
  val foo = precomputeSomething
  if (!someOtherCondition(foo)) 5 else {
    val bar = normalize(foo)
    if (!checkBar(bar)) 6 else {
      val baz = bazify(bar)
      if (!baz.isOK) 7 else {
        // actual computation
        foo + bar + baz + 42
      }
    }
  }
}
My solution works fine here, and I can return early with 4 as the return value if I want. The trouble is that I have to explicitly write the type parameter [Int], which is a bit of a pain. Is there any way I can get around this?
It's a bit unrelated to your main question, but I think a more effective approach to implementing return (one that doesn't require throwing an exception) would involve continuations:
// Note: this relies on the delimited-continuations (CPS) compiler plugin being enabled.
import scala.util.continuations._

def earlyReturn[T](ret: T): Any @cpsParam[Any, Any] = shift((k: Any => Any) => ret)
def withEarlyReturn[T](f: => T @cpsParam[T, T]): T = reset(f)
def cpsunit: Unit @cps[Any] = ()
def compute(bool: Boolean) = {
  val g = withEarlyReturn {
    val a = 1
    if (bool) earlyReturn(4) else cpsunit
    val b = 1
    earlyReturn2(4, bool)
    val c = 1
    if (bool) earlyReturn(4) else cpsunit
    a + b + c + 42
  }
  println(g)
}
The only problem here is that you have to explicitly use cpsunit.
EDIT1: Yes, earlyReturn(4, cond = !checkOK) can be implemented, but it won't be as general and elegant:
def earlyReturn2[T](ret: T, cond: => Boolean): Any @cpsParam[Any, Any] =
  shift((k: Any => Any) => if (cond) ret else k(()))
k in the snippet above represents the rest of the computation. Depending on the value of cond, we either return the value, or continue the computation.
EDIT2: Any chance we might get rid of cpsunit? The problem here is that shift inside the if statement is not allowed without an else branch; the compiler refuses to convert Unit to Unit @cps[Unit].
I think a custom exception is the right instinct here.
Is there any reason Scala does not support the ++ operator to increment primitive types by default?
For example, you cannot write:
var i = 0
i++
My guess is this was omitted because it would only work for mutable variables, and it would not make sense for immutable values. Perhaps it was decided that the ++ operator doesn't scream assignment, so including it may lead to mistakes with regard to whether or not you are mutating the variable.
I feel that something like this is safe to do (on one line):
i++
but this would be a bad practice (in any language):
var x = i++
You don't want to mix assignment statements and side effects/mutation.
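To make the mutability point concrete, a trivial sketch:

var i = 0
i += 1    // fine: i is a var, so reassignment means something
val j = 0
// j += 1 // does not compile: j is a val and can never be reassigned,
//        // so a hypothetical j++ could not mean anything either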
I like Craig's answer, but I think the point has to be more strongly made.
There are no "primitives" -- if Int can do it, then so can a user-made Complex (for example).
Basic usage of ++ would be like this:
var x = 1 // or Complex(1, 0)
x++
How do you implement ++ in class Complex? Assuming that, like Int, the object is immutable, the ++ method needs to return a new object, but that new object has to be assigned.
It would require a new language feature. For instance, let's say we create an assign keyword. The type signature would need to change as well, to indicate that ++ is not returning a Complex but assigning it to whatever field is holding the present object. In the Scala spirit of not intruding on the programmer's namespace, let's say we do that by prefixing the type with #.
Then it could be like this:
case class Complex(real: Double = 0, imaginary: Double = 0) {
  def ++ : #Complex = {
    assign copy(real = real + 1)
    // instead of return copy(real = real + 1)
  }
}
The next problem is that postfix operators interact badly with Scala's parsing rules. For instance:
def inc(x: Int) = {
  x++
  x
}
Because of Scala rules, that is the same thing as:
def inc(x: Int) = { x ++ x }
Which wasn't the intent. Now, Scala privileges a flowing style: obj method param method param method param .... That mixes the traditional C++/Java syntax of object method parameter well with the functional-programming concept of pipelining an input through multiple functions to get the end result. This style has recently been called "fluent interfaces" as well.
The problem is that, by privileging that style, it cripples postfix operators (and prefix ones, but Scala barely has them anyway). So, in the end, Scala would have to make big changes, and it wouldn't be able to measure up to the elegance of C/Java's increment and decrement operators anyway, unless it really departed from the kind of thing it does support.
In Scala, ++ is a valid method, and no method implies assignment. Only = can do that.
A longer answer is that languages like C++ and Java treat ++ specially, and Scala treats = specially, and in an inconsistent way.
In Scala, when you write i += 1 the compiler first looks for a method called += on the Int. It's not there, so next it does its magic on = and tries to compile the line as if it read i = i + 1. If you write i++ then Scala will call the method ++ on i and assign the result to... nothing, because only = means assignment. You could write i ++= 1 but that kind of defeats the purpose.
The fact that Scala supports method names like += is already controversial, and some people think it's operator overloading. They could have added special behavior for ++, but then it would no longer be a valid method name (like =) and it would be one more thing to remember.
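A sketch of the rewriting described above:

var i = 0
i += 1     // Int has no method named +=, so the compiler rewrites this as: i = i + 1
println(i) // 1
// i++ would just call a method named ++ on i (if one existed) and assign the result
// to... nothing, because only = (and the op= rewriting above) performs assignment.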
I think the reasoning is in part that i += 1 is only one character longer than i++, and that ++ is used pretty heavily in the collections code for concatenation. So it keeps the code cleaner.
Also, Scala encourages immutable variables, and ++ is intrinsically a mutating operation. If you require +=, at least you can force all your mutations to go through a common assignment procedure (e.g. def a_=).
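A small sketch of that last idea; Counter and its invariant are made up here, the point is only that c.a += 1 expands to c.a_=(c.a + 1), so every mutation passes through the custom setter:

class Counter {
  private var _a = 0
  def a: Int = _a
  def a_=(value: Int): Unit = {
    require(value >= 0, "counter must stay non-negative") // one checkpoint for all mutation
    _a = value
  }
}

val c = new Counter
c.a += 1 // desugars to c.a_=(c.a + 1)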
The primary reason is that there is not the same need for it in Scala as there is in C. In C you are constantly writing:
for (i = 0; i < 10; i++) {
  // Do stuff
}
C++ has added higher-level methods for avoiding explicit loops, but Scala has gone much further, providing foreach, map, flatMap, foldLeft, etc. Even if you actually want to operate on a sequence of integers rather than just cycling through a collection of non-integer objects, you can use a Scala range:
(1 to 5) map (_ * 3) // Vector(3, 6, 9, 12, 15)
(1 to 10 by 3) map (_ + 5) // Vector(6, 9, 12, 15)
Because the ++ operator is used by the collection library, I feel it's better to avoid its use in non-collection classes. I used to have ++ as a value-returning method in the package object of my Util package, like so:
implicit class RichInt2(n: Int) {
  def isOdd: Boolean = if (n % 2 == 1) true else false
  def isEven: Boolean = if (n % 2 == 0) true else false
  def ++ : Int = n + 1
  def -- : Int = n - 1
}
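Usage looked like this (the results follow from the definitions above):

val n = 5
n.isOdd  // true
n.isEven // false
n.++     // 6
n.--     // 4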
But I removed it. Most of the time when I have used ++ or + 1 on an integer, I have later found a better way that doesn't require it.
It is possible if you define your own class that simulates the desired behavior; however, it may be a pain if you want to use normal Int methods as well, since you would always have to use *().
import scala.language.postfixOps // otherwise it will throw a warning when trying to do num++

/*
 * my custom int class which can do ++ and --
 */
class int(value: Int) {
  var mValue = value

  // Post-increment
  def ++(): int = {
    val toReturn = new int(mValue)
    mValue += 1
    return toReturn
  }

  // Post-decrement
  def --(): int = {
    val toReturn = new int(mValue)
    mValue -= 1
    return toReturn
  }

  // a readable toString
  override def toString(): String = {
    return mValue.toString
  }
}

// Pre-increment
def ++(n: int): int = {
  n.mValue += 1
  return n
}

// Pre-decrement
def --(n: int): int = {
  n.mValue -= 1
  return n
}

// Something to get a normal Int back
def *(n: int): Int = {
  return n.mValue
}
Some possible test cases
scala> var num = new int(4)
num: int = 4

scala> num++
res0: int = 4

scala> num
res1: int = 5 // it works although scala always makes new resources

scala> ++(num) // parentheses are required
res2: int = 6

scala> num
res3: int = 6

scala> ++(num)++ // complex function
res4: int = 7

scala> num
res5: int = 8

scala> *(num) + *(num) // testing operator_*
res6: Int = 16
Of course you can have that in Scala, if you really want:
import scalaz._
import Scalaz._

case class IncLens[S, N](lens: Lens[S, N], num: Numeric[N]) {
  def ++ = lens.mods(num.plus(_, num.one))
}

implicit def incLens[S, N: Numeric](lens: Lens[S, N]) =
  IncLens[S, N](lens, implicitly[Numeric[N]])

val i = Lens[Int, Int](identity, (x, y) => y)

val imperativeProgram = for {
  _ <- i := 0;
  _ <- i++;
  _ <- i++;
  x <- i++
} yield x

def runProgram = imperativeProgram ! 0
And here you go:
scala> runProgram
runProgram: Int = 3
It isn't included because the Scala developers thought it would make the specification more complex while achieving only negligible benefits, and because Scala doesn't have operators at all.
You could write your own like this:
class PlusPlusInt(i: Int) {
  def ++ = i + 1
}

implicit def int2PlusPlusInt(i: Int) = new PlusPlusInt(i)

val a = 5++
// a is 6
But I'm sure you will get into some trouble with precedence not working as you expect. Additionally, if i++ were added, people would ask for ++i too, which doesn't really fit into Scala's syntax.
Let's define a var:
var i = 0
++i is already short enough:
{ i += 1; i }
Now i++ can look like this:
i(i+=1)
To use the above syntax, define the following somewhere inside a package object, and then import it:
class IntPostOp(val i: Int) { def apply(op: Unit) = { op; i } }
implicit def int2IntPostOp(i: Int): IntPostOp = new IntPostOp(i)
Operator chaining is also possible:
i(i+=1)(i%=array.size)(i&=3)
The above example is similar to this Java (C++?) code:
i = (i = i++ % array.length) & 3;
Whether this style is a good idea is, of course, a matter of taste.
I'm trying to build some image-algebra code that can work with images (basically a linear pixel buffer plus dimensions) that have different types for the pixel. To make this work, I've defined a parametrized Pixel trait with a few methods that should be usable with any Pixel subclass. (For now, I'm only interested in operations that work on the same Pixel type.) Here it is:
trait Pixel[T <: Pixel[T]] {
  def div(v: Double): T
  def div(v: T): T
}
Now I define a single Pixel type whose storage is based on three doubles (basically RGB 0.0-1.0); I've called it TripleDoublePixel:
class TripleDoublePixel(v: Array[Double]) extends Pixel[TripleDoublePixel] {
  var data: Array[Double] = v

  def this() = this(Array(0.0, 0.0, 0.0))

  def div(v: Double): TripleDoublePixel = {
    new TripleDoublePixel(data.map(x => x / v))
  }

  def div(v: TripleDoublePixel): TripleDoublePixel = {
    var tmp = new Array[Double](3)
    tmp(0) = data(0) / v.data(0)
    tmp(1) = data(1) / v.data(1)
    tmp(2) = data(2) / v.data(2)
    new TripleDoublePixel(tmp)
  }
}
Then we define an Image using Pixels:
class Image[T](nsize: Array[Int], ndata: Array[T]) {
  val size: Array[Int] = nsize
  val data: Array[T] = ndata

  def this(isize: Array[Int]) {
    this(isize, new Array[T](isize(0) * isize(1)))
  }

  def this(img: Image[T]) {
    this(img.size, new Array[T](img.size(0) * img.size(1)))
    for (i <- 0 until img.data.size) {
      data(i) = img.data(i)
    }
  }
}
(I think I should be able to do without the explicit declaration of size and data, and use just what has been named in the default constructor, but I haven't gotten that to work.)
Now I want to write code to use this, that doesn't have to know what type the pixels are. For example:
def idiv[T](a: Image[T], b: Image[T]) {
  for (i <- 0 until a.data.size) {
    a.data(i) = a.data(i).div(b.data(i))
  }
}
Unfortunately, this doesn't compile:
(fragment of lindet-gen.scala):145: error: value div is not a member of T
       a.data(i) = a.data(i).div(b.data(i))
I was told in #scala that this worked for someone else, but that was on 2.8. I've tried to get 2.8-rc1 going, but the RC1 doesn't compile for me. Is there any way to get this to work in 2.7.7?
Your idiv function has to know it'll actually be working with pixels.
def idiv[T <: Pixel[T]](a: Image[T], b: Image[T]) {
  for (i <- 0 until a.data.size) {
    a.data(i) = a.data(i).div(b.data(i))
  }
}
A plain type parameter T would define the function for all possible types T, which of course don't all support a div operation. So you'll have to put a generic constraint limiting the possible types to Pixels.
(Note that you could put this constraint on the Image class as well, assuming that an image made of something other than pixels doesn't make sense.)
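For completeness, a hypothetical usage sketch of the constrained idiv, reusing the TripleDoublePixel and Image classes from the question (the pixel values here are made up):

val size = Array(2, 1)
val a = new Image[TripleDoublePixel](size, Array(
  new TripleDoublePixel(Array(1.0, 2.0, 3.0)),
  new TripleDoublePixel(Array(4.0, 5.0, 6.0))))
val b = new Image[TripleDoublePixel](size, Array(
  new TripleDoublePixel(Array(1.0, 1.0, 1.0)),
  new TripleDoublePixel(Array(2.0, 2.0, 2.0))))

idiv(a, b) // a's pixels are now (1.0, 2.0, 3.0) and (2.0, 2.5, 3.0)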