In my console, when I try to check (0xFFFF.toShort).toBinaryString, it returns
(0xFFFF.toShort).toBinaryString
res1: String = 11111111111111111111111111111111
Shouldn't it return 1111111111111111, as in 16 1s? (16bits)
How do I fix this?
Thanks!
The bug is https://github.com/scala/bug/issues/10216: the extension methods are only defined for Int.
The workaround is to supply them yourself for Byte and Short, as shown here: https://github.com/scala/scala/pull/8383/files
For example
scala> implicit class MyRichByte(val b: Byte) extends AnyVal {
| def toBinaryString: String = java.lang.Integer.toBinaryString(java.lang.Byte.toUnsignedInt(b))
| }
class MyRichByte
scala> (0xFFFF.toByte).toBinaryString
val res4: String = 11111111
In the REPL, append //print and hit tab (tab completion) to see the desugaring:
scala> (0xFFFF.toShort).toBinaryString //print
scala.Predef.intWrapper(65535.toShort.toInt).toBinaryString // : String
Or use Ammonite's desugar:
@ desugar((0xFFFF.toShort).toBinaryString)
res5: Desugared = scala.Predef.intWrapper(65535.toShort.toInt).toBinaryString
toBinaryString is a method of Int, and there is an implicit conversion from Short to Int (intWrapper).
The toShort converts 65535 to -1, that -1 is in turn widened to the Int -1, and that is how we get the 32 ones.
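Following the same pattern as the Byte example above, a sketch of the corresponding workaround for Short (using java.lang.Short.toUnsignedInt, also available since Java 8; the class name MyRichShort is just illustrative):

```scala
implicit class MyRichShort(val s: Short) extends AnyVal {
  // widen without sign extension, so only the low 16 bits are printed
  def toBinaryString: String =
    java.lang.Integer.toBinaryString(java.lang.Short.toUnsignedInt(s))
}

println(new MyRichShort(0xFFFF.toShort).toBinaryString) // 1111111111111111
```

With the implicit class in scope, (0xFFFF.toShort).toBinaryString resolves to it instead of going through intWrapper.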
scala> val blank_line_accumulator = sc.accumulator(0,"Blank Lines")
blank_line_accumulator: org.apache.spark.Accumulator[Int] = 0
val input_file2 = sc.textFile("file:///home/cloudera/input2.txt").foreach{x=>if(x.length()==0)blank_line_accumulator +=1}
input_file2: Unit = ()
scala> input_file2.value
<console>:40: error: value value is not a member of Unit
       input_file2.value
This is my problem while accessing the Value.
I get no error accessing the value; it worked like a charm. Maybe you are making some simple mistake somewhere else. Take a fresh spark-shell and try again:
scala> blank_line_accumulator.value
res3: Int = 3
To debug this, try the below; it should give Class[Int] = int:
scala> blank_line_accumulator.value.getClass
res4: Class[Int] = int
and to debug further, try
scala> blank_line_accumulator.getClass
which should give the below...
res6: Class[_ <: org.apache.spark.Accumulator[Int]] = class
org.apache.spark.Accumulator
foreach doesn't return any useful value; its result is Unit (which you can see in the type: input_file2: Unit = ()). Unit doesn't have a value member, so there's nothing to access. You probably meant blank_line_accumulator.value, as Ram Ghadiyaram's answer shows.
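The same Unit-returning behavior of foreach can be seen with plain Scala collections, without Spark (the names here are only illustrative):

```scala
val lines = List("a", "", "b", "")
var blankLines = 0

// foreach runs purely for its side effect; its result type is Unit
val result: Unit = lines.foreach(x => if (x.isEmpty) blankLines += 1)

println(blankLines) // 2
// result.value would not compile: Unit has no such member
```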
Suppose I am trying to "abstract over execution":
import scala.language.higherKinds
class Operator[W[_]]( f : Int => W[Int] ) {
def operate( i : Int ) : W[Int] = f(i)
}
Now I can define an Operator[Future] or Operator[Task] etc. For example...
import scala.concurrent.{ExecutionContext,Future}
def futureSquared( i : Int ) = Future( i * i )( ExecutionContext.global )
In REPL-style...
scala> val fop = new Operator( futureSquared )
fop: Operator[scala.concurrent.Future] = Operator@105c54cb
scala> fop.operate(4)
res0: scala.concurrent.Future[Int] = Future(<not completed>)
scala> res0
res1: scala.concurrent.Future[Int] = Future(Success(16))
Hooray!
But I also might want a straightforward synchronous version, so I define somewhere
type Identity[T] = T
And I can define a synchronous operator...
scala> def square( i : Int ) : Identity[Int] = i * i
square: (i: Int)Identity[Int]
scala> val sop = new Operator( square )
sop: Operator[Identity] = Operator@18f2960b
scala> sop.operate(9)
res2: Identity[Int] = 81
Sweet.
But, it's awkward that the inferred type of the result is Identity[Int], rather than the simpler, straightforward Int. Of course the two types are really the same, and so are identical in every way. But I'd like clients of my library who don't know anything about this abstracting-over-execution stuff not to be confused.
I could write a wrapper by hand...
class SimpleOperator( inner : Operator[Identity] ) extends Operator[Identity]( inner.operate ) {
override def operate( i : Int ) : Int = super.operate(i)
}
which does work...
scala> val simple = new SimpleOperator( sop )
simple: SimpleOperator = SimpleOperator@345c744e
scala> simple.operate(7)
res3: Int = 49
But this feels very boiler-platey, especially if my abstracted-over-execution class has lots of methods rather than just one. And I'd have to remember to keep the wrapper in sync as the generic class evolves.
Is there some more generic, maintainable way to get a version of Operator[Identity] that makes the containing type disappear from the type inference and API docs?
This is more of a long comment than an answer...
But, it's awkward that the inferred type of the result is Identity[Int], rather than the simpler, straightforward Int. Of course the two apparent types are really the same, and so are identical in every way. But I'd like clients of my library who don't know anything about this abstracting-over-execution stuff not to be confused.
This sounds like you want to convert Identity[T] back to T... Have you considered type ascription?
scala> def f[T](t: T): Identity[T] = t
scala> f(3)
// res11: Identity[Int] = 3
scala> f(3): Int
// res12: Int = 3
// So in your case
scala> sop.operate(9): Int
// res18: Int = 81
As Steve Waldman suggested in the comments, given type Identity[T] = T, the types T and Identity[T] really are identical without any ceremony, substitutable and transparent at call sites or anywhere else. For example, the following works fine out-of-the-box:
sop.operate(9) // res2: cats.Id[Int] = 81
def foo(i: Int) = i
foo(sop.operate(9)) // res3: Int = 81
extract from Cats is the dual of pure and extracts the value from its context, so perhaps we could provide similar methods for users not familiar with the above equivalence (like myself, if you see my previous edit).
This can be done by providing the types explicitly, but it still looks magical to external users investigating the method signature:
type Identity[T] = T
def square( i : Int ):Int = i * i
class Operator[W[_], T <: W[Int] ]( f : Int => T ) {
def operate(i : Int):T = f(i)
}
val op = new Operator[Identity,Int](square)
op.operate(5)
//res0: Int = 25
Works for new Operator[Future,Future[Int]] as well.
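To check that claim, a minimal sketch of the same two-parameter Operator used with Future (blocking with Await only for demonstration):

```scala
import scala.concurrent.{Await, ExecutionContext, Future}
import scala.concurrent.duration._

class Operator[W[_], T <: W[Int]](f: Int => T) {
  def operate(i: Int): T = f(i)
}

implicit val ec: ExecutionContext = ExecutionContext.global

val fop = new Operator[Future, Future[Int]](i => Future(i * i))
val result: Future[Int] = fop.operate(4) // inferred as Future[Int], not W[Int]

println(Await.result(result, 1.second)) // 16
```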
I am representing a data object as an Iterator[Byte], which is created from an InputStream instance.
The problem lies in that Byte is a signed integer from -128 to 127, while the read method in InputStream returns an unsigned integer from 0 to 255. This is particularly problematic since, by its semantics, -1 denotes the end of the input stream.
What is the best way to alleviate the incompatibility between these two types? Is there an elegant way of converting from one to the other? Or should I just use Int instead of Byte, even though it feels less elegant?
def toByteIterator(in: InputStream): Iterator[Byte] = {
Iterator.continually(in.read).takeWhile(-1 !=).map { elem =>
convert // need to convert unsigned int to Byte here
}
}
def toInputStream(_it: Iterator[Byte]): InputStream = {
new InputStream {
val (it, _) = _it.duplicate
override def read(): Int = {
if (it.hasNext) it.next() // need to convert Byte to unsigned int
else -1
}
}
}
Yes, you can convert Byte to Int and vice versa easily.
First, an Int can be converted to a Byte with just toByte:
scala> 128.toByte
res0: Byte = -128
scala> 129.toByte
res1: Byte = -127
scala> 255.toByte
res2: Byte = -1
so your elem => convert could be just _.toByte.
Second, a signed byte can be converted to an unsigned int with a handy function in java.lang.Byte, called toUnsignedInt:
scala> java.lang.Byte.toUnsignedInt(-1)
res1: Int = 255
scala> java.lang.Byte.toUnsignedInt(-127)
res2: Int = 129
scala> java.lang.Byte.toUnsignedInt(-128)
res3: Int = 128
so you can write java.lang.Byte.toUnsignedInt(it.next()) in your second piece of code.
However, the last method is only available since Java 8. I don't know of alternatives in older versions of Java, but its actual implementation is astonishingly simple:
public static int toUnsignedInt(byte x) {
return ((int) x) & 0xff;
}
so all you need is just to write
it.next().toInt & 0xff
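Putting both conversions into the original skeleton, a sketch of the completed code (simplified to drop the duplicate call, which only guarded against re-reading the iterator):

```scala
import java.io.InputStream

def toByteIterator(in: InputStream): Iterator[Byte] =
  Iterator.continually(in.read).takeWhile(_ != -1).map(_.toByte)

def toInputStream(it: Iterator[Byte]): InputStream = new InputStream {
  override def read(): Int =
    if (it.hasNext) it.next().toInt & 0xff // same as java.lang.Byte.toUnsignedInt
    else -1
}

// round trip: negative bytes survive, since 255 maps back to -1 via .toByte
val bytes = List[Byte](-1, 0, 127)
val back  = toByteIterator(toInputStream(bytes.iterator)).toList
println(back) // List(-1, 0, 127)
```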
Unfortunately this is related to an awkward design choice in the InputStream class: if you use read(), you will have that problem. You should use read(byte[]) instead.
But, as you say, you could also use Int. That is up to you.
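For completeness, a small sketch of the read(byte[]) overload: the bytes land in the buffer unchanged, so no sign conversion is needed, and -1 is reserved for the return value signalling end-of-stream:

```scala
import java.io.ByteArrayInputStream

val in  = new ByteArrayInputStream(Array[Byte](1, -1, 127))
val buf = new Array[Byte](8)

val n = in.read(buf)        // fills buf, returns the number of bytes read
println(n)                  // 3
println(buf.take(3).toList) // List(1, -1, 127)
println(in.read(buf))       // -1: end of stream
```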
How can I override toString to make this Scala code act like the following Java code?
Code in Scala
object BIT extends Enumeration {
type BIT = Value
val ZERO, ONE, ANY = Value
override def toString() =
this match {
case ANY => "x "
case ZERO=> "0 "
case ONE => "1 "
}
}
val b = ONE
println(b) // prints ONE
The desired toString behaviour should produce the same output as the following Java code.
public enum BIT {
ZERO, ONE, ANY;
/** print BIT as 0,1, and X */
public String toString() {
switch (this) {
case ZERO:
return "0 ";
case ONE:
return "1 ";
default://ANY
return "X ";
}
}
}
BIT b = ONE;
System.out.println(b); // returns 1
I think I am overriding the wrong "toString" method.
First, yes, you are overriding the wrong toString method: you're overriding it on the BIT object itself, not on the individual values, which is not very useful.
Second, you can do this much more easily by simply writing
object BIT extends Enumeration {
type BIT = Value
val ZERO = Value("0")
val ONE = Value("1")
val ANY = Value("x")
}
Then you can do
println(BIT.ONE) //prints "1"
If you want to set the value and the string you can do it like this:
scala> object BIT extends Enumeration {
| type BIT = Value
| val ZERO = Value(0, "0")
| val ONE = Value(1, "1")
| val ANY = Value("x")
| }
defined module BIT
scala> BIT.ZERO.toString
res2: String = 0
scala> BIT.ZERO.id
res3: Int = 0
scala> BIT.ANY.id
res4: Int = 2
scala> BIT.ANY.toString
res5: String = x
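Pulling the REPL session above into one self-contained snippet (withName added as an extra round-trip check):

```scala
object BIT extends Enumeration {
  type BIT = Value
  val ZERO = Value(0, "0")
  val ONE  = Value(1, "1")
  val ANY  = Value("x")   // id continues from the previous value, so 2
}

println(BIT.ONE)                      // 1
println(BIT.ANY.id)                   // 2
println(BIT.withName("x") == BIT.ANY) // true
```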
"%.3f".format(1) returns 1.000.
"%.3f".format(4.0/3.0) returns 1.333.
Is there some easy way to have these return 1 and 1.333? I thought the standard printf format specified that precision as the maximum already, but apparently not in Scala.
The default printf-style formatter seems to be a generic one that doesn't have all the same support as DecimalFormat. You can instantiate a custom formatter along these lines:
scala> import java.text.DecimalFormat
import java.text.DecimalFormat
scala> val formatter = new DecimalFormat("#.###")
formatter: java.text.DecimalFormat = java.text.DecimalFormat#674dc
scala> formatter.format(1)
res36: java.lang.String = 1
scala> formatter.format(1.34)
res37: java.lang.String = 1.34
scala> formatter.format(4.toFloat / 3)
res38: java.lang.String = 1.333
scala> formatter.format(1.toFloat)
res39: java.lang.String = 1
See: http://docs.oracle.com/javase/tutorial/java/data/numberformat.html for more information.
"%.3f".format(1) will throw a java.util.IllegalFormatConversionException because of the wrong argument type (a floating-point number is expected but an Int is given).
Even if you use "%.3f".format(1.0), you will get 1.000.
You can use a method like the following to obtain the expected result :
def format(x:AnyVal):String = x match {
case x:Int => "%d".format(x)
case x:Long => "%d".format(x)
case x:Float => "%.3f".format(x)
case x:Double => "%.3f".format(x)
case _ => ""
}
This method returns the expected format based on the argument's type.
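For example, with the method above in scope (the decimal output assumes a locale that uses a period as the decimal separator):

```scala
def format(x: AnyVal): String = x match {
  case i: Int    => "%d".format(i)
  case l: Long   => "%d".format(l)
  case f: Float  => "%.3f".format(f)
  case d: Double => "%.3f".format(d)
  case _         => ""
}

println(format(1))         // 1
println(format(4.0 / 3.0)) // 1.333
```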
How about simply getting rid of the zeroes after formatting?
scala> Array(1.0,1.10,1.110).map("%.3g" format _).map(_.replaceAll("[.0]*$",""))
res7: Array[java.lang.String] = Array(1, 1.1, 1.11)