"%.3f".format(1) returns 1.000.
"%.3f".format(4.0/3.0) returns 1.333.
Is there some easy way to have these return 1 and 1.333? I thought the standard printf format specified that precision as the maximum already, but apparently not in Scala.
The default formatter used by printf/format seems to be a generic one that doesn't have all the same support as DecimalFormat. You can instantiate a custom formatter along these lines:
scala> import java.text.DecimalFormat
import java.text.DecimalFormat
scala> val formatter = new DecimalFormat("#.###")
formatter: java.text.DecimalFormat = java.text.DecimalFormat#674dc
scala> formatter.format(1)
res36: java.lang.String = 1
scala> formatter.format(1.34)
res37: java.lang.String = 1.34
scala> formatter.format(4.toFloat / 3)
res38: java.lang.String = 1.333
scala> formatter.format(1.toFloat)
res39: java.lang.String = 1
See: http://docs.oracle.com/javase/tutorial/java/data/numberformat.html for more information.
"%.3f".format(1) will throw an java.util.IllegalFormatConversionException because of the wrong type (Float is expected and you give a Int).
Even if you use "%.3f".format(1.0), you will get 1.000.
You can use a method like the following to obtain the expected result:
def format(x: AnyVal): String = x match {
  case x: Int    => "%d".format(x)
  case x: Long   => "%d".format(x)
  case x: Float  => "%.3f".format(x)
  case x: Double => "%.3f".format(x)
  case _         => ""
}
This method returns the expected format based on the argument's type.
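For example, a quick check (assuming the format method above is in scope):

format(1)         // "1"
format(4.0 / 3.0) // "1.333"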
How about simply getting rid of the zeroes after formatting?
scala> Array(1.0,1.10,1.110).map("%.3g" format _).map(_.replaceAll("[.0]*$",""))
res7: Array[java.lang.String] = Array(1, 1.1, 1.11)
In my console, when I try to check (0xFFFF.toShort).toBinaryString, it returns
(0xFFFF.toShort).toBinaryString
res1: String = 11111111111111111111111111111111
Shouldn't it return 1111111111111111, as in 16 1s? (16bits)
How do I fix this?
Thanks
The bug is https://github.com/scala/bug/issues/10216: the extension methods are only defined for Int.
The workaround is to supply them yourself for Byte and Short, as shown here: https://github.com/scala/scala/pull/8383/files
For example
scala> implicit class MyRichByte(val b: Byte) extends AnyVal {
| def toBinaryString: String = java.lang.Integer.toBinaryString(java.lang.Byte.toUnsignedInt(b))
| }
class MyRichByte
scala> (0xFFFF.toByte).toBinaryString
val res4: String = 11111111
In the REPL, use //print followed by Tab (for tab completion) to see the desugared expression:
scala> (0xFFFF.toShort).toBinaryString //print
scala.Predef.intWrapper(65535.toShort.toInt).toBinaryString // : String
Or use Ammonite's desugar:
# desugar((0xFFFF.toShort).toBinaryString)
res5: Desugared = scala.Predef.intWrapper(65535.toShort.toInt).toBinaryString
toBinaryString is a method on Int, and there is an implicit conversion from Short to Int (intWrapper).
The toShort converts 0xFFFF to -1, which is in turn converted to the Int -1, and that is how we get the 32 ones.
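Following the same pattern as the Byte wrapper above, a sketch for Short (the name MyRichShort is my own) gives the expected 16 bits:

implicit class MyRichShort(val s: Short) extends AnyVal {
  // widen without sign extension, then render the 16 low bits
  def toBinaryString: String = java.lang.Integer.toBinaryString(java.lang.Short.toUnsignedInt(s))
}

(0xFFFF.toShort).toBinaryString // "1111111111111111" (16 ones)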
This piece of code works fine and returns 343423 as expected:
import scala.collection.mutable.ListBuffer

val longList: ListBuffer[Long] = ListBuffer(103948, 343423, 209754)
val maxLong = longList.max
But it doesn't work for Some[Long]:
val longSomeList: ListBuffer[Some[Long]] = ListBuffer(Some(103948),Some(343423),Some(209754))
val maxSomeLong = longSomeList.max
Error: No implicit Ordering defined for Some[Long].
val maxSomeLong = longSomeList.max
Is there any simple solution to get the max of the second list?
(The max function comes from TraversableForwarder in scala.collection.generic.)
In which real world scenario would you have a ListBuffer[Some[Long]]? You could just as well have a ListBuffer[Long] then.
This works, because the standard library provides an implicit Ordering[Option[T]] (for any T that has an Ordering) but not an Ordering[Some[T]]:
val longSomeList: ListBuffer[Option[Long]] = ListBuffer(Some(103948),Some(343423),Some(209754))
val maxSomeLong = longSomeList.max
You are looking for .flatten.
longSomeList.flatten.max
Or give it the ordering to use explicitly:
longSomeList
  .max(Ordering.by[Option[Long], Long](_.getOrElse(Long.MinValue)))
Also, don't use mutable collections.
longSomeList.collect { case Some(n) => n }.max
The problem is that you are trying to order elements of type Some[Long], and no ordering is defined for that type. The compiler does not know how to order these:
scala> Some(1) < Some(2)
<console>:8: error: value < is not a member of Some[Int]
Some(1) < Some(2)
^
What you can do is either unwrap the Somes to get the Longs
longSomeList.flatten.max
or define an implicit ordering for it:
implicit object Ord extends Ordering[Some[Long]] {
  def compare(a: Some[Long], b: Some[Long]) = a.getOrElse(Long.MinValue) compare b.getOrElse(Long.MinValue)
}
and then:
scala> longSomeList.max
res12: Some[Long] = Some(343423)
In a program, I would like to limit the range of a set of Ints to 5 through 15.
Is there a way to define a type which allows this?
An example of how I would like to use this:
// Define type GoodX as a range from 5 to 15
class Foo(val x: GoodX)
{
//blah blah
}
I would also like to preserve the "Int-iness" of GoodX.
val base: GoodX = 5
val f = Foo(base + 4)
Take a look at https://github.com/fthomas/refined. It allows you to refine (constrain) existing types at the type level, e.g. positive integers, which still have a subtype relationship with integers.
The syntax is a bit verbose, and it will box primitives (see below for details). But other than that it does exactly what you want.
Here is a short demo. Define a refinement and a method using a refined type:
import eu.timepit.refined._
import eu.timepit.refined.api.Refined
import eu.timepit.refined.auto._
import eu.timepit.refined.numeric._
type FiveToFifteen = GreaterEqual[W.`5`.T] And Less[W.`15`.T]
type IntFiveToFifteen = Int Refined FiveToFifteen
def sum(a: IntFiveToFifteen, b: IntFiveToFifteen): Int = a + b
Use it with constants (note the good compile error messages):
scala> sum(5,5)
res6: Int = 10
scala> sum(0,10)
<console>:60: error: Left predicate of (!(0 < 5) && (0 < 15)) failed: Predicate (0 < 5) did not fail.
sum(0,10)
^
scala> sum(5,20)
<console>:60: error: Right predicate of (!(20 < 5) && (20 < 15)) failed: Predicate failed: (20 < 15).
sum(5,20)
^
When you have variables, you do not know at compile time whether they are in range or not, so downcasting from Int to a refined Int can fail. Throwing exceptions is not considered good style in functional libraries, so the refineV method returns an Either:
val x = 20
val y = 5
scala> refineV[FiveToFifteen](x)
res14: Either[String,eu.timepit.refined.api.Refined[Int,FiveToFifteen]] = Left(Right predicate of (!(20 < 5) && (20 < 15)) failed: Predicate failed: (20 < 15).)
scala> refineV[FiveToFifteen](y)
res16: Either[String,eu.timepit.refined.api.Refined[Int,FiveToFifteen]] = Right(5)
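One way to consume that Either (a small sketch of my own, reusing the refineV call from above):

refineV[FiveToFifteen](x) match {
  case Right(inRange) => println(s"accepted: $inRange")
  case Left(error)    => println(s"rejected: $error")
}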
I think a partial function would help.
class GoodX(val x: Int) {
  override def toString = s"GoodX($x)"
}

object GoodX {
  // a case class would also generate a synthetic apply(x: Int) that bypasses this check,
  // so GoodX is a plain class here
  def apply: PartialFunction[Int, GoodX] =
    { case i if i > 5 && i < 15 => new GoodX(i) }
}

// implicit conversion to retain Int-ness
implicit def goodXToInt(goodX: GoodX): Int = goodX.x

GoodX(5)  // throws scala.MatchError
GoodX(10) // GoodX(10)
This solution requires no library.
Hope this helps.
As shown in the examples section of the refined library, we can define a custom refined type whose value is between 7 and 77:
// Here we define a refined type "Int with the predicate (7 <= value < 77)".
scala> type Age = Int Refined Interval.ClosedOpen[W.`7`.T, W.`77`.T]
Furthermore, on Scala 2.13.x one can also use literal-based singleton types, as shown below, thereby not needing Witness from shapeless ;)
import eu.timepit.refined.api.Refined
import eu.timepit.refined.auto._
import eu.timepit.refined.numeric.Interval.Closed
import eu.timepit.refined.types.string.NonEmptyString
type AgeOfChild = Int Refined Closed[2, 12]
case class Child(name: NonEmptyString, age: AgeOfChild = 2)
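A quick usage sketch (my own, not from the original answer); an out-of-range literal should be rejected at compile time:

Child("Alice", 10) // compiles: 10 satisfies Closed[2, 12]
Child("Bob", 42)   // should not compile: 42 violates Closed[2, 12]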
Please refer to the SIP and the official documentation for more details.
Sure ...
object FiveToFifteen extends Enumeration {
  val _5 = Value(5)
  val _6, _7, _8, _9, _10, _11, _12, _13, _14, _15 = Value
}
Edit: if you want to "preserve Int-ness", you could also add conversions like this:
// inside the FiveToFifteen object:
implicit def toInt(v: Value): Int = v.id
implicit def fromInt(i: Int): Value = apply(i)
But this, obviously, won't make your type much more "int-ful" than it already is (which is pretty much not at all), because things like
val v: Value = _15 - _10 or val v: Value = _5 * 3 or even val v = _15 * _5 will work, but others, like val v: Value = _5 - 1, will crash at runtime.
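For illustration, a rough sketch of how those expressions behave with the two conversions above defined inside the object (my own annotations):

import FiveToFifteen._

val a: Value = _15 - _10 // toInt on both sides gives 5, then fromInt(5) gives _5
val b: Value = _5 * 3    // 15, so fromInt(15) gives _15
val c = _15 * _5         // no conversion back: just the Int 75
val d: Value = _5 - 1    // fromInt(4) throws NoSuchElementException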
Is there a way to get the Type of a field with scala reflection?
Let's see the standard reflection example:
scala> class C { val x = 2; var y = 3 }
defined class C
scala> val m = ru.runtimeMirror(getClass.getClassLoader)
m: scala.reflect.runtime.universe.Mirror = JavaMirror ...
scala> val im = m.reflect(new C)
im: scala.reflect.runtime.universe.InstanceMirror = instance mirror for C#5f0c8ac1
scala> val fieldX = ru.typeOf[C].declaration(ru.newTermName("x")).asTerm.accessed.asTerm
fieldX: scala.reflect.runtime.universe.TermSymbol = value x
scala> val fmX = im.reflectField(fieldX)
fmX: scala.reflect.runtime.universe.FieldMirror = field mirror for C.x (bound to C#5f0c8ac1)
scala> fmX.get
res0: Any = 2
Is there a way to do something like
val test: Int = fmX.get
That is, can I "cast" the result of a reflective get to the actual type of the field? And conversely: is it possible to do a reflective set from a string? In the example, something like
fmx.set("10")
Thanks for hints!
Here's the deal... the type is not known at compile time, so, basically, you have to tell the compiler what type it's supposed to be. You can do it safely or not, like this:
val test: Int = fmX.get.asInstanceOf[Int]
val test: Int = fmX.get match {
  case n: Int => n
  case _      => 0 // or however you want to handle the exception
}
Note that, since you declared test to be Int, you have to assign an Int to it. And even if you kept test as Any, at some point you have to pick a type for it, and it is always going to be something static -- as in, in the source code.
The second case just uses pattern matching to ensure you have the right type.
I'm not sure I understand what you mean by the second case.
My code is as follows
import scala.collection.mutable.HashMap
type CrossingInterval = (Date, Date)
val crossingMap = new HashMap[String, CrossingInterval]
val crossingData: String = ...
Firstly, why does the following line compile?
val time = crossingMap.getOrElse(crossingData, -1)
I would have thought -1 would have been an invalid value.
Secondly, how do I do a basic check such as the following?
if (value exists in map) {
}
else {
}
In Java I would just check for null values. I'm not sure about the proper way to do it in Scala
Typing your code in the interpreter shows why the first statement compiles:
type Date = String
scala> val time = crossingMap.getOrElse(crossingData, -1)
time: Any = -1
Basically, getOrElse on a Map[A, B] (here B = CrossingInterval) accepts a default of any type B1 >: B, i.e. B1 must be a supertype of B. Here B1 is inferred as Any, and -1 is of course a valid value of type Any. In this case you actually want to have a type declaration for time.
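A sketch of that fix (my own illustration): with an explicit type declaration the silent widening to Any becomes a compile error, and you must supply a default of the right type:

val time: CrossingInterval = crossingMap.getOrElse(crossingData, -1)       // does not compile
val time: CrossingInterval = crossingMap.getOrElse(crossingData, ("", "")) // compiles, since Date is aliased to String here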
For testing whether a key belongs to the map, just call the contains method. An example is below - since Date was not available, I simply defined it as an alias to String.
scala> crossingMap.contains(crossingData)
res13: Boolean = false
scala> crossingMap += "" -> ("", "")
res14: crossingMap.type = Map("" -> ("",""))
//Now "" is a map of the key
scala> crossingMap.contains("")
res15: Boolean = true
If you want to check whether a value is part of the map, the simplest way is to write this code:
crossingMap.values.toSet.contains("")
However, this builds a Set containing all values. EDIT: You can find a better solution for this subproblem in Kipton Barros's comment.