I am not able to generate Scala ranges for Double.
I have read many Stack Overflow snippets that show Double ranges, but none of them works in my Scala 2.13.0 REPL:
9.474 to 49.474 by 1.0
1d to 1000d by 1d
(1.0 to 2.0 by 0.01)
^
error: value to is not a member of Double
What is the reason I cannot use to and by to generate Double ranges in my Scala REPL?
I am on macOS with Scala 2.13.0.
With Scala 2.12 I get a deprecation warning:
scala> 9.474 to 49.474 by 1.0
<console>:12: warning: method to in trait FractionalProxy is deprecated (since 2.12.6): use BigDecimal range instead
9.474 to 49.474 by 1.0
So maybe it is not supported anymore in 2.13. According to the warning you can do:
scala> BigDecimal(9.474) to BigDecimal(49.474) by BigDecimal(1.0)
res6: scala.collection.immutable.NumericRange.Inclusive[scala.math.BigDecimal] = NumericRange 9.474 to 49.474
This also works:
BigDecimal(9.474) to BigDecimal(49.474) by 1
If you call .foreach(println) on both versions, you will see that without BigDecimal the result does not look so great:
9.474
10.474
..
31.474
32.474000000000004
33.474000000000004
...
From the Release Notes:
Assorted deprecated methods and classes throughout the standard library have been removed entirely.
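If you want the exact decimal steps but still need Double values in the end, one sketch is to build the range with BigDecimal and convert only at the boundary:

```scala
// Build the range with BigDecimal (exact decimal steps), then map the
// elements back to Double only at the end -- no accumulated drift.
val steps = (BigDecimal(9.474) to BigDecimal(49.474) by 1).map(_.toDouble)

println(steps.take(5)) // 9.474, 10.474, 11.474, 12.474, 13.474
```

Note that steps(23) here is exactly 32.474, unlike the 32.474000000000004 produced by the plain Double range above.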
You can always create your own function that builds the range (as a Seq or Iterator) for you.
// https://scastie.scala-lang.org/ZPfpF37bRlKfUPnMyOzJDw
import scala.util.Try

def rangeInclusiveDouble(from: Double, to: Double, by: Double = 1.0): Iterator[Double] = {
  assume(by != 0, "'by' cannot be 0")
  assume((to - from) * by > 0, "range has reversed order (arguments 'from', 'to' and 'by' will produce infinite range)")
  val check: Double => Boolean = if (by > 0) _ <= to else _ >= to
  Iterator.iterate(from)(_ + by).takeWhile(check)
}
// usage example:
rangeInclusiveDouble(1.1, 5.1).toSeq
//List(1.1, 2.1, 3.1, 4.1, 5.1)
rangeInclusiveDouble(1.1, 2.1, 0.1).toSeq // here you can see why ranges over Double are tricky!
//List(1.1, 1.2000000000000002, 1.3000000000000003, 1.4000000000000004,...
Try(rangeInclusiveDouble(5.0, 1.0).toSeq)
// Failure(java.lang.AssertionError: assumption failed: range has reversed order (arguments 'from', 'to' and 'by' will produce infinite range))
Try(rangeInclusiveDouble(5.0, 1.0, 0).toSeq)
// Failure(java.lang.AssertionError: assumption failed: 'by' cannot be 0)
rangeInclusiveDouble(5.0, 1.0, -1).toSeq
//List(5.0, 4.0, 3.0, 2.0, 1.0)
It has its own problems, as you can see... but it works if you are careful about the range limits.
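A variant that avoids the drift shown above is to derive each element from an integer index rather than repeatedly adding `by`, so the floating-point error never accumulates. A sketch (the name rangeByIndex is mine; it assumes to - from is close to a whole number of `by` steps):

```scala
// Compute each element as from + i * by from an integer index, instead of
// accumulating additions. Individual elements still carry ordinary
// floating-point error, but it no longer grows with the number of steps.
def rangeByIndex(from: Double, to: Double, by: Double): Iterator[Double] = {
  require(by != 0, "'by' cannot be 0")
  val n = math.round((to - from) / by).toInt // number of whole steps
  Iterator.tabulate(n + 1)(i => from + i * by)
}
```

Unlike the accumulating version, rangeByIndex(1.1, 2.1, 0.1) ends exactly on 2.1, because 10 * 0.1 is exactly 1.0 in Double arithmetic.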
Related
How to keep precision and the trailing zero while converting a Double or a String to scala.math.BigDecimal?
Use Case - In a JSON message, an attribute is of type String and has a value of "1.20". But while reading this attribute in Scala and converting it to a BigDecimal, I am losing the precision and it is converted to 1.2.
@Saurabh What a nice question! It is crucial that you shared the use case!
I think my answer solves it in the safest and most efficient way... In short:
Use jsoniter-scala for parsing BigDecimal values precisely.
Encoding/decoding to/from JSON strings can be defined per codec or per class field for any numeric type. Please see the code below:
Add dependencies into your build.sbt:
libraryDependencies ++= Seq(
  "com.github.plokhotnyuk.jsoniter-scala" %% "jsoniter-scala-core" % "2.17.4",
  "com.github.plokhotnyuk.jsoniter-scala" %% "jsoniter-scala-macros" % "2.17.4" % Provided // required only at compile time
)
Define data structures, derive a codec for the root structure, parse the response body and serialize it back:
import com.github.plokhotnyuk.jsoniter_scala.core._
import com.github.plokhotnyuk.jsoniter_scala.macros._

case class Response(
  amount: BigDecimal,
  @stringified price: BigDecimal)

implicit val codec: JsonValueCodec[Response] = JsonCodecMaker.make {
  CodecMakerConfig
    .withIsStringified(true) // switch it on to stringify all numeric and boolean values in this codec
    .withBigDecimalPrecision(34) // set a precision to round up to the decimal128 format: java.math.MathContext.DECIMAL128.getPrecision
    .withBigDecimalScaleLimit(6178) // limit the scale to fit the decimal128 format: BigDecimal("0." + "0" * 33 + "1e-6143", java.math.MathContext.DECIMAL128).scale + 1
    .withBigDecimalDigitsLimit(308) // limit the number of mantissa digits to be parsed before rounding with the specified precision
}

val response = readFromArray[Response]("""{"amount":1000,"price":"1.20"}""".getBytes("UTF-8"))
val json = writeToArray(Response(amount = BigDecimal(1000), price = BigDecimal("1.20")))
Print results to the console and verify them:
println(response)
println(new String(json, "UTF-8"))
Response(1000,1.20)
{"amount":1000,"price":"1.20"}
Why is the proposed approach safe?
Well... Parsing of JSON is a minefield, especially when you need precise BigDecimal values afterwards. Most JSON parsers for Scala do it using Java's constructor for the string representation, which has O(n^2) complexity (where n is the number of digits in the mantissa), and do not round results to a safe MathContext (by default the MathContext.DECIMAL128 value is used for that in Scala's BigDecimal constructors and operations).
This introduces vulnerabilities to low-bandwidth DoS/DoW attacks for systems that accept untrusted input. Below is a simple example of how it can be reproduced in the Scala REPL with the latest version of the most popular JSON parser for Scala on the classpath:
...
Starting scala interpreter...
Welcome to Scala 2.12.8 (OpenJDK 64-Bit Server VM, Java 1.8.0_222).
Type in expressions for evaluation. Or try :help.
scala> def timed[A](f: => A): A = { val t = System.currentTimeMillis; val r = f; println(s"Elapsed time (ms): ${System.currentTimeMillis - t}"); r }
timed: [A](f: => A)A
scala> timed(io.circe.parser.decode[BigDecimal]("9" * 1000000))
Elapsed time (ms): 29192
res0: Either[io.circe.Error,BigDecimal] = Right(999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999...
scala> timed(io.circe.parser.decode[BigDecimal]("1e-100000000").right.get + 1)
Elapsed time (ms): 87185
res1: scala.math.BigDecimal = 1.0000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000...
On a contemporary 1 Gbit network, 10 ms spent receiving a malicious message with a 1M-digit number can produce 29 seconds of 100% CPU load on a single core, so more than 256 cores can be effectively DoS-ed at the full bandwidth rate. The last expression demonstrates how to burn a CPU core for ~1.5 minutes using a message with a 13-byte number, if subsequent + or - operations are applied, as with Scala 2.12.8.
And jsoniter-scala takes care of all these cases for Scala 2.11.x, 2.12.x, 2.13.x, and 3.x.
Why is it the most efficient?
Below are charts with throughput (operations per second, so greater is better) of JSON parsers for Scala on different JVMs, parsing an array of 128 small (up to 34-digit mantissa) BigDecimal values and a medium (128-digit mantissa) BigDecimal value, respectively:
The parsing routine for BigDecimal in jsoniter-scala:
uses BigDecimal values with compact representation for small numbers up to 36 digits
uses more efficient hot-loops for medium numbers that have from 37 to 284 digits
switches to the recursive algorithm which has O(n^1.5) complexity for values that have more than 285 digits
Moreover, jsoniter-scala parses and serializes JSON directly between UTF-8 bytes and your data structures, and does it crazily fast without run-time reflection, intermediate ASTs, strings or hash maps, with minimum allocations and copying. Please see here the results of 115 benchmarks for different data types and real-life message samples for GeoJSON, Google Maps API, OpenRTB, and Twitter API.
For Double, 1.20 is exactly the same as 1.2, so you can't convert them to different BigDecimals. For String, you are not losing precision; you can see that because res3: scala.math.BigDecimal = 1.20 and not ... = 1.2! But equals on scala.math.BigDecimal happens to be defined so that numerically equal BigDecimals are equal even though they are distinguishable.
If you want to avoid that, you could use java.math.BigDecimals for which
Unlike compareTo, this method considers two BigDecimal objects equal only if they are equal in value and scale (thus 2.0 is not equal to 2.00 when compared by this method).
For your case, res2.underlying == res3.underlying will be false.
Of course, its documentation also states
Note: care should be exercised if BigDecimal objects are used as keys in a SortedMap or elements in a SortedSet since BigDecimal's natural ordering is inconsistent with equals. See Comparable, SortedMap or SortedSet for more information.
which is probably part of the reason why the Scala designers decided on different behavior.
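A minimal sketch of the difference described above: Scala's BigDecimal equality is numeric, while java.math.BigDecimal's equals also compares scale (and its compareTo stays numeric):

```scala
// Scala's == on scala.math.BigDecimal is numeric, so 1.2 == 1.20.
// java.math.BigDecimal's equals also compares scale, so they differ there.
val a = BigDecimal("1.2")
val b = BigDecimal("1.20")

println(a == b)                               // true: numerically equal
println(a.underlying == b.underlying)         // false: scales 1 vs 2 differ
println(a.underlying.compareTo(b.underlying)) // 0: compareTo is numeric
```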
I don't normally do numbers, but:
scala> import java.math.MathContext
import java.math.MathContext
scala> val mc = new MathContext(2)
mc: java.math.MathContext = precision=2 roundingMode=HALF_UP
scala> BigDecimal("1.20", mc)
res0: scala.math.BigDecimal = 1.2
scala> BigDecimal("1.2345", mc)
res1: scala.math.BigDecimal = 1.2
scala> val mc = new MathContext(3)
mc: java.math.MathContext = precision=3 roundingMode=HALF_UP
scala> BigDecimal("1.2345", mc)
res2: scala.math.BigDecimal = 1.23
scala> BigDecimal("1.20", mc)
res3: scala.math.BigDecimal = 1.20
Edit: also, https://github.com/scala/scala/pull/6884
scala> res3 + BigDecimal("0.003")
res4: scala.math.BigDecimal = 1.20
scala> BigDecimal("1.2345", new MathContext(5)) + BigDecimal("0.003")
res5: scala.math.BigDecimal = 1.2375
Hi, does anyone know of a standard library that can do what is specified in the title? Ideally the usage should be something like this: https://docs.scipy.org/doc/numpy/reference/generated/numpy.trapz.html
I researched quite a lot and couldn't find anything similar.
Everything I found used Apache's PolynomialFunction class, which takes the polynomial coefficients as input, not the y-coordinates.
Using Breeze, you can write any function that type-checks and pass it to trapezoid; you can simply include the mapping in the function:
val f = (x: Double) => {
  val xToY = Map(1.0 -> 1.0, 2.0 -> 2.0, 3.0 -> 3.0)
  xToY(x)
}
scala> import breeze.integrate._
scala> trapezoid(f, 1, 3, 3)
res0: Double = 4.0
Although this has restricted use, since the mapping must cover every node of the range without gaps.
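If that restriction is a problem, a numpy.trapz-style rule over sampled y-coordinates needs no library at all. A sketch (the name trapz and its numpy-like argument order are mine, not a Breeze function):

```scala
// Trapezoidal rule over sampled points: sum (x1 - x0) * (y0 + y1) / 2
// over each consecutive pair of samples.
def trapz(y: Seq[Double], x: Seq[Double]): Double = {
  require(y.length == x.length && y.length >= 2, "need at least two matching samples")
  x.zip(y).sliding(2).map {
    case Seq((x0, y0), (x1, y1)) => (x1 - x0) * (y0 + y1) / 2
  }.sum
}

trapz(Seq(1.0, 2.0, 3.0), Seq(1.0, 2.0, 3.0)) // 4.0, matching the Breeze result above
```

Unlike the Breeze version, this works directly on the y-coordinates and tolerates unevenly spaced x values.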
I'm migrating from Scala 2.9 to Scala 2.11.0-M5.
The following double field initialization with a constant floating-point literal fails.
Code example:
class Test {
  val okDouble = 0.0
  val badDouble = 0.
  val nextValue = 0
}
Scala interpreter error:
scala> class Test {
| val okDouble = 0.0
| val badDouble = 0.
| val nextValue = 0
<console>:4: error: identifier expected but 'val' found.
val nextValue = 0
The problem here is the dot at the end of the badDouble definition.
Should 0.0 always be used for double literals now?
Double literals ending with a dot were deprecated in Scala 2.10 and removed from the language in Scala 2.11:
Welcome to Scala version 2.10.3 (Java HotSpot(TM) 64-Bit Server VM, Java 1.7.0_45).
scala> 3.
<console>:1: warning: This lexical syntax is deprecated. From scala 2.11, a dot
will only be considered part of a number if it is immediately followed by a digit.
3.
^
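So yes: from 2.11 on, a bare trailing dot is gone, but the dot is not the only way to write a Double literal. A quick sketch of forms that still parse:

```scala
// Forms that still parse as Double in Scala 2.11+ (a bare trailing dot does not):
val a = 0.0 // a digit after the dot
val b = 0d  // 'd' suffix
val c = 0e0 // exponent form
```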
EDIT: I'm using Scala 2.9.2
In Scala, I've defined a custom class which wraps a Double:
class DoubleWrap( d : Double ) {
  def double( ) = d * 2
}
and an implicit conversion from Double to DoubleWrap:
implicit def wrapDouble( d : Double ) = new DoubleWrap( d )
This allows me to do the following:
scala> 2.5.double
res0: Double = 5.0
However, since there is an implicit conversion in Scala from Int to Double, I can also do the following:
scala> 2.double
res1: Double = 4.0
This method can also be applied to all elements of a Double collection using map:
scala> List( 1.0, 2.0, 3.0 ).map( _.double )
res2: List[Double] = List(2.0, 4.0, 6.0)
However, if I attempt to apply the function to all elements of an integer collection, it doesn't work:
scala> List( 1, 2, 3 ).map( _.double )
<console>:10: error: value double is not a member of Int
List( 1, 2, 3 ).map( _.double )
Does anyone know why this is the case?
In Scala, implicit conversions are not automatically chained. In other words, the compiler will look for a single implicit conversion that allows the code to make sense; it will never try to apply two (or more) successive implicit conversions.
In your example, the fact that you can do 2.double has nothing to do with the fact that there is an implicit conversion from Int to Double in Predef.
As a proof, try this in the REPL:
scala> val i: Int = 2
i: Int = 2
scala> i.double
<console>:13: error: value double is not a member of Int
i.double
It does not compile.
So why does 2.double compile? Good question.
I thought I understood this intuitively: 2 can be interpreted as the Int value 2 or as the Double value 2.0 in the first place, so my intuition was that 2 is somehow already a Double in this context.
However, I think this is wrong, because even the following will compile, surprisingly: (2:Int).double (or, even more strangely: ((1+1):Int).double). I'll be honest, I am flabbergasted and have no idea why this compiles while val i: Int = 2; i.double does not.
So to sum up, it is normal that scala does not try to apply two implicit conversions at the same time, but for some reason this rule does not seem to apply to constant expressions.
And now for a way to fix your issue: simply modify your implicit conversion so that it accepts any type that is itself implicitly convertible to Double. In effect, this allows chaining the implicit conversions:
implicit def wrapDouble[T <% Double]( d : T ) = new DoubleWrap( d )
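For completeness, here is the whole thing put together as a sketch, using the desugared form of the view bound (an implicit evidence parameter T => Double), which also compiles on newer Scala versions where <% is deprecated; the standard Int-to-Double widening conversion supplies the evidence for Int. (I also dropped the empty parameter list on double, since auto-application is deprecated in 2.13.)

```scala
import scala.language.implicitConversions

class DoubleWrap(d: Double) {
  def double: Double = d * 2
}

// Desugared equivalent of [T <% Double]: any T with an implicit T => Double
// in scope (e.g. Int, via the standard numeric widening) can be wrapped.
implicit def wrapDouble[T](d: T)(implicit ev: T => Double): DoubleWrap =
  new DoubleWrap(ev(d))

List(1, 2, 3).map(_.double)       // List(2.0, 4.0, 6.0)
List(1.0, 2.0, 3.0).map(_.double) // List(2.0, 4.0, 6.0)
```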
It's a bug which should soon be fixed.
I am using the Breeze library's math part and have the following Matrix:
val matrix = breeze.linalg.DenseMatrix((1.0,2.0),(3.0,4.0))
I want to scale this by a scalar Double (and add the result to another Matrix) using one
of the *= and :*= operators:
val scale = 2.0
val scaled = matrix * scale
This works just fine (more details in my answer below).
Update This code does work in isolation. I seem to have a problem elsewhere. Sorry for wasting your bandwidth...
Update 2 However, the code fails to compile if I specifically assign the type Matrix to the variable matrix:
val matrix: Matrix[Double] = breeze.linalg.DenseMatrix((1.0,2.0),(3.0,4.0))
val scaled = matrix * scale // does not compile
The compiler keeps complaining that it "could not find implicit value for parameter op".
Can anyone explain this please? Is this a bug in Breeze or intentional? TIA.
For those of you who struggle with Scala and the Breeze library, I would like to detail some of the functions / operators available for Matrix instances here.
Our starting point is a simple Double matrix (Matrix and the related operations also support Float and Int):
scala> val matrix = breeze.linalg.DenseMatrix((1.0,2.0),(3.0,4.0))
You can easily pretty-print this using
scala> println(matrix)
1.0 2.0
3.0 4.0
Breeze supports operators that leave the left operand intact and those that modify the left operand - e.g. * and *=:
scala> val scaled1 = matrix * 2.0 // returns new matrix!
scala> println(matrix)
1.0 2.0
3.0 4.0
scala> println(scaled1)
2.0 4.0
6.0 8.0
scala> println(matrix == scaled1)
false
scala> val scaled2 = matrix *= 2.0 // modifies and returns matrix!
scala> println(matrix)
2.0 4.0
6.0 8.0
scala> println(scaled2)
2.0 4.0
6.0 8.0
scala> println(matrix == scaled2) // rough equivalent of calling Java's equals()
true
The hash codes of both variables suggest that they actually point to the same object (which is true according to the docs and can be verified by looking at the sources):
scala> println(matrix.##)
12345678
scala> println(scaled2.##)
12345678
This is further illustrated by:
scala> val matrix2 = breeze.linalg.DenseMatrix((2.0,4.0),(6.0,8.0))
scala> println(matrix == matrix2)
true
scala> println(matrix2.##)
34567890
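Hash codes are only circumstantial evidence of identity; Scala's eq checks reference equality directly. A plain-Scala sketch of the distinction (not Breeze-specific, using a mutable ArrayBuffer in place of a matrix):

```scala
import scala.collection.mutable.ArrayBuffer

// `eq` tests reference identity, `==` tests content equality; `eq` is the
// conclusive way to confirm that an operation returned the same object.
val a = ArrayBuffer(1.0, 2.0)
val b = a         // same object
val c = a.clone() // equal content, different object

println(a eq b) // true
println(a eq c) // false
println(a == c) // true
```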