Extending collection classes with extra fields in Scala

I'm looking to create a class that is basically a collection with an extra field. However, I keep running into problems and am wondering what the best way of implementing this is. I've tried to follow the pattern given in the Scala book. E.g.
import scala.collection.IndexedSeqLike
import scala.collection.mutable.Builder
import scala.collection.generic.CanBuildFrom
import scala.collection.mutable.ArrayBuffer

class FieldSequence[FT,ST](val field: FT, seq: IndexedSeq[ST] = Vector())
    extends IndexedSeq[ST] with IndexedSeqLike[ST,FieldSequence[FT,ST]] {
  def apply(index: Int): ST = return seq(index)
  def length = seq.length
  override def newBuilder: Builder[ST,FieldSequence[FT,ST]]
    = FieldSequence.newBuilder[FT,ST](field)
}

object FieldSequence {
  def fromSeq[FT,ST](field: FT)(buf: IndexedSeq[ST])
    = new FieldSequence(field, buf)

  def newBuilder[FT,ST](field: FT): Builder[ST,FieldSequence[FT,ST]]
    = new ArrayBuffer mapResult(fromSeq(field))

  implicit def canBuildFrom[FT,ST]:
    CanBuildFrom[FieldSequence[FT,ST], ST, FieldSequence[FT,ST]] =
      new CanBuildFrom[FieldSequence[FT,ST], ST, FieldSequence[FT,ST]] {
        def apply(): Builder[ST,FieldSequence[FT,ST]]
          = newBuilder[FT,ST]( _ ) // What goes here?
        def apply(from: FieldSequence[FT,ST]): Builder[ST,FieldSequence[FT,ST]]
          = from.newBuilder
      }
}
The problem is that the implicitly defined CanBuildFrom needs an apply method with no arguments. But in these circumstances this method is meaningless, as a field (of type FT) is needed to construct a FieldSequence. In fact, it should be impossible to construct a FieldSequence simply from a sequence of type ST. Is the best I can do to throw an exception here?

Then your class doesn't fulfill the requirements to be a Seq, and methods like flatMap (and hence for-comprehensions) can't work for it.

I'm not sure I agree with Landei about flatMap and map. If you replace the no-argument apply with one that throws an exception, like this, most of the operations should work:
def apply(): Builder[ST,FieldSequence[FT,ST]] = sys.error("unsupported")
From what I can see in TraversableLike, map, flatMap and most of the other operations use the apply(repr) version, so for-comprehensions seemingly work. It also feels like it should follow the monad laws (the field is just carried across).
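For reference, here is a simplified sketch of how map obtains its builder in the 2.x collections library (paraphrased from TraversableLike, not the verbatim source):

def map[B, That](f: A => B)(implicit bf: CanBuildFrom[Repr, B, That]): That = {
  val b = bf(repr)          // the apply(from) overload, so from.newBuilder is used
  for (x <- this) b += f(x)
  b.result
}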
Given the code you have, you can do this:
scala> val fs = FieldSequence.fromSeq("str")(Vector(1,2))
fs: FieldSequence[java.lang.String,Int] = FieldSequence(1, 2)
scala> fs.map(1 + _)
res3: FieldSequence[java.lang.String,Int] = FieldSequence(2, 3)
scala> val fs2 = FieldSequence.fromSeq("str1")(Vector(10,20))
fs2: FieldSequence[java.lang.String,Int] = FieldSequence(10, 20)
scala> for (x <- fs if x > 0; y <- fs2) yield (x + y)
res5: FieldSequence[java.lang.String,Int] = FieldSequence(11, 21, 12, 22)
What doesn't work is the following:
scala> fs.map(_ + "!")
// does not return a FieldSequence
scala> List(1,2).map(1 + _)(collection.breakOut): FieldSequence[String, Int]
java.lang.RuntimeException: unsupported
// this is where the apply() is used
For breakOut to work you would need to implement the apply() method. I suspect you could generate a builder with some default value for field: def apply() = newBuilder[FT, ST](getDefault) with some implementation of getDefault that makes sense for your use case.
For the fact that fs.map(_ + "!") does not preserve the type, you need to modify your signature and implementation, so that the compiler can find a CanBuildFrom[FieldSequence[String, Int], String, FieldSequence[String, String]]
implicit def canBuildFrom[FT,ST_FROM,ST]:
  CanBuildFrom[FieldSequence[FT,ST_FROM], ST, FieldSequence[FT,ST]] =
    new CanBuildFrom[FieldSequence[FT,ST_FROM], ST, FieldSequence[FT,ST]] {
      def apply(): Builder[ST,FieldSequence[FT,ST]]
        = sys.error("unsupported")
      def apply(from: FieldSequence[FT,ST_FROM]): Builder[ST,FieldSequence[FT,ST]]
        = newBuilder[FT, ST](from.field)
    }
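With that in place, the type-changing map from before should now build a FieldSequence as well (expected behaviour, not verified REPL output):

fs.map(_ + "!")   // should now be a FieldSequence[String,String] containing ("1!", "2!"), with field "str" carried over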

In the end, my answer was very similar to that in a previous question. The differences between that question, my original code and this answer are slight, but they basically allow anything that has a sequence to be treated as a sequence.
import scala.collection.SeqLike
import scala.collection.mutable.Builder
import scala.collection.mutable.ArrayBuffer
import scala.collection.generic.CanBuildFrom

trait SeqAdapter[+A, Repr[+X] <: SeqAdapter[X,Repr]]
    extends Seq[A] with SeqLike[A,Repr[A]] {
  val underlyingSeq: Seq[A]
  def create[B](seq: Seq[B]): Repr[B]

  def apply(index: Int) = underlyingSeq(index)
  def length = underlyingSeq.length
  def iterator = underlyingSeq.iterator

  override protected[this] def newBuilder: Builder[A,Repr[A]] = {
    val sac = new SeqAdapterCompanion[Repr] {
      def createDefault[B](seq: Seq[B]) = create(seq)
    }
    sac.newBuilder(create)
  }
}

trait SeqAdapterCompanion[Repr[+X] <: SeqAdapter[X,Repr]] {
  def createDefault[A](seq: Seq[A]): Repr[A]

  def fromSeq[A](creator: (Seq[A]) => Repr[A])(seq: Seq[A]) = creator(seq)
  def newBuilder[A](creator: (Seq[A]) => Repr[A]): Builder[A,Repr[A]] =
    new ArrayBuffer mapResult fromSeq(creator)

  implicit def canBuildFrom[A,B]: CanBuildFrom[Repr[A],B,Repr[B]] =
    new CanBuildFrom[Repr[A],B,Repr[B]] {
      def apply(): Builder[B,Repr[B]] = newBuilder(createDefault)
      def apply(from: Repr[A]) = newBuilder(from.create)
    }
}
This fixes all the problems huynhjl brought up. For my original problem, to have a field and a sequence treated as a sequence, a simple class will now do.
trait Field[FT] {
  val defaultValue: FT

  class FieldSeq[+ST](val field: FT, val underlyingSeq: Seq[ST] = Vector())
      extends SeqAdapter[ST,FieldSeq] {
    def create[B](seq: Seq[B]) = new FieldSeq[B](field, seq)
  }

  object FieldSeq extends SeqAdapterCompanion[FieldSeq] {
    def createDefault[A](seq: Seq[A]): FieldSeq[A] =
      new FieldSeq[A](defaultValue, seq)
    override implicit def canBuildFrom[A,B] = super.canBuildFrom[A,B]
  }
}
This can be tested like so:
val StringField = new Field[String] { val defaultValue = "Default Value" }
StringField: java.lang.Object with Field[String] = $anon$1@57f5de73
val fs = new StringField.FieldSeq[Int]("str", Vector(1,2))
val fsfield = fs.field
fs: StringField.FieldSeq[Int] = (1, 2)
fsfield: String = str
val fm = fs.map(1 + _)
val fmfield = fm.field
fm: StringField.FieldSeq[Int] = (2, 3)
fmfield: String = str
val fs2 = new StringField.FieldSeq[Int]("str1", Vector(10, 20))
val fs2field = fs2.field
fs2: StringField.FieldSeq[Int] = (10, 20)
fs2field: String = str1
val ffor = for (x <- fs if x > 0; y <- fs2) yield (x + y)
val fforfield = ffor.field
ffor: StringField.FieldSeq[Int] = (11, 21, 12, 22)
fforfield: String = str
val smap = fs.map(_ + "!")
val smapfield = smap.field
smap: StringField.FieldSeq[String] = (1!, 2!)
smapfield: String = str
val break = List(1,2).map(1 + _)(collection.breakOut): StringField.FieldSeq[Int]
val breakfield = break.field
break: StringField.FieldSeq[Int] = (2, 3)
breakfield: String = Default Value
val x: StringField.FieldSeq[Any] = fs
val xfield = x.field
x: StringField.FieldSeq[Any] = (1, 2)
xfield: String = str

Related

Determine non-empty additional fields in a subclass

Assume I have a trait which looks something like this
trait MyTrait {
  val x: Option[String] = None
  val y: Option[String] = None
}
After defining the trait, I extend it in a case class MyClass which looks something like this:
case class MyClass(
  override val x: Option[String] = None,
  override val y: Option[String] = None,
  z: Option[String] = None
) extends MyTrait
Now I need to find out whether any property other than the ones inherited from MyTrait is not None. In other words, I need to write a method called getClassInfo which returns true/false based upon the values present in the case class; in this case it should return true if z is defined (not None). My getClassInfo goes something like this:
def getClassInfo(myClass: MyClass): Boolean = {
  myClass
    .productIterator
    .filterNot(x => x.isInstanceOf[MyTrait])
    .exists(_.isInstanceOf[Some[_]])
}
Ideally this should filter out all the fields that come from MyTrait and leave only z in this case.
I tried using a variance annotation, but it seems isInstanceOf doesn't accept one:
filterNot(x => x.isInstanceOf[+MyTrait])
However, this is not valid syntax. The expected behaviour is:
val a = getClassInfo(MyClass()) //Needs to return false
val b = getClassInfo(MyClass(Some("a"), Some("B"), Some("c"))) //returns true
val c = getClassInfo(MyClass(z = Some("z"))) //needs to return true
val d = getClassInfo(MyClass(x = Some("x"), y = Some("y"))) // needs to return false
The simple answer is to declare an abstract method that gives the result you want and override it in the subclass:
trait MyTrait {
  def x: Option[String]
  def y: Option[String]
  def anyNonEmpty: Boolean = false
}

case class MyClass(x: Option[String] = None, y: Option[String] = None, z: Option[String] = None) extends MyTrait {
  override def anyNonEmpty = z.nonEmpty
}
You can then call anyNonEmpty on your object to get the getClassInfo result.
Also note that I've used def here in the trait, since a val in a trait is generally a bad idea due to initialisation issues.
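For example, matching the expected results from the question:

MyClass().anyNonEmpty                              // false
MyClass(z = Some("z")).anyNonEmpty                 // true
MyClass(x = Some("x"), y = Some("y")).anyNonEmpty  // false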
If you really need reflection you can try
import scala.reflect.runtime.currentMirror
import scala.reflect.runtime.universe._
def getClassInfo(myClass: MyClass): Boolean = {
  def fields[A: TypeTag] = typeOf[A].members.collect {
    case m: MethodSymbol if m.isGetter && m.isPublic => m
  }

  val mtFields = fields[MyTrait]
  val mcFields = fields[MyClass]
  val mtFieldNames = mtFields.map(_.name).toSet
  val mcNotMtFields = mcFields.filterNot(f => mtFieldNames.contains(f.name))

  val instanceMirror = currentMirror.reflect(myClass)
  val mcNotMtFieldValues = mcNotMtFields.map(f => instanceMirror.reflectField(f).get)
  mcNotMtFieldValues.exists(_.isInstanceOf[Some[_]])
}
val a = getClassInfo(MyClass()) //false
val b = getClassInfo(MyClass(Some("a"), Some("B"), Some("c"))) //true
val c = getClassInfo(MyClass(z = Some("z"))) //true
val d = getClassInfo(MyClass(x = Some("x"), y = Some("y")))//false

Using a double value in a Fractional[T] method

I have the following function which generates a uniformly distributed value between two bounds:
def Uniform(x: Bounded[Double], n: Int): Bounded[Double] = {
  val y: Double = (x.upper - x.lower) * scala.util.Random.nextDouble() + x.lower
  Bounded(y, x.bounds)
}
and Bounded is defined as follows:
trait Bounded[T] {
  val underlying: T
  val bounds: (T, T)
  def lower: T = bounds._1
  def upper: T = bounds._2
  override def toString = underlying.toString + " <- [" + lower.toString + "," + upper.toString + "]"
}

object Bounded {
  def apply[T : Numeric](x: T, _bounds: (T, T)): Bounded[T] = new Bounded[T] {
    override val underlying: T = x
    override val bounds: (T, T) = _bounds
  }
}
However, I want Uniform to work on all Fractional[T] values so I wanted to add a context bound:
def Uniform[T : Fractional](x: Bounded[T], n: Int): Bounded[T] = {
  import Numeric.Implicits._
  val y: T = (x.upper - x.lower) * scala.util.Random.nextDouble().asInstanceOf[T] + x.lower
  Bounded(y, x.bounds)
}
This works swell when calling Uniform[Double](x: Bounded[Double]), but the other types fail with a ClassCastException at runtime, because the Double cannot be cast to T. Is there a way to solve this?
I'd suggest defining a new type class that characterizes types that you can get random instances of:
import scala.util.Random

trait GetRandom[A] {
  def next(): A
}

object GetRandom {
  def instance[A](a: => A): GetRandom[A] = new GetRandom[A] {
    def next(): A = a
  }

  implicit val doubleRandom: GetRandom[Double] = instance(Random.nextDouble())
  implicit val floatRandom: GetRandom[Float] = instance(Random.nextFloat())

  // Define any other instances here
}
Now you can write Uniform like this:
def Uniform[T: Fractional: GetRandom](x: Bounded[T], n: Int): Bounded[T] = {
  import Numeric.Implicits._
  val y: T = (x.upper - x.lower) * implicitly[GetRandom[T]].next() + x.lower
  Bounded(y, x.bounds)
}
And use it like this:
scala> Uniform[Double](Bounded(2, (0, 4)), 1)
res15: Bounded[Double] = 1.5325899033654382 <- [0.0,4.0]
scala> Uniform[Float](Bounded(2, (0, 4)), 1)
res16: Bounded[Float] = 0.06786823 <- [0.0,4.0]
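If you need other fractional types, you can add instances following the same pattern as the ones in the companion above; for example, a hypothetical BigDecimal instance:

implicit val bigDecimalRandom: GetRandom[BigDecimal] =
  GetRandom.instance(BigDecimal(Random.nextDouble()))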
There are libraries like rng that provide a similar type class for you, but they tend to be focused on purely functional ways to work with random numbers, so if you want something simpler you're probably best off writing your own.

scala's spire framework : I am unable to operate on a group

I'm trying to use Spire, a math framework, but I get an error message:
import spire.algebra._
import spire.implicits._

trait AbGroup[A] extends Group[A]

final class Rationnel_Quadratique(val n1: Int = 2)(val coef: (Int, Int)) {
  override def toString = {
    coef match {
      case (c, i) =>
        s"$c + $i√$n"
    }
  }

  def a() = coef._1
  def b() = coef._2
  def n() = n1
}

object Rationnel_Quadratique {
  def apply(coef: (Int, Int), n: Int = 2) = {
    new Rationnel_Quadratique(n)(coef)
  }
}

object AbGroup {
  implicit object RQAbGroup extends AbGroup[Rationnel_Quadratique] {
    def +(a: Rationnel_Quadratique, b: Rationnel_Quadratique): Rationnel_Quadratique = Rationnel_Quadratique(coef = (a.a() + b.a(), a.b() + b.b()))
    def inverse(a: Rationnel_Quadratique): Rationnel_Quadratique = Rationnel_Quadratique((-a.a(), -a.b()))
    def id: Rationnel_Quadratique = Rationnel_Quadratique((0, 0))
  }
}

object euler66_2 extends App {
  val c = Rationnel_Quadratique((1, 2))
  val d = Rationnel_Quadratique((3, 4))
  val e = c + d
  println(e)
}
The program is expected to add 1+2√2 and 3+4√2, but instead I get this error:
could not find implicit value for evidence parameter of type spire.algebra.AdditiveSemigroup[Rationnel_Quadratique]
val e = c + d
^
I think there is something essential I have missed (usage of implicits?)
It looks like you are not using Spire correctly.
Spire already has an AbGroup type, so you should be using that instead of redefining your own. Here's an example using a simple type I created called X.
import spire.implicits._
import spire.algebra._

case class X(n: BigInt)

object X {
  implicit object XAbGroup extends AbGroup[X] {
    def id: X = X(BigInt(0))
    def op(lhs: X, rhs: X): X = X(lhs.n + rhs.n)
    def inverse(lhs: X): X = X(-lhs.n)
  }
}

def test(a: X, b: X): X = a |+| b
Note that with groups (as well as semigroups and monoids) you'd use |+| rather than +. To get plus, you'll want to define something with an AdditiveSemigroup (e.g. Semiring, or Ring, or Field or something).
You'll also use .inverse and |-| instead of unary and binary -, if that makes sense.
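If you do want + for your type, a minimal sketch for the X type above (assuming Spire's AdditiveSemigroup, which only requires plus) would be something like:

// Sketch: an additive instance so that spire.implicits._ enables + for X
implicit object XAdditive extends AdditiveSemigroup[X] {
  def plus(lhs: X, rhs: X): X = X(lhs.n + rhs.n)
}

def testPlus(a: X, b: X): X = a + b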
Looking at your code, I am also not sure your actual number type is right. What will happen if I want to add two numbers with different values for n?
Anyway, hope this clears things up for you a bit.
EDIT: Since it seems like you're also getting hung up on Scala syntax, let me try to sketch a few designs that might work. First, there's always a more general solution:
import spire.implicits._
import spire.algebra._
import spire.math._

case class RQ(m: Map[Natural, SafeLong]) {
  override def toString: String = m.map {
    case (k, v) => if (k == 1) s"$v" else s"$v√$k"
  }.mkString(" + ")
}

object RQ {
  implicit def abgroup[R <: Radical](implicit r: R): AbGroup[RQ] =
    new AbGroup[RQ] {
      def id: RQ = RQ(Map.empty)
      def op(lhs: RQ, rhs: RQ): RQ = RQ(lhs.m + rhs.m)
      def inverse(lhs: RQ): RQ = RQ(-lhs.m)
    }
}

object Test {
  def main(args: Array[String]) {
    implicit val radical = _2
    val x = RQ(Map(Natural(1) -> 1, Natural(2) -> 2))
    val y = RQ(Map(Natural(1) -> 3, Natural(2) -> 4))

    println(x)
    println(y)
    println(x |+| y)
  }
}
This allows you to add different roots together without problem, at the cost of some indirection. You could also stick more closely to your design with something like this:
import spire.implicits._
import spire.algebra._

abstract class Radical(val n: Int) { override def toString: String = n.toString }
case object _2 extends Radical(2)
case object _3 extends Radical(3)

case class RQ[R <: Radical](a: Int, b: Int)(implicit r: R) {
  override def toString: String = s"$a + $b√$r"
}

object RQ {
  implicit def abgroup[R <: Radical](implicit r: R): AbGroup[RQ[R]] =
    new AbGroup[RQ[R]] {
      def id: RQ[R] = RQ[R](0, 0)
      def op(lhs: RQ[R], rhs: RQ[R]): RQ[R] = RQ[R](lhs.a + rhs.a, lhs.b + rhs.b)
      def inverse(lhs: RQ[R]): RQ[R] = RQ[R](-lhs.a, -lhs.b)
    }
}

object Test {
  def main(args: Array[String]) {
    implicit val radical = _2
    val x = RQ[_2.type](1, 2)
    val y = RQ[_2.type](3, 4)

    println(x)
    println(y)
    println(x |+| y)
  }
}
This approach creates a fake type to represent whatever radical you are using (e.g. √2) and parameterizes RQ on that type. This way you can be sure that no one will try to do additions that are invalid.
Hopefully one of these approaches will work for you.

Instantiating a case class with default args via reflection

I need to be able to instantiate various case classes through reflection, both by figuring out the argument types of the constructor, as well as invoking the constructor with all default arguments.
I've come as far as this:
import reflect.runtime.{universe => ru}
val m = ru.runtimeMirror(getClass.getClassLoader)
case class Bar(i: Int = 33)
val tpe = ru.typeOf[Bar]
val classBar = tpe.typeSymbol.asClass
val cm = m.reflectClass(classBar)
val ctor = tpe.declaration(ru.nme.CONSTRUCTOR).asMethod
val ctorm = cm.reflectConstructor(ctor)
// figuring out arg types
val arg1 = ctor.paramss.head.head
arg1.typeSignature =:= ru.typeOf[Int] // true
// etc.
// instantiating with given args
val p = ctorm(33)
Now the missing part:
val p2 = ctorm() // IllegalArgumentException: wrong number of arguments
So how can I create p2 with the default arguments of Bar, i.e. what would be Bar() without reflection.
So in the linked question, the :power REPL uses internal API, which means that defaultGetterName is not available, so we need to construct that by hand. An adaptation of @som-snytt's answer:
def newDefault[A](implicit t: reflect.ClassTag[A]): A = {
  import reflect.runtime.{universe => ru, currentMirror => cm}

  val clazz  = cm.classSymbol(t.runtimeClass)
  val mod    = clazz.companionSymbol.asModule
  val im     = cm.reflect(cm.reflectModule(mod).instance)
  val ts     = im.symbol.typeSignature
  val mApply = ts.member(ru.newTermName("apply")).asMethod
  val syms   = mApply.paramss.flatten
  val args   = syms.zipWithIndex.map { case (p, i) =>
    val mDef = ts.member(ru.newTermName(s"apply$$default$$${i+1}")).asMethod
    im.reflectMethod(mDef)()
  }
  im.reflectMethod(mApply)(args: _*).asInstanceOf[A]
}
case class Foo(bar: Int = 33)
val f = newDefault[Foo] // ok
Is this really the shortest path?
Not minimized... and not endorsing...
scala> import scala.reflect.runtime.universe
import scala.reflect.runtime.universe
scala> import scala.reflect.internal.{ Definitions, SymbolTable, StdNames }
import scala.reflect.internal.{Definitions, SymbolTable, StdNames}
scala> val ds = universe.asInstanceOf[Definitions with SymbolTable with StdNames]
ds: scala.reflect.internal.Definitions with scala.reflect.internal.SymbolTable with scala.reflect.internal.StdNames = scala.reflect.runtime.JavaUniverse#52a16a10
scala> val n = ds.newTermName("foo")
n: ds.TermName = foo
scala> ds.nme.defaultGetterName(n,1)
res1: ds.TermName = foo$default$1
Here's a working version that you can copy into your codebase:
import scala.reflect.api
import scala.reflect.api.{TypeCreator, Universe}
import scala.reflect.runtime.universe._

object Maker {
  val mirror = runtimeMirror(getClass.getClassLoader)

  var makerRunNumber = 1

  def apply[T: TypeTag]: T = {
    val method = typeOf[T].companion.decl(TermName("apply")).asMethod
    val params = method.paramLists.head
    val args = params.map { param =>
      makerRunNumber += 1
      param.info match {
        case t if t <:< typeOf[Enumeration#Value] => chooseEnumValue(convert(t).asInstanceOf[TypeTag[_ <: Enumeration]])
        case t if t =:= typeOf[Int] => makerRunNumber
        case t if t =:= typeOf[Long] => makerRunNumber
        case t if t =:= typeOf[Date] => new Date(Time.now.inMillis) // Date and Time come from the original codebase (e.g. java.util.Date plus a time helper) and need their own imports
        case t if t <:< typeOf[Option[_]] => None
        case t if t =:= typeOf[String] && param.name.decodedName.toString.toLowerCase.contains("email") => s"random-$arbitrary@give.asia"
        case t if t =:= typeOf[String] => s"arbitrary-$makerRunNumber"
        case t if t =:= typeOf[Boolean] => false
        case t if t <:< typeOf[Seq[_]] => List.empty
        case t if t <:< typeOf[Map[_, _]] => Map.empty
        // Add more special cases here.
        case t if isCaseClass(t) => apply(convert(t))
        case t => throw new Exception(s"Maker doesn't support generating $t")
      }
    }

    val obj = mirror.reflectModule(typeOf[T].typeSymbol.companion.asModule).instance
    mirror.reflect(obj).reflectMethod(method)(args:_*).asInstanceOf[T]
  }

  def chooseEnumValue[E <: Enumeration: TypeTag]: E#Value = {
    val parentType = typeOf[E].asInstanceOf[TypeRef].pre
    val valuesMethod = parentType.baseType(typeOf[Enumeration].typeSymbol).decl(TermName("values")).asMethod
    val obj = mirror.reflectModule(parentType.termSymbol.asModule).instance
    mirror.reflect(obj).reflectMethod(valuesMethod)().asInstanceOf[E#ValueSet].head
  }

  def convert(tpe: Type): TypeTag[_] = {
    TypeTag.apply(
      runtimeMirror(getClass.getClassLoader),
      new TypeCreator {
        override def apply[U <: Universe with Singleton](m: api.Mirror[U]) = {
          tpe.asInstanceOf[U # Type]
        }
      }
    )
  }

  def isCaseClass(t: Type) = {
    t.companion.decls.exists(_.name.decodedName.toString == "apply") &&
      t.decls.exists(_.name.decodedName.toString == "copy")
  }
}
And, when you want to use it, you can call:
val user = Maker[User]
val user2 = Maker[User].copy(email = "someemail@email.com")
The code above generates arbitrary and unique values. The data aren't exactly randomised; it's best for use in tests.
It works with Enums and nested case classes. You can also easily extend it to support some other special types.
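For example, a hypothetical extra case for Double, to be added under the "Add more special cases here" comment inside the match:

// hypothetical: generate a unique Double the same way Int/Long are handled
case t if t =:= typeOf[Double] => makerRunNumber.toDouble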
Read our full blog post here: https://give.engineering/2018/08/24/instantiate-case-class-with-arbitrary-value.html
This is the most complete example of how to create a case class via reflection with default constructor parameters (GitHub source):
import scala.reflect.runtime.universe
import scala.reflect.internal.{Definitions, SymbolTable, StdNames}

object Main {
  def newInstanceWithDefaultParameters(className: String): Any = {
    val runtimeMirror: universe.Mirror = universe.runtimeMirror(getClass.getClassLoader)
    val ds = universe.asInstanceOf[Definitions with SymbolTable with StdNames]
    val classSymbol = runtimeMirror.staticClass(className)
    val classMirror = runtimeMirror.reflectClass(classSymbol)
    val moduleSymbol = runtimeMirror.staticModule(className)
    val moduleMirror = runtimeMirror.reflectModule(moduleSymbol)
    val moduleInstanceMirror = runtimeMirror.reflect(moduleMirror.instance)
    val defaultValueMethodSymbols = moduleMirror.symbol.info.members
      .filter(_.name.toString.startsWith(ds.nme.defaultGetterName(ds.newTermName("apply"), 1).toString.dropRight(1)))
      .toSeq
      .reverse
      .map(_.asMethod)
    val defaultValueMethods = defaultValueMethodSymbols.map(moduleInstanceMirror.reflectMethod).toList
    val primaryConstructorMirror = classMirror.reflectConstructor(classSymbol.primaryConstructor.asMethod)
    primaryConstructorMirror.apply(defaultValueMethods.map(_.apply()): _*)
  }

  def main(args: Array[String]): Unit = {
    val instance = newInstanceWithDefaultParameters(classOf[Bar].getName)
    println(instance)
  }
}

case class Bar(i: Int = 33)

Min/max with Option[T] for possibly empty Seq?

I'm doing a bit of Scala gymnastics where I have Seq[T] in which I try to find the "smallest" element. This is what I do right now:
val leastOrNone = seq.reduceOption { (best, current) =>
  if (current.something < best.something) current
  else best
}
It works fine, but I'm not quite satisfied - it's a bit long for such a simple thing, and I don't care much for "if"s. Using minBy would be much more elegant:
val least = seq.minBy(_.something)
... but min and minBy throw exceptions when the sequence is empty. Is there an idiomatic, more elegant way of finding the smallest element of a possibly empty list as an Option?
seq.reduceOption(_ min _)
does what you want?
Edit: Here's an example incorporating your _.something:
case class Foo(a: Int, b: Int)
val seq = Seq(Foo(1,1),Foo(2,0),Foo(0,3))
val ord = Ordering.by((_: Foo).b)
seq.reduceOption(ord.min) //Option[Foo] = Some(Foo(2,0))
or, as generic method:
def minOptionBy[A, B: Ordering](seq: Seq[A])(f: A => B) =
seq reduceOption Ordering.by(f).min
which you could invoke with minOptionBy(seq)(_.something)
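For instance, with the Foo example above:
minOptionBy(seq)(_.b)   // Some(Foo(2,0))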
Starting Scala 2.13, minByOption/maxByOption is now part of the standard library and returns None if the sequence is empty:
seq.minByOption(_.something)
List((3, 'a'), (1, 'b'), (5, 'c')).minByOption(_._1) // Option[(Int, Char)] = Some((1,b))
List[(Int, Char)]().minByOption(_._1) // Option[(Int, Char)] = None
A safe, compact and O(n) version with Scalaz:
xs.nonEmpty option xs.minBy(_.foo)
Hardly an option for any larger list due to O(n log n) complexity:
seq.sortBy(_.something).headOption
It is also possible to do it like this:
Some(seq).filter(_.nonEmpty).map(_.minBy(_.something))
How about this?
import util.control.Exception._
allCatch opt seq.minBy(_.something)
Or, more verbose, if you don't want to swallow other exceptions:
catching(classOf[UnsupportedOperationException]) opt seq.minBy(_.something)
Alternatively, you can pimp all collections with something like this:
import collection._

class TraversableOnceExt[CC, A](coll: CC, asTraversable: CC => TraversableOnce[A]) {

  def minOption(implicit cmp: Ordering[A]): Option[A] = {
    val trav = asTraversable(coll)
    if (trav.isEmpty) None
    else Some(trav.min)
  }

  def minOptionBy[B](f: A => B)(implicit cmp: Ordering[B]): Option[A] = {
    val trav = asTraversable(coll)
    if (trav.isEmpty) None
    else Some(trav.minBy(f))
  }
}

implicit def extendTraversable[A, C[A] <: TraversableOnce[A]](coll: C[A]): TraversableOnceExt[C[A], A] =
  new TraversableOnceExt[C[A], A](coll, identity)

implicit def extendStringTraversable(string: String): TraversableOnceExt[String, Char] =
  new TraversableOnceExt[String, Char](string, implicitly)

implicit def extendArrayTraversable[A](array: Array[A]): TraversableOnceExt[Array[A], A] =
  new TraversableOnceExt[Array[A], A](array, implicitly)
And then just write seq.minOptionBy(_.something).
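For example, with those implicits in scope:

Seq(3, 1, 2).minOption                   // Some(1)
Seq.empty[Int].minOption                 // None
List("b", "a", "c").minOptionBy(_.head)  // Some(a)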
I had the same problem before, so I extended Ordered and implemented the compare function.
Here is an example:
case class Point(longitude0: String, latitude0: String) extends Ordered[Point] {

  def this(point: Point) = this(point.original_longitude, point.original_latitude)

  val original_longitude = longitude0
  val original_latitude = latitude0
  val longitude = parseDouble(longitude0).get
  val latitude = parseDouble(latitude0).get

  override def toString: String = "longitude: " + original_longitude + ", latitude: " + original_latitude

  def parseDouble(s: String): Option[Double] = try { Some(s.toDouble) } catch { case _ => None }

  def distance(other: Point): Double =
    sqrt(pow(longitude - other.longitude, 2) + pow(latitude - other.latitude, 2))

  override def compare(that: Point): Int = {
    if (longitude < that.longitude)
      return -1
    else if (longitude == that.longitude && latitude < that.latitude)
      return -1
    else
      return 1
  }
}
So if I have a Seq of Points, I can ask for the max or min:
var points = Seq[Point]()
val maxPoint = points.max
val minPoint = points.min
You could always do something like:
case class Foo(num: Int)
val foos: Seq[Foo] = Seq(Foo(1), Foo(2), Foo(3))
val noFoos: Seq[Foo] = Seq.empty
def minByOpt(foos: Seq[Foo]): Option[Foo] =
  foos.foldLeft(None: Option[Foo]) { (acc, elem) =>
    Option((elem +: acc.toSeq).minBy(_.num))
  }
Then use like:
scala> minByOpt(foos)
res0: Option[Foo] = Some(Foo(1))
scala> minByOpt(noFoos)
res1: Option[Foo] = None
For scala < 2.13
import scala.util.Try
Try(seq.minBy(_.something)).toOption
For scala 2.13
seq.minByOption(_.something)
In Haskell you'd wrap the minimumBy call as
least f x | Seq.null x = Nothing
          | otherwise  = Just (Seq.minimumBy f x)
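A direct Scala analogue of that wrapper (just a sketch) would be:

def least[A](ord: Ordering[A])(xs: Seq[A]): Option[A] =
  if (xs.isEmpty) None else Some(xs.min(ord))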