According to the docs, the None object is intended to "represent non-existent values". As far as I've seen it's mostly used as an empty Option. But is it a good idea to use it for other purposes? For example, in my library I want a universal "Empty" object that can be assigned to various missing values, where I would just implicitly convert the "Empty" value to my types as needed:
// In library:
trait A {
  implicit def noneToT1(none: Option[Nothing]): T1 = defaultT1
  implicit def noneToT2(none: Option[Nothing]): T2 = defaultT2
  def f1: T1
  def f2: T2
}
// In the code that uses the library
class EmptyA extends A {
  def f1 = None
  def f2 = None
}
One reason for not (mis)using None in this fashion is that the user would expect f1 and f2 to return Option[T1] and Option[T2] respectively. And they don't. Of course, I could have def f1: Option[T1], but in this case the values are not actually optional: they can either have some default empty value or a real value. I just want to create the default values "under the hood" and have some uniform way of saying "default" or "empty" throughout the entire library. So the question is, should I use None to express this "defaultness" or go for some custom type? Right now I'm using my own object Empty, but it feels a bit superfluous.
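For comparison, the Option-based signature I'm trying to avoid would look roughly like this, with every call site applying the defaults itself (someA stands for any A instance):
trait A {
  def f1: Option[T1]
  def f2: Option[T2]
}
class EmptyA extends A {
  def f1 = None
  def f2 = None
}
// every call site has to apply the default itself:
val v1 = someA.f1.getOrElse(defaultT1)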
EDIT:
To illustrate my question I'll add the code I am using right now:
// In library:
trait Empty
object Empty extends Empty
trait A {
  implicit def emptyToT1(none: Empty): T1 = defaultT1
  implicit def emptyToT2(none: Empty): T2 = defaultT2
  def f1: T1
  def f2: T2
}
// In the code that uses the library
class EmptyA extends A {
  def f1 = Empty
  def f2 = Empty
}
class HalfFullA extends A {
  def f1 = Empty
  def f2 = someValue2
}
class FullA extends A {
  def f1 = someValue1
  def f2 = someValue2
}
My question is quite simple: is it a good idea to use scala's None instead of my Empty?
I would just use typeclasses for this:
trait WithDefault[T] {
  def default: T
}
object WithDefault {
  // if T1 is an existing class
  implicit val t1Default = new WithDefault[T1] { def default = defaultT1 }
}
// if T2 is your own class:
class T2 ...
object T2 {
  implicit val withDefault = new WithDefault[T2] { def default = defaultT2 }
}
then somewhere convenient:
def default[T : WithDefault] = implicitly[WithDefault[T]].default
and use:
class EmptyA {
  def f1 = default[T1]
  def f2 = default[T2]
}
Update: To accommodate Vilius, one can try this:
def default = new WithDefault[Nothing] { def default = error("no default") }
implicit def toDefault[U, T](dummy: WithDefault[U])(implicit withDefault: WithDefault[T]): T = withDefault.default
class EmptyA {
  def f1: T1 = default
  def f2: T2 = default
}
This has the benefit over the OP's original attempt in that each new class can define its own default (and others in WithDefault), rather than have everything in a trait A.
However, this doesn't work. See https://issues.scala-lang.org/browse/SI-2046
To work around this:
trait A {
  def f1: T1
  def f2: T2
  implicit def toT1Default(dummy: WithDefault[Nothing]) = toDefault[T1](dummy)
  implicit def toT2Default(dummy: WithDefault[Nothing]) = toDefault[T2](dummy)
}
class EmptyA extends A {
  def f1 = default
  def f2 = default
}
I think you should go for something much simpler. For instance, starting with your example and deleting extraneous stuff, we very quickly get to:
trait A {
  def noT1 = defaultT1
  def noT2 = defaultT2
  def f1: T1
  def f2: T2
}
class EmptyA extends A {
  def f1 = noT1
  def f2 = noT2
}
I really don't see that the addition of Options or implicits to this would add any value, at least not unless there's some other unstated context for the question.
If you can't or don't want to define the default value using inheritance, I suggest keeping the new object. Reusing None for anything other than the counterpart of Some seems wrong and doesn't really save you much.
Using context bounds in Scala you can do stuff like:
trait HasBuild[T] {
  def build(buildable: T): Something
}
object Builders {
  implicit object IntBuilder extends HasBuild[Int] {
    override def build(i: Int) = ??? // Construct a Something however appropriate
  }
}
import Builders._
def foo[T: HasBuild](input: T): Something = implicitly[HasBuild[T]].build(input)
val somethingFromInt = foo(1)
Or simply
val somethingFromInt = implicitly[HasBuild[Int]].build(1)
How could I express the type of a Seq of any elements that have an appropriate implicit HasBuild object in scope? Is this possible without too much magic and external libraries?
Seq[WhatTypeGoesHere] - I should be able to find the appropriate HasBuild for each element
This obviously doesn't compile:
val buildables: Seq[_: HasBuild] = ???
Basically I'd like to be able to handle unrelated types in a common way (e.g. build), without the user manually wrapping them in some kind of adapter, and have the compiler enforce that the types actually can be handled. Not sure if the purpose is clear.
Something you can do:
case class HasHasBuild[A](value: A)(implicit val ev: HasBuild[A])
object HasHasBuild {
  implicit def removeEvidence[A](x: HasHasBuild[A]): A = x.value
  implicit def addEvidence[A: HasBuild](x: A): HasHasBuild[A] = HasHasBuild(x)
}
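For the demonstration that follows, a HasBuild[String] instance might look like this (a sketch; MoreBuilders is a made-up container object and the build body is left open):
object MoreBuilders {
  implicit object StringHasBuild extends HasBuild[String] {
    override def build(s: String): Something = ??? // turn the String into a Something
  }
}
import MoreBuilders._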
and now (assuming you add a HasBuild[String] for demonstration):
val buildables: Seq[HasHasBuild[_]] = Seq(1, "a")
compiles, but
val buildables1: Seq[HasHasBuild[_]] = Seq(1, "a", 1.0)
doesn't. You can use methods with implicit HasBuild parameters when you have only a HasHasBuild:
def foo1[A](x: HasHasBuild[A]) = {
  import x.ev // now you have an implicit HasBuild[A] in scope
  foo(x.value)
}
val somethings: Seq[Something] = buildables.map(foo1(_))
First things first: contrary to some of the comments, you are relying on context bounds. Requesting an implicit type class instance for a T, as in [T: HasBuild], is exactly what a "context bound" is.
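For reference, the desugaring looks like this (fooDesugared is just an illustrative name):
// [T: HasBuild] is sugar for an extra implicit parameter list:
def foo[T: HasBuild](input: T): Something =
  implicitly[HasBuild[T]].build(input)
// which the compiler rewrites to roughly:
def fooDesugared[T](input: T)(implicit ev: HasBuild[T]): Something =
  ev.build(input)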
What you want is achievable, but not trivial and certainly not without other libraries.
import shapeless._
import shapeless.ops.hlist.{Mapper, ToList}
object builder extends Poly1 {
  implicit def caseGeneric[T: HasBuild] = {
    at[T](obj => implicitly[HasBuild[T]].build(obj))
  }
}
class Builder[L <: HList](mappings: L) {
  def build[HL <: HList]()(
    implicit fn: Mapper.Aux[builder.type, L, HL],
    lister: ToList[HL, Something]
  ) = lister(mappings map builder)
  def and[T: HasBuild](el: T) = new Builder[T :: L](el :: mappings)
}
object Builder {
  def apply[T: HasBuild](el: T) = new Builder(el :: HNil)
}
Now you might be able to do stuff like:
Builder(5).and("string").build()
This will call the build methods of all the individual implicit type class instances and give you a list of the results, where every result has type Something. It relies on the fact that all the build methods share the common result type Something, e.g. as per your example:
trait HasBuild[T] {
  def build(buildable: T): Something
}
This is a follow-up to my previous question
Suppose I have a trait ConverterTo and two implementations:
trait ConverterTo[T] {
  def convert(s: String): Option[T]
}
object Converters1 {
  implicit val toInt: ConverterTo[Int] = ???
}
object Converters2 {
  implicit val toInt: ConverterTo[Int] = ???
}
I also have two classes, A1 and A2:
class A1 {
  def foo[T](s: String)(implicit ct: ConverterTo[T]) = ct.convert(s)
}
class A2 {
  def bar[T](s: String)(implicit ct: ConverterTo[T]) = ct.convert(s)
}
Now I would like any foo[T] call to use Converters1 and any bar[T] call to use Converters2 without importing Converters1 and Converters2 in the client code.
val a1 = new A1()
val a2 = new A2()
...
val i = a1.foo[Int]("0") // use Converters1 without importing it
...
val j = a2.bar[Int]("0") // use Converters2 without importing it
Can it be done in Scala?
Import the converters inside the class:
class A1 {
  import Converters1._
  private def fooPrivate[T](s: String)(implicit ct: ConverterTo[T]) = ct.convert(s)
  def fooShownToClient[T](s: String) = fooPrivate(s)
}
Then use the method that is shown to the client:
val a1 = new A1()
a1.fooShownToClient[Int]("0")
Now the client is unaware of the converters.
If you have a situation where you need more local control, you can just opt to pass the implicit parameters explicitly:
val i = a1.foo("0")(Converters1.toInt)
val j = a2.bar("0")(Converters2.toInt)
It really depends on what you want. If you want to select a particular implementation without polluting the local scope, do it like this (or introduce a new scope). mohit's solution works well if the classes need a particular implementation (although in that case there's no real point in declaring this dependency as implicit anymore).
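Introducing a new scope would look roughly like this, keeping each import local to a single call site:
val i = {
  import Converters1._ // toInt from Converters1 is visible only inside this block
  a1.foo[Int]("0")
}
val j = {
  import Converters2._ // toInt from Converters2 is visible only inside this block
  a2.bar[Int]("0")
}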
Is there a way in Scala to call a method belonging to a type? For example, suppose I have a trait called Constructable that describes types that can construct a default instance of themselves. Then I can write the following code:
trait Constructable[A] {
  def construct: A
}
class Foo(x: Int) extends Constructable[Foo] {
  def construct = new Foo(0)
}
def main(args: Array[String]) {
  val f = new Foo(4)
  println(f.construct)
}
This is ok, but what I really want is to be able to construct a default object given only the type of object. For example, suppose I want to accept a list of constructables and prepend a default object at the beginning of the list:
def prependDefault1[A <: Constructable[A]](c: List[A]): List[A] = {
  val h = c.head
  h.construct :: c
}
The above code works, but only if c is not empty. What I'd really like is to write something like the following:
def prependDefault2[A <: Constructable[A]](c: List[A]): List[A] = {
  A.construct :: c
}
Is there any way to achieve this, possibly by changing the definition of a Constructable so that the construct method belongs to the "class" rather than the "instance" (to use Java terminology)?
You can't do it this way, but you can do it using typeclasses:
trait Constructable[A] {
  def construct: A
}
// 'case' just so it's printed nicely
case class Foo(x: Int)
// implicit vals have to be inside some object; omitting it here for clarity
implicit val fooConstructable = new Constructable[Foo] {
  def construct = new Foo(0)
}
def prependDefault2[A : Constructable](c: List[A]): List[A] = {
  implicitly[Constructable[A]].construct :: c
}
And then:
scala> prependDefault2(Nil: List[Foo])
res7: List[Foo] = List(Foo(0))
Some final remarks:
Implicits have to live inside an object. There are three places they can be located:
object Constructable { implicit val fooConstructable = ... (companion object of the typeclass trait)
object Foo { implicit val fooConstructable = ... (companion object of the class we implement typeclass for)
object SomethingElse { implicit val fooConstructable = ... (some random unrelated object)
Only in the last case do you need to use import SomethingElse._ in order to be able to use the implicit.
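For example, the companion-object placement (the second option above) needs no import at the call site; a quick sketch:
case class Foo(x: Int)
object Foo {
  // found automatically because it lives in Foo's companion object
  implicit val fooConstructable: Constructable[Foo] = new Constructable[Foo] {
    def construct = Foo(0)
  }
}
prependDefault2(List(Foo(1))) // List(Foo(0), Foo(1)), no import needed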
Ideally I'd like to be able to do the following in Scala:
import Builders._
val myBuilder = builder[TypeToBuild] // Returns instance of TypeToBuildBuilder
val obj = myBuilder.methodOnTypeToBuildBuilder(...).build()
In principle the goal is simply to be able to 'map' TypeToBuild to TypeToBuildBuilder using external mapping definitions (i.e. assume no ability to change these classes) and leverage this in type inference.
I got the following working with AnyRef types:
import Builders._
val myBuilder = builder(TypeToBuild)
myBuilder.methodOnTypeToBuildBuilder(...).build()
object Builders {
  implicit val typeToBuildBuilderFactory =
    new BuilderFactory[TypeToBuild.type, TypeToBuildBuilder]
  def builder[T, B](typ: T)(implicit ev: BuilderFactory[T, B]): B = ev.create
}
class BuilderFactory[T, B: ClassTag] {
  def create: B = classTag[B].runtimeClass.newInstance().asInstanceOf[B]
}
Note that the type is passed as a function argument rather than a type argument.
I'd be supremely happy just to find out how to get the above working with Any types, rather than just AnyRef types. It seems this limitation arises because singleton types are only supported for AnyRef types (i.e. my use of TypeToBuild.type).
That being said, an answer that solves the original 'ideal' scenario (using a type argument instead of a function argument) would be fantastic!
EDIT
A possible solution that requires classOf[_] (would really love not needing to use classOf!):
import Builders._
val myBuilder = builder(classOf[TypeToBuild])
myBuilder.methodOnTypeToBuildBuilder(...).build()
object Builders {
  implicit val typeToBuildBuilderFactory =
    new BuilderFactory[Class[TypeToBuild], TypeToBuildBuilder]
  def builder[T, B](typ: T)(implicit ev: BuilderFactory[T, B]): B = ev.create
}
class BuilderFactory[T, B: ClassTag] {
  def create: B = classTag[B].runtimeClass.newInstance().asInstanceOf[B]
}
Being able to just use builder(TypeToBuild) is really just a win in elegance/brevity. Being able to use builder[TypeToBuild] would be cool as perhaps this could one day work (with type inference advancements in Scala):
val obj: TypeToBuild = builder.methodOnTypeToBuildBuilder(...).build();
Here is a complete, working example using classOf: http://ideone.com/94rat3
Yes, Scala supports return types based on the parameter types. An example of this would be methods in the collections API like map that use the CanBuildFrom typeclass to return the desired type.
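A familiar example from the standard library (Scala 2.x collections): mapping over a String picks its result type via CanBuildFrom:
"abc".map(_.toUpper) // String = "ABC" (element type is still Char)
"abc".map(_.toInt)   // IndexedSeq[Int] = Vector(97, 98, 99)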
I'm not sure what you are trying to do with your example code, but maybe you want something like:
trait Builder[-A, +B] {
  def create(x: A): B
}
object Builders {
  implicit val int2StringBuilder = new Builder[Int, String] {
    def create(x: Int) = "a" * x
  }
  def buildFrom[A, B](x: A)(implicit ev: Builder[A, B]): B = ev.create(x)
}
import Builders._
buildFrom(5)
The magic with newInstance only works for concrete classes that have a constructor that takes no parameters, so it probably isn't generic enough to be useful.
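To make that limitation concrete, a small sketch (NoArgs and NeedsArgs are made-up classes):
import scala.reflect.{ClassTag, classTag}
case class NoArgs()              // has a zero-argument constructor
case class NeedsArgs(count: Int) // does not
def create[B: ClassTag]: B =
  classTag[B].runtimeClass.newInstance().asInstanceOf[B]
create[NoArgs]      // ok: NoArgs()
// create[NeedsArgs] // compiles, but throws InstantiationException at runtime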
If you're not afraid of implicit conversions, you could do something like this:
import scala.language.implicitConversions
trait BuilderMapping[TypeToBuild, BuilderType] {
  def create: BuilderType
}
case class BuilderSpec[TypeToBuild]()
def builder[TypeToBuild] = BuilderSpec[TypeToBuild]
implicit def builderSpecToBuilder[TypeToBuild, BuilderType]
    (spec: BuilderSpec[TypeToBuild])
    (implicit ev: BuilderMapping[TypeToBuild, BuilderType]) = ev.create
case class Foo(count: Int)
case class FooBuilder() {
  def translate(f: Foo) = "a" * f.count
}
implicit val FooToFooBuilder = new BuilderMapping[Foo, FooBuilder] {
  def create = FooBuilder()
}
val b = builder[Foo]
println(b.translate(Foo(3)))
The implicit conversions aren't too bad, since they're constrained to these builder-oriented types. The conversion is needed to make b.translate valid.
It looked like wingedsubmariner's answer was most of what you wanted, but you didn't want to specify both TypeToBuild and BuilderType (and you didn't necessarily want to pass a value). To achieve that, we needed to break up that single generic signature into two parts, which is why the BuilderSpec type exists.
It might also be possible to use something like partial generic application (see the answers to a question that I asked earlier), though I can't put the pieces together in my head at the moment.
I'll resort to answering my own question since a Redditor ended up giving me the answer I was looking for and they appear to have chosen not to respond here.
trait Buildable[T] {
  type Result
  def newBuilder: Result
}
object Buildable {
  implicit object ABuildable extends Buildable[A] {
    type Result = ABuilder
    override def newBuilder = new ABuilder
  }
  implicit object BBuildable extends Buildable[B] {
    type Result = BBuilder
    override def newBuilder = new BBuilder
  }
}
def builder[T](implicit B: Buildable[T]): B.Result = B.newBuilder
class ABuilder {
  def method1() = println("Call from ABuilder")
}
class BBuilder {
  def method2() = println("Call from BBuilder")
}
Then you will get:
scala> builder[A].method1()
Call from ABuilder
scala> builder[B].method2()
Call from BBuilder
You can see the reddit post here: http://www.reddit.com/r/scala/comments/2542x8/is_it_possible_to_define_a_function_return_type/
And a full working version here: http://ideone.com/oPI7Az
What I'm trying to do is come up with a case class which I can use in pattern matching and which has exactly one field, e.g. an immutable set. Furthermore, I would like to make use of functions like map, foldLeft and so on, which should be passed down to the set. I tried the following:
case class foo(s: Set[String]) extends Iterable[String] {
  override def iterator = s.iterator
}
Now if I try to make use of e.g. the map function, I get a type error:
var bar = foo(Set() + "test1" + "test2")
bar = bar.map(x => x)
found : Iterable[String]
required: foo
bar = bar.map(x => x)
^
The type error is perfectly fine (in my understanding). However, I wonder how one would implement a wrapper case class for a collection such that one can call map, foldLeft and so on and still receive an object of the case class. Would one need to override all these functions or is there some other way around this?
Edit
I'm inclined to accept the solution of Régis Jean-Gilles, which works for me. However, after Googling for hours I found another interesting Scala trait named SetProxy. I couldn't find any trivial examples so I'm not sure if this trait does what I want:
come up with a custom type, i.e. a different type than Set
the type must be a case class (we want to do pattern matching)
we need "delegate" methods map, foldLeft and so on which should pass the call to our actual set and return the resulting set wrapped in our new type
My first idea was to extend Set, but my custom type Foo already extends another class. Therefore, the second idea was to mix in the traits Iterable and IterableLike. Now I have read about the trait SetProxy, which made me wonder which is "the best" way to go. What are your thoughts and experiences?
Since I started learning Scala three days ago, any pointers are highly appreciated!
Hmm, this sounds promising to me, but Scala says that variable b is of type Iterable[String] and not of type Foo, i.e. I do not see how IterableLike helps in this situation.
You are right. Merely inheriting from IterableLike as shown by mpartel will make the return type of some methods more precise (such as filter, which will return Foo), but for others such as map or flatMap you will need to provide an appropriate CanBuildFrom implicit.
Here is a code snippet that does just that:
import collection.IterableLike
import collection.generic.CanBuildFrom
import collection.mutable.Builder
case class Foo(s: Set[String]) extends Iterable[String] with IterableLike[String, Foo] {
  override def iterator = s.iterator
  override protected[this] def newBuilder: scala.collection.mutable.Builder[String, Foo] = new Foo.FooBuilder
  def +(elem: String): Foo = new Foo(s + elem)
}
object Foo {
  val empty: Foo = Foo(Set.empty[String])
  def apply(elems: String*) = new Foo(elems.toSet)
  class FooBuilder extends Builder[String, Foo] {
    protected var elems: Foo = empty
    def +=(x: String): this.type = { elems = elems + x; this }
    def clear() { elems = empty }
    def result: Foo = elems
  }
  implicit def canBuildFrom[T]: CanBuildFrom[Foo, String, Foo] = new CanBuildFrom[Foo, String, Foo] {
    def apply(from: Foo) = apply()
    def apply() = new FooBuilder
  }
}
And some test in the repl:
scala> var bar = Foo(Set() + "test1" + "test2")
bar: Foo = (test1, test2)
scala> bar = bar.map(x => x) // compiles just fine because map now returns Foo
bar: Foo = (test1, test2)
Inheriting IterableLike[String, Foo] gives you all those methods such that they return Foo. IterableLike requires you to implement newBuilder in addition to iterator.
import scala.collection.IterableLike
import scala.collection.mutable.{Builder, SetBuilder}
case class Foo(stuff: Set[String]) extends Iterable[String] with IterableLike[String, Foo] {
  def iterator: Iterator[String] = stuff.iterator
  protected[this] override def newBuilder: Builder[String, Foo] = {
    new SetBuilder[String, Set[String]](Set.empty).mapResult(Foo(_))
  }
}
// Test:
val a = Foo(Set("a", "b", "c"))
val b = a.map(_.toUpperCase)
println(b.toList.sorted.mkString(", ")) // Prints A, B, C