Check for acceptance of type, rather than value, with isDefinedAt - scala

I have a case where I want to use isDefinedAt to check whether a partial function accepts a type, rather than a specific value.
val test: PartialFunction[Any, Unit] = {
  case y: Int => ???
  case ComplexThing(x, y, z) => ???
}
Here you could do something like test isDefinedAt 1 to check for acceptance of that value. However, what I really want to do is check for acceptance of all Ints. More specifically, the type I want to check is awkward to initialize (it has a lot of dependencies), so I would really like to avoid creating an instance if possible - for the moment I'm just using nulls, which feels ugly. Unfortunately, there is no test.isDefinedAt[Int].
I'm not worried about it only accepting some instances of that type - I would just like to know whether it's completely impossible for that type to be accepted.

There is no way to make PartialFunction do this. In fact, because of type erasure, it can be difficult to operate on types at runtime. If you want to be able to verify types at compile-time you can use typeclasses instead:
class AllowType[-T] {
  def allowed = true
}
object AllowType {
  implicit object DontAllowAnyType extends AllowType[Any] {
    override def allowed = false
  }
}
implicit object AllowInt extends AllowType[Int]
implicit object AllowString extends AllowType[String]
def isTypeAllowed[T](implicit at: AllowType[T]) = at.allowed
isTypeAllowed[Int] // true
isTypeAllowed[Double] // false

The answer appears to be that this simply isn't possible. There are other ways to do it (as in wingedsubmariner's answer), but they require either duplicating the information (which renders it pointless, since the whole reason for doing this was to avoid that) or abandoning partial functions (which are dictated by an outside API).
The best solution is just to use nulls to fill in the dependencies when creating instances to check with. It's ugly and has its own issues, but it appears to be the best that is possible without substantial change.
test.isDefinedAt(ComplexThing(null, null, null))

Related

How to design abstract classes if methods don't have the exact same signature?

This is a "real life" OO design question. I am working with Scala, and interested in specific Scala solutions, but I'm definitely open to hear generic thoughts.
I am implementing a branch-and-bound combinatorial optimization program. The algorithm itself is pretty easy to implement. For each different problem we just need to implement a class that describes the allowed neighbor states for the search, how to calculate the cost, and potentially the lower bound, etc...
I also want to be able to experiment with different data structures. For instance, one way to store a logic formula is as a simple list of lists of integers. This represents a set of clauses, each integer a literal. We can get much better performance, though, if we do something like a "two-literal watch list" and store some extra information about the formula in general.
That all would mean something like this
object BnBSolver {
  def solve[S <: BnBState[_]](states: Seq[S], best_state: Option[S]): Option[S] =
    if (states.isEmpty) best_state
    else {
      val next_state = states.head
      /* compare to best state, etc... */
      val new_states = new_branches ++ states.tail
      solve(new_states, new_best_state)
    }
}

class BnBState[F <: Formula[F]](clauses: F, assigned_variables: List[Int]) {
  def cost: Int = ???
  def branches: Seq[BnBState[F]] = {
    val ll = clauses.pick_variable
    List(
      new BnBState(clauses.assign(ll), ll :: assigned_variables),
      new BnBState(clauses.assign(-ll), -ll :: assigned_variables)
    )
  }
}

case class Formula[F <: Formula[F]](clauses: List[List[Int]]) {
  def assign(ll: Int): F =
    Formula(clauses.filterNot(_ contains ll)
                   .map(_.filterNot(_ == -ll)))
}
Hopefully this is not too crazy, wrong or confusing. The whole issue here is that this assign method from a formula would usually take just the current literal that is going to be assigned. In the case of two-literal watch lists, though, you are doing some lazy thing that requires you to know later what literals have been previously assigned.
One way to fix this is to just keep the list of previously assigned literals in the data structure, maybe as a private field, making it a self-standing lazy data structure. But the list of previous assignments is actually something that may be naturally available to whoever is using the Formula class. So it makes sense to allow the caller to just provide the list on every assign, if necessary.
The problem here is that we cannot then have an abstract Formula class that just declares an assign(ll: Int): Formula. In the normal case this is OK, but if this is a two-literal watch list Formula, it is actually an assign(literal: Int, previous_assignments: Seq[Int]).
From the point of view of the classes using it, this is kind of OK. But then how do we write generic code that can take all these different versions of Formula? Because of the drastic signature change, it cannot simply be an abstract method. We could maybe force the user to always provide the full list of assigned variables, but then this is a kind of lie too. What to do?
The idea is that the watch-list class just becomes a regular assign(Int) class if I write some kind of adapter method that knows where to take the previous assignments from... I am thinking maybe with implicits we can cook something up.
I'll try to make my answer a bit general, since I'm not convinced I'm completely following what you are trying to do. Anyway...
Generally, the first thought should be to accept a common super-class as a parameter. Obviously that won't work with Int and Seq[Int].
You could just have two methods; have one call the other. For instance just wrap an Int into a Seq[Int] with one element and pass that to the other method.
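For illustration, a minimal sketch of that overloading idea (this simplified Formula is just a stand-in, and the body of the Seq version is only illustrative):
case class Formula(clauses: List[List[Int]]) {
  // the single-literal version wraps its argument and delegates
  def assign(ll: Int): Formula = assign(Seq(ll))

  // the Seq version does the actual work
  def assign(lls: Seq[Int]): Formula =
    lls.foldLeft(this) { (f, ll) =>
      Formula(f.clauses.filterNot(_ contains ll).map(_.filterNot(_ == -ll)))
    }
}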
You can also wrap the parameter in some custom class, e.g.
class Assignment {
  ...
}
def int2Assignment(n: Int): Assignment = ...
def seq2Assignment(s: Seq[Int]): Assignment = ...

case class Formula[F <: Formula[F]](clauses: List[List[Int]]) {
  def assign(ll: Assignment): F = ...
}
And of course you would have the option to make those conversion methods implicit so that callers just have to import them, not call them explicitly.
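As a rough sketch of what that could look like (the Assignment representation here is an assumption on my part):
import scala.language.implicitConversions

case class Assignment(literals: Seq[Int])

implicit def int2Assignment(n: Int): Assignment = Assignment(Seq(n))
implicit def seq2Assignment(s: Seq[Int]): Assignment = Assignment(s)

// A caller can then pass either form and let the compiler insert the conversion:
//   formula.assign(3)
//   formula.assign(Seq(3, -7))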
Lastly, you could do this with a typeclass:
trait Assigner[A] {
  ...
}
implicit val intAssigner = new Assigner[Int] {
  ...
}
implicit val seqAssigner = new Assigner[Seq[Int]] {
  ...
}
case class Formula[F <: Formula[F]](clauses: List[List[Int]]) {
  def assign[A : Assigner](ll: A): F = ...
}
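To make that a bit more concrete, the instances might look something like this (what Assigner actually needs to provide is an assumption on my part):
trait Assigner[A] {
  def literals(a: A): Seq[Int]
}

implicit val intAssigner: Assigner[Int] = new Assigner[Int] {
  def literals(a: Int): Seq[Int] = Seq(a)
}

implicit val seqAssigner: Assigner[Seq[Int]] = new Assigner[Seq[Int]] {
  def literals(a: Seq[Int]): Seq[Int] = a
}

// assign can then ask the instance how to interpret whatever it was given:
//   def assign[A: Assigner](ll: A): F = { val ls = implicitly[Assigner[A]].literals(ll); ... }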
You could also make that type parameter at the class level:
case class Formula[A : Assigner, F <: Formula[A, F]](clauses: List[List[Int]]) {
  def assign(ll: A): F = ...
}
Which one of these paths is best is up to preference and how it might fit in with the rest of the code.

Scala: Why use implicit on function argument?

I have a following function:
def getIntValue(x: Int)(implicit y: Int ) : Int = {x + y}
I see the above declaration everywhere. I understand what the function is doing. It is a curried function which takes two arguments. If you omit the second argument, an implicit Int value will be picked up and used instead. So I think it is something very similar to defining a default value for the argument.
scala> implicit val temp = 3
temp: Int = 3

scala> getIntValue(3)
res8: Int = 6
I was wondering, what are the benefits of the above declaration?
Here's my "pragmatic" answer: you typically use currying as more of a "convention" than anything else meaningful. It comes in really handy when your last parameter happens to be a "call by name" parameter (for example: : => Boolean):
def transaction(conn: Connection)(codeToExecuteInTransaction: => Boolean) = {
  conn.startTransaction // start transaction
  val booleanResult = codeToExecuteInTransaction // invoke the code block they passed in
  // deal with errors and rollback if necessary, or commit
  // return connection to connection pool
}
What this is saying is "I have a function called transaction, its first parameter is a Connection and its second parameter will be a code-block".
This allows us to use this method like so (using the "I can use curly brace instead of parenthesis rule"):
transaction(myConn) {
  // code to execute in a transaction
  // the code block's last executable statement must be a Boolean as per the second
  // parameter of the transaction method
}
If you didn't curry that transaction method, it would look pretty unnatural doing this:
transaction(myConn, {
  // code block
})
How about implicit? Yes, it can seem like a very ambiguous construct, but you get used to it after a while, and the nice thing about implicit functions is they have scoping rules. So this means for production, you might define an implicit function for getting that database connection from the PROD database, but in your integration test you'll define an implicit function that will supersede the PROD version, and it will be used to get a connection from a DEV database instead for use in your test.
As an example, how about we add an implicit parameter to the transaction method? (An implicit parameter list has to be the last one, so the code block now comes first.)
def transaction(codeToExecuteInTransaction: => Boolean)(implicit conn: Connection) = {
  // same body as before
}
Now, assuming I have an implicit function somewhere in my code base that returns a Connection, like so:
implicit def getConnectionFromPool: Connection = { ... }
I can execute the transaction method like so:
transaction {
  // code to execute in transaction
}
and Scala will translate that to:
transaction {
  // code to execute in transaction
}(getConnectionFromPool)
In summary, implicits are a pretty nice way to not have to make the developer provide a value for a required parameter when that parameter is, 99% of the time, going to be the same everywhere you use the function. In that 1% of the time you need a different Connection, you can provide your own connection by passing in a value instead of letting Scala figure out which implicit function provides the value.
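For that 1% case, explicitly supplying the connection could look like this (devConnection is a made-up value, just for illustration):
transaction {
  // code to execute in transaction
  true
}(devConnection)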
In your specific example there are no practical benefits. In fact using implicits for this task will only obfuscate your code.
The standard use case of implicits is the Type Class Pattern. I'd say that it is the only use case that is practically useful. In all other cases it's better to have things explicit.
Here is an example of a typeclass:
// A typeclass
trait Show[a] {
  def show(a: a): String
}

// Some data type
case class Artist(name: String)

// An instance of the `Show` typeclass for that data type
implicit val artistShowInstance =
  new Show[Artist] {
    def show(a: Artist) = a.name
  }

// A function that works for any type `a`, which has an instance of a class `Show`
def showAListOfShowables[a](list: List[a])(implicit showInstance: Show[a]): String =
  list.view.map(showInstance.show).mkString(", ")

// The following code outputs `Beatles, Michael Jackson, Rolling Stones`
val list = List(Artist("Beatles"), Artist("Michael Jackson"), Artist("Rolling Stones"))
println(showAListOfShowables(list))
This pattern originates from a functional programming language named Haskell and has turned out to be more practical than the standard OO approaches for writing modular and decoupled software. Its main benefit is that it allows you to extend already existing types with new functionality without changing them.
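For instance, sticking with the Show typeclass above, we can give a type we don't own (say, Int) an instance without touching Int itself (a small sketch):
// A `Show` instance for Int, added from the outside; nothing about Int changes
implicit val intShowInstance: Show[Int] =
  new Show[Int] {
    def show(a: Int) = a.toString
  }

println(showAListOfShowables(List(1, 2, 3))) // outputs `1, 2, 3`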
There are plenty of details left unmentioned, like syntactic sugar, def instances, and so on. It is a huge subject and fortunately it has great coverage throughout the web. Just google for "scala type class".
There are many benefits, outside of your example.
I'll give just one; at the same time, this is also a trick that you can use on certain occasions.
Imagine you create a trait that is a generic container for other values, like a list, a set, a tree or something like that.
trait MyContainer[A] {
  def containedValue: A
}
Now, at some point, you find it useful to iterate over all elements of the contained value.
Of course, this only makes sense if the contained value is of an iterable type.
But because you want your class to be useful for all types, you don't want to restrict A to be of a Seq type, or Traversable, or anything like that.
Basically, you want a method that says: "I can only be called if A is of a Seq type."
And if someone calls it on, say, MyContainer[Int], that should result in a compile error.
That's possible.
What you need is some evidence that A is of a sequence type.
And you can do that with Scala and implicit arguments:
trait MyContainer[A] {
  def containedValue: A

  // f must combine two Bs, since reduce expects a binary operator
  def aggregate[B](f: (B, B) => B)(implicit ev: A => Seq[B]): B =
    ev(containedValue) reduce f
}
So, if you call this method on a MyContainer[Seq[Int]], the compiler will look for an implicit Seq[Int]=>Seq[B].
That's really simple for the compiler to resolve.
Predef always provides an implicit value of type A <:< A (via its conforms method), and <:< is a subtype of A => A, so it satisfies the evidence parameter.
It behaves just like identity: it simply returns whatever argument is passed to it.
I don't know what this pattern is called. (Can anyone help out?)
But I think it's a neat trick that comes in handy sometimes.
You can see a good example of that in the Scala library if you look at the method signature of Seq.sum.
In the case of sum, another implicit parameter type is used; in that case, the implicit parameter is evidence that the contained type is numeric, and therefore, a sum can be built out of all contained values.
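As a sketch of the same idea (not the actual library code), the two evidence parameters can be combined like this:
// Sums the contained values, but only when A can be viewed as a Seq[B] of numbers
def sumContents[A, B](c: MyContainer[A])(implicit ev: A => Seq[B], num: Numeric[B]): B =
  ev(c.containedValue).sum(num)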
That's not the only use of implicits, and certainly not the most prominent, but I'd say it's an honorable mention. :-)

Scala: compare type of generic class

There have been many questions on that issue, but sadly none seems to solve my problem.
I've written a generic scala class, let's call it
class MyClass[A]() { ... }
As well as the corresponding companion object:
object MyClass { ... }
Inside MyClass I want to define a function whose behaviour depends on the given type A. For instance, let's just assume I want to define a 'smaller' function of type (A, A) => Boolean that by default returns true no matter what the elements are, but is meant to return the correct result for certain types such as Int, Float etc.
My idea was to define 'smaller' as member of the class in the following way:
class MyClass[A]() {
  val someArray = new Array[A](1) // will be referred to later on
  var smaller: (A, A) => Boolean = MyClass.getSmallerFunction(this)
  ...some Stuff...
}

object MyClass {
  def getSmallerFunction[A](m: MyClass[A]): (A, A) => Boolean = {
    var func = (a: A, b: A) => true
    // This doesn't compile, since the compiler doesn't know what 'A' is
    if (A == Int) func = ((a: Int, b: Int) => (a < b)).asInstanceOf[(A, A) => Boolean]
    // This compiles, but always returns true (due to type erasure I guess?)
    if (m.isInstanceOf[MyClass[Float]]) func = ((a: Float, b: Float) => (a < b)).asInstanceOf[(A, A) => Boolean]
    // This compiles but always returns true as well, due to the newly created array only containing null elements
    if (m.someArray(0).isInstanceOf[Long]) func = ((a: Long, b: Long) => (a < b)).asInstanceOf[(A, A) => Boolean]
    func
  }
  ...some more stuff...
}
The getSmallerFunction method contains a few of the implementations I experimented with, but none of them works.
After a while of researching the topic, it at first seemed as if manifests were the way to go, but unfortunately they don't seem to work here, because object MyClass also contains some constructor calls of the class, which, no matter how I change the code, always result in the compiler complaining about the lack of information required to use manifests. Maybe there is a manifest-based solution, but I certainly haven't found it yet.
Note: The usage of a 'smaller' function is just an example, there are several functions of this kind I want to implement. I know that for this specific case I could simply allow only those types A that are Comparable, but that's really not what I'm trying to achieve.
Sorry for the wall of text - I hope it's possible to comprehend my problem.
Thanks in advance for your answers.
Edit:
Maybe I should go a bit more into detail: What I was trying to do was the implementation of a library for image programming (mostly for my personal use). 'MyClass' is actually a class 'Pixelmap' that contains an array of "pixels" of type A as well as certain methods for pixel manipulation. Those Pixelmaps can be of any type, although I mostly use Float and Color datatypes, and sometimes Boolean for masks.
One of the datatype dependent functions I need is 'blend' (although 'smaller' is used too), which interpolates between two values of type A and can for instance be used for smooth resizing of such a Pixelmap. By default, this blend function (which is of type (A,A,Float) => A) simply returns the first given value, but for Pixelmaps of type Float, Color etc. a proper interpolation is meant to be defined.
So every Pixelmap-instance should get one pointer to the appropriate 'blend' function right after its creation.
Edit 2:
Seems like I found a suitable way to solve the problem, at least for my specific case. It really is more of a work around though.
I simply added an implicit parameter of type A to MyClass:
class MyClass[A]()(implicit val dummy: A) { ... }
When I want to find out whether the type A of an instance m:MyClass is "Float" for instance, I can just use "m.dummy.isInstanceOf[Float]".
To make this actually work I added a bunch of predefined implicit values for all datatypes I needed to the MyClass object:
object MyClass {
  implicit val floatDummy: Float = 0.0f
  implicit val intDummy: Int = 0
  ...
}
Although this really doesn't feel like a proper solution, it seems to get me around the problem pretty well.
I've omitted a whole bunch of stuff because, if I'm honest, I'm still not entirely sure what you're trying to do. But here is a solution that may help you.
trait MyClass[A] {
  def smaller: (A, A) => Boolean
}

object MyClass {
  implicit object intMyClass extends MyClass[Int] {
    def smaller = (a: Int, b: Int) => (a < b)
  }
  implicit object floatMyClass extends MyClass[Float] {
    def smaller = (a: Float, b: Float) => (a < b)
  }
  implicit object longMyClass extends MyClass[Long] {
    def smaller = (a: Long, b: Long) => (a < b)
  }

  def getSmallerFunction[T : MyClass](a: T, b: T) = implicitly[MyClass[T]].smaller(a, b)
}
The idea is that you define your smaller methods as implicit objects inside your MyClass object, together with a getSmallerFunction method. This method is special in the sense that it looks for a type-class instance that satisfies its type bounds. We can then go:
println(MyClass.getSmallerFunction(1, 2))
And it automagically knows the correct method to use. You could extend this technique to handle your Array example. This is a great tutorial/presentation on what type-classes are.
Edit: I've just realised you want an actual function returned. In my case, like yours, the type parameter is lost. But if at the end of the day you just want to be able to selectively call methods depending on their type, the approach I've detailed should help you.
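If the function value itself is what you need, a small variation on the above can expose it (just a sketch on top of the same MyClass instances):
// Returns the comparison function itself, resolved from the implicit MyClass instance
def smallerFunction[T](implicit mc: MyClass[T]): (T, T) => Boolean = mc.smaller

val smallerInt = smallerFunction[Int] // picks up MyClass.intMyClass
println(smallerInt(1, 2)) // true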

Scala: checking if an object is Numeric

Is it possible for a pattern match to detect if something is a Numeric? I want to do the following:
class DoubleWrapper(value: Double) {
  override def equals(o: Any): Boolean = o match {
    case o: Numeric => value == o.toDouble
    case _ => false
  }
  override def hashCode(): Int = value.##
}
But of course this doesn't really work because Numeric isn't the supertype of things like Int and Double, it's a typeclass. I also can't do something like def equals[N: Numeric](o: N) because o has to be Any to fit the contract for equals.
So how do I do it without listing out every known Numeric class (including, I guess, user-defined classes I may not even know about)?
The original problem is not solvable, and here is my reasoning why:
To find out whether a type is an instance of a typeclass (such as Numeric), we need implicit resolution. Implicit resolution is done at compile time, but here we would need it to be done at runtime. That is currently not possible, because as far as I can tell, the Scala compiler does not leave all the necessary information in the compiled class file. To see that, one can write a test class with a method that contains a local variable carrying the implicit modifier. The compilation output will not change when the modifier is removed.
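The experiment described above looks roughly like this (compile it once with and once without the implicit modifier and compare the class files):
class ImplicitErasureTest {
  def foo(): Int = {
    implicit val x: Int = 1 // removing `implicit` here leaves the bytecode unchanged
    x + 1
  }
}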
Are you using DoubleWrapper to add methods to Double? Then it should be a transparent type, i.e. you shouldn't be keeping instances, but rather define the pimped methods to return Double instead. That way you can keep using == as defined for primitives, which already does what you want (6.0 == 6 yields true).
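One way to render that "transparent wrapper" suggestion is an implicit value class whose methods return plain Doubles (the method shown is made up, purely for illustration):
object DoubleSyntax {
  // Adds methods to Double without keeping wrapper instances around; each method
  // returns a plain Double, so the primitive == comparison keeps working
  implicit class DoubleOps(val value: Double) extends AnyVal {
    def halve: Double = value / 2 // illustrative method, not from the original question
  }
}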
Ok, so if not, how about
override def equals(o: Any): Boolean = o == value
If you construct equals methods of other wrappers accordingly, you should end up comparing the primitive values again.
Another question is whether you should have such an equals method for a stateful wrapper. I don't think mutable objects should be equal according to one of the values they hold—you will most likely run into trouble with that.

Could/should an implicit conversion from T to Option[T] be added/created in Scala?

Is this an opportunity to make things a bit more efficient (for the programmer)? I find it gets a bit tiresome having to wrap things in Some, e.g. Some(5). What about something like this:
implicit def T2OptionT[T](x: T): Option[T] = if (x == null) None else Some(x)
You would lose some type safety and possibly cause confusion.
For example:
val iThinkThisIsAList = 2
for (i <- iThinkThisIsAList) yield { i + 1 }
I (for whatever reason) thought I had a list, and it didn't get caught by the compiler when I iterated over it because it was auto-converted to an Option[Int].
I should add that I think this is a great implicit to have explicitly imported, just probably not a global default.
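One way to keep it opt-in rather than a global default is to put the conversion behind an explicit import (a sketch):
object OptionConversions {
  import scala.language.implicitConversions
  // Option(x) already maps null to None, so this matches the conversion above
  implicit def anyToOption[T](x: T): Option[T] = Option(x)
}

// elsewhere, only code that does `import OptionConversions._` sees the conversion:
//   val maybe: Option[Int] = 5 // Some(5)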
Note that you could use the explicit implicit pattern which would avoid confusion and keep code terse at the same time.
What I mean by explicit implicit is rather than have a direct conversion from T to Option[T] you could have a conversion to a wrapper object which provides the means to do the conversion from T to Option[T].
class Optionable[T <: AnyRef](value: T) {
  def toOption: Option[T] = if (value == null) None else Some(value)
}

implicit def anyRefToOptionable[T <: AnyRef](value: T) = new Optionable(value)
... I might find a better name for it than Optionable, but now you can write code like:
val x: String = "foo"
x.toOption // Some("foo")
val y: String = null
x.toOption // None
I believe that this way is fully transparent and aids in the understanding of the written code - eliminating all checks for null in a nice way.
Note the T <: AnyRef - you should only do this implicit conversion for types that allow null values, which by definition are reference types.
The general guidelines for implicit conversions are as follows:
When you need to add members to a type (a la "open classes"; aka the "pimp my library" pattern), convert to a new type which extends AnyRef and which only defines the members you need.
When you need to "correct" an inheritance hierarchy. Thus, you have some type A which should have subclassed B, but didn't for some reason. In that case, you can define an implicit conversion from A to B.
These are the only cases where it is appropriate to define an implicit conversion. Any other conversion runs into type safety and correctness issues in a hurry.
It really doesn't make any sense for T to extend Option[T], and obviously the purpose of the conversion is not simply the addition of members. Thus, such a conversion would be inadvisable.
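For what it's worth, here is a tiny sketch of the second case above, correcting an inheritance hierarchy (the types are made up for illustration):
import scala.language.implicitConversions

trait B { def describe: String }
class A(val name: String) // imagine A comes from a library and "should have" extended B

implicit def aToB(a: A): B = new B { def describe = a.name }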
It would seem that this could be confusing to other developers as they read your code.
Generally, implicits help convert from one type to another and cut out conversion boilerplate that can clutter code, but if I have some variable and it somehow becomes a Some, that would seem bothersome.
You may want to put some code showing it being used, to see how confusing it would be.
You could also try to overload the method:
def having(key: String) = having(key, None)
def having(key: String, default: String) = having(key, Some(default))
def having(key: String, default: Option[String] = Option.empty): Create = {
  keys += ((key, default))
  this
}
That looks good to me, except it may not work for a primitive T (which can't be null). I guess a non-specialized generic always gets boxed primitives, so probably it's fine.