scala.collection.immutable.NumericRange[UInt]?

Trying to make a scala.collection.immutable.NumericRange[UInt]
Looks like it needs a scala.math.Integral[UInt].
But there does not seem to be a spire.math.Integral[UInt].
I am assuming that's because UInt violates the laws around Integral in some way.
I am mostly interested in NumericRange[UInt].contains(x: UInt)
Is it folly for me to attempt to construct a scala.math.Integral[UInt] on my own?
Or should I find some other way to get contains?
Is there a trait that should exist, inherited by Set[T], Range, and NumericRange[T], that declares contains(x: T)?
What should that trait be called?
Should I do this as a type class?
What should I call this type class?

If you just need contains(x: UInt), you should use spire.math.Interval[UInt].
See: https://typelevel.org/spire/api/spire/math/Interval.html
If you need other parts of NumericRange[UInt], then see other answers that arrive in the future.
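To see why contains alone doesn't require an Integral instance, here is a minimal sketch using only the standard library: an inclusive interval that needs just an Ordering[T]. (Spire's Interval offers far more, such as open/closed bounds and set operations; the names ClosedInterval and the Long stand-in for UInt below are illustrative, not spire's API.)

```scala
// Membership testing only needs an ordering, not integral arithmetic.
final case class ClosedInterval[T](lo: T, hi: T)(implicit ord: Ordering[T]) {
  def contains(x: T): Boolean =
    ord.lteq(lo, x) && ord.lteq(x, hi)
}

object ClosedIntervalDemo {
  // Long stands in for UInt here, since spire is not on the classpath.
  val r = ClosedInterval(0L, 10L)
}
```

This is essentially what Interval[UInt].contains gives you, without ever needing a (law-violating) Integral[UInt].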

Related

Typeclass dependency with tagless-final

After watching John De Goes' "FP to the Max" (https://www.youtube.com/watch?v=sxudIMiOo68) I'm wondering about approaches for writing FP programs in the tagless-final pattern.
Say I have some typeclass for modelling a side-effecty thing (taking his Console example):
trait Console[F[_]] {
def putStrLn(str: String): F[Unit]
def getStrLn: F[String]
}
How would you depend on Console?
Implicitly
Like demonstrated in his video:
def inputLength[F[_]: Functor: Console]: F[Int] =
Console[F].getStrLn.map(_.length)
Pros: The function signature is clean, and you can benefit from typeclass automatic derivation
Explicitly
By passing the instance to the function directly:
def inputLength[F[_]: Functor](console: Console[F]): F[Int] =
console.getStrLn.map(_.length)
Pros: This allows you to explicitly wire your dependencies according to your needs, and feels less "magical"
Not sure what's the best / most idiomatic way to write this function, would appreciate your opinions.
Thanks!
When you rely on a typeclass instance via implicit parameters, you are certain of one thing: the instance of your typeclass can be determined at compile time (unless you provide it explicitly, which rather defeats the purpose and brings us back to example 2). Conversely, if the instance cannot be determined at compile time, for example when you rely on a configuration parameter to decide which instance to use, then an implicit parameter of any kind is no longer suitable.
Thus, in my opinion, whenever you can determine the instance at compile time and let the compiler figure out the wiring, do so, because as you said you gain a lot from it, such as automatic typeclass derivation when it is available.
The "magical" argument, while understandable, signals that whoever says it still has some mileage to go in the language they are programming in and needs to learn how things work, which is completely OK, yet it is not a good enough reason to avoid typeclass instances via implicit parameters.
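Both styles from the question can be sketched side by side in a self-contained way. A minimal Functor and Console are defined here so the example compiles without cats; the Id "effect" and the canned getStrLn input are assumptions made purely for the demo.

```scala
object TaglessSketch {
  trait Functor[F[_]] { def map[A, B](fa: F[A])(f: A => B): F[B] }
  object Functor { def apply[F[_]](implicit f: Functor[F]): Functor[F] = f }

  trait Console[F[_]] {
    def putStrLn(str: String): F[Unit]
    def getStrLn: F[String]
  }
  object Console { def apply[F[_]](implicit c: Console[F]): Console[F] = c }

  // Style 1: implicit (context-bound) dependency, resolved at compile time.
  def inputLengthImplicit[F[_]: Functor: Console]: F[Int] =
    Functor[F].map(Console[F].getStrLn)(_.length)

  // Style 2: explicit dependency, wired by hand at the call site.
  def inputLengthExplicit[F[_]: Functor](console: Console[F]): F[Int] =
    Functor[F].map(console.getStrLn)(_.length)

  // A trivial effect for demonstration: the identity type constructor.
  type Id[A] = A
  implicit val idFunctor: Functor[Id] = new Functor[Id] {
    def map[A, B](fa: Id[A])(f: A => B): Id[B] = f(fa)
  }
  implicit val idConsole: Console[Id] = new Console[Id] {
    def putStrLn(str: String): Id[Unit] = ()
    def getStrLn: Id[String] = "hello" // canned input for the demo
  }
}
```

With the implicit style, inputLengthImplicit[Id] resolves both instances at compile time; with the explicit style you pass idConsole (or any other instance) yourself.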

Scala Some redundant covariance

Scala standard library contains Option type.
The Option type itself is a covariant type; this is obvious from its declaration: sealed abstract class Option[+A].
The questions are:
Why is its constructor Some also covariant:
final case class Some[+A](x: A) extends Option[A]?
Is this somehow needed for pattern matching?
Or maybe it's done for better readability?
To me it seems redundant, as I don't see any reason to use Some directly anywhere except in pattern matching, and I currently can't see how pattern matching could depend on covariance.
First, you have to understand that, as @Dima said, Some[T] is not a constructor but a subclass of Option[T].
Once we have established that, the questions with variance are always easier to solve with Dog and Animal:
Is Some[Dog] a Some[Animal]? I think you'll agree that the answer is yes.
Pragmatically, it won't change much, since you'll seldom work with Some[Dog] rather than Option[Dog], but it may occur (say, when you use an unapply of a case class whose signature returns a Some[Tuple]), so why not add the variance while we're at it?
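The Dog/Animal point can be made concrete in a few lines. The class names are the usual illustrative ones; the second assignment compiles only because Some itself is covariant, not merely Option.

```scala
object VarianceDemo {
  class Animal { def name: String = "animal" }
  class Dog extends Animal { override def name: String = "dog" }

  val someDog: Some[Dog] = Some(new Dog)

  // Compiles only because Some[+A] is covariant: Some[Dog] <: Some[Animal].
  val someAnimal: Some[Animal] = someDog

  // Option's own covariance would only give us this weaker upcast:
  val optAnimal: Option[Animal] = someDog
}
```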

Is it possible to achieve functionality provided by implicit classes via macros?

We are pretty familiar with implicits in Scala by now, but macros are pretty undiscovered territory (at least for me) and, despite the presence of some great articles by Eugene Burmako, it is still not easy material to just dive into.
In this particular question I'd like to find out if it is possible to achieve functionality analogous to the following code using just macros:
implicit class Nonsense(val s: String) {
def ##(i:Int) = s.charAt(i)
}
So "asd" ## 0 will return 'a', for example. Can I implement macros that use infix notation? The reason for this is that I'm writing a DSL for an already existing project, and implicits allow making the API clear and concise, but whenever I write a new implicit class, I feel like I'm introducing a new speed-reducing factor. And yes, I do know about value classes and such; I just think it would be really great if my DSL transformed into the underlying library API calls during compilation rather than at runtime.
TL;DR: can I replace implicits with macros while not changing the API? Can I write macros in infix form? Is there something even more suitable for this case? Is the trouble worth it?
UPD. To those advocating the value classes: in my case I have a little more than just a simple wrapper; they are often stacked. For example, I have an implicit class that takes some parameters and returns a lambda wrapping those parameters (i.e. a partial application), and a second implicit class that is made specifically for wrapping this type of function. I can achieve something like this:
a --> x ==> b
where the first class wraps a and adds the --> method, and the second one wraps the return type of a --> x and defines ==>(b). Plus, it may really be the case that the user creates a considerable number of objects in this fashion. I just don't know if this will be efficient, so if you could tell me that value classes cover this case, I'd be really glad to know.
Back in the day (2.10.0-RC1) I had trouble using implicit classes for macros (sorry, I don't recollect exactly why), but the solution was to use:
an implicit def macro to convert to a class
define the infix operator as a def macro in that class
So something like the following might work for you:
implicit def toNonsense(s:String): Nonsense = macro ...
...
class Nonsense(...){
...
def ##(...):... = macro ...
...
}
That was pretty painful to implement. That being said, macros have become easier to implement since then.
If you want to check what I did, because I'm not sure that applies to what you want to do, refer to this excerpt of my code (non-idiomatic style).
I won't address the relevance of that here, as it's been commented by others.
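For comparison, here is a sketch of the value-class alternative mentioned in the question, which removes the wrapper allocation at most call sites without any macros. Note the operator is renamed !! in this sketch, because ## already exists on Any (the hash-code method), so an extension named ## would be shadowed; the object and operator names are assumptions made for the example.

```scala
object DslSketch {
  // An implicit *value* class: the wrapper is usually erased at
  // compile time, so the call compiles down to a static method call.
  implicit class Nonsense(val s: String) extends AnyVal {
    def !!(i: Int): Char = s.charAt(i)
  }
}
```

Usage: import DslSketch._ and then "asd" !! 0 returns 'a'. Whether value classes still avoid allocation when stacked (as in the a --> x ==> b case) depends on the call pattern; boxing can reappear when the wrapper is used as a generic or when conversions nest.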

How to constrain a type to a subset of another?

I have that "this is a dumb question" feeling, but here goes... Can I define a type that is a subset of the elements of another type? Here's a simplified example.
scala> class Even(i: Int) {
| assert(i % 2 == 0)
| }
defined class Even
scala> new Even(3)
java.lang.AssertionError: assertion failed
This is a runtime check. Can I define a type such that this is checked at compile time? I.e., can the input parameter i be provably always even?
Value-dependent typing in languages such as Coq and Agda can do this, though not Scala.
Depending on the exact use case, there are ways of encoding Peano numbers in the type system that may, however, help you.
You might also want to try defining both Even and Odd along with some sealed abstract supertype (OddOrEven perhaps) and a factory method that returns the correct instance from any given Integer.
Another possibility is to define Even as an extractor.
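The factory approach can be sketched as follows. The names Parity, OddOrEven, Even, and Odd are illustrative. The parity check still happens at runtime, but only once, at the boundary; any function that takes an Even then has a compile-time guarantee.

```scala
object Parity {
  sealed trait OddOrEven { def i: Int }
  final case class Even(i: Int) extends OddOrEven
  final case class Odd(i: Int) extends OddOrEven

  // Smart constructor: the single runtime check lives here.
  def apply(i: Int): OddOrEven = if (i % 2 == 0) Even(i) else Odd(i)

  // Downstream code can demand evenness in its signature.
  def half(e: Even): Int = e.i / 2
}
```

Callers pattern match on the result of Parity(n) to recover the precise type; half(Parity.Even(4)) compiles, while passing an Odd (or a bare Int) is a type error.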

Practical uses for Structural Types?

Structural types are one of those "wow, cool!" features of Scala. However, for every example I can think of where they might help, implicit conversions and dynamic mixin composition often seem like better matches. What are some common uses for them, and/or advice on when they are appropriate?
Aside from the rare case of classes which provide the same method but are neither related nor implement a common interface (for example, the close() method: Source, for one, does not extend Closeable), I find no use for structural types with their present restriction. If they were more flexible, however, I could well write something like this:
def add[T: { def +(x: T): T }](a: T, b: T) = a + b
which would neatly handle numeric types. Every time I think structural types might help me with something, I hit that particular wall.
EDIT
However unuseful I find structural types myself, the compiler nevertheless uses them to handle anonymous classes. For example:
implicit def toTimes(count: Int) = new {
def times(block: => Unit) = 1 to count foreach { _ => block }
}
5 times { println("This uses structural types!") }
The object resulting from (the implicit) toTimes(5) is of type { def times(block: => Unit) }, i.e., a structural type.
I don't know if Scala does that for every anonymous class; perhaps it does. Alas, that is one reason why doing "pimp my library" that way is slow, as structural types use reflection to invoke their methods. Instead of an anonymous class, one should use a real class to avoid the performance issues of "pimp my library".
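The fix is a one-line change: give the enrichment an explicit, named return type. In this sketch (names are illustrative), calls to times become ordinary virtual dispatch instead of reflective structural calls.

```scala
object TimesDemo {
  // A real, named class instead of an anonymous one: the implicit
  // conversion's return type is now Times, not a structural type.
  class Times(count: Int) {
    def times(block: => Unit): Unit = (1 to count).foreach(_ => block)
  }

  import scala.language.implicitConversions
  implicit def toTimes(count: Int): Times = new Times(count)
}
```

After import TimesDemo._, the same call site 5 times { ... } still works, with no reflection involved.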
Structural types are very cool constructs in Scala. I've used them to represent multiple unrelated types that share an attribute upon which I want to perform a common operation without a new level of abstraction.
I have heard one argument against structural types from people who are strict about an application's architecture. They feel it is dangerous to apply a common operation across types without a shared trait or parent type, because you then leave open-ended the rule of which types the method should apply to. Daniel's close() example is spot on, but what if you have another type that requires different behavior? Someone who doesn't understand the architecture might use it and cause problems in the system.
I think structural types are one of those features that you don't need very often, but when you do, they help a lot. One area where structural types really shine is "retrofitting", e.g. when you need to glue together several pieces of software you have no source code for and which were not intended for reuse. But if you find yourself using structural types a lot, you're probably doing it wrong.
[Edit]
Of course implicits are often the way to go, but there are cases when you can't use them: imagine you have a mutable object you can modify with methods, but which hides important parts of its state, a kind of "black box". Then you have to work with this object somehow.
Another use case for structural types is when code relies on naming conventions without a common interface, e.g. in machine-generated code. In the JDK we can find such things as well, like the StringBuffer / StringBuilder pair (where the common interfaces Appendable and CharSequence are way too general).
Structural types give some benefits of dynamic languages to a statically typed language, specifically loose coupling. If you want a method foo() to call instance methods of class Bar, you don't need an interface or base class common to both foo() and Bar. You can define a structural type that foo() accepts and of whose existence Bar has no clue. As long as Bar contains methods that match the structural type's signatures, foo() will be able to call them.
It's great because you can put foo() and Bar in distinct, completely unrelated libraries, that is, with no common referenced contract. This reduces linkage requirements and thus further contributes to loose coupling.
In some situations, a structural type can be used as an alternative to the Adapter pattern, because it offers the following advantages:
Object identity is preserved (there is no separate object for the adapter instance, at least in the semantic level).
You don't need to instantiate an adapter - just pass a Bar instance to foo().
You don't need to implement wrapper methods - just declare the required signatures in the structural type.
The structural type doesn't need to know the actual instance class or interface, while the adapter must know Bar so it can call its methods. This way, a single structural type can be used for many actual types, whereas with adapter it's necessary to code multiple classes - one for each actual type.
The only drawback of structural types compared to adapters is that a structural type can't be used to translate method signatures. So, when signatures don't match, you must use adapters that carry some translation logic. I particularly don't like to code "intelligent" adapters, because many times they are more than just adapters and increase complexity. If a class's client needs some additional method, I prefer to simply add that method, since it usually doesn't affect the footprint.