In the Coq proof assistant, which also has implicit conversions, it is possible to search for an implicit conversion using the SearchAbout T command, which returns everything that has T in its type (which would include conversions to or from T).
Is there a way of finding all conversions to or from a type for Scala programmers? Note that the conversions might be defined outside the project that defines either the source or destination type.
To quickly check whether a conversion exists in the current scope between two reference types S and T, just type
((null:S):T)
and see if it compiles. With Eclipse Scala IDE >= 2.1M2 you can see which conversion is called, if implicit highlighting is enabled in the preferences.
Of course this requires you to guess both types (but you will probably already have a clear idea of what you want to convert to and from), and it requires the conversions to already be in scope.
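For instance, here is a minimal sketch of the trick (the Celsius/Fahrenheit types and the conversion are made up purely for illustration):

import scala.language.implicitConversions

object ConversionProbe {
  class Celsius(val value: Double)
  class Fahrenheit(val value: Double)

  // A conversion that happens to be in scope.
  implicit def celsiusToFahrenheit(c: Celsius): Fahrenheit =
    new Fahrenheit(c.value * 9.0 / 5.0 + 32.0)

  // The ((null:S):T) probe: this line compiles only because an implicit
  // conversion Celsius => Fahrenheit is in scope; no value is ever created.
  val probe: Fahrenheit = (null: Celsius)
}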
I want to use the sttp library with Guice (with the scala-guice wrapper) in my app, but it seems it is not so easy to correctly bind things like SttpBackend[Try, Nothing].
SttpBackend.scala
Try[_] and Try[AnyRef] show some other errors, but I still have no idea how this should be done correctly.
The error I got:
kinds of the type arguments (scala.util.Try) do not conform to the expected kinds of the type parameters (type T).
[error] scala.util.Try's type parameters do not match type T's expected parameters:
[error] class Try has one type parameter, but type T has none
[error] bind[SttpBackend[Try, Nothing]].toProvider[SttpBackendProvider]
[error]      ^
SttpBackendProvider looks like:
def get: SttpBackend[Try, Nothing] = TryHttpURLConnectionBackend(opts)
complete example in scastie
Interestingly, scala-guice version 4.1.0 shows this error, but the latest 4.2.2 shows an error inside the library when converting Nothing to a JavaType.
I believe you hit two different bugs in Scala-Guice, one of which is not fixed yet (and probably not even reported yet).
To describe those issues I need a quick intro into how Guice and Scala-Guice work. Essentially what Guice does is maintain a mapping from a type onto a factory method for an object of that type. To support some advanced features, types are mapped onto an internal "key" representation, and then for each "key" Guice builds a way to construct a corresponding object. It is also important that generics in Java are implemented using type erasure. That's why when you write something like:
bind(classOf[SttpBackend[Try, Nothing]]).toProvider(classOf[SttpBackendProvider])
in raw Guice, the "key" actually becomes something like "com.softwaremill.sttp.SttpBackend". Luckily the Guice developers thought about this issue with generics and introduced TypeLiteral[T], so you can convey the generic type information.
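For example, a small sketch of how TypeLiteral retains the generic information that erasure would otherwise drop (the module and bindings below are illustrative, not taken from the original code):

import com.google.inject.{AbstractModule, TypeLiteral}

class ListsModule extends AbstractModule {
  override def configure(): Unit = {
    // Without TypeLiteral both bindings would collapse onto the same erased key "java.util.List".
    bind(new TypeLiteral[java.util.List[String]]() {})
      .toInstance(java.util.Collections.emptyList[String]())
    bind(new TypeLiteral[java.util.List[Integer]]() {})
      .toInstance(java.util.Collections.emptyList[Integer]())
  }
}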
Scala's type system is richer than Java's and it has better reflection support from the compiler. Scala-Guice exploits this to map Scala types onto those more detailed keys automatically. Unfortunately it doesn't always work perfectly.
The first issue is the result of two facts: the type SttpBackend is defined as
trait SttpBackend[R[_], -S]
so it expects its first parameter to be a type constructor; and originally Scala-Guice used the scala.reflect.Manifest infrastructure. AFAIU such higher-kinded types are not representable as Manifests, and this is what the error in your question really says.
Luckily Scala has added the scala.reflect.runtime.universe.TypeTag infrastructure to tackle this issue in a better and more consistent way, and Scala-Guice has migrated to it. That's why with the newer version of Scala-Guice the compiler error goes away. Unfortunately there is another bug in Scala-Guice that makes the code fail at runtime: a lack of handling of the Scala Nothing type. You see, the Nothing type is a kind of fake one on the JVM. It is one of the places where the Scala type system is richer than the Java one: there is no direct mapping for Nothing in the JVM world. Luckily there is no way to create any value of the type Nothing. Unfortunately you still can create a classOf[Nothing]. The Scala-to-JVM compiler handles it by using an artificial scala.runtime.Nothing$. It is not part of the public API; it is an implementation detail of Scala on the JVM. Anyway, this means that the Nothing type needs additional handling when it is converted into a Guice TypeLiteral, and there is none. There is handling for Any, the cousin of Nothing, but not for Nothing (see the usage of anyType in TypeConversions.scala).
So there are really two workarounds:
Use raw Java-based syntax for Guice instead of the nice Scala-Guice one:
bind(new TypeLiteral[SttpBackend[Try, Nothing]]() {})
.toInstance(sttpBackend) // or to whatever
See online demo based on your example.
Patch TypeConversions.scala in Scala-Guice as follows:
private[scalaguice] object TypeConversions {
  private val mirror = runtimeMirror(getClass.getClassLoader)
  private val anyType = typeOf[Any]
  private val nothingType = typeOf[Nothing] // added

  ...

  def scalaTypeToJavaType(scalaType: ScalaType): JavaType = {
    scalaType.dealias match {
      case `anyType`     => classOf[java.lang.Object]
      case `nothingType` => classOf[scala.runtime.Nothing$] // added
      ...
I tried it locally and it seems to fix your example. I didn't do any extensive tests so it might have broken something else.
I am reading Foundations of path dependent types. On the first page, in the right column, it is written:
Our motivation is twofold. First, we believe objects with type members
are not fully understood. It is not clear what causes the complexity,
which pieces of complexity are essential to the concept or accidental
to a language implementation or calculus that tries to achieve
something else. Second, we believe objects with type members are
really useful. They can encode a variety of other, usually separate
type system features. Most importantly, they unify concepts from
object and module systems, by adding a notion of nominality to otherwise structural systems.
Could someone clarify/explain what "object vs. module" system means?
Or in general, what does
"they (objects with type members) unify concepts from
object and module systems, by adding a notion of nominality to otherwise structural systems."
mean?
What concepts? From where?
Nominality in the object names / values?
Structure in the types? Or the other way around?
Where do type members belong here? To the module system? The object system? How? Why?
EDIT:
How does this unification relate to path-dependent types? It seems to me that path-dependent types are what allow this unification (of objects with type members) to happen. Is that so?
If yes, how?
Could you give a simple example of what that means? (I.e., path-dependent types allowing the unification of module and object systems, versus why the unification would not be possible if we did not have path-dependent types?)
EDIT 2:
From the paper:
To make any use of type members, programmers need a way to refer to
them. This means that types must be able to refer to objects, i.e.
contain terms that serve as static approximation of a set of dynamic
objects. In other words, some level of dependent types is required;
the usual notion is that of path-dependent types.
So my understanding so far (with the help of Jesper's answer) :
The paragraph above partially answers some of the questions. The main point seems to be that with objects that have type members, path-dependent types are needed: objects are dynamic (runtime-dependent) but types are static (defined at compile time), so simply reaching type members through arbitrary objects would not work, because those type members would not be clearly defined at compile time.
Path-dependent types help here by pinning down the path leading to a type member at compile time (by requiring that the objects on the path are already known at compile time): even though the path goes via objects, if those objects are fixed at compile time then their type members have a clear meaning at compile time too.
I'm not sure I fully understand what your question is, but I'll take a stab at it. :) I think the authors mainly are referring to ML style modules where a signature corresponds to a Scala trait and a structure corresponds to a Scala object. Scala unifies the concepts of record values, objects and modules which in most other languages (like ML, Rust etc.) are separate concepts. The main benefit is that in Scala modules/objects can be passed around as normal function arguments (while in ML you have to use special functors for this).
In ML a module is checked for compatibility with a signature (trait in Scala) based on its structure (similar to structural typing in Scala), but in Scala the module must implement the trait by name (nominal typing). So even if two modules/objects have the same structure in Scala they might not be compatible with each other depending on their super type hierarchy.
A really powerful feature regarding type members in Scala is that you can use a trait even if you don't know the exact type of its type members as long as you do it in a type safe way (I think this is also possible in ML modules), for example:
trait A {
  type X
  def getX: X
  def setX(x: X): Unit
}
def foo(a: A) = a.setX(a.getX)
In foo the Scala compiler doesn't know the exact type of a.X but a value of the type can still be used in a way the compiler knows is safe. This is not possible in Rust for example.
The next version of the Scala compiler, Dotty, will be based on the theory described in the paper you reference. This unification of modules and objects combined with subtyping, traits and type members is one reason that Scala is unique and very powerful.
EDIT: To expand a bit why path dependent types increases the flexibility of Scala's module/object system, let's expand the example above with:
def bar(a: A, b: A) = a.setX(b.getX)
This will result in a compilation error:
error: type mismatch;
 found   : b.X
 required: a.X
       def bar(a: A, b: A) = a.setX(b.getX)
                                      ^
and correctly so, because a.X and b.X could resolve to different types. You can fix it by using a path-dependent type:
def bar(a: A)(b: A { type X = a.X }) = a.setX(b.getX)
Or add a type parameter:
def bar[T](a: A { type X = T }, b: A { type X = T }) = a.setX(b.getX)
So, path-dependent types eliminate some need for type parameters, and also allow us to express existential types efficiently (corresponding to A[_] or A[T] forSome { type T } if A had a type parameter instead of a type member).
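To make this concrete, here is a small REPL-style sketch building on the definitions above (the IntA class is made up for illustration):

class IntA extends A {
  type X = Int
  private var x: Int = 0
  def getX: Int = x
  def setX(x: Int): Unit = this.x = x
}

val a = new IntA
val b = new IntA
foo(a)     // fine: uses a's own getX/setX
bar(a)(b)  // fine with the path-dependent signature: b.X is statically known to equal a.X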
I am writing an autocompleter (i.e., code completion like in Eclipse or IntelliJ) for a domain specific language that is a subset of Scala. Users frequently use implicit conversions to hide the more advanced features of Scala like options or Scalaz disjunctions.
I am looking for a way, either at compile time or runtime, to acquire a list of implicit conversions available for a receiver (i.e., for the ‘x’ in ‘val y = x.foo’). So, I have two specific questions:
Is there some library that, given the type of a receiver, can find all the implicit conversions that the compiler could use to turn that receiver into another type?*
How is the identification of available implicit conversions actually done by the Scala compiler? I am not sure where in the source to look to find it; some documentation about how the compiler does this or the location in the source where it does it would also be very helpful.
*: As you might have guessed, I plan to use the resulting list to get all the available fields and methods of all the types the given variable could be implicitly converted to so that the autocompleter can suggest them all to users. If there’s an even more direct way to do that, that would be great too.
In the Scala reflection guide the following is written:
As with Manifests, one can in effect request that the compiler
generate a TypeTag. This is done by simply specifying an implicit
evidence parameter of type TypeTag[T]. If the compiler fails to find a
matching implicit value during implicit search, it will automatically
generate a TypeTag[T].
This StackOverflow answer beautifully explains the concept of "implicit evidence". However, it is still not completely clear to me what it means that the compiler will
generate a TypeTag[T].
Does this mean that this is a special case of "implicit evidence" search? I.e. is the class TypeTag[T] handled in a special way when the compiler does implicit search? I tried to look for implicit parameter values in the Scala reflection APIs but I did not find any that provides a TypeTag[T], so I assume the TypeTag[T] implicit parameter is coming from inside the compiler (as the documentation says). So the class name TypeTag[T] is hardcoded into the compiler's source. Is this assumption correct?
Is the automatic generation of implicit values documented somewhere? In other words, is there documentation somewhere that lists all the automatically generated implicit evidence values? I did not find TypeTag[T] in the Scala language specification (version 2.9). The closest concept there to TypeTag[T] is Manifest, which is an automatically generated implicit parameter. Are Manifests the only automatically generated implicit value parameters in Scala 2.9?
Yes, TypeTags and WeakTypeTags are treated specially by implicit search. Now that implicit macros actually work, we plan to remove this hardcode, but that remains to be implemented.
So far there's no documentation for automatic generation of implicit values apart from source code, which says that only type tags and manifests are currently generated: https://github.com/scala/scala/blob/38ee986bcad30a8835e8f197112afb5cce2b76c5/src/compiler/scala/tools/nsc/typechecker/Implicits.scala#L1288
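For illustration, a minimal REPL-style sketch of the compiler generating a TypeTag at the call site when an implicit TypeTag[T] parameter is requested:

import scala.reflect.runtime.universe._

// The compiler synthesizes the TypeTag[T] argument at each call site,
// so the full static type (including type arguments) is preserved.
def describe[T](x: T)(implicit tag: TypeTag[T]): String =
  s"static type is ${tag.tpe}"

describe(List(1, 2, 3))  // "static type is List[Int]"
describe("hello")        // "static type is String"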
Because I know a little bit of Java, I was trying to use Java types like java.lang.Integer, java.lang.Character, java.lang.Boolean, and so on in all my Scala code. Now people told me: "No! Scala has its own type for everything; the Java stuff will work, but you should always prefer the Scala types and objects."
OK, now I see that everything that is in Java also exists in Scala. I am not sure why it is better to use, for example, Scala's Boolean instead of Java's Boolean, but fine. If I look at the types I see scala.Boolean, scala.Int, scala.Byte ... and then I look at String, but it's not scala.String (well, it's not even java.lang.String, confusing); it's just String. But I thought I should use everything that comes directly from Scala. Maybe I do not understand Scala correctly; could somebody explain it, please?
First, the statement 'well it's not even java.lang.String' is not quite correct. The plain String name comes from a type alias defined in the Predef object:
type String = java.lang.String
and the contents of Predef are imported into every Scala source file, hence you can write String instead of the full java.lang.String, but in fact they are the same.
java.lang.String is a very special class, treated by the JVM in a special way. As @pagoda_5b said, it is declared final, so it is not possible to extend it (and this is a good thing, in fact), so the Scala library provides a wrapper (RichString, nowadays StringOps) with additional operations and an implicit conversion String -> StringOps available by default.
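As a quick illustration (a minimal sketch; the exact name of the wrapper class depends on the Scala version), the extra methods come from the implicit wrapper, not from java.lang.String itself:

val s: String = "hello"   // String is just an alias for java.lang.String
s.capitalize              // "Hello" -- capitalize comes from the implicit wrapper
s.count(_ == 'l')         // 2       -- so does count; java.lang.String has neither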
However, there is a slightly different situation with Integer, Character, Boolean, etc. You see, even though String is treated specially by the JVM, it is still a plain class whose instances are plain objects. Semantically it is no different from, say, the List class.
The situation with primitive types is different. Java's int, char and boolean types are not classes, and values of these types are not objects. But Scala is a fully object-oriented language; there are no primitive types. It would be possible to use java.lang.{Integer,Boolean,...} everywhere you need the corresponding types, but this would be awfully inefficient because of boxing.
Because of this, Scala needed a way to present Java primitive types in an object-oriented setting, and so the scala.{Int,Boolean,...} classes were introduced. These types are treated specially by the Scala compiler: scalac generates code working with primitives when it encounters one of these classes. They also extend the AnyVal class, which prevents you from using null as a value for these types. This approach solves the efficiency problem, leaves the java.lang.{Integer,Boolean,...} classes available where you really need boxing, and also provides an elegant way to use the primitives of another host system (e.g. the .NET runtime).
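A small sketch of the practical difference (the implicit Int => java.lang.Integer conversion comes from Predef):

val i: Int = 42                    // compiles down to the JVM primitive int
val boxed: java.lang.Integer = 42  // a boxed object, via Predef.int2Integer
// val bad: Int = null            // does not compile: AnyVal types cannot be null
val ok: java.lang.Integer = null   // allowed, since it is a plain reference type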
I'm just guessing here
If you look at the docs, you can see that the Scala versions of the primitives give you all the expected operators that work on numeric or boolean types, plus sensible conversions, without resorting to boxing/unboxing as with the java.lang wrappers.
I think this choice was made to give uniform and natural access to what is expected of primitive types, while at the same time making them objects like any other Scala type.
I suppose that java.lang.String required a different approach, being an Object already, and final in its implementation. So the "path of least pain" was to create an implicit Rich wrapper around it to get missing operations on String, while leaving the rest untouched.
To see it another way, java.lang.String was already good enough as-is, being immutable and what-else.
It's worth mentioning that the other "primitive" types in Scala have their own Rich wrappers that provide additional sensible operations.
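For example, a couple of operations supplied by the implicit RichInt wrapper rather than by the primitive itself (a minimal sketch):

1 to 5    // Range 1, 2, 3, 4, 5 -- `to` comes from RichInt
3 max 7   // 7                   -- so does `max`
(-4).abs  // 4                   -- and `abs`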