How to understand the following Scala call

I have quite a puzzling question. I am playing with squeryl, and found that when I used:
package models

import org.squeryl.{Schema, KeyedEntity}

object db extends Schema {
  val userTable = table[User]("User")

  on(userTable)(b => declare(
    b.email is (unique, indexed("idxEmailAddresses"))
  ))
}
I had to add import org.squeryl.PrimitiveTypeMode._
But this does not make sense to me. The is method is defined in org.squeryl.dsl.NonNumericalExpression, so why do I have to include the seemingly irrelevant import org.squeryl.PrimitiveTypeMode._?
Thank you.

I agree with @sschaef that this is due to required implicit conversions. When APIs (like squeryl) decide to build a DSL (domain-specific language) in order to get a slick-looking way to code against their API, implicit conversions are required. The core API probably takes certain types of objects that would be cumbersome or ugly to instantiate directly in your code, so the library uses implicit conversions to do some of that lifting for you and keep the DSL as clean as possible. If you check out the Scaladoc for the PrimitiveTypeMode object, you can see the many implicit defs that are defined on it. Implicit conversions (as used in "pimp my library" enrichments) 'upconvert' from one type to another to gain access to more functionality on the enriched class. When the code is compiled, the implicit conversions are applied explicitly in the compiled output.
http://squeryl.org/api/index.html#org.squeryl.PrimitiveTypeMode$
Also, I believe the implicit conversion you are looking for is:
import org.squeryl.PrimitiveTypeMode.string2ScalarString
which is inherited from org.squeryl.dsl.QueryDsl.
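To see why the import matters, here is a minimal self-contained sketch of the mechanism (ScalarString and its is method are invented for illustration; this is not squeryl's actual code):

import scala.language.implicitConversions

object DslSketch extends App {
  // Hypothetical wrapper class that carries the DSL method.
  class ScalarString(val s: String) {
    def is(attributes: String*): String = s + " is " + attributes.mkString(", ")
  }

  // The implicit conversion; without it in scope, String has no `is` method.
  implicit def string2ScalarString(s: String): ScalarString = new ScalarString(s)

  println("email" is ("unique", "indexed")) // compiles only thanks to the conversion
}

Remove the implicit def and the last line no longer compiles; that is exactly what happens in squeryl when org.squeryl.PrimitiveTypeMode._ is not imported.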

Related

How can I assert if a class extends "AnyVal" using ArchUnit

I want to write an ArchUnit test to assert that a class extends the AnyVal type.
val isAnyVal = classes().should().beAssignableTo(classOf[AnyVal])
val importedClasses = new ClassFileImporter().importPackages("a.b.c")
isAnyVal.check(importedClasses) // always passes
The above code doesn't actually catch anything; it passes even for classes that don't extend AnyVal.
classOf[AnyVal] is java.lang.Object, so you are just asking that all classes extend Object, which they do.
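You can check this directly (a quick sanity check, assuming Scala 2.x): AnyVal has no runtime class of its own, so its classOf erases to java.lang.Object:

object AnyValCheck extends App {
  println(classOf[AnyVal])                     // class java.lang.Object
  println(classOf[AnyVal] == classOf[Object])  // true
}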
From ArchUnit user guide:
It does so by analyzing given Java bytecode, importing all classes into a Java code structure.
I was hoping you'd get java.lang.Class instances and could go into Scala reflection from there, even if you wouldn't get the nice DSL, but ArchUnit uses its own API instead.
So to answer Scala-specific questions, it would need to parse @ScalaSignature annotations, and that would probably be a very large effort for the developers (not to mention maintenance, or dependence on a specific Scala version, at least until Scala 3).
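If you only need the assertion in a plain Scala test, outside ArchUnit's bytecode importer, Scala's own runtime reflection can answer it. A sketch, assuming the scala-reflect module is on the classpath and the classes under test are statically known:

import scala.reflect.runtime.universe._

object AnyValRule {
  // True when T is a subtype of AnyVal in Scala's own type system.
  def extendsAnyVal[T: TypeTag]: Boolean = typeOf[T] <:< typeOf[AnyVal]
}

final case class UserId(value: Long) extends AnyVal

object AnyValRuleDemo extends App {
  println(AnyValRule.extendsAnyVal[UserId]) // true
  println(AnyValRule.extendsAnyVal[String]) // false
}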

Can I mimic Scala SIP-18-style imports?

Scala SIP 18 provides a way to force users to provide an import statement to use certain advanced or unsafe language features. For instance, in order to use higher kinded types, you need to
import scala.language.higherKinds
or you will get a compiler warning telling you you are using an advanced feature.
Is there any way that I can reproduce or mimic this behavior in my own library? For example I may have:
trait MongoRepository[E <: Entity] {
  val casbahCollection: com.mongodb.casbah.MongoCollection
}
I have made the casbahCollection public to expose the underlying collection to the user in case they need it. But it's really not something I want my users to do because it's a leaky abstraction. So I want to force them to do something like this:
import my.library.mongo.leakyAbstraction
Before doing something like this:
widgetRepo.casbahCollection.find()
Is it possible? Is there some way I might provide a similar behavior that's a little more effective than just placing a big ugly warning in the docs?
You could fake it with an implicit, similar to the way Await.result works in scala.concurrent.
First create a sealed trait that represents a "permit" to directly access your DAO:
@implicitNotFound("Import my.library.mongo.leakyAbstraction to directly access Mongo")
sealed trait CanAccessMongo
And then an object that extends it:
implicit object leakyAbstraction extends CanAccessMongo
These must be in the same file. By making CanAccessMongo sealed, code outside the same file will not be able to extend it.
Then in MongoRepository, make casbahCollection a method (change val to def). You'll probably want a private val that actually creates the collection, but the method is what lets us limit access.
def casbahCollection(implicit permit: CanAccessMongo) = ...
Now users of your library will have to bring leakyAbstraction into scope in order to call that function. If they don't, they'll get the error message specified in implicitNotFound.
The obvious downside is that all your library code will have to have leakyAbstraction in scope as well.
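Putting the pieces together, here is a self-contained sketch of the pattern (the wrapper objects and the String standing in for the casbah collection are my adaptations to keep it compilable in one file):

import scala.annotation.implicitNotFound

object MongoAccess {
  @implicitNotFound("Import MongoPermits.leakyAbstraction to directly access Mongo")
  sealed trait CanAccessMongo
}

object MongoPermits {
  // Same source file as CanAccessMongo, so the sealed trait can be
  // extended here and nowhere else.
  implicit object leakyAbstraction extends MongoAccess.CanAccessMongo
}

trait MongoRepository {
  private val underlying = "the real MongoCollection" // String stand-in
  def casbahCollection(implicit permit: MongoAccess.CanAccessMongo): String = underlying
}

object PermitDemo extends App with MongoRepository {
  import MongoPermits.leakyAbstraction // delete this import and compilation
  println(casbahCollection)            // fails with the message above
}

Note that the permit object deliberately lives in MongoPermits rather than in MongoAccess: an implicit defined in an object enclosing the trait would be found in the implicit scope without any import, which would defeat the purpose.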

What is the best way to avoid clashing between two typeclass definitions in shapeless

Shapeless has a neat type class derivation mechanism that allows you to define typeclasses and get automatic derivation for any typeclass.
To use the derivation mechanism as a user of a typeclass, you would use the following syntax
import MyTypeClass.auto._
which as far as I understand it is equivalent to
import MyTypeClass.auto.derive
An issue arises when you try to use multiple typeclasses like this within the same scope. It would appear that the Scala compiler only considers the last definition of derive, even though there are two versions of the function "overloaded" on their implicit arguments.
There are a couple ways I can think of to fix this. Instead of listing them here, I will mark them as answers that you can vote on to confirm sanity as well as propose any better solution.
I raised this question back in April and proposed two solutions: defining the method yourself (as you suggest):
object AutoCodecJson {
  implicit def deriveEnc[T] = macro deriveProductInstance[EncodeJson, T]
  implicit def deriveDec[T] = macro deriveProductInstance[DecodeJson, T]
}
Or using aliasing imports:
import AutoEncodeJson.auto.{ derive => deriveEnc }
import AutoDecodeJson.auto.{ derive => deriveDec }
I'd strongly suggest going with aliasing imports—Miles himself said "hadn't anticipated that macro being reused that way: not sure I approve" about the deriveProductInstance approach.
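The renaming trick itself is plain Scala and easy to demonstrate without shapeless; a stand-in sketch (AutoEnc, AutoDec and the String results are invented, the import mechanics are the point):

object AutoEnc { def derive: String = "encoder instance" }
object AutoDec { def derive: String = "decoder instance" }

object AliasDemo extends App {
  // Importing both names unaliased would leave you unable to use `derive`;
  // renaming on import keeps both available in the same scope.
  import AutoEnc.{derive => deriveEnc}
  import AutoDec.{derive => deriveDec}
  println(deriveEnc) // encoder instance
  println(deriveDec) // decoder instance
}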
Instead of inheriting from the Companion trait, define the auto object and apply method yourself within your companion object and name them distinctively. A possible drawback is that two separate libraries using shapeless could each define a derive method with the same name, and the user would again end up in a situation where they cannot use the derivation process for both typeclasses within the same scope in their project.
Another possible drawback is that by dealing with the macro call yourself, you may be more sensitive to shapeless API changes.
Modify/fix the Scala compiler to accept two different methods overloaded on their implicit parameters.
Is there any reason why this is impossible in theory?

Infer multiple generic types in an abstract class that should be available to the compiler

I am working on an abstract CRUD DAO for my Play 2/Slick 2 project. To have convenient type-safe primary IDs I am using Unicorn as an additional abstraction and convenience layer on top of Slick's MappedTo & ColumnBaseType.
Unicorn provides a basic CRUD DAO class, BaseIdRepository, which I want to further extend for project-specific needs. The signature of the class is
class BaseIdRepository[I <: BaseId, A <: WithId[I], T <: IdTable[I, A]]
    (tableName: String, val query: TableQuery[T])
    (implicit val mapping: BaseColumnType[I])
  extends BaseIdQueries[I, A, T]
This leads to DAO implementations looking something like
class UserDao
  extends BaseIdRepository[UserId, User, Users]("USERS", TableQuery[Users])
This seems awfully redundant to me. I was able to supply tableName and query from T, giving me the following signature on my own abstract DAO:
abstract class AbstractIdDao[I <: BaseId, A <: WithId[I], T <: IdTable[I, A]]
  extends BaseIdRepository[I, A, T](TableQuery[T].baseTableRow.tableName, TableQuery[T])
Is it possible in Scala to somehow infer the types I and A to make a signature like the following possible? (Users is a class extending IdTable)
class UserDao extends AbstractIdDao[Users]
Is this possible without runtime reflection? If it is only possible via runtime reflection: how do I use a Manifest in a class definition, and how big is the performance impact in a reactive application?
Also, since I am fairly new to the language and work on my own: is this good practice in Scala at all?
Thank you for your help. Feel free to criticize my question and English. Improvements will of course be submitted to the Unicorn git repo.
EDIT:
Actually, TableQuery[T].baseTableRow.tableName, TableQuery[T] does not work; it fails with the error "class type required but T found". IDEA was superficially fine with it, but scalac wasn't.
As for your first question, I've encountered this when working with Slick too. But if you think about it, you'll see you cannot do this at compile time. This type information is necessary to specify the relations between your type parameters; without it, you would be able to construct instances of BaseIdRepository where the types don't make sense, such as IdTables where the table doesn't match the projection. Since you need names for each of these relations, you need three named type parameters. If you omit the first one, it is possible to construct an IdRepository without a projection containing an ID; if you omit the second one, it is possible to have a table without an ID column; and if you omit the third one, it is possible to query tables that do not have this combination of a table and a projection with an ID. You might not currently have types in your application that would break any of these rules, but the compiler doesn't know that. Supplying the proper type information is unavoidable.
As for your second question, it is very inadvisable to employ reflection just because you think the syntax is verbose. If you can make guarantees about type safety simply by providing type parameters, I would advise you to do so. It is in very bad taste and style to write Scala that way. It would be ironic to employ typesafe IDs with Unicorn and then hack around its type safety with reflection.
Furthermore, a Manifest is not what you want: a Manifest doesn't allow you to provide less type information to the compiler, it only gives you more flexibility in where you provide it. It allows you to leverage the compiler's knowledge of types at compile time to circumvent some issues that type erasure introduces. The problem you face here has nothing to do with type erasure, so Manifests won't work. Lastly, runtime reflection won't help you much here because Slick's internal functions won't compile if you don't already supply the type information.
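For contrast, here is the kind of problem a Manifest (or its modern replacement, ClassTag) does solve: recovering an erased type at runtime. A sketch, unrelated to the compile-time inference you are after:

import scala.reflect.ClassTag

object ErasureDemo extends App {
  // Without the ClassTag, the `case t: T` pattern could not be
  // checked at runtime because T is erased.
  def firstOfType[T: ClassTag](xs: List[Any]): Option[T] =
    xs.collectFirst { case t: T => t }

  println(firstOfType[String](List(1, "two", 3.0)))  // Some(two)
  println(firstOfType[Boolean](List(1, "two", 3.0))) // None
}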
So yeah, what you want is impossible. Scala (and Slick) need complete information at compile time and no trick is going to be effective in circumventing that.

How should I organize implicits in my Scala application?

Having written a few Scala tools, I'm trying to come to grips with the best way to arrange my code - particularly implicits. I have 2 goals:
Sometimes, I want to be able to import just the implicits I ask for.
Other times, I want to just import everything.
To avoid duplicating the implicits, I've come up with this structure (similar to the way scalaz is arranged):
case class StringW(s: String) {
  def contrived = s + "?"
}

trait StringWImplicits {
  implicit def To(s: String) = StringW(s)
  implicit def From(sw: StringW) = sw.s
}

object StringW extends StringWImplicits

// Elsewhere on Monkey Island
object World extends StringWImplicits with ListWImplicits with MoreImplicits
This allows me to just
import StringW._ // Selective import
or (in most cases)
import World._ // Import everything
How does everyone else do it?
I think that implicit conversions are dangerous if you don't know where they are coming from. In my case, I put my implicits in a Conversions object and import it as close to the point of use as possible:

def someMethod(d: Date): Unit = {
  import mydate.Conversions._
  val tz = TimeZone.getDefault
  val timeOfDay = d.getTimeOfDay(tz) // implicit used here
  ...
}
I'm not sure I like "inheriting" implicits from various traits for the same reason it was considered bad Java practice to implement an interface so you could use its constants directly (static imports are preferred instead).
I usually put implicit conversions in an object whose name clearly signals that what is being imported is an implicit conversion.
For example, if I have a class com.foo.bar.FilthyRichString, the implicit conversions would go into com.foo.bar.implicits.FilthyRichStringImplicit. I know the names are a bit long, but that's why we have IDEs (and Scala IDE support is getting better). The reason I do this is that I feel it is important that all the implicit conversions can be clearly seen in a 10-second code review. I could look at the following code:
// other imports
import com.foo.bar.FilthyRichString
import com.foo.bar.util.Logger
import com.foo.bar.util.FileIO
import com.foo.bar.implicits.FilthyRichStringImplicit._
import com.foo.bar.implicits.MyListImplicit._
// other implicits
and at a glance see all the implicit conversions that are active in this source file. They would also all be gathered together if you use the convention that imports are grouped by package, with a new line between different packages.
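As a concrete illustration of the convention just described (FilthyRichString is the hypothetical class from above):

import scala.language.implicitConversions

class FilthyRichString(val s: String) {
  def shout: String = s.toUpperCase + "!"
}

// Dedicated, explicitly imported holder for the conversion.
object FilthyRichStringImplicit {
  implicit def enrich(s: String): FilthyRichString = new FilthyRichString(s)
}

object RichStringDemo extends App {
  import FilthyRichStringImplicit._ // visible at a glance in a code review
  println("hello".shout) // HELLO!
}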
Along the lines of the same argument, I wouldn't like a catch-all object that holds all of the implicit conversions. In a big project, would you really use all of the implicit conversions in all your source files? I think that doing that means very tight coupling between different parts of your code.
Also, a catch-all object is not very good for documentation. In the case of explicitly writing all the implicit conversions used in a file, one can just look at your import statements and straight away jump to the documentation of the implicit class. In the case of a catch-all object, one would have to look at that object (which in a big project might be huge) and then search for the implicit conversion they are after.
I agree with oxbow_lakes that having implicit conversion in traits is bad because of the temptation of inheriting from it, which is, as he said, bad practice. Along those lines, I would make the objects holding the implicit conversions final just to avoid the temptation altogether. His idea of importing them as close to the use as possible is very nice as well, if implicit conversions are just used sparingly in the code.
-- Flaviu Cipcigan