Managing MappedColumnType conversions in a Slick 3 project - Scala

I am trying to build custom database column converters for a new Slick 3 project. It's pretty easy to make these using MappedColumnType, but you have to have the driver API imported. For a one-off type in a single DAO class, this is straightforward. But I would like to use my custom column types across all my DAO objects, and I have been unable to structure the import in a way that the compiler recognizes the implicits.
Here is an example of the type of library I would like to construct. It has a single converter, very similar to the ubiquitous Joda date converter seen in many Slick 2 examples.
package dao

import java.sql.Date
import data.Timestamp
import play.api.db.slick.{DatabaseConfigProvider, HasDatabaseConfigProvider}
import slick.driver.JdbcProfile

case class StandardConversions(protected val dbConfigProvider: DatabaseConfigProvider)
  extends HasDatabaseConfigProvider[JdbcProfile] {

  import driver.api._

  implicit val timestampColumnType = MappedColumnType.base[Timestamp, Date](
    { data => new Date(data.value) },
    { sql => Timestamp(sql.getTime) }
  )
}
In the DAO class I try doing the import like this:
val conversions = StandardConversions(dbConfigProvider)
import conversions._
The compiler error is the familiar:
could not find implicit value for parameter tt: slick.ast.TypedType[data.Timestamp]
I'm basically stuck in dependency injection, implicit hell. Has anybody come up with a good way to maintain their custom conversions in Slick 3? Please share.

This is where traits come in handy:
package dao

import java.sql.Date
import data.Timestamp
import play.api.db.slick.HasDatabaseConfigProvider
import slick.driver.JdbcProfile

trait StandardConversions extends HasDatabaseConfigProvider[JdbcProfile] {

  import driver.api._

  implicit val timestampColumnType = MappedColumnType.base[Timestamp, Date](
    { data => new Date(data.value) },
    { sql => Timestamp(sql.getTime) }
  )
}
And then simply extend from this trait in your DAOs:
import javax.inject.Inject
import play.api.db.slick.{DatabaseConfigProvider, HasDatabaseConfigProvider}
import slick.driver.JdbcProfile

class SomeDAO @Inject()(protected val dbConfigProvider: DatabaseConfigProvider)
  extends HasDatabaseConfigProvider[JdbcProfile]
  with StandardConversions {

  import driver.api._

  // all implicits of StandardConversions are in scope here
}
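For example (a sketch only; the table and column names are hypothetical), any table defined inside such a DAO can use data.Timestamp columns directly, because the implicit mapping from the trait is in scope:

// Inside SomeDAO, after `import driver.api._` (also assumes `import data.Timestamp` in the file):
class Animals(tag: Tag) extends Table[(String, Timestamp)](tag, "animals") {
  def name      = column[String]("name")
  def createdAt = column[Timestamp]("created_at") // resolved via timestampColumnType
  def *         = (name, createdAt)
}

val animals = TableQuery[Animals]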

In combination with Roman's solution, you should probably add the following import:
import play.api.libs.concurrent.Execution.Implicits.defaultContext

Related

Scala: Play: Action classes — import behavior changed from Play 2.4 to Play 2.8

I am using Play 2.8.7, Scala 2.13.4.
import play.api.mvc._
import play.api.mvc.Results._
import play.mvc.Controller
import play.api.i18n.{I18nSupport, MessagesApi}
import javax.inject._
class Application @Inject() (
  val messagesApi: MessagesApi
) extends Controller with I18nSupport {

  def greeting = Action { implicit request =>
    Ok("hello")
  }
}
1. I want to import play.api.mvc.Results.Ok — why is it only imported when I do import play.api.mvc.Results._ but not when I only do import play.api.mvc._?
The latter used to work when I used Play 2.4.3 (Scala 2.11.11).
2. The compiler cannot resolve symbol "Action". Why is that...? I did import play.api.mvc._
UPDATE:
There was a suggestion to import play.mvc.BaseController.
It seems not to exist in Play 2.8.7.
"Controller" is for Java, so you should use "play.mvc.BaseController" in Scala.
Q1 & Q2 both would be resolved if you are using "play.mvc.BaseController". " import play.api.mvc.Results._" is also unnecessary.
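A minimal sketch of what that looks like (the class name is a placeholder, and it assumes the standard ControllerComponents injection of a Play 2.8 application):

import javax.inject._
import play.api.mvc._

class Application @Inject() (
  val controllerComponents: ControllerComponents
) extends BaseController {

  // Action and Ok are inherited via BaseController / ControllerHelpers,
  // so neither play.api.mvc.Results._ nor a top-level Action object is needed.
  def greeting = Action { implicit request =>
    Ok("hello")
  }
}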
Alternatively, if you are writing a Java controller, import play.mvc.Controller and play.mvc.Result and extend your class from Controller.
Example:

import play.mvc.Controller;
import play.mvc.Result;

public class ClassName extends Controller {
    // code
}

Transform a DataFrame to a Dataset using a case class (Spark, Scala)

I wrote the following code, which aims to transform a DataFrame to a Dataset using a case class:

def toDs[T](df: DataFrame): Dataset[T] = {
  df.as[T]
}

with the case class defined as case class DATA(name: String, age: Double, location: String).
I am getting:
Unable to find encoder for type stored in a Dataset. Primitive types (Int, String, etc) and Product types (case classes) are supported by importing spark.implicits._ Support for serializing other types will be added in future releases.
[error] df.as[T]
Any idea how to fix this?
You can read the data into a Dataset[MyCaseClass] in the following two ways.
Say you have a case class MyCaseClass (its fields are elided here).
1) First way: import the SparkSession implicits into scope and use the as operator to convert your DataFrame to a Dataset[MyCaseClass]:

case class MyCaseClass(/* fields */)

val spark: SparkSession = SparkSession.builder.enableHiveSupport.getOrCreate()
import spark.implicits._
val ds: Dataset[MyCaseClass] = spark.read.format("FORMAT_HERE").load().as[MyCaseClass]
2) Second way: you can define your own encoder in another object and import it into your current code:

package com.funky.mypackage

import org.apache.spark.sql.{Encoder, Encoders}

case class MyCaseClass(/* fields */)

object MyCustomEncoders {
  implicit val mycaseClass: Encoder[MyCaseClass] = Encoders.product[MyCaseClass]
}
In the file containing the main method, import the members of that object so the implicit encoder is in scope:

import com.funky.mypackage.MyCaseClass
import com.funky.mypackage.MyCustomEncoders._
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.Dataset

val spark: SparkSession = SparkSession.builder.enableHiveSupport.getOrCreate()
val ds: Dataset[MyCaseClass] = spark.read.format("FORMAT_HERE").load().as[MyCaseClass]
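If you want to keep the generic helper from the question, a minimal sketch (the object name is just a placeholder) is to add an Encoder context bound, so the caller supplies the encoder, e.g. via import spark.implicits._ or the custom encoder above:

import org.apache.spark.sql.{DataFrame, Dataset, Encoder}

object DatasetConversions {
  // The context bound [T: Encoder] requires an implicit Encoder[T] at the call site,
  // which is exactly what df.as[T] needs to compile.
  def toDs[T: Encoder](df: DataFrame): Dataset[T] = df.as[T]
}

// Usage sketch, with spark.implicits._ (or a custom Encoder[DATA]) in scope:
// val ds: Dataset[DATA] = DatasetConversions.toDs[DATA](df)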

Using package objects in Scala?

I have a Scala project that uses Akka. I want the execution context to be available throughout the project, so I've created a package object like this:
import akka.actor.ActorSystem
import akka.stream.ActorMaterializer
import com.datastax.driver.core.Cluster

package object connector {
  implicit val system = ActorSystem()
  implicit val mat = ActorMaterializer()
  implicit val executionContext = executionContext
  implicit val session = Cluster
    .builder
    .addContactPoints("localhost")
    .withPort(9042)
    .build()
    .connect()
}
In the same package I have this file:
import akka.stream.alpakka.cassandra.scaladsl.CassandraSource
import akka.stream.scaladsl.Sink
import com.datastax.driver.core.{Row, Session, SimpleStatement}

import scala.collection.immutable
import scala.concurrent.Future

object CassandraService {
  def selectFromCassandra()() = {
    val statement = new SimpleStatement(s"SELECT * FROM animals.alpakka").setFetchSize(20)
    val rows: Future[immutable.Seq[Row]] = CassandraSource(statement).runWith(Sink.seq)
    rows.map { item =>
      print(item)
    }
  }
}
However, I am getting compiler errors saying that no execution context or session can be found. My understanding of the package object was that everything in it would be available throughout the package, but that does not seem to work. Grateful if this could be explained to me!
Your implementation should look something like this; hope it helps.
package.scala

package com.app.akka

package object connector {
  // definitions shared across the package go here
}

CassandraService.scala

package com.app.akka

import com.app.akka.connector._

object CassandraService {
  def selectFromCassandra() = {
    // your code here
  }
}
There are two issues with your current code.
First, when you compile your package object connector, it throws the error below:
Error:(14, 35) recursive value executionContext needs type
implicit val executionContext = executionContext
The issue is the implicit val executionContext = executionContext line, which defines the value in terms of itself. A fix for this compile error would be:
implicit val executionContext = ExecutionContext
Second, when we compile CassandraService, it throws the error below:
Error:(17, 13) Cannot find an implicit ExecutionContext. You might pass
an (implicit ec: ExecutionContext) parameter to your method
or import scala.concurrent.ExecutionContext.Implicits.global.
rows.map{item =>
The error clearly says that we either need to pass an ExecutionContext as an implicit parameter or import scala.concurrent.ExecutionContext.Implicits.global. With these changes both issues are resolved and the code compiles successfully on my system. I have attached the code for your reference.
package com.apache.scala

import akka.actor.ActorSystem
import akka.stream.ActorMaterializer
import com.datastax.driver.core.Cluster

import scala.concurrent.ExecutionContext

package object connector {
  implicit val system = ActorSystem()
  implicit val mat = ActorMaterializer()
  implicit val executionContext = ExecutionContext
  implicit val session = Cluster
    .builder
    .addContactPoints("localhost")
    .withPort(9042)
    .build()
    .connect()
}
package com.apache.scala.connector

import akka.stream.alpakka.cassandra.scaladsl.CassandraSource
import akka.stream.scaladsl.Sink
import com.datastax.driver.core.{Row, SimpleStatement}

import scala.collection.immutable
import scala.concurrent.ExecutionContext.Implicits.global
import scala.concurrent.Future

object CassandraService {
  def selectFromCassandra() = {
    val statement = new SimpleStatement(s"SELECT * FROM animals.alpakka").setFetchSize(20)
    val rows: Future[immutable.Seq[Row]] = CassandraSource(statement).runWith(Sink.seq)
    rows.map { item =>
      print(item)
    }
  }
}
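As an aside, a sketch of a more idiomatic variant (assuming you want the ActorSystem's dispatcher as the shared execution context): bind an actual ExecutionContext value in the package object rather than the ExecutionContext companion object, and then the Implicits.global import in CassandraService is no longer needed:

package com.apache.scala

import akka.actor.ActorSystem

import scala.concurrent.ExecutionContext

package object connector {
  implicit val system: ActorSystem = ActorSystem()
  // A real ExecutionContext value (the ActorSystem's dispatcher),
  // not a reference to the ExecutionContext companion object.
  implicit val executionContext: ExecutionContext = system.dispatcher
}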

Scala import library wildcard

I am new to Scala. Please be gentle.
The import below imports everything (every class, trait and object) under ml.
import org.apache.spark.ml._
but NOT ParamMap, which is under
import org.apache.spark.ml.param._
In other words, for the code below, if I do:
import org.apache.spark.ml.param._
import org.apache.spark.ml._
class Kmeans extends Transformer {
  def copy(extra: ParamMap): Unit = {
    defaultCopy(extra)
  }
}
Then I have no import errors, but if I comment out import org.apache.spark.ml.param._:
//import org.apache.spark.ml.param._
import org.apache.spark.ml._

class Kmeans extends Transformer {
  def copy(extra: ParamMap): Unit = {
    defaultCopy(extra)
  }
}
it gives an error that ParamMap cannot be found.
Question
Why isn't org.apache.spark.ml.param.ParamMap included by import org.apache.spark.ml._?
Scala imports are not recursive: import org.apache.spark.ml._ means import all classes and fields directly under the ml package, but not the ones under its sub-packages.
Since ParamMap is under one of ml's sub-packages (ml.param), you'll have to import that package or ParamMap class directly.
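A small self-contained illustration of the rule (hypothetical packages, nothing to do with Spark):

package outer {
  class A
  package inner {
    class B
  }
}

object Demo {
  import outer._        // brings in A (and the name `inner`), but not inner's members
  import outer.inner._  // needed for B; alternatively import outer.inner.B directly

  val a = new A
  val b = new B
}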

Avoid import tax when using spark implicits

In my tests, I have a test trait that provides a SparkSession:

trait SparkTestTrait {
  lazy val spark: SparkSession = SparkSession.builder().getOrCreate()
}
The problem is that I need to add an import in every test function:
test("test1) {
import spark.implicits._
}
I managed to reduce this to one import per file by adding the following to SparkTestTrait:

object testImplicits extends SQLImplicits {
  protected override def _sqlContext: SQLContext = spark.sqlContext
}
and then importing it at the top of the implementing class:

import testImplicits._
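Put together, the workaround described above looks roughly like this (a sketch; the spec name, the ScalaTest AnyFunSuite base, and the toDS usage are just placeholders):

import org.apache.spark.sql.{SQLContext, SQLImplicits, SparkSession}
import org.scalatest.funsuite.AnyFunSuite

trait SparkTestTrait {
  lazy val spark: SparkSession = SparkSession.builder().getOrCreate()

  // One import per file instead of one per test function.
  object testImplicits extends SQLImplicits {
    protected override def _sqlContext: SQLContext = spark.sqlContext
  }
}

class SomeSpec extends AnyFunSuite with SparkTestTrait {
  import testImplicits._

  test("test1") {
    val ds = Seq(1, 2, 3).toDS() // toDS comes from testImplicits
    assert(ds.count() == 3)
  }
}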
However, I would prefer to have these implicits imported to all classes implementing SparkTestTrait (I can't have SparkTestTrait extend SQLImplicits because the implementing classes already extend an abstract class).
Is there a way to do this?