I'm getting started with Slick and Scala, using Play Framework.
I have configured my project to work with Play 2.2.0 using play-slick 0.5.0.8.
My problem is that I can't call essential methods such as list, foreach, or a for/yield comprehension on a query.
I tried a standalone example with the same Slick version (1.0.1) and it works. Why?
Here is the project build file:
import sbt._
import Keys._
import play.Project._

object Build extends Build {

  val appName = "homePage"
  val appVersion = "1.0-ALPHA"

  val appDependencies = Seq(
    // Add your project dependencies here,
    jdbc,
    "com.typesafe.play" %% "play-slick" % "0.5.0.8",
    "postgresql" % "postgresql" % "9.1-901-1.jdbc4",
    "com.typesafe.slick" %% "slick" % "1.0.1"
  )

  val main = play.Project(appName, appVersion, appDependencies).settings(
    // add the web app directory as an extra assets directory
    playAssetsDirectories <+= baseDirectory / "webapp"
  )
}
My model:
package model

import java.util.Calendar
import scala.slick.driver.PostgresDriver.simple._

case class Article(id: Long,
                   content: String,
                   date: java.sql.Date)

object Articles extends Table[Article]("articles") {
  def id = column[Long]("art_id", O.PrimaryKey, O.AutoInc) // This is the primary key column
  def content = column[String]("art_content", O.NotNull)
  def date = column[java.sql.Date]("art_date", O.NotNull, O.Default(new java.sql.Date(Calendar.getInstance().getTime.getTime)))
  def * = id ~ content ~ date <> (Article.apply _, Article.unapply _)
}
Here is the code that doesn't compile:
query foreach { case (content, date) =>
  println(content + ": " + date)
}
// error: cannot resolve symbol foreach
The same thing happens with a for/yield comprehension:
(for (p <- Props if p.key === k) yield p.value).firstOption
I don't know what the problem is, so any help will be appreciated.
Import scala.slick.driver.PostgresDriver.simple._ in the file where you try to call foreach.
I should have used import scala.slick.driver.PostgresDriver.simple._ in my service source file:
import model._
import scala.slick.lifted.Query
import scala.slick.driver.PostgresDriver.simple._

object ArticlesService {

  val PAGE_SIZE: Int = 100

  def getPaginatedResult(pageNumber: Int)(implicit s: scala.slick.session.Session) = {
    val query = Query(Articles)
    query.drop(pageNumber * PAGE_SIZE).take(PAGE_SIZE).list()
  }
}
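For completeness, here is a minimal sketch of how the service can be called under play-slick 0.5.x, where DB.withSession supplies the implicit session that list (and foreach, for/yield) ultimately need. The controller name, the action, and the service's package are hypothetical:

package controllers

import play.api.Play.current            // implicit Application required by DB.withSession
import play.api.db.slick._              // play-slick's DB helper
import play.api.mvc.{Action, Controller}

import services.ArticlesService         // assumption: adjust to the package where ArticlesService lives

object ArticlesController extends Controller {

  // Hypothetical action: fetch one page of articles.
  // DB.withSession provides the implicit scala.slick.session.Session
  // that getPaginatedResult (and Slick's list/foreach) require.
  def page(pageNumber: Int) = Action {
    DB.withSession { implicit session =>
      val articles = ArticlesService.getPaginatedResult(pageNumber)
      Ok(articles.mkString("\n"))
    }
  }
}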
Related
Using the Scala Play JSON library, I'm attempting to parse the string:
var str = "{\"payload\": \"[{\\\"test\\\":\\\"123\\\",\\\"tester\\\":\\\"456\\\"}," +
"{\\\"test1\\\":\\\"1234\\\",\\\"tester2\\\":\\\"4567\\\"}]\"}";
into a list of Payload classes using the code below:
import play.api.libs.json._

object TestParse extends App {

  case class Payload(test: String, tester: String)

  object Payload {
    implicit val jsonFormat: Format[Payload] = Json.format[Payload]
  }

  var str = "{\"payload\": \"[{\\\"test\\\":\\\"123\\\",\\\"tester\\\":\\\"456\\\"}," +
    "{\\\"test1\\\":\\\"1234\\\",\\\"tester2\\\":\\\"4567\\\"}]\"}"

  println((Json.parse(str) \ "payload").as[List[Payload]])
}
build.sbt:
name := "akka-streams"
version := "0.1"
scalaVersion := "2.12.8"
lazy val akkaVersion = "2.5.19"
lazy val scalaTestVersion = "3.0.5"
libraryDependencies ++= Seq(
  "com.typesafe.akka" %% "akka-stream" % akkaVersion,
  "com.typesafe.akka" %% "akka-stream-testkit" % akkaVersion,
  "com.typesafe.akka" %% "akka-testkit" % akkaVersion,
  "org.scalatest" %% "scalatest" % scalaTestVersion
)
// https://mvnrepository.com/artifact/com.typesafe.play/play-json
libraryDependencies += "com.typesafe.play" %% "play-json" % "2.10.0-RC6"
It fails with the exception:
Exception in thread "main" play.api.libs.json.JsResultException: JsResultException(errors:List((,List(JsonValidationError(List("" is not an object),WrappedArray())))))
Is the case class structure incorrect?
I've updated the code to:
import play.api.libs.json._

object TestParse extends App {

  import TestParse.Payload.jsonFormat

  object Payload {
    implicit val jsonFormat: Format[RootInterface] = Json.format[RootInterface]
  }

  case class Payload(
    test: Option[String],
    tester: Option[String]
  )

  case class RootInterface(
    payload: List[Payload]
  )

  val str = """{"payload": [{"test":"123","tester":"456"},{"test1":"1234","tester2":"4567"}]}"""
  println(Json.parse(str).as[RootInterface])
}
which returns the error:
No instance of play.api.libs.json.Format is available for scala.collection.immutable.List[TestParse.Payload] in the implicit scope (Hint: if declared in the same file, make sure it's declared before)
implicit val jsonFormat: Format[RootInterface] = Json.format[RootInterface]
This performs the task, but there are cleaner solutions:
import akka.actor.ActorSystem
import akka.stream.scaladsl.{Flow, Sink, Source}
import org.scalatest.Assertions._
import spray.json.{JsObject, JsonParser}

import scala.concurrent.Await
import scala.concurrent.duration.DurationInt

object TestStream extends App {

  implicit val actorSystem = ActorSystem()

  val mapperFlow = Flow[JsObject].map(x => {
    x.fields.get("payload").get.toString()
      .replace("{", "")
      .replace("}", "")
      .replace("[", "")
      .replace("]", "")
      .replace("\"", "")
      .replace("\\", "")
      .split(":").map(m => m.split(","))
      .toList
      .flatten
      .grouped(4)
      .map(m => Test(m(1), m(3).toDouble))
      .toList
  })

  val str = """{"payload": [{"test":"123","tester":"456"},{"test":"1234","tester":"4567"}]}"""

  case class Test(test: String, tester: Double)

  val graph = Source.repeat(JsonParser(str).asJsObject())
    .take(3)
    .via(mapperFlow)
    .mapConcat(identity)
    .runWith(Sink.seq)

  val result = Await.result(graph, 3.seconds)
  println(result)

  assert(result.length == 6)
  assert(result(0).test == "123")
  assert(result(0).tester == 456)
  assert(result(1).test == "1234")
  assert(result(1).tester == 4567)
  assert(result(2).test == "123")
  assert(result(2).tester == 456)
  assert(result(3).test == "1234")
  assert(result(3).tester == 4567)
}
Alternative, idiomatic Scala answers are welcome.
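For reference, the declaration-order hint in the error above can also be satisfied with plain play-json by giving Payload its own Format before deriving one for RootInterface. A minimal sketch under that assumption (TestParseFixed is a hypothetical name):

import play.api.libs.json._

object TestParseFixed extends App {

  case class Payload(test: Option[String], tester: Option[String])
  case class RootInterface(payload: List[Payload])

  // Derive the element format first so it is in implicit scope when the
  // RootInterface format (which needs a Format for List[Payload]) is derived.
  implicit val payloadFormat: Format[Payload] = Json.format[Payload]
  implicit val rootInterfaceFormat: Format[RootInterface] = Json.format[RootInterface]

  val str = """{"payload": [{"test":"123","tester":"456"},{"test1":"1234","tester2":"4567"}]}"""

  // The second element has no "test"/"tester" keys, so it parses as Payload(None, None).
  println(Json.parse(str).as[RootInterface])
}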
I have an R&D project that reads data from Oracle and then writes to a Spark standalone cluster, using Scala and IntelliJ.
This is my build.sbt with its library dependencies:
name := "DB_Oracle_V07"
version := "0.1"
scalaVersion := "2.11.12"
// https://mvnrepository.com/artifact/org.apache.spark/spark-core
libraryDependencies += "org.apache.spark" %% "spark-core" % "2.2.0"
// https://mvnrepository.com/artifact/org.apache.spark/spark-sql
libraryDependencies += "org.apache.spark" %% "spark-sql" % "2.4.0"
// https://mvnrepository.com/artifact/org.apache.spark/spark-hive
libraryDependencies += "org.apache.spark" %% "spark-hive" % "2.4.0"
Here is my project's main class, which does not contain any Spark syntax:
package com.xxxx.spark

import java.io.FileWriter
import java.sql.{Connection, DriverManager}
import java.text.SimpleDateFormat
import java.util.Calendar

import scala.collection.mutable.ArrayBuffer

object query01 {

  var dateStart = ""

  case class DF_LOT_INFO(LOT_NUMBER: String, MACHINE: String, FACILITY: String, LOT_TYPE: String, REC_DATE: String, FILE_NAME: String)

  def main(args: Array[String]): Unit = {
    val cal = Calendar.getInstance
    cal.add(Calendar.DATE, 1)
    val date = cal.getTime
    val format1 = new SimpleDateFormat("yyyyMMdd_HHmmss")
    dateStart = format1.format(date)
    print_log("start")

    val url = "jdbc:oracle:thin:@TMDT1PEN.XXXX.XXXX.COM:1521:TMDT1"
    //val driver = "oracle.jdbc.OracleDriver"
    val driver = "oracle.jdbc.driver.OracleDriver"
    val username = "TMDB_XXXX"
    val password = "XXXXXXXX"
    val connection: Connection = null
    val result = ArrayBuffer[String]()

    try {
      print_log("Class.forName start")
      val app_dir = System.getProperty("user.dir")
      print_log("current dir: " + app_dir)
      val java_class_path = System.getProperty("java.class.path")
      print_log("java_class_path: " + java_class_path)
      Class.forName(driver)
      var testing = Class.forName(driver)
      print_log("Class.forName end")
      //DriverManager.registerDriver(new oracle.jdbc.driver.OracleDriver)
      val connection = DriverManager.getConnection(url, username, password)
      val statement = connection.createStatement
      val rs = statement.executeQuery("select * from tester")
      var i = 1
      while (rs.next) {
        val item = rs.getString("tester_name")
        println("data:" + item)
        print_log("data:" + item)
        result.append(item)
        i = i + 1
      }
      // values('aaaaa','bbbbb',sysdate)")
    }
    catch {
      //case unknown => println("Got this unknown exception: " + unknown)
      case unknown: Throwable => print_error_log("Unknown exception: " + unknown)
    }
    finally {
    }
    print_log("end")
  }

  def print_log(msg: String): Unit = {
    val fw = new FileWriter(dateStart + "_log.txt", true)
    try {
      fw.write("\n" + msg)
    }
    finally fw.close()
  }

  def print_error_log(msg: String): Unit = {
    val fw = new FileWriter(dateStart + "_error_log.txt", true)
    try {
      fw.write("\n" + msg)
    }
    finally fw.close()
  }
}
I built the artifact as usual and added ojdbc6.jar into my project .jar as the Oracle library,
but I fail to execute my project jar file and get the error below:
Error: Could not find or load main class com.xxxx.spark.query01
As a blind test, I removed all the extracted Spark jar files from my project .jar, or created a new project without any library dependencies in build.sbt; after that I was able to execute my project .jar without error.
From this blind test I am sure the error is caused by the Spark libraries.
Can any expert advise me how to solve this error? Thanks for all :)
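One common way to act on that observation (a sketch, not a verified fix for this exact setup) is to keep Spark out of the application jar altogether: mark the Spark artifacts as provided, and while doing so keep all Spark modules on a single version (the build.sbt above mixes 2.2.0 and 2.4.0). Spark then comes from the cluster at runtime rather than from the packaged artifact:

// build.sbt (sketch): Spark is supplied by the cluster at runtime,
// so it is not packaged into the application jar.
val sparkVersion = "2.4.0"  // assumption: match the version your cluster runs

libraryDependencies ++= Seq(
  "org.apache.spark" %% "spark-core" % sparkVersion % "provided",
  "org.apache.spark" %% "spark-sql"  % sparkVersion % "provided",
  "org.apache.spark" %% "spark-hive" % sparkVersion % "provided"
)

The jar is then launched with spark-submit --class com.xxxx.spark.query01 path/to/your.jar, and non-Spark libraries such as ojdbc6.jar are shipped separately, for example via the --jars option.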
I am trying to use http4s, circe and http4s-circe.
Below I am trying to use the auto derivation feature of circe.
import org.http4s.client.blaze.SimpleHttp1Client
import org.http4s.Status.ResponseClass.Successful
import io.circe.syntax._
import org.http4s._
import org.http4s.headers._
import org.http4s.circe._
import scalaz.concurrent.Task
import io.circe._
final case class Login(username: String, password: String)
final case class Token(token: String)

object JsonHelpers {
  import io.circe.generic.auto._
  implicit val loginEntityEncoder: EntityEncoder[Login] = jsonEncoderOf[Login]
  implicit val loginEntityDecoder: EntityDecoder[Login] = jsonOf[Login]
  implicit val tokenEntityEncoder: EntityEncoder[Token] = jsonEncoderOf[Token]
  implicit val tokenEntityDecoder: EntityDecoder[Token] = jsonOf[Token]
}

object Http4sTest2 extends App {
  import JsonHelpers._

  val url = "http://"
  val uri = Uri.fromString(url).valueOr(throw _)
  val list = List[Header](`Content-Type`(MediaType.`application/json`), `Accept`(MediaType.`application/json`))

  val request = Request(uri = uri, method = Method.POST)
    .withBody(Login("foo", "bar").asJson)
    .map { r => r.replaceAllHeaders(list: _*) }.run

  val client = SimpleHttp1Client()
  val result = client.fetch[Option[Token]](request) {
    case Successful(response) => response.as[Token].map(Some(_))
    case _ => Task(Option.empty[Token])
  }.run
  println(result)
}
I get multiple instances of these two compiler errors:
Error:scalac: missing or invalid dependency detected while loading class file 'GenericInstances.class'.
Could not access type Secondary in object io.circe.Encoder,
because it (or its dependencies) are missing. Check your build definition for
missing or conflicting dependencies. (Re-run with `-Ylog-classpath` to see the problematic classpath.)
A full rebuild may help if 'GenericInstances.class' was compiled against an incompatible version of io.circe.Encoder.
Error:(25, 74) could not find implicit value for parameter encoder: io.circe.Encoder[Login]
implicit val loginEntityEncoder : EntityEncoder[Login] = jsonEncoderOf[Login]
I was able to solve this. I had searched Google for the sbt circe dependency and copy-pasted the first search result; that was circe 0.1, and that is why things were not working for me.
I changed my dependencies to:
libraryDependencies ++= Seq(
  "org.http4s" %% "http4s-core" % http4sVersion,
  "org.http4s" %% "http4s-dsl" % http4sVersion,
  "org.http4s" %% "http4s-blaze-client" % http4sVersion,
  "org.http4s" %% "http4s-circe" % http4sVersion,
  "io.circe" %% "circe-core" % "0.7.0",
  "io.circe" %% "circe-generic" % "0.7.0"
)
and now automatic derivation works fine and I am able to compile the code below:
import org.http4s.client.blaze.SimpleHttp1Client
import org.http4s._
import org.http4s.headers._
import org.http4s.circe._
import scalaz.concurrent.Task
import io.circe.syntax._
import io.circe.generic.auto._
import org.http4s.Status.ResponseClass.Successful
case class Login(username: String, password: String)
case class Token(token: String)

object JsonHelpers {
  implicit val loginEntityEncoder: EntityEncoder[Login] = jsonEncoderOf[Login]
  implicit val loginEntityDecoder: EntityDecoder[Login] = jsonOf[Login]
  implicit val tokenEntityEncoder: EntityEncoder[Token] = jsonEncoderOf[Token]
  implicit val tokenEntityDecoder: EntityDecoder[Token] = jsonOf[Token]
}

object Http4sTest2 extends App {
  import JsonHelpers._

  val url = "http://"
  val uri = Uri.fromString(url).valueOr(throw _)
  val list = List[Header](`Content-Type`(MediaType.`application/json`), `Accept`(MediaType.`application/json`))

  val request = Request(uri = uri, method = Method.POST)
    .withBody(Login("foo", "bar").asJson)
    .map { r => r.replaceAllHeaders(list: _*) }.run

  val client = SimpleHttp1Client()
  val result = client.fetch[Option[Token]](request) {
    case Successful(response) => response.as[Token].map(Some(_))
    case _ => Task(Option.empty[Token])
  }.run
  println(result)
}
I'm migrating from Slick 2.1 to 3.0. As you know, the withSession function has been deprecated.
How can I change the code below?
def insert(vote: Vote) = DB.withSession { implicit session =>
  insertWithSession(vote)
}

def insertWithSession(vote: Vote)(implicit s: Session) = {
  Votes.insert(vote)
}
I get a compile error on Votes.insert, and the error is:
could not find implicit value for parameter s: slick.driver.PostgresDriver.api.Session
Finally, is there any documentation other than the official link that would help me migrate? I need more details.
Assuming that you are using play-slick for Slick integration with Play, you can look at https://www.playframework.com/documentation/2.5.x/PlaySlick for more details.
Add the Slick and JDBC dependencies in build.sbt:
libraryDependencies ++= Seq(
  "com.typesafe.play" %% "play-slick" % "2.0.0",
  "com.typesafe.play" %% "play-slick-evolutions" % "2.0.0",
  "org.postgresql" % "postgresql" % "9.4-1206-jdbc4"
)
Add the Postgres config to your application.conf:
slick.dbs.default.driver="slick.driver.PostgresDriver$"
slick.dbs.default.db.driver="org.postgresql.Driver"
slick.dbs.default.db.url="jdbc:postgresql://localhost/yourdb?user=postgres&password=postgres"
Now define your models like the following:
package yourproject.models

import javax.inject.Inject

import play.api.db.slick.DatabaseConfigProvider
import slick.driver.JdbcProfile

import scala.concurrent.Future

case class Vote(id: Option[Int], subject: String, number: Int)

class VoteRepo @Inject()()(protected val dbConfigProvider: DatabaseConfigProvider) {

  val dbConfig = dbConfigProvider.get[JdbcProfile]
  val db = dbConfig.db

  import dbConfig.driver.api._

  // The table definition lives inside the repo so that it can use the
  // driver api supplied by the injected DatabaseConfigProvider.
  class VoteTable(tag: Tag) extends Table[Vote](tag, "votes") {
    def id = column[Int]("id", O.PrimaryKey, O.AutoInc)
    def subject = column[String]("subject")
    def number = column[Int]("number")
    def * = (id.?, subject, number) <> (Vote.tupled, Vote.unapply)
  }

  val Votes = TableQuery[VoteTable]

  // Builds the insert action; the generated id is returned.
  def insert(vote: Vote): DBIO[Int] =
    Votes returning Votes.map(_.id) += vote

  // Runs the action against the database.
  def create(vote: Vote): Future[Int] = db.run(insert(vote))
}
Now your controller will look something like this:
import javax.inject.Inject

import yourproject.models.{Vote, VoteRepo}

import play.api.libs.concurrent.Execution.Implicits.defaultContext
import play.api.mvc.{Action, Controller}

class Application @Inject()(voteRepo: VoteRepo) extends Controller {

  def createProject(subject: String, number: Int) = Action.async {
    implicit rs => {
      voteRepo.create(Vote(None, subject, number))
        .map(id => Ok(s"project $id created"))
    }
  }
}
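Finally, a conf/routes entry is needed to expose the action; a sketch, assuming the controller above lives in the controllers package and that subject and number arrive as query string parameters:

GET     /votes/create        controllers.Application.createProject(subject: String, number: Int)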
I cannot access SparkConf in the package, even though I have already imported org.apache.spark.SparkConf. My code is:
import org.apache.spark.SparkContext
import org.apache.spark.SparkContext._
import org.apache.spark.SparkConf
import org.apache.spark.rdd.RDD
import org.apache.spark._
import org.apache.spark.streaming._
import org.apache.spark.streaming.StreamingContext._

object SparkStreaming {
  def main(arg: Array[String]) = {

    val conf = new SparkConf.setMaster("local[2]").setAppName("NetworkWordCount")
    val ssc = new StreamingContext(conf, Seconds(1))

    val lines = ssc.socketTextStream("localhost", 9999)
    val words = lines.flatMap(_.split(" "))
    val pairs_new = words.map(w => (w, 1))
    val wordsCount = pairs_new.reduceByKey(_ + _)
    wordsCount.print()

    ssc.start()            // Start the computation
    ssc.awaitTermination() // Wait for the computation to terminate
  }
}
The sbt dependencies are:
name := "Spark Streaming"
version := "1.0"
scalaVersion := "2.10.4"
libraryDependencies ++= Seq(
  "org.apache.spark" %% "spark-core" % "1.5.2" % "provided",
  "org.apache.spark" %% "spark-mllib" % "1.5.2",
  "org.apache.spark" %% "spark-streaming" % "1.5.2"
)
But the error shows that SparkConf cannot be accessed.
[error] /home/cliu/Documents/github/Spark-Streaming/src/main/scala/Spark-Streaming.scala:31: object SparkConf in package spark cannot be accessed in package org.apache.spark
[error] val conf = new SparkConf.setMaster("local[2]").setAppName("NetworkWordCount")
[error] ^
It compiles if you add parentheses after SparkConf:
val conf = new SparkConf().setMaster("local[2]").setAppName("NetworkWordCount")
The point is that SparkConf is a class, not a function. Without parentheses, new SparkConf.setMaster("local[2]") is parsed as an attempt to instantiate a type named setMaster found inside SparkConf, so the name SparkConf is used for scoping purposes: the compiler resolves it to the SparkConf companion object, which is not accessible outside the spark package, hence the error above. When you add parentheses after the class name, you make sure you are calling the class constructor and then invoking setMaster on the new instance. Here is an example from the Scala shell illustrating the difference:
scala> class C1 { var age = 0; def setAge(a: Int) = { age = a } }
defined class C1

scala> new C1
res18: C1 = $iwC$$iwC$C1@2d33c200

scala> new C1()
res19: C1 = $iwC$$iwC$C1@30822879

scala> new C1.setAge(30) // this doesn't work
<console>:23: error: not found: value C1
       new C1.setAge(30)
           ^

scala> new C1().setAge(30) // this works

scala>
In this case you cannot omit parentheses so it should be:
val conf = new SparkConf().setMaster("local[2]").setAppName("NetworkWordCount")