Slick with PostgreSQL, Scala, SBT, IntelliJ IDEA

I am trying to create a project with Slick in IntelliJ IDEA using the PostgreSQL driver, but I have managed to find only this tutorial.
I followed it, but I get this error:
Exception in thread "main" java.lang.ExceptionInInitializerError
at Main.main(Main.scala)
Caused by: com.typesafe.config.ConfigException$Missing: No configuration setting found for key 'url'
Here is my code for the main class:
import scala.slick.driver.PostgresDriver.simple._

object Main {
  case class Song(
    id: Int,
    name: String,
    singer: String)

  class SongsTable(tag: Tag) extends Table[Song](tag, "songs") {
    def id = column[Int]("id")
    def name = column[String]("name")
    def singer = column[String]("singer")
    def * = (id, name, singer) <> (Song.tupled, Song.unapply)
  }

  lazy val songsTable = TableQuery[SongsTable]
  val db = Database.forConfig("scalaxdb")

  def main(args: Array[String]): Unit = {
    val connectionUrl = "jdbc:postgresql://localhost/songs?user=postgres&password=postgresp"
    Database.forURL(connectionUrl, driver = "org.postgresql.Driver") withSession {
      implicit session =>
        val songs = TableQuery[SongsTable]
        songs.list foreach { row =>
          println("song with id " + row.id + " has name " + row.name + " and a singer is " + row.singer)
        }
    }
  }
}
This is the application.conf file:
scalaxdb = {
  dataSourceClass = "slick.jdbc.DatabaseUrlDataSource"
  properties = {
    driver = "org.postgresql.Driver"
    url = "jdbc:postgresql://localhost/dbname?user=user&password=password"
  }
}
And this is build.sbt:
libraryDependencies ++= Seq(
  "org.postgresql" % "postgresql" % "9.3-1100-jdbc4",
  "com.typesafe.slick" %% "slick" % "2.1.0",
  "org.slf4j" % "slf4j-nop" % "1.6.4"
)
I cannot figure out what I am doing wrong. I would be very grateful for any advice on fixing this.

Would you mind using a slightly more up-to-date version of Slick?
import slick.jdbc.PostgresProfile.api._
import scala.concurrent.Await
import scala.concurrent.ExecutionContext.Implicits.global
import scala.concurrent.duration._

object Main {
  case class Song(
    id: Int,
    name: String,
    singer: String)

  class SongsTable(tag: Tag) extends Table[Song](tag, "songs") {
    def id = column[Int]("id")
    def name = column[String]("name")
    def singer = column[String]("singer")
    def * = (id, name, singer) <> (Song.tupled, Song.unapply)
  }

  val db = Database.forConfig("scalaxdb")
  val songs = TableQuery[SongsTable]

  def main(args: Array[String]): Unit = {
    Await.result({
      db.run(songs.result).map(_.foreach(row =>
        println("song with id " + row.id + " has name " + row.name + " and a singer is " + row.singer)))
    }, 1 minute)
  }
}
build.sbt
scalaVersion := "2.12.4"
libraryDependencies += "com.typesafe.slick" %% "slick" % "3.2.1"
libraryDependencies += "org.slf4j" % "slf4j-nop" % "1.7.25"
libraryDependencies += "com.typesafe.slick" %% "slick-hikaricp" % "3.2.1"
libraryDependencies += "org.postgresql" % "postgresql" % "42.1.4"

Related

Mapping a string to case classes using Scala play

Using the Scala Play JSON library, I'm attempting to parse the string:
var str = "{\"payload\": \"[{\\\"test\\\":\\\"123\\\",\\\"tester\\\":\\\"456\\\"}," +
"{\\\"test1\\\":\\\"1234\\\",\\\"tester2\\\":\\\"4567\\\"}]\"}";
into a list of Payload classes using the code below:
import play.api.libs.json._

object TestParse extends App {
  case class Payload(test: String, tester: String)

  object Payload {
    implicit val jsonFormat: Format[Payload] = Json.format[Payload]
  }

  var str = "{\"payload\": \"[{\\\"test\\\":\\\"123\\\",\\\"tester\\\":\\\"456\\\"}," +
    "{\\\"test1\\\":\\\"1234\\\",\\\"tester2\\\":\\\"4567\\\"}]\"}";

  println((Json.parse(str) \ "payload").as[List[Payload]])
}
build.sbt:
name := "akka-streams"
version := "0.1"
scalaVersion := "2.12.8"
lazy val akkaVersion = "2.5.19"
lazy val scalaTestVersion = "3.0.5"
libraryDependencies ++= Seq(
  "com.typesafe.akka" %% "akka-stream" % akkaVersion,
  "com.typesafe.akka" %% "akka-stream-testkit" % akkaVersion,
  "com.typesafe.akka" %% "akka-testkit" % akkaVersion,
  "org.scalatest" %% "scalatest" % scalaTestVersion
)
// https://mvnrepository.com/artifact/com.typesafe.play/play-json
libraryDependencies += "com.typesafe.play" %% "play-json" % "2.10.0-RC6"
It fails with the exception:
Exception in thread "main" play.api.libs.json.JsResultException: JsResultException(errors:List((,List(JsonValidationError(List("" is not an object),WrappedArray())))))
Is the case class structure incorrect?
I've updated the code to:
import play.api.libs.json._

object TestParse extends App {
  import TestParse.Payload.jsonFormat

  object Payload {
    implicit val jsonFormat: Format[RootInterface] = Json.format[RootInterface]
  }

  case class Payload(
    test: Option[String],
    tester: Option[String]
  )

  case class RootInterface(
    payload: List[Payload]
  )

  val str = """{"payload": [{"test":"123","tester":"456"},{"test1":"1234","tester2":"4567"}]}"""
  println(Json.parse(str).as[RootInterface])
}
which returns the error:
No instance of play.api.libs.json.Format is available for scala.collection.immutable.List[TestParse.Payload] in the implicit scope (Hint: if declared in the same file, make sure it's declared before)
implicit val jsonFormat: Format[RootInterface] = Json.format[RootInterface]
This performs the task, but there are cleaner solutions:
import akka.actor.ActorSystem
import akka.stream.scaladsl.{Flow, Sink, Source}
import org.scalatest.Assertions._
import spray.json.{JsObject, JsonParser}
import scala.concurrent.Await
import scala.concurrent.duration.DurationInt

object TestStream extends App {
  implicit val actorSystem = ActorSystem()

  val mapperFlow = Flow[JsObject].map(x => {
    x.fields.get("payload").get.toString().replace("{", "")
      .replace("}", "")
      .replace("[", "")
      .replace("]", "")
      .replace("\"", "")
      .replace("\\", "")
      .split(":").map(m => m.split(","))
      .toList
      .flatten
      .grouped(4)
      .map(m => Test(m(1), m(3).toDouble))
      .toList
  })

  val str = """{"payload": [{"test":"123","tester":"456"},{"test":"1234","tester":"4567"}]}"""

  case class Test(test: String, tester: Double)

  val graph = Source.repeat(JsonParser(str).asJsObject())
    .take(3)
    .via(mapperFlow)
    .mapConcat(identity)
    .runWith(Sink.seq)

  val result = Await.result(graph, 3.seconds)
  println(result)

  assert(result.length == 6)
  assert(result(0).test == "123")
  assert(result(0).tester == 456)
  assert(result(1).test == "1234")
  assert(result(1).tester == 4567)
  assert(result(2).test == "123")
  assert(result(2).tester == 456)
  assert(result(3).test == "1234")
  assert(result(3).tester == 4567)
}
Alternative, idiomatic Scala answers are welcome.
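For comparison, a play-json-only sketch that avoids the ordering error (it uses the un-escaped input from the second attempt; both fields stay optional because the second element uses different key names, and the object name is arbitrary):

import play.api.libs.json._

object TestParsePlain extends App {
  case class Payload(test: Option[String], tester: Option[String])
  case class RootInterface(payload: List[Payload])

  // Formats are declared bottom-up: Payload's Format has to be in scope
  // before Json.format[RootInterface] is expanded.
  implicit val payloadFormat: Format[Payload] = Json.format[Payload]
  implicit val rootFormat: Format[RootInterface] = Json.format[RootInterface]

  val str = """{"payload": [{"test":"123","tester":"456"},{"test1":"1234","tester2":"4567"}]}"""
  println(Json.parse(str).as[RootInterface])
  // prints RootInterface(List(Payload(Some(123),Some(456)), Payload(None,None)))
}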

intellij scala: Error: Could not find or load main class

I have an R&D project that reads data from Oracle and then writes to a standalone Spark cluster, using Scala and IntelliJ.
This is my build.sbt with the library dependencies:
name := "DB_Oracle_V07"
version := "0.1"
scalaVersion := "2.11.12"
// https://mvnrepository.com/artifact/org.apache.spark/spark-core
libraryDependencies += "org.apache.spark" %% "spark-core" % "2.2.0"
// https://mvnrepository.com/artifact/org.apache.spark/spark-sql
libraryDependencies += "org.apache.spark" %% "spark-sql" % "2.4.0"
// https://mvnrepository.com/artifact/org.apache.spark/spark-hive
libraryDependencies += "org.apache.spark" %% "spark-hive" % "2.4.0"
This is my project's main class, without any Spark syntax:
package com.xxxx.spark

import java.io.FileWriter
import java.sql.{Connection, DriverManager}
import java.text.SimpleDateFormat
import java.util.Calendar
import scala.collection.mutable.ArrayBuffer

object query01 {
  var dateStart = ""

  case class DF_LOT_INFO(LOT_NUMBER: String, MACHINE: String, FACILITY: String, LOT_TYPE: String, REC_DATE: String, FILE_NAME: String)

  def main(args: Array[String]): Unit = {
    val cal = Calendar.getInstance
    cal.add(Calendar.DATE, 1)
    val date = cal.getTime
    val format1 = new SimpleDateFormat("yyyyMMdd_HHmmss")
    dateStart = format1.format(date)
    print_log("start")
    val url = "jdbc:oracle:thin:@TMDT1PEN.XXXX.XXXX.COM:1521:TMDT1"
    //val driver = "oracle.jdbc.OracleDriver"
    val driver = "oracle.jdbc.driver.OracleDriver"
    val username = "TMDB_XXXX"
    val password = "XXXXXXXX"
    val connection: Connection = null
    val result = ArrayBuffer[String]()
    try {
      print_log("Class.forName start")
      val app_dir = System.getProperty("user.dir")
      print_log("current dir: " + app_dir)
      val java_class_path = System.getProperty("java.class.path")
      print_log("java_class_path: " + java_class_path)
      Class.forName(driver)
      var testing = Class.forName(driver)
      print_log("Class.forName end")
      //DriverManager.registerDriver(new oracle.jdbc.driver.OracleDriver)
      val connection = DriverManager.getConnection(url, username, password)
      val statement = connection.createStatement
      val rs = statement.executeQuery("select * from tester")
      var i = 1
      while (rs.next) {
        val item = rs.getString("tester_name")
        println("data:" + item)
        print_log("data:" + item)
        result.append(item)
        i = i + 1
      }
      // values('aaaaa','bbbbb',sysdate)")   // incomplete fragment of an insert statement
    } catch {
      //case unknown => println("Got this unknown exception: " + unknown)
      case unknown => print_error_log("Unknown exception: " + unknown)
    } finally {
    }
    print_log("end")
  }

  def print_log(msg: String): Unit = {
    val fw = new FileWriter(dateStart + "_log.txt", true)
    try {
      fw.write("\n" + msg)
    } finally fw.close()
  }

  def print_error_log(msg: String): Unit = {
    val fw = new FileWriter(dateStart + "_error_log.txt", true)
    try {
      fw.write("\n" + msg)
    } finally fw.close()
  }
}
I build the artifact as usual and added ojdbc6.jar to my project .jar as my Oracle library:
ojdbc6.jar
But I fail to execute my project's jar file, getting the error below:
Error: Could not find or load main class com.xxxx.spark.query01
As a blind test, I removed all the extracted Spark jar files from my project .jar, or created a new project without any library dependencies in build.sbt; then I was able to execute my project .jar without error.
From this blind test I am sure that the error is caused by the Spark libraries.
Can any expert advise me on how to solve this error? Thanks to all :)

Scala Slick compose column type mappers

I'm using slick-pg, which adds support (with implicits) for the List and DateTime types in Slick.
Unfortunately I cannot use List[DateTime]: Slick does not understand the composition of those types, although I've checked that both work correctly on their own (for example List[Int] and DateTime).
Is there a way to easily compose those two implicits?
1. Try to add
implicit def dateTimeList =
  MappedColumnType.base[List[DateTime], List[Timestamp]](
    _.map(dt => new Timestamp(dt.getMillis)),
    _.map(ts => new DateTime(ts.getTime))
  )
Just in case, here is the whole code, which compiles:
import java.sql.Timestamp
import org.joda.time.DateTime
import slick.jdbc.PostgresProfile.api._
import slick.lifted.ProvenShape
import slick.basic.Capability
import slick.jdbc.JdbcCapabilities
import com.github.tototoshi.slick.PostgresJodaSupport._
import com.github.tminglei.slickpg._

object App {

  trait MyPostgresProfile extends ExPostgresProfile
    with PgArraySupport
    with PgDate2Support
    with PgRangeSupport
    with PgHStoreSupport
    // with PgPlayJsonSupport
    with PgSearchSupport
    // with PgPostGISSupport
    with PgNetSupport
    with PgLTreeSupport {

    def pgjson = "jsonb" // jsonb support is in postgres 9.4.0 onward; for 9.3.x use "json"

    // Add back `capabilities.insertOrUpdate` to enable native `upsert` support; for postgres 9.5+
    override protected def computeCapabilities: Set[Capability] =
      super.computeCapabilities + /*JdbcProfile.capabilities.insertOrUpdate*/ JdbcCapabilities.insertOrUpdate

    override val api = MyAPI

    object MyAPI extends API with ArrayImplicits
      with DateTimeImplicits
      // with JsonImplicits
      with NetImplicits
      with LTreeImplicits
      with RangeImplicits
      with HStoreImplicits
      with SearchImplicits
      with SearchAssistants {
      implicit val strListTypeMapper = new SimpleArrayJdbcType[String]("text").to(_.toList)
      // implicit val playJsonArrayTypeMapper =
      //   new AdvancedArrayJdbcType[JsValue](pgjson,
      //     (s) => utils.SimpleArrayUtils.fromString[JsValue](Json.parse(_))(s).orNull,
      //     (v) => utils.SimpleArrayUtils.mkString[JsValue](_.toString())(v)
      //   ).to(_.toList)
    }
  }

  object MyPostgresProfile extends MyPostgresProfile

  import MyPostgresProfile.api._

  // This can be used instead of slick-joda-mapper library
  // implicit def dateTime =
  //   MappedColumnType.base[DateTime, Timestamp](
  //     dt => new Timestamp(dt.getMillis),
  //     ts => new DateTime(ts.getTime)
  //   )

  implicit def dateTimeList =
    MappedColumnType.base[List[DateTime], List[Timestamp]](
      _.map(dt => new Timestamp(dt.getMillis)),
      _.map(ts => new DateTime(ts.getTime))
    )

  case class Record(id: Int, name: String, friends: List[Int], registered: DateTime, visits: List[DateTime])

  class RecordTable(tag: Tag) extends Table[Record](tag, Some("public"), "records") {
    def id: Rep[Int] = column[Int]("id", O.PrimaryKey, O.AutoInc)
    def name: Rep[String] = column[String]("name")
    def friends: Rep[List[Int]] = column[List[Int]]("friends")
    def registered: Rep[DateTime] = column[DateTime]("registered")
    def visits: Rep[List[DateTime]] = column[List[DateTime]]("visits")
    def * : ProvenShape[Record] = (id, name, friends, registered, visits) <> (Record.tupled, Record.unapply)
  }

  val records: TableQuery[RecordTable] = TableQuery[RecordTable]
}
build.sbt
name := "slickdemo"
version := "0.1"
scalaVersion := "2.12.3"
libraryDependencies += "com.typesafe.slick" %% "slick" % "3.2.1"
libraryDependencies += "org.slf4j" % "slf4j-nop" % "1.7.25"
libraryDependencies += "com.typesafe.slick" %% "slick-hikaricp" % "3.2.1"
libraryDependencies += "org.postgresql" % "postgresql" % "42.1.4"
libraryDependencies += "com.github.tminglei" %% "slick-pg" % "0.15.3"
libraryDependencies += "joda-time" % "joda-time" % "2.9.9"
libraryDependencies += "org.joda" % "joda-convert" % "1.9.2"
libraryDependencies += "com.github.tototoshi" % "slick-joda-mapper_2.12" % "2.3.0"
Based on this answer and the documentation.
2. Alternatively you can add
implicit val dateTimeArrayTypeMapper =
  new AdvancedArrayJdbcType[DateTime]("timestamp",
    (s) => utils.SimpleArrayUtils.fromString[DateTime](DateTime.parse)(s).orNull,
    (v) => utils.SimpleArrayUtils.mkString[DateTime](_.toString)(v)
  ).to(_.toList)
after strListTypeMapper and playJsonArrayTypeMapper.
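Either way, a quick usage sketch against the RecordTable above (the RecordDemo object, the recordsdb config key and the sample values are all made up for illustration):

import org.joda.time.DateTime
import scala.concurrent.Await
import scala.concurrent.ExecutionContext.Implicits.global
import scala.concurrent.duration._
import App._
import App.MyPostgresProfile.api._

object RecordDemo {
  def main(args: Array[String]): Unit = {
    val db = Database.forConfig("recordsdb") // hypothetical config key
    val now = DateTime.now()
    // insert one row whose visits column is a List[DateTime], then read it back
    val action = for {
      _    <- records += Record(0, "alice", List(1, 2), now, List(now, now.minusDays(1)))
      rows <- records.filter(_.name === "alice").result
    } yield rows
    println(Await.result(db.run(action), 1.minute))
  }
}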

java.lang.ClassNotFoundException: org.apache.spark.sql.DataFrame error when running Scala MongoDB connector

I am trying to run a Scala example with SBT to read data from MongoDB. I am getting this error whenever I try to access the data read from Mongo into the RDD.
Exception in thread "dag-scheduler-event-loop" java.lang.NoClassDefFoundError: org/apache/spark/sql/DataFrame
at java.lang.Class.getDeclaredMethods0(Native Method)
at java.lang.Class.privateGetDeclaredMethods(Class.java:2701)
at java.lang.Class.getDeclaredMethod(Class.java:2128)
at java.io.ObjectStreamClass.getPrivateMethod(ObjectStreamClass.java:1431)
at java.io.ObjectStreamClass.access$1700(ObjectStreamClass.java:72)
at java.io.ObjectStreamClass$2.run(ObjectStreamClass.java:494)
at java.io.ObjectStreamClass$2.run(ObjectStreamClass.java:468)
at java.security.AccessController.doPrivileged(Native Method)
at java.io.ObjectStreamClass.<init>(ObjectStreamClass.java:468)
at java.io.ObjectStreamClass.lookup(ObjectStreamClass.java:365)
at java.io.ObjectOutputStream.writeObject0(ObjectOutputStream.java:1134)
at java.io.ObjectOutputStream.defaultWriteFields(ObjectOutputStream.java:1548)
at java.io.ObjectOutputStream.writeSerialData(ObjectOutputStream.java:1509)
I have imported DataFrame explicitly, even though it is not used in my code. Can anyone help with this issue?
My code:
package stream

import org.apache.spark._
import org.apache.spark.SparkContext._
import com.mongodb.spark._
import com.mongodb.spark.config._
import com.mongodb.spark.rdd.MongoRDD
import org.bson.Document
import org.apache.spark.sql.SQLContext
import org.apache.spark.sql.DataFrame

object SpaceWalk {
  def main(args: Array[String]) {
    val sparkConf = new SparkConf().setAppName("SpaceWalk")
      .setMaster("local[*]")
      .set("spark.mongodb.input.uri", "mongodb://127.0.0.1/nasa.eva")
      .set("spark.mongodb.output.uri", "mongodb://127.0.0.1/nasa.astronautTotals")
    val sc = new SparkContext(sparkConf)
    val rdd = sc.loadFromMongoDB()

    def breakoutCrew(document: Document): List[(String, Int)] = {
      println("INPUT" + document.get("Duration").asInstanceOf[String])
      var minutes = 0;
      val timeString = document.get("Duration").asInstanceOf[String]
      if (timeString != null && !timeString.isEmpty) {
        val time = document.get("Duration").asInstanceOf[String].split(":")
        minutes = time(0).toInt * 60 + time(1).toInt
      }
      import scala.util.matching.Regex
      val pattern = new Regex("(\\w+\\s\\w+)")
      val names = pattern findAllIn document.get("Crew").asInstanceOf[String]
      var tuples: List[(String, Int)] = List()
      for (name <- names) { tuples = tuples :+ ((name, minutes)) }
      return tuples
    }

    val logs = rdd.flatMap(breakoutCrew).reduceByKey((m1: Int, m2: Int) => (m1 + m2))
    //logs.foreach(println)

    def mapToDocument(tuple: (String, Int)): Document = {
      val doc = new Document();
      doc.put("name", tuple._1)
      doc.put("minutes", tuple._2)
      return doc
    }

    val writeConf = WriteConfig(sc)
    val writeConfig = WriteConfig(Map("collection" -> "astronautTotals", "writeConcern.w" -> "majority", "db" -> "nasa"), Some(writeConf))
    logs.map(mapToDocument).saveToMongoDB(writeConfig)

    import org.apache.spark.sql.SQLContext
    import com.mongodb.spark.sql._
    import org.apache.spark.sql.DataFrame

    // load the first dataframe "EVAs"
    val sqlContext = new SQLContext(sc);
    import sqlContext.implicits._
    val evadf = sqlContext.read.mongo()
    evadf.printSchema()
    evadf.registerTempTable("evas")

    // load the 2nd dataframe "astronautTotals"
    val astronautDF = sqlContext.read.option("collection", "astronautTotals").mongo[astronautTotal]()
    astronautDF.printSchema()
    astronautDF.registerTempTable("astronautTotals")

    sqlContext.sql("SELECT astronautTotals.name, astronautTotals.minutes FROM astronautTotals").show()
    sqlContext.sql("SELECT astronautTotals.name, astronautTotals.minutes, evas.Vehicle, evas.Duration FROM " +
      "astronautTotals JOIN evas ON astronautTotals.name LIKE evas.Crew").show()
  }
}

case class astronautTotal(name: String, minutes: Integer)
This is my sbt file:
name := "Project"
version := "1.0"
scalaVersion := "2.11.7"
libraryDependencies += "org.apache.spark" %% "spark-core" % "2.0.0"
libraryDependencies += "org.apache.spark" %% "spark-streaming" % "2.0.0"
libraryDependencies += "org.apache.spark" %% "spark-sql" % "2.0.0"
//libraryDependencies += "org.apache.spark" %% "spark-streaming-twitter" % "1.2.1"
libraryDependencies += "org.apache.bahir" %% "spark-streaming-twitter" % "2.0.0"
libraryDependencies += "org.mongodb.spark" %% "mongo-spark-connector" % "0.1"
addCommandAlias("c1", "run-main stream.SaveTweets")
addCommandAlias("c2", "run-main stream.SpaceWalk")
outputStrategy := Some(StdoutOutput)
//outputStrategy := Some(LoggedOutput(log: Logger))
fork in run := true
This error message is because you are using an incompatible library that only supports Spark 1.x. You should use mongo-spark-connector 2.0.0+ instead. See: https://docs.mongodb.com/spark-connector/v2.0/
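Concretely, that means bumping the connector dependency in build.sbt to a 2.0.x release (2.0.0 is used here as an example) so that it matches the Spark 2.0.0 artifacts already listed:

libraryDependencies += "org.mongodb.spark" %% "mongo-spark-connector" % "2.0.0"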

NoClassDefFoundError: slick/profile/RelationalProfile$SimpleQL

I am able to connect by using Play's db.default configuration:
db.default.driver="com.microsoft.sqlserver.jdbc.SQLServerDriver"
db.default.url="jdbc:sqlserver://my_host.database.windows.net:1433;database=my_db;"
db.default.username="username"
db.default.password="password"
But when I try with Slick:
slick.dbs.default {
  driver = "com.typesafe.slick.driver.ms.SQLServerDriver$"
  db {
    driver = "com.microsoft.sqlserver.jdbc.SQLServerDriver"
    url = "jdbc:sqlserver://my_host.database.windows.net:1433;database=my_db;"
    user = "username"
    password = "password"
  }
}
I get:
ProvisionException: Unable to provision, see the following errors:
1) Error injecting constructor, java.lang.NoClassDefFoundError: slick/profile/RelationalProfile$SimpleQL
Which points to my DAO:
import javax.inject.Inject
import play.api.db.slick.DatabaseConfigProvider

class UsersDAO @Inject() (override protected val dbConfigProvider: DatabaseConfigProvider) extends Tables(dbConfigProvider) { // <---- This line
  ...
}
Here is my Tables.scala file:
import javax.inject.Inject
import models.User
import play.api.db.slick.{DatabaseConfigProvider, HasDatabaseConfigProvider}
import slick.driver.JdbcProfile

class Tables @Inject() (protected val dbConfigProvider: DatabaseConfigProvider) extends HasDatabaseConfigProvider[JdbcProfile] {
  import driver.api._

  protected val users = TableQuery[Users]

  protected class Users(tag: Tag) extends Table[User](tag, "users") {
    def id = column[String]("id", O.PrimaryKey)
    def firstName = column[String]("first_name")
    def lastName = column[String]("last_name")
    def * = (id, firstName, lastName) <> ((User.apply _).tupled, User.unapply _)
  }
}
Here are the relevant lines in my build.sbt file:
libraryDependencies ++= Seq(
  ...
  "com.typesafe.play" % "play-slick_2.11" % "2.0.2",
  "com.typesafe.slick" %% "slick-extensions" % "3.0.0",
  ...
)
resolvers += "Typesafe Releases" at "http://repo.typesafe.com/typesafe/maven-releases/"
I am unable to find anything about SimpleQL or why it is missing at runtime.
play-slick 2.0.2 uses Slick 3.1.0, so you have to match that version in slick-extensions:
libraryDependencies ++= Seq(
  ...
  "com.typesafe.play" % "play-slick_2.11" % "2.0.2",
  "com.typesafe.slick" %% "slick-extensions" % "3.1.0",
  ...
)