Slick code generator with Postgres in Play Framework

I'm new to Scala and Slick.
I'm trying to use the code generator example from https://github.com/slick/slick-codegen-example and change it to use the Postgres driver.
Here is my code in my Build.scala file:
import sbt._
import Keys._
import Tests._

object myBuild extends Build {
  val slickVersion = "3.0.2"

  lazy val mainProject = Project(
    id = "main",
    base = file("."),
    settings = Project.defaultSettings ++ Seq(
      scalaVersion := "2.11.6",
      libraryDependencies ++= List(
        "com.typesafe.slick" %% "slick" % slickVersion,
        "com.typesafe.slick" %% "slick-codegen" % slickVersion,
        "org.slf4j" % "slf4j-nop" % "1.7.12",
        "org.postgresql" % "postgresql" % "9.4-1201-jdbc41"
      ),
      slick <<= slickCodeGenTask, // register manual sbt command
      sourceGenerators in Compile <+= slickCodeGenTask // register automatic code generation on every compile; remove for manual use only
    )
  )

  // code generation task
  lazy val slick = TaskKey[Seq[File]]("gen-tables")

  lazy val slickCodeGenTask =
    (sourceManaged, dependencyClasspath in Compile, runner in Compile, streams) map { (dir, cp, r, s) =>
      val outputDir = (dir / "slick").getPath // place generated files in sbt's managed sources folder
      val url = "jdbc:postgresql://localhost:5432/db" // connection info
      val jdbcDriver = "org.postgresql.Driver"
      val slickDriver = "slick.driver.PostgresDriver"
      val pkg = "dao"
      val user = "postgres"
      val password = "pass"
      toError(r.run("slick.codegen.SourceCodeGenerator", cp.files, Array(slickDriver, jdbcDriver, url, outputDir, pkg, user, password), s.log))
      val fname = outputDir + "/dao/Tables.scala"
      Seq(file(fname))
    }
}
The only things I changed were the drivers and the login/password. As I understand it, this should generate code each time I run
activator run
but it doesn't generate any code. Also, how can it be run manually?

You can add this to your build.sbt file:
import com.typesafe.config.ConfigFactory

val conf = ConfigFactory.parseFile(new File("conf/application.conf")).resolve()

slick <<= slickCodeGenTask

lazy val slick = TaskKey[Seq[File]]("gen-tables")

lazy val slickCodeGenTask = (sourceManaged, dependencyClasspath in Compile, runner in Compile, streams) map { (dir, cp, r, s) =>
  val outputDir = (dir / "slick").getPath
  val url = conf.getString("slick.dbs.default.db.url")
  val jdbcDriver = conf.getString("slick.dbs.default.db.driver")
  val slickDriver = conf.getString("slick.dbs.default.driver").dropRight(1) // drop the trailing '$' of Play's driver setting
  val pkg = "test"
  val user = conf.getString("slick.dbs.default.db.user")
  val password = conf.getString("slick.dbs.default.db.password")
  toError(r.run("slick.codegen.SourceCodeGenerator", cp.files, Array(slickDriver, jdbcDriver, url, outputDir, pkg, user, password), s.log))
  val fname = outputDir + s"/$pkg/Tables.scala"
  Seq(file(fname))
}
And run it with activator gen-tables
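For reference, the keys read above are the standard Play-Slick settings. A minimal sketch of a matching conf/application.conf (the values are placeholders for your own database) could look like:

slick.dbs.default.driver = "slick.driver.PostgresDriver$"
slick.dbs.default.db.driver = "org.postgresql.Driver"
slick.dbs.default.db.url = "jdbc:postgresql://localhost:5432/db"
slick.dbs.default.db.user = "postgres"
slick.dbs.default.db.password = "pass"

The dropRight(1) strips the trailing $ from the slick.dbs.default.driver value, since Play's config names the driver object while the code generator expects the plain class name.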

Related

Trying to read a file from S3 with Flink in the IDE: Class org.apache.hadoop.fs.s3a.S3AFileSystem not found

I am trying to read a file from S3 using Flink from IntelliJ and I get the following exception:
Caused by: java.lang.ClassNotFoundException: Class org.apache.hadoop.fs.s3a.S3AFileSystem not found
This is how my code looks:
import org.apache.flink.api.scala.createTypeInformation
import org.apache.flink.streaming.api.functions.source.SourceFunction
import org.apache.flink.streaming.api.scala.StreamExecutionEnvironment
import org.apache.hadoop.conf.Configuration
import org.apache.hadoop.fs.Path
import org.apache.parquet.column.page.PageReadStore
import org.apache.parquet.example.data.simple.convert.GroupRecordConverter
import org.apache.parquet.hadoop.ParquetFileReader
import org.apache.parquet.hadoop.util.HadoopInputFile
import org.apache.parquet.io.ColumnIOFactory

class ParquetSourceFunction extends SourceFunction[String] {
  override def run(ctx: SourceFunction.SourceContext[String]): Unit = {
    val inputPath = "s3a://path-to-bucket/"
    val outputPath = "s3a://path-to-output-bucket/"
    val conf = new Configuration()
    conf.set("fs.s3a.impl", "org.apache.hadoop.fs.s3a.S3AFileSystem")
    val readFooter = ParquetFileReader.open(HadoopInputFile.fromPath(new Path(inputPath), conf))
    val metadata = readFooter.getFileMetaData
    val schema = metadata.getSchema
    val parquetFileReader = new ParquetFileReader(conf, metadata, new Path(inputPath), readFooter.getRowGroups, schema.getColumns)
    // val parquetFileReader2 = new ParquetFileReader(new Path(inputPath), ParquetReadOptions)
    var pages: PageReadStore = null
    try {
      while ({ pages = parquetFileReader.readNextRowGroup; pages != null }) {
        val rows = pages.getRowCount
        val columnIO = new ColumnIOFactory().getColumnIO(schema)
        val recordReader = columnIO.getRecordReader(pages, new GroupRecordConverter(schema))
        (0L until rows).foreach { _ =>
          val group = recordReader.read()
          val myString = group.getString("field_name", 0)
          ctx.collect(myString)
        }
      }
    }
  }

  override def cancel(): Unit = ???
}

object Job {
  def main(args: Array[String]): Unit = {
    // set up the execution environment
    lazy val env = StreamExecutionEnvironment.getExecutionEnvironment
    lazy val stream = env.addSource(new ParquetSourceFunction)
    stream.print()
    env.execute()
  }
}
sbt dependencies:
val flinkVersion = "1.12.1"

val flinkDependencies = Seq(
  "org.apache.flink" %% "flink-clients" % flinkVersion, // % "provided"
  "org.apache.flink" %% "flink-scala" % flinkVersion, // % "provided"
  "org.apache.flink" %% "flink-streaming-scala" % flinkVersion, // % "provided"
  "org.apache.flink" %% "flink-parquet" % flinkVersion)

lazy val root = (project in file(".")).
  settings(
    libraryDependencies ++= flinkDependencies,
    libraryDependencies += "org.apache.hadoop" % "hadoop-common" % "3.3.0",
    libraryDependencies += "org.apache.parquet" % "parquet-hadoop" % "1.11.1",
    libraryDependencies += "org.apache.flink" %% "flink-table-planner-blink" % "1.12.1" // % "provided"
  )
S3 is only supported by adding the respective flink-s3-fs-hadoop JAR to your plugins folder, as described in the plugins docs. For a local IDE setup, the root directory that should contain the plugins dir is the working directory by default; you can override that with the env var FLINK_PLUGINS_DIR.
To get flink-s3-fs-hadoop into the plugins folder, I'm guessing some sbt glue is necessary (or you do it once manually). In Gradle, I'd define a plugin scope and copy the jars into the plugins dir in a custom task.
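As a sketch of that sbt glue (untested; the task name and plugin directory layout are my own assumptions), something like this could pull the jar off the dependency classpath and copy it into a local plugins dir that FLINK_PLUGINS_DIR can point at:

// build.sbt sketch: fetch flink-s3-fs-hadoop and copy it into ./plugins/s3-fs-hadoop
libraryDependencies += "org.apache.flink" % "flink-s3-fs-hadoop" % flinkVersion

lazy val copyS3Plugin = taskKey[Unit]("copy flink-s3-fs-hadoop into the local plugins dir")

copyS3Plugin := {
  val pluginDir = baseDirectory.value / "plugins" / "s3-fs-hadoop"
  IO.createDirectory(pluginDir)
  (dependencyClasspath in Compile).value.files
    .filter(_.getName.startsWith("flink-s3-fs-hadoop"))
    .foreach(jar => IO.copyFile(jar, pluginDir / jar.getName))
}

Run copyS3Plugin once, then set FLINK_PLUGINS_DIR to the generated plugins directory before launching from the IDE.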

PlaySpec not found in IntelliJ

Below is a Scala test of a websocket:
import java.util.function.Consumer

import play.shaded.ahc.org.asynchttpclient.AsyncHttpClient
import play.api.inject.guice.GuiceApplicationBuilder
import play.api.test.{Helpers, TestServer, WsTestClient}

import scala.compat.java8.FutureConverters
import scala.concurrent.Await
import scala.concurrent.duration._

import org.scalatestplus.play._

class SocketTest extends PlaySpec with ScalaFutures {
  "HomeController" should {
    "reject a websocket flow if the origin is set incorrectly" in WsTestClient.withClient { client =>
      // Pick a non-standard port that will fail the (somewhat contrived) origin check...
      lazy val port: Int = 31337
      val app = new GuiceApplicationBuilder().build()
      Helpers.running(TestServer(port, app)) {
        val myPublicAddress = s"localhost:$port"
        val serverURL = s"ws://$myPublicAddress/ws"
        val asyncHttpClient: AsyncHttpClient = client.underlying[AsyncHttpClient]
        val webSocketClient = new WebSocketClient(asyncHttpClient)
        try {
          val origin = "ws://example.com/ws"
          val consumer: Consumer[String] = new Consumer[String] {
            override def accept(message: String): Unit = println(message)
          }
          val listener = new WebSocketClient.LoggingListener(consumer)
          val completionStage = webSocketClient.call(serverURL, origin, listener)
          val f = FutureConverters.toScala(completionStage)
          Await.result(f, atMost = 1000.millis)
          listener.getThrowable mustBe a[IllegalStateException]
        } catch {
          case e: IllegalStateException =>
            e mustBe an[IllegalStateException]
          case e: java.util.concurrent.ExecutionException =>
            val foo = e.getCause
            foo mustBe an[IllegalStateException]
        }
      }
    }
  }
}
But compilation fails on the line import org.scalatestplus.play._ with the error:
Cannot resolve symbol scalatestplus
Following https://www.playframework.com/documentation/2.8.x/ScalaTestingWithScalaTest, I have added scalatest and play to the build:
build.sbt:
name := "testproject"
version := "1.0"
lazy val `testproject` = (project in file(".")).enablePlugins(PlayScala)
resolvers += "scalaz-bintray" at "https://dl.bintray.com/scalaz/releases"
resolvers += "Akka Snapshot Repository" at "https://repo.akka.io/snapshots/"
scalaVersion := "2.12.2"
libraryDependencies ++= Seq( jdbc , ehcache , ws , guice , specs2 % Test)
// https://mvnrepository.com/artifact/com.typesafe.scala-logging/scala-logging
libraryDependencies += "com.typesafe.scala-logging" %% "scala-logging" % "3.9.2"
libraryDependencies ++= Seq(
"org.scalatestplus.play" %% "scalatestplus-play" % "3.0.0" % "test"
)
unmanagedResourceDirectories in Test <+= baseDirectory ( _ /"target/web/public/test" )
I've tried rebuilding the project and module via IntelliJ's "Build" option, and via the build action offered when right-clicking build.sbt, but the import is still not found.
Running sbt dist from IntelliJ's sbt shell, then File -> Invalidate Caches with a restart of IntelliJ, seems to fix the issue.
(Screenshot: Invalidate caches)
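If invalidating caches doesn't help, one more thing worth checking (my own suggestion, not part of the original answer) is that the scalatestplus-play version lines up with the Play version in use:

// rough compatibility: scalatestplus-play 3.x ~ Play 2.6, 4.x ~ Play 2.7, 5.x ~ Play 2.8
libraryDependencies += "org.scalatestplus.play" %% "scalatestplus-play" % "5.1.0" % Test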

How to run the main project in a multi-project Scala build

I have the following Build code for a multi-project setup:
import sbt._
import Keys._
import com.typesafe.sbt.SbtScalariform._
import scalariform.formatter.preferences._
import sbtunidoc.Plugin._

object DirectorynameBuild extends Build {
  addCommandAlias("rebuild", ";clean; compile; package")

  lazy val BoundedContext = Project(id = "BoundedContext",
    base = file("."),
    settings = commonSettings).aggregate(
      persistence,
      core,
      biz,
      protocol,
      transport)

  lazy val persistence = Project(id = "BoundedContext-persistence",
    settings = commonSettings,
    base = file("persistence")) dependsOn (protocol)

  lazy val core = Project(id = "BoundedContext-core",
    settings = commonSettings,
    base = file("core")) dependsOn (
      persistence,
      biz,
      protocol)

  lazy val biz = Project(id = "BoundedContext-biz",
    settings = commonSettings,
    base = file("biz")) dependsOn (protocol)

  lazy val protocol = Project(id = "BoundedContext-protocol",
    settings = commonSettings,
    base = file("protocol"))

  lazy val transport = Project(id = "BoundedContext-transport",
    settings = commonSettings,
    base = file("transport")) dependsOn (protocol)

  val ORGANIZATION = "my.first.ddd.app"
  val PROJECT_NAME = "directoryname"
  val PROJECT_VERSION = "0.1-SNAPSHOT"
  val SCALA_VERSION = "2.11.4"

  val TYPESAFE_CONFIG_VERSION = "1.2.1"
  val SCALATEST_VERSION = "2.2.2"
  val SLF4J_VERSION = "1.7.9"
  val LOGBACK_VERSION = "1.1.2"

  lazy val commonSettings = Project.defaultSettings ++
    basicSettings ++
    formatSettings ++
    net.virtualvoid.sbt.graph.Plugin.graphSettings

  lazy val basicSettings = Seq(
    version := PROJECT_VERSION,
    organization := ORGANIZATION,
    scalaVersion := SCALA_VERSION,
    libraryDependencies ++= Seq(
      "com.typesafe" % "config" % TYPESAFE_CONFIG_VERSION,
      "org.slf4j" % "slf4j-api" % SLF4J_VERSION,
      "ch.qos.logback" % "logback-classic" % LOGBACK_VERSION % "runtime",
      "org.scalatest" %% "scalatest" % SCALATEST_VERSION % "test"
    ),
    scalacOptions in Compile ++= Seq(
      "-unchecked",
      "-deprecation",
      "-feature"
    ),
    /*javaOptions += "-Djava.library.path=%s:%s".format(
      sys.props("java.library.path")
    ),*/
    fork in run := true,
    fork in Test := true,
    parallelExecution in Test := false
  )

  lazy val formatSettings = scalariformSettings ++ Seq(
    ScalariformKeys.preferences := FormattingPreferences()
      .setPreference(IndentWithTabs, false)
      .setPreference(IndentSpaces, 2)
      .setPreference(AlignParameters, false)
      .setPreference(DoubleIndentClassDeclaration, true)
      .setPreference(MultilineScaladocCommentsStartOnFirstLine, false)
      .setPreference(PlaceScaladocAsterisksBeneathSecondAsterisk, true)
      .setPreference(PreserveDanglingCloseParenthesis, true)
      .setPreference(CompactControlReadability, true)
      .setPreference(AlignSingleLineCaseStatements, true)
      .setPreference(PreserveSpaceBeforeArguments, true)
      .setPreference(SpaceBeforeColon, false)
      .setPreference(SpaceInsideBrackets, false)
      .setPreference(SpaceInsideParentheses, false)
      .setPreference(SpacesWithinPatternBinders, true)
      .setPreference(FormatXml, true)
  )

  //credentials += Credentials(Path.userHome / ".ivy2" / ".credentials")
}
I have not defined src/main/scala in the root folder.
I can run an individual project using sbt "project ****" run.
If I just run sbt run, I get "No Main Class Detected".
I can see the Lagom framework easily implements the process I want, but I couldn't figure out how.
What should I do to make the main project run without a src/main/scala/***App.scala?
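One common approach (a sketch on my part, assuming the runnable main class lives in the core subproject) is to have the root aggregate delegate its run task to that subproject, so a plain sbt run works without a root src/main/scala:

// Sketch: delegate the root project's `run` to core's main class (sbt 0.13 syntax).
lazy val BoundedContext = Project(id = "BoundedContext",
  base = file("."),
  settings = commonSettings ++ Seq(
    run in Compile := (run in (core, Compile)).evaluated
  )).aggregate(persistence, core, biz, protocol, transport)

Aggregation only forwards commands to the subprojects; it does not give the root project a main class of its own, hence the "No Main Class Detected" error.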

Read config file while generating Slick code via SBT

I am reading this sample, which shows how to generate source code using slick-codegen:
https://github.com/slick/slick-codegen-example/blob/master/build.sbt
While this sample is good, I want to modify it so that it reads the database config from application.conf using Typesafe Config.
Otherwise I will have to replicate the database connection configuration both here and in the application.conf file.
Does anyone know how this sample can be modified to read the config values from application.conf via Typesafe Config?
Edit: Based on the suggestion below, I tried the following.
I created a file called build.sbt in the project folder:
libraryDependencies += "com.typesafe" % "config" % "1.3.1"
Then I modified my main build.sbt file (in the project root) as follows:
val slickVersion = "3.1.1"
lazy val mainProject = Project(
id = "FooBar",
base=file("."),
settings = Defaults.coreDefaultSettings ++ Seq(
scalaVersion := "2.11.8",
libraryDependencies ++= Seq(
"com.typesafe.slick" %% "slick" % slickVersion,
"com.typesafe.slick" %% "slick-codegen" % slickVersion,
"mysql" % "mysql-connector-java" % "5.1.35",
"com.typesafe" % "config" % "1.3.1"
),
myConf := {
ConfigFactory.parseFile(new File("src/main/resources/application.conf"))
},
slick <<= slickCodeGenTask,
sourceGenerators in Compile <+= slickCodeGenTask
)
)
lazy val slick = TaskKey[Seq[File]]("gen-tables")
lazy val myConf = settingKey[Config]("The application properties")
lazy val slickCodeGenTask = (sourceManaged, dependencyClasspath in Compile, runner in Compile, streams) map {(dir, cp, r, s) =>
val outputDir = (dir / "slick").getPath
val username = myConf.value.getString("mysql.username")
val password = myConf.value.getString("mysql.password")
val port = myConf.value.getInt("mysql.port")
val db = myConf.value.getString("mysql.db")
val server = myConf.value.getString("mysql.server")
val url = s"jdbc:mysql://$server:$port/$db?username=$username&password=$password"
val jdbcDriver = myConf.value.getString("mysql.jdbcDriver")
val slickDriver = myConf.value.getString("mysql.slickDriver")
val pkg = "sql"
val fname = outputDir + "/db/Tables.scala"
toError(r.run("slick.codegen.SourceCodeGenerator", cp.files, Array(slickDriver, jdbcDriver, url, outputDir, pkg), s.log))
Seq(file(fname))
}
But it cannot resolve the Config and the ConfigFactory classes.
Declare a dependency on Typesafe Config in project/build.sbt:
libraryDependencies += "com.typesafe" % "config" % "1.3.1"
And define a setting holding your config file in build.sbt:
lazy val myConf = settingKey[Config]("The application properties")
myConf := {
  ConfigFactory.parseFile(new File("src/main/resources/application.conf"))
}
Now you can use myConf.value.getString("xyz") to get hold of your configuration values in other tasks or settings.
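One detail the "cannot resolve" error in the edit above points to: the dependency in project/build.sbt only puts Typesafe Config on the build definition's classpath; build.sbt itself still needs an import for the classes to resolve:

// at the top of build.sbt (sbt already aliases java.io.File, so only this is needed)
import com.typesafe.config.{Config, ConfigFactory}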

Adding a task to an sbt 0.13.x build.sbt

I have added this to build.sbt:
libraryDependencies += "com.typesafe.slick" %% "slick-codegen" % "2.1.0"

lazy val slickGenerate = TaskKey[Seq[File]]("slick code generation")

slickGenerate <<= slickGenerateTask

lazy val slickGenerateTask = {
  (sourceManaged in Compile, dependencyClasspath in Compile, runner in Compile, streams) map { (dir, cp, r, s) =>
    val dbName = "dbname"
    val userName = "user"
    val password = "password"
    val url = s"jdbc:mysql://server:port/$dbName"
    val jdbcDriver = "com.mysql.jdbc.Driver"
    val slickDriver = "scala.slick.driver.MySQLDriver"
    val targetPackageName = "models"
    val outputDir = (dir / dbName).getPath // place generated files in sbt's managed sources folder
    val fname = outputDir + s"/$targetPackageName/Tables.scala"
    println(s"\nauto-generating slick source for database schema at $url...")
    println(s"output source file file: file://$fname\n")
    r.run("scala.slick.codegen.SourceCodeGenerator", cp.files, Array(slickDriver, jdbcDriver, url, outputDir, targetPackageName, userName, password), s.log)
    Seq(file(fname))
  }
}
The task's code itself isn't very exciting; it just needs to create an auto-generated Scala source file. The problem is that sbt starts fine, yet this new task is evidently not recognized by sbt and cannot be run from the sbt prompt. I have also had very little luck with the := syntax for task definitions; the existing documentation has been confounding.
How can this new task be made available at the sbt prompt?
This works
libraryDependencies += "com.typesafe.slick" %% "slick-codegen" % "2.1.0"
lazy val slickGenerate = taskKey[Seq[File]]("slick code generation")
slickGenerate := {
val dbName = "dbname"
val userName = "user"
val password = "password"
val url = s"jdbc:mysql://server:port/$dbName"
val jdbcDriver = "com.mysql.jdbc.Driver"
val slickDriver = "scala.slick.driver.MySQLDriver"
val targetPackageName = "models"
val outputDir = ((sourceManaged in Compile).value / dbName).getPath // place generated files in sbt's managed sources folder
val fname = outputDir + s"/$targetPackageName/Tables.scala"
println(s"\nauto-generating slick source for database schema at $url...")
println(s"output source file file: file://$fname\n")
(runner in Compile).value.run("scala.slick.codegen.SourceCodeGenerator", (dependencyClasspath in Compile).value.files, Array(slickDriver, jdbcDriver, url, outputDir, targetPackageName, userName, password), streams.value.log)
Seq(file(fname))
}
In sbt 0.13.x you don't need all that map boilerplate. Just access values directly, as in (runner in Compile).value; the macro does everything else for you.
> slickGenerate
[info] Updating {file:/Users/user/slick/}slick...
[info] Resolving org.fusesource.jansi#jansi;1.4 ...
[info] Done updating.
auto-generating slick source for database schema at jdbc:mysql://server:port/dbname...
output source file file: file:///Users/user/slick/target/scala-2.10/src_managed/main/dbname/models/Tables.scala
> help slickGenerate
slick code generation
Talking about <<=: your TaskKey is incorrect; see the definition:
def apply[T](label: String, description: String = "", ...): TaskKey[T]
The first parameter is the label, not the description. So the old <<= definition uses your "slick code generation" string as the label, while the new := style takes the command name from the code-given val (slickGenerate) and uses that string as the description. This looks strange and inconsistent, but that's a fact, and it's partially justified by backward compatibility.
So the correct old-style version is:
import sbt.Keys._

lazy val slickGenerate = TaskKey[Seq[File]]("slick-generate", "generate slick code")

slickGenerate <<= slickGenerateTask

def slickGenerateTask =
  (sourceManaged in Compile, dependencyClasspath in Compile, runner in Compile, streams) map { (dir, cp, r, s) =>
    ...
  }
It works the same way as before. Note that you have to invoke the task as slickGenerate, not slick-generate; the latter doesn't work with the help command.
By the way, you're using a bare build definition; you may want to switch to a multi-project .sbt definition, as recommended by the sbt docs.
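For illustration, the multi-project style mostly amounts to declaring an explicit root project in build.sbt (the project name here is made up; the slickGenerate key and its := definition from above can stay in the same file):

lazy val root = (project in file("."))
  .settings(
    name := "slick-codegen-build",
    libraryDependencies += "com.typesafe.slick" %% "slick-codegen" % "2.1.0"
  )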