Issues while submitting jars to spark cluster - scala

I am trying to create a basic Spark job in Scala using IntelliJ. With the code below, I build the project and create a jar using sbt assembly, then submit that jar to the Spark cluster along with the spark-cassandra-connector jar. My question is: how do I test my Scala code from IntelliJ without creating the jar every time?
Also, every time I change something in my build.sbt file, IntelliJ starts a background task that downloads the dependencies again, even though I have marked them as provided. How do I make that happen only once?
Code:
import org.apache.spark.SparkContext
import org.apache.spark.SparkConf
import org.apache.spark.sql.cassandra.CassandraSQLContext

object SimpleApp {
  def main(args: Array[String]) {
    val conf = new SparkConf(true).set("spark.cassandra.connection.host", "Cluster_IP")
    val sc = new SparkContext("spark://naresh-pc:7077", "test", conf)
    val csc = new CassandraSQLContext(sc)
    csc.setKeyspace("KEYSPACE_NAME")
    val rdd = csc.sql("Some_Query")
    rdd.collect().foreach(a => println(a))
  }
}
build.sbt:
name := "SparkCassandraDemo"

version := "1.0"

scalaVersion := "2.11.8"

libraryDependencies += "org.apache.spark" %% "spark-core" % "1.6.1" % "provided"
libraryDependencies += "com.datastax.spark" %% "spark-cassandra-connector" % "1.6.0-M1" % "provided"
libraryDependencies += "org.apache.spark" %% "spark-sql" % "1.6.1" % "provided"
Edited question:
I have implemented what Yuval Itzchakov suggested, but I am getting the error below.
FYI, earlier I used to submit the job as follows, after creating the jar with sbt assembly:
bin/spark-submit --class SimpleApp --master spark://naresh-pc:7077 --jars SOME_PATH/SparkCassandraDemo-assembly-1.0.jar SOME_PATH/spark-cassandra-connector-assembly-1.6.0-M1.jar
That command actually uses the spark-cassandra-connector assembly, so I guess the job is now not able to find that jar. How do I make it available to the code?
Error :
Exception in thread "main" org.apache.spark.sql.catalyst.errors.package$TreeNodeException: execute, tree:
Exchange rangepartitioning(cnt#0L ASC,200), None
+- ConvertToSafe
+- TungstenAggregate(key=[useragent#10], functions=[(count(if ((gid#12 = 1)) cookie#13 else null),mode=Final,isDistinct=false)], output=[cnt#0L,useragent#10])
+- TungstenExchange hashpartitioning(useragent#10,200), None
+- TungstenAggregate(key=[useragent#10], functions=[(count(if ((gid#12 = 1)) cookie#13 else null),mode=Partial,isDistinct=false)], output=[useragent#10,count#16L])
+- TungstenAggregate(key=[useragent#10,cookie#13,gid#12], functions=[], output=[useragent#10,cookie#13,gid#12])
+- TungstenExchange hashpartitioning(useragent#10,cookie#13,gid#12,200), None
+- TungstenAggregate(key=[useragent#10,cookie#13,gid#12], functions=[], output=[useragent#10,cookie#13,gid#12])
+- Expand [List(useragent#10, cookie#3, 1)], [useragent#10,cookie#13,gid#12]
+- Scan org.apache.spark.sql.cassandra.CassandraSourceRelation#5d1094[useragent#10,cookie#3]
at org.apache.spark.sql.catalyst.errors.package$.attachTree(package.scala:49)
at org.apache.spark.sql.execution.Exchange.doExecute(Exchange.scala:247)
at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$5.apply(SparkPlan.scala:132)
at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$5.apply(SparkPlan.scala:130)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:150)
at org.apache.spark.sql.execution.SparkPlan.execute(SparkPlan.scala:130)
at org.apache.spark.sql.execution.ConvertToUnsafe.doExecute(rowFormatConverters.scala:38)
at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$5.apply(SparkPlan.scala:132)
at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$5.apply(SparkPlan.scala:130)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:150)
at org.apache.spark.sql.execution.SparkPlan.execute(SparkPlan.scala:130)
at org.apache.spark.sql.execution.Sort.doExecute(Sort.scala:64)
at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$5.apply(SparkPlan.scala:132)
at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$5.apply(SparkPlan.scala:130)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:150)
at org.apache.spark.sql.execution.SparkPlan.execute(SparkPlan.scala:130)
at org.apache.spark.sql.execution.SparkPlan.executeCollect(SparkPlan.scala:166)
at org.apache.spark.sql.execution.SparkPlan.executeCollectPublic(SparkPlan.scala:174)
at org.apache.spark.sql.DataFrame$$anonfun$org$apache$spark$sql$DataFrame$$execute$1$1.apply(DataFrame.scala:1499)
at org.apache.spark.sql.DataFrame$$anonfun$org$apache$spark$sql$DataFrame$$execute$1$1.apply(DataFrame.scala:1499)
at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:56)
at org.apache.spark.sql.DataFrame.withNewExecutionId(DataFrame.scala:2086)
at org.apache.spark.sql.DataFrame.org$apache$spark$sql$DataFrame$$execute$1(DataFrame.scala:1498)
at org.apache.spark.sql.DataFrame$$anonfun$org$apache$spark$sql$DataFrame$$collect$1.apply(DataFrame.scala:1503)
at org.apache.spark.sql.DataFrame$$anonfun$org$apache$spark$sql$DataFrame$$collect$1.apply(DataFrame.scala:1503)
at org.apache.spark.sql.DataFrame.withCallback(DataFrame.scala:2099)
at org.apache.spark.sql.DataFrame.org$apache$spark$sql$DataFrame$$collect(DataFrame.scala:1503)
at org.apache.spark.sql.DataFrame.collect(DataFrame.scala:1480)
at SimpleApp$.main(SimpleApp.scala:17)
at SimpleApp.main(SimpleApp.scala)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at com.intellij.rt.execution.application.AppMain.main(AppMain.java:144)
Caused by: org.apache.spark.SparkException: Job aborted due to stage failure: Task 2 in stage 0.0 failed 4 times, most recent failure: Lost task 2.3 in stage 0.0 (TID 9, pratik-VirtualBox): java.lang.ClassNotFoundException: com.datastax.spark.connector.rdd.partitioner.CassandraPartition
at java.net.URLClassLoader.findClass(URLClassLoader.java:381)
at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
at java.lang.Class.forName0(Native Method)
at java.lang.Class.forName(Class.java:348)
at org.apache.spark.serializer.JavaDeserializationStream$$anon$1.resolveClass(JavaSerializer.scala:68)
at java.io.ObjectInputStream.readNonProxyDesc(ObjectInputStream.java:1613)
at java.io.ObjectInputStream.readClassDesc(ObjectInputStream.java:1518)
at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1774)
at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1351)
at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:2000)
at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:1924)
at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1801)
at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1351)
at java.io.ObjectInputStream.readObject(ObjectInputStream.java:371)
at org.apache.spark.serializer.JavaDeserializationStream.readObject(JavaSerializer.scala:76)
at org.apache.spark.serializer.JavaSerializerInstance.deserialize(JavaSerializer.scala:115)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:194)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Driver stacktrace:
at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$failJobAndIndependentStages(DAGScheduler.scala:1431)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1419)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1418)
at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:47)
at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:1418)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:799)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:799)
at scala.Option.foreach(Option.scala:236)
at org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:799)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.doOnReceive(DAGScheduler.scala:1640)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1599)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1588)
at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:48)
at org.apache.spark.scheduler.DAGScheduler.runJob(DAGScheduler.scala:620)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:1832)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:1845)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:1858)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:1929)
at org.apache.spark.rdd.RDD$$anonfun$collect$1.apply(RDD.scala:927)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:150)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:111)
at org.apache.spark.rdd.RDD.withScope(RDD.scala:316)
at org.apache.spark.rdd.RDD.collect(RDD.scala:926)
at org.apache.spark.RangePartitioner$.sketch(Partitioner.scala:264)
at org.apache.spark.RangePartitioner.<init>(Partitioner.scala:126)
at org.apache.spark.sql.execution.Exchange.prepareShuffleDependency(Exchange.scala:179)
at org.apache.spark.sql.execution.Exchange$$anonfun$doExecute$1.apply(Exchange.scala:254)
at org.apache.spark.sql.execution.Exchange$$anonfun$doExecute$1.apply(Exchange.scala:248)
at org.apache.spark.sql.catalyst.errors.package$.attachTree(package.scala:48)
... 34 more
Caused by: java.lang.ClassNotFoundException: com.datastax.spark.connector.rdd.partitioner.CassandraPartition
at java.net.URLClassLoader.findClass(URLClassLoader.java:381)
at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
at java.lang.Class.forName0(Native Method)
at java.lang.Class.forName(Class.java:348)
at org.apache.spark.serializer.JavaDeserializationStream$$anon$1.resolveClass(JavaSerializer.scala:68)
at java.io.ObjectInputStream.readNonProxyDesc(ObjectInputStream.java:1613)
at java.io.ObjectInputStream.readClassDesc(ObjectInputStream.java:1518)
at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1774)
at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1351)
at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:2000)
at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:1924)
at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1801)
at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1351)
at java.io.ObjectInputStream.readObject(ObjectInputStream.java:371)
at org.apache.spark.serializer.JavaDeserializationStream.readObject(JavaSerializer.scala:76)
at org.apache.spark.serializer.JavaSerializerInstance.deserialize(JavaSerializer.scala:115)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:194)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)

So, my question is how do I test my Scala code without creating the jar in IntelliJ
One way of achieving this is to create another module which doesn't use the provided scope in sbt, but instead compiles against the actual Spark jars, so that you can run and debug your code directly.
You start by creating an additional module in build.sbt:
name := "SparkCassandraDemo"
version := "1.0"
scalaVersion := "2.11.8"
val sparkDependencies = Seq(
"org.apache.spark" %% "spark-core" % "1.6.1",
"com.datastax.spark" %% "spark-cassandra-connector" % "1.6.0-M2",
"org.apache.spark".%%("spark-sql") % "1.6.1"
)
lazy val sparkDebugger = (project in file("spark-debugger"))
.settings(
libraryDependencies ++= sparkDependencies.map(_ % "compile")
)
libraryDependencies ++= sparkDependencies.map(_ % "provided")
After that, refresh your build.sbt file. You should now see a new module called spark-debugger on the left-hand side of IntelliJ.
Now, create a debug configuration in IntelliJ:
1. Go to Edit Configurations.
2. Create a new Application configuration.
3. Set its module to the newly created spark-debugger module.
4. Press Shift + Ctrl + F9 and select the newly created configuration.
5. Debug your code.
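For reference, here is a minimal sketch of an entry point that can be run straight from IntelliJ this way; the object name and the local[*] master are assumptions for local debugging and are not part of the original answer (when pointing at a remote standalone master instead, the connector classes must still be available on the executors):

import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.sql.cassandra.CassandraSQLContext

// Hypothetical local-debug entry point; the name and the master URL are assumptions.
object SimpleAppLocalDebug {
  def main(args: Array[String]) {
    val conf = new SparkConf(true)
      .set("spark.cassandra.connection.host", "Cluster_IP")
    // local[*] keeps execution inside the IDE process, so no assembly jar is needed.
    val sc = new SparkContext("local[*]", "test-debug", conf)
    val csc = new CassandraSQLContext(sc)
    csc.setKeyspace("KEYSPACE_NAME")
    csc.sql("Some_Query").collect().foreach(println)
  }
}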

Related

Flink Scala Job Runtime Error : java.lang.NoSuchMethodError

I am trying to run a simple Flink streaming job on AWS EMR. The purpose is very simple for now:
Consume data from Kafka in Flink.
Load it to another Kafka topic.
I am using the following dependencies:
scalaVersion := "2.11.8"
val flinkVersion = "1.11.1"
libraryDependencies ++= Seq(
  "org.apache.flink" %% "flink-scala" % flinkVersion,
  "org.apache.flink" %% "flink-streaming-scala" % flinkVersion,
  "org.apache.flink" %% "flink-connector-kafka" % flinkVersion
)
The Flink code that I am using is:
private val serdeSchema = new SimpleStringSchema

  val env = StreamExecutionEnvironment.getExecutionEnvironment
  val stream = env
    .addSource(createKafkaConsumer(kafkaInputTopic,
      kafkaBrokers, kafkaConfig("consumerGroupId").toString,
      kafkaConfig("defaultReset").toString))
  stream
    .map((s: String) => s)
    .addSink(createKafkaProducer(kafkaOutputTopic, kafkaBrokers))
  env.execute(jobConfig("jobName").toString)
}

def createKafkaProducer(kafkaTopic: String, kafkaBrokers: String): FlinkKafkaProducer[String] = {
  val producer = new FlinkKafkaProducer[String](kafkaBrokers, kafkaTopic, serdeSchema)
  producer
}

def createKafkaConsumer(kafkaInputTopic: String,
                        kafkaBrokers: String,
                        consumerGroup: String,
                        defaultReset: String): FlinkKafkaConsumer[String] = {
  val properties = new Properties()
  properties.setProperty("bootstrap.servers", kafkaBrokers)
  properties.setProperty("group.id", consumerGroup)
  properties.setProperty("enable.auto.commit", "false")
  properties.setProperty("auto.offset.reset", defaultReset)
  val consumer = new FlinkKafkaConsumer[String](kafkaInputTopic, serdeSchema, properties)
  consumer
}
I generate an assembly jar using sbt and use the following command to run the job on EMR:
/bin/flink run -c com.example.FlinkConsumer flink/target/scala-2.11/flink-assembly-0.1.jar
Below is the stack trace
Caused by: org.apache.flink.client.program.ProgramInvocationException: Job failed (JobID: c458520153e875811c46c386b9ec605e)
at org.apache.flink.client.deployment.ClusterClientJobClientAdapter.lambda$null$6(ClusterClientJobClientAdapter.java:112)
at java.util.concurrent.CompletableFuture.uniApply(CompletableFuture.java:616)
at java.util.concurrent.CompletableFuture$UniApply.tryFire(CompletableFuture.java:591)
at java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:488)
at java.util.concurrent.CompletableFuture.complete(CompletableFuture.java:1975)
at org.apache.flink.client.program.rest.RestClusterClient.lambda$pollResourceAsync$21(RestClusterClient.java:565)
at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:774)
at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:750)
at java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:488)
at java.util.concurrent.CompletableFuture.complete(CompletableFuture.java:1975)
at org.apache.flink.runtime.concurrent.FutureUtils.lambda$retryOperationWithDelay$8(FutureUtils.java:291)
at java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:774)
at java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:750)
at java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:488)
at java.util.concurrent.CompletableFuture.postFire(CompletableFuture.java:575)
at java.util.concurrent.CompletableFuture$UniCompose.tryFire(CompletableFuture.java:943)
at java.util.concurrent.CompletableFuture$Completion.run(CompletableFuture.java:456)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Caused by: org.apache.flink.runtime.client.JobExecutionException: Job execution failed.
at org.apache.flink.runtime.jobmaster.JobResult.toJobExecutionResult(JobResult.java:147)
at org.apache.flink.client.deployment.ClusterClientJobClientAdapter.lambda$null$6(ClusterClientJobClientAdapter.java:110)
... 19 more
Caused by: org.apache.flink.runtime.JobException: Recovery is suppressed by NoRestartBackoffTimeStrategy
at org.apache.flink.runtime.executiongraph.failover.flip1.ExecutionFailureHandler.handleFailure(ExecutionFailureHandler.java:110)
at org.apache.flink.runtime.executiongraph.failover.flip1.ExecutionFailureHandler.getFailureHandlingResult(ExecutionFailureHandler.java:76)
at org.apache.flink.runtime.scheduler.DefaultScheduler.handleTaskFailure(DefaultScheduler.java:192)
at org.apache.flink.runtime.scheduler.DefaultScheduler.maybeHandleTaskFailure(DefaultScheduler.java:186)
at org.apache.flink.runtime.scheduler.DefaultScheduler.updateTaskExecutionStateInternal(DefaultScheduler.java:180)
at org.apache.flink.runtime.scheduler.SchedulerBase.updateTaskExecutionState(SchedulerBase.java:484)
at org.apache.flink.runtime.jobmaster.JobMaster.updateTaskExecutionState(JobMaster.java:380)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.flink.runtime.rpc.akka.AkkaRpcActor.handleRpcInvocation(AkkaRpcActor.java:279)
at org.apache.flink.runtime.rpc.akka.AkkaRpcActor.handleRpcMessage(AkkaRpcActor.java:194)
at org.apache.flink.runtime.rpc.akka.FencedAkkaRpcActor.handleRpcMessage(FencedAkkaRpcActor.java:74)
at org.apache.flink.runtime.rpc.akka.AkkaRpcActor.handleMessage(AkkaRpcActor.java:152)
at akka.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:26)
at akka.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:21)
at scala.PartialFunction$class.applyOrElse(PartialFunction.scala:123)
at akka.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:21)
at scala.PartialFunction$OrElse.applyOrElse(PartialFunction.scala:170)
at scala.PartialFunction$OrElse.applyOrElse(PartialFunction.scala:171)
at scala.PartialFunction$OrElse.applyOrElse(PartialFunction.scala:171)
at akka.actor.Actor$class.aroundReceive(Actor.scala:517)
at akka.actor.AbstractActor.aroundReceive(AbstractActor.scala:225)
at akka.actor.ActorCell.receiveMessage(ActorCell.scala:592)
at akka.actor.ActorCell.invoke(ActorCell.scala:561)
at akka.dispatch.Mailbox.processMailbox(Mailbox.scala:258)
at akka.dispatch.Mailbox.run(Mailbox.scala:225)
at akka.dispatch.Mailbox.exec(Mailbox.scala:235)
at akka.dispatch.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
at akka.dispatch.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
at akka.dispatch.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
at akka.dispatch.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)
Caused by: java.lang.NoSuchMethodError: org.apache.flink.api.common.serialization.SerializationSchema.open(Lorg/apache/flink/api/common/serialization/SerializationSchema$InitializationContext;)V
at org.apache.flink.streaming.connectors.kafka.internals.KafkaSerializationSchemaWrapper.open(KafkaSerializationSchemaWrapper.java:61)
at org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer.open(FlinkKafkaProducer.java:808)
at org.apache.flink.api.common.functions.util.FunctionUtils.openFunction(FunctionUtils.java:36)
at org.apache.flink.streaming.api.operators.AbstractUdfStreamOperator.open(AbstractUdfStreamOperator.java:102)
at org.apache.flink.streaming.api.operators.StreamSink.open(StreamSink.java:48)
at org.apache.flink.streaming.runtime.tasks.StreamTask.initializeStateAndOpen(StreamTask.java:1007)
at org.apache.flink.streaming.runtime.tasks.StreamTask.lambda$beforeInvoke$0(StreamTask.java:454)
at org.apache.flink.streaming.runtime.tasks.StreamTaskActionExecutor$SynchronizedStreamTaskActionExecutor.runThrowing(StreamTaskActionExecutor.java:94)
at org.apache.flink.streaming.runtime.tasks.StreamTask.beforeInvoke(StreamTask.java:449)
at org.apache.flink.streaming.runtime.tasks.StreamTask.invoke(StreamTask.java:461)
at org.apache.flink.runtime.taskmanager.Task.doRun(Task.java:707)
at org.apache.flink.runtime.taskmanager.Task.run(Task.java:532)
at java.lang.Thread.run(Thread.java:748)
It looks like a version issue, but I have tried various versions and I don't see this open method anywhere; I think the serializer calls open during initialization and is unable to find it. Can someone please help? I am new to Flink.
If you're using EMR's Flink support, then most Flink libraries should be flagged as "provided" so that they're not in your jar, as they're on the classpath from the Flink installation that EMR is providing. You'll still need to explicitly include anything that's not provided by EMR (e.g. flink-connector-kafka).
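As a rough sketch of that build setup, using the versions from the question (the exact scoping is an assumption about what EMR provides, so verify against the cluster's Flink installation):

scalaVersion := "2.11.8"

val flinkVersion = "1.11.1"

libraryDependencies ++= Seq(
  // Core Flink libraries are already on EMR's classpath, so keep them out of the fat jar.
  "org.apache.flink" %% "flink-scala" % flinkVersion % "provided",
  "org.apache.flink" %% "flink-streaming-scala" % flinkVersion % "provided",
  // The Kafka connector is not provided by EMR, so it stays in the assembly.
  "org.apache.flink" %% "flink-connector-kafka" % flinkVersion
)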

Spark and Kafka integration - KafkaSourceProvider could not be instantiated

I'm working on an integration project of Kafka and Spark and I'm trying to read a Kafka topic using Spark 2.4.5, Scala 2.12.11 and Kafka 2.5.0.
My sbt file is:
name := "Test"
version := "1.0"
scalaVersion := "2.12.11"
libraryDependencies ++= Seq(
  "org.apache.spark" % "spark-sql_2.12" % "2.4.5",
  "org.apache.spark" % "spark-sql-kafka-0-10_2.12" % "2.4.5",
  "org.apache.spark" % "spark-streaming-kafka-0-10-assembly_2.12" % "2.4.5",
  "org.apache.kafka" % "kafka-clients" % "2.5.0"
)
My code is:
object Test {
  def main(args: Array[String]) = {
    import org.apache.spark.sql.SparkSession
    val spark = SparkSession
      .builder()
      .appName("SparkTest")
      .master("local[*]")
      .getOrCreate()
    import spark.implicits._
    val df = spark.readStream
      .format("kafka")
      .option("kafka.bootstrap.servers", "localhost:9092")
      .option("subscribe", "test")
      .option("startingOffsets", "earliest")
      .load()
    df.selectExpr("CAST(key AS STRING)", "CAST(value AS STRING)")
      .as[(String, String)]
    df.printSchema()
  }
}
After creating the topic on Kafka and starting ZooKeeper and Kafka itself, I launch the code with:
./spark-submit --packages org.apache.spark:spark-sql-kafka-0-10_2.12:2.4.5 --class Test /home/luca/Projects/Test/target/scala-2.12/test_2.12-1.0.jar
I run into the following error:
20/05/06 15:40:29 INFO StateStoreCoordinatorRef: Registered StateStoreCoordinator endpoint
Exception in thread "main" java.util.ServiceConfigurationError: org.apache.spark.sql.sources.DataSourceRegister: Provider org.apache.spark.sql.kafka010.KafkaSourceProvider could not be instantiated
at java.util.ServiceLoader.fail(ServiceLoader.java:232)
at java.util.ServiceLoader.access$100(ServiceLoader.java:185)
at java.util.ServiceLoader$LazyIterator.nextService(ServiceLoader.java:384)
at java.util.ServiceLoader$LazyIterator.next(ServiceLoader.java:404)
at java.util.ServiceLoader$1.next(ServiceLoader.java:480)
at scala.collection.convert.Wrappers$JIteratorWrapper.next(Wrappers.scala:43)
at scala.collection.Iterator$class.foreach(Iterator.scala:891)
at scala.collection.AbstractIterator.foreach(Iterator.scala:1334)
at scala.collection.IterableLike$class.foreach(IterableLike.scala:72)
at scala.collection.AbstractIterable.foreach(Iterable.scala:54)
at scala.collection.TraversableLike$class.filterImpl(TraversableLike.scala:247)
at scala.collection.TraversableLike$class.filter(TraversableLike.scala:259)
at scala.collection.AbstractTraversable.filter(Traversable.scala:104)
at org.apache.spark.sql.execution.datasources.DataSource$.lookupDataSource(DataSource.scala:630)
at org.apache.spark.sql.streaming.DataStreamReader.load(DataStreamReader.scala:161)
at Test$.main(projectfile.scala:24)
at Test.main(projectfile.scala)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.spark.deploy.JavaMainApplication.start(SparkApplication.scala:52)
at org.apache.spark.deploy.SparkSubmit.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:845)
at org.apache.spark.deploy.SparkSubmit.doRunMain$1(SparkSubmit.scala:161)
at org.apache.spark.deploy.SparkSubmit.submit(SparkSubmit.scala:184)
at org.apache.spark.deploy.SparkSubmit.doSubmit(SparkSubmit.scala:86)
at org.apache.spark.deploy.SparkSubmit$$anon$2.doSubmit(SparkSubmit.scala:920)
at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:929)
at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
Caused by: java.lang.NoSuchMethodError: org.apache.spark.internal.Logging.$init$(Lorg/apache/spark/internal/Logging;)V
at org.apache.spark.sql.kafka010.KafkaSourceProvider.<init>(KafkaSourceProvider.scala:44)
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
at java.lang.Class.newInstance(Class.java:442)
at java.util.ServiceLoader$LazyIterator.nextService(ServiceLoader.java:380)
... 26 more
Can someone help me out with this?
The kafka-clients version can be one of the reasons. Otherwise, try Spark 2.4.0 with an earlier Scala 2.12 release; it looks like a compatibility issue.
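A minimal sketch of that suggestion (the dependency set is assumed from the question, not a verified fix): dropping the explicit kafka-clients pin lets the Kafka source bring in the client version it was built against, and spark-sql can be marked provided since spark-submit supplies it:

scalaVersion := "2.12.11"

libraryDependencies ++= Seq(
  "org.apache.spark" %% "spark-sql" % "2.4.5" % "provided",
  // No explicit kafka-clients dependency: spark-sql-kafka-0-10 pulls in a compatible one.
  // Per the answer, Spark 2.4.0 could also be tried if 2.4.5 still fails.
  "org.apache.spark" %% "spark-sql-kafka-0-10" % "2.4.5"
)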

java.lang.ClassNotFoundException: SparkSql at java.net.URLClassLoader.findClass(Unknown Source)

SparkSql.scala
import org.apache.spark.sql.SparkSession

object SparkSql {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().master("local[*]").appName("SparkSql1").getOrCreate()
    //val data = spark.sparkContext.textFile(("C:\\Users\\blaskar\\Desktop\\Biswajit\\data\\sample.txt"))
    val data = spark.read.format("csv").option("inferSchema", "true").option("header", "true").load("C:\\Users\\blaskar\\Desktop\\Biswajit\\data\\SampleCSVFile.csv")
    val columnsRenamed = Seq("S.no", "Eldonbase", "platnium", "Macintyre", "count1", "count2", "count3", "Nunavut", "Storage", "count4")
    val original_data = data.toDF(columnsRenamed: _*).persist()
    original_data.show(10)
  }
}
build.sbt
name := "Spark-Sample1"
version := "1.0"
scalaVersion := "2.11.8"
libraryDependencies ++= Seq(
  "org.apache.spark" % "spark-core_2.11" % "2.1.0" exclude("org.apache.hadoop", "hadoop-yarn-server-web-proxy"),
  "org.apache.spark" % "spark-sql_2.11" % "2.1.0" exclude("org.apache.hadoop", "hadoop-yarn-server-web-proxy"))
VERSIONS OF SOFTWARE THE SYSTEM IS USING
spark: 2.1.0
scala: 2.11.8
I am using IntelliJ. The code is running fine but when I am submitting the job to my standalone cluster it is showing the below error:
SPARK-SUBMIT ERROR:
spark-submit --class "SparkSql" C:\Users\blaskar\Desktop\sbt_projects\Spark-Sample1\out\artifacts\Spark_Sample1_jar\Spark-Sample1.jar
ERROR
java.lang.ClassNotFoundException: SparkSql
at java.net.URLClassLoader.findClass(Unknown Source)
at java.lang.ClassLoader.loadClass(Unknown Source)
at java.lang.ClassLoader.loadClass(Unknown Source)
at java.lang.Class.forName0(Native Method)
at java.lang.Class.forName(Unknown Source)
at org.apache.spark.util.Utils$.classForName(Utils.scala:229)
at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:695)
at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:187)
at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:212)
at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:126)
at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)

Error while starting scala project that uses SORM framework

I have created a simple sbt project from the tutorial:
build.sbt:
lazy val sorm_test = (project in file(".")).
  settings(
    name := "SORM_TEST",
    scalaVersion := "2.11.7",
    libraryDependencies ++= Seq(
      "org.sorm-framework" % "sorm" % "0.3.18",
      "com.h2database" % "h2" % "1.4.188"
    )
  )
test.Main.scala:
package test

case class Artist(
  names: Map[Locale, Seq[String]],
  genres: Set[Genre]
)

case class Genre(
  names: Map[Locale, Seq[String]]
)

case class Locale(
  code: String
)

import sorm._

object Db extends Instance(
  entities = Set(
    Entity[Artist](),
    Entity[Genre](),
    Entity[Locale](unique = Set() + Seq("code"))
  ),
  url = "jdbc:h2:mem:test",
  user = "",
  password = "",
  initMode = InitMode.Create
)

object Main extends App {
  // init
  Db.##
}
When I run this project in IntelliJ I get the following exceptions:
SLF4J: Failed to load class "org.slf4j.impl.StaticLoggerBinder".
SLF4J: Defaulting to no-operation (NOP) logger implementation
SLF4J: See http://www.slf4j.org/codes.html#StaticLoggerBinder for further details.
Exception in thread "main" java.lang.BootstrapMethodError: java.lang.NoClassDefFoundError: scala/runtime/java8/JFunction1
at scala.tools.reflect.ToolBoxFactory$ToolBoxImpl.parse(ToolBoxFactory.scala:414)
at sorm.persisted.PersistedClass$.createClass(PersistedClass.scala:107)
at sorm.persisted.PersistedClass$$anon$1$$anonfun$resolve$1.apply(PersistedClass.scala:125)
at sorm.persisted.PersistedClass$$anon$1$$anonfun$resolve$1.apply(PersistedClass.scala:125)
at scala.collection.mutable.MapLike$class.getOrElseUpdate(MapLike.scala:194)
at scala.collection.mutable.AbstractMap.getOrElseUpdate(Map.scala:80)
at sorm.persisted.PersistedClass$$anon$1.resolve(PersistedClass.scala:125)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
at sorm.persisted.PersistedClass$.apply(PersistedClass.scala:129)
at sorm.Instance$Initialization$$anonfun$9$$anonfun$apply$16.apply(Instance.scala:239)
at sorm.Instance$Initialization$$anonfun$9$$anonfun$apply$16.apply(Instance.scala:239)
at embrace.package$EmbraceAny$.$$extension(package.scala:6)
at sorm.Instance$Initialization$$anonfun$9.apply(Instance.scala:239)
at sorm.Instance$Initialization$$anonfun$9.apply(Instance.scala:239)
at scala.collection.immutable.Set$Set3.foreach(Set.scala:145)
at sorm.Instance$Initialization.<init>(Instance.scala:239)
at sorm.Instance.<init>(Instance.scala:38)
at test.Db$.<init>(Main.scala:15)
at test.Db$.<clinit>(Main.scala)
at test.Main$.delayedEndpoint$test$Main$1(Main.scala:29)
at test.Main$delayedInit$body.apply(Main.scala:27)
at scala.Function0$class.apply$mcV$sp(Function0.scala:34)
at scala.runtime.AbstractFunction0.apply$mcV$sp(AbstractFunction0.scala:12)
at scala.App$$anonfun$main$1.apply(App.scala:76)
at scala.App$$anonfun$main$1.apply(App.scala:76)
at scala.collection.immutable.List.foreach(List.scala:381)
at scala.collection.generic.TraversableForwarder$class.foreach(TraversableForwarder.scala:35)
at scala.App$class.main(App.scala:76)
at test.Main$.main(Main.scala:27)
at test.Main.main(Main.scala)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:497)
at com.intellij.rt.execution.application.AppMain.main(AppMain.java:140)
Caused by: java.lang.NoClassDefFoundError: scala/runtime/java8/JFunction1
... 38 more
Caused by: java.lang.ClassNotFoundException: scala.runtime.java8.JFunction1
at java.net.URLClassLoader.findClass(URLClassLoader.java:381)
at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:331)
at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
... 38 more
It works fine when I use sbt run. This exception is also thrown when I integrate SORM with the Play framework. How can I solve this problem?
I solved this problem by adding dependencyOverrides += "org.scala-lang" % "scala-compiler" % scalaVersion.value to my sbt configuration.
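A minimal sketch of where that override lives, assuming the build.sbt from the question (the dependencyOverrides line is the answer's fix; the surrounding lines are just the existing settings):

lazy val sorm_test = (project in file(".")).
  settings(
    name := "SORM_TEST",
    scalaVersion := "2.11.7",
    // Pin scala-compiler to the project's scalaVersion so SORM's runtime
    // code generation uses a compiler that matches the Scala library on the classpath.
    dependencyOverrides += "org.scala-lang" % "scala-compiler" % scalaVersion.value,
    libraryDependencies ++= Seq(
      "org.sorm-framework" % "sorm" % "0.3.18",
      "com.h2database" % "h2" % "1.4.188"
    )
  )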
I receive a very similar Exception in a maven project during runtime:
Caused by: java.lang.BootstrapMethodError: java.lang.NoClassDefFoundError: scala/runtime/java8/JFunction0$mcI$sp
I am currently using
<scala.version>2.11.7</scala.version>
<scala.compat.version>2.11</scala.compat.version>
<scalatest.version>2.2.2</scalatest.version>
<scala-maven-plugin.version>3.2.0</scala-maven-plugin.version>
in the pom properties and refer to these properties in all other entries, like
<dependency>
  <groupId>org.scala-lang</groupId>
  <artifactId>scala-library</artifactId>
  <version>${scala.version}</version>
</dependency>
Maven packaging/installation works fine. Is this a similar problem? Can someone explain the solution above (or maybe translate it to the maven case)?

Error when using slick code generator on postgres with uuid_generate_v4()

I'm running into an error when I use the Slick code generator on an existing Postgres 9.3 database that uses uuid_generate_v4() to auto-generate IDs for tables. I'm just getting started with Slick, so I'm not sure yet whether it's an issue with my code, my schema, or Slick itself, or whether this scenario is even supported yet.
As an example, I have columns that were created with statements like this:
id uuid NOT NULL DEFAULT uuid_generate_v4()
When I run the slick code generator with sbt run, I get the error listed below. This error makes the code generator fail and thus no code is generated.
[error] (run-main-0) java.lang.IllegalArgumentException: Invalid UUID string: uuid_generate_v4()
java.lang.IllegalArgumentException: Invalid UUID string: uuid_generate_v4()
at java.util.UUID.fromString(UUID.java:194)
at slick.driver.PostgresDriver$ModelBuilder$$anon$2$$anonfun$default$1.applyOrElse(PostgresDriver.scala:75)
at slick.driver.PostgresDriver$ModelBuilder$$anon$2$$anonfun$default$1.applyOrElse(PostgresDriver.scala:65)
at scala.PartialFunction$Lifted.apply(PartialFunction.scala:223)
at scala.PartialFunction$Lifted.apply(PartialFunction.scala:219)
at scala.Option.collect(Option.scala:282)
at slick.driver.PostgresDriver$ModelBuilder$$anon$2.default(PostgresDriver.scala:65)
at slick.jdbc.JdbcModelBuilder$ColumnBuilder$$anonfun$defaultColumnOption$3.apply(JdbcModelBuilder.scala:254)
at slick.jdbc.JdbcModelBuilder$ColumnBuilder$$anonfun$defaultColumnOption$3.apply(JdbcModelBuilder.scala:254)
at scala.Option.getOrElse(Option.scala:121)
at slick.jdbc.JdbcModelBuilder$ColumnBuilder.defaultColumnOption(JdbcModelBuilder.scala:253)
at slick.jdbc.JdbcModelBuilder$ColumnBuilder.convenientDefault(JdbcModelBuilder.scala:263)
at slick.jdbc.JdbcModelBuilder$ColumnBuilder.model(JdbcModelBuilder.scala:281)
at slick.jdbc.JdbcModelBuilder$TableBuilder$$anonfun$columns$1.apply(JdbcModelBuilder.scala:162)
at slick.jdbc.JdbcModelBuilder$TableBuilder$$anonfun$columns$1.apply(JdbcModelBuilder.scala:162)
at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:245)
at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:245)
at scala.collection.Iterator$class.foreach(Iterator.scala:742)
at scala.collection.AbstractIterator.foreach(Iterator.scala:1194)
at scala.collection.IterableLike$class.foreach(IterableLike.scala:72)
at scala.collection.AbstractIterable.foreach(Iterable.scala:54)
at scala.collection.TraversableLike$class.map(TraversableLike.scala:245)
at scala.collection.AbstractTraversable.map(Traversable.scala:104)
at slick.jdbc.JdbcModelBuilder$TableBuilder.columns$lzycompute(JdbcModelBuilder.scala:162)
at slick.jdbc.JdbcModelBuilder$TableBuilder.columns(JdbcModelBuilder.scala:162)
at slick.jdbc.JdbcModelBuilder$TableBuilder.columnsByName$lzycompute(JdbcModelBuilder.scala:164)
at slick.jdbc.JdbcModelBuilder$TableBuilder.columnsByName(JdbcModelBuilder.scala:164)
at slick.jdbc.JdbcModelBuilder$ForeignKeyBuilder.buildModel(JdbcModelBuilder.scala:314)
at slick.jdbc.JdbcModelBuilder$TableBuilder$$anonfun$buildForeignKeys$1.apply(JdbcModelBuilder.scala:169)
at slick.jdbc.JdbcModelBuilder$TableBuilder$$anonfun$buildForeignKeys$1.apply(JdbcModelBuilder.scala:169)
at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:245)
at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:245)
at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:48)
at scala.collection.TraversableLike$class.map(TraversableLike.scala:245)
at scala.collection.AbstractTraversable.map(Traversable.scala:104)
at slick.jdbc.JdbcModelBuilder$TableBuilder.buildForeignKeys(JdbcModelBuilder.scala:169)
at slick.jdbc.JdbcModelBuilder$TableBuilder.buildModel(JdbcModelBuilder.scala:160)
at slick.jdbc.JdbcModelBuilder$$anonfun$buildModel$3$$anonfun$apply$17.apply(JdbcModelBuilder.scala:94)
at slick.jdbc.JdbcModelBuilder$$anonfun$buildModel$3$$anonfun$apply$17.apply(JdbcModelBuilder.scala:94)
at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:245)
at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:245)
at scala.collection.immutable.List.foreach(List.scala:381)
at scala.collection.TraversableLike$class.map(TraversableLike.scala:245)
at scala.collection.immutable.List.map(List.scala:285)
at slick.jdbc.JdbcModelBuilder$$anonfun$buildModel$3.apply(JdbcModelBuilder.scala:94)
at slick.jdbc.JdbcModelBuilder$$anonfun$buildModel$3.apply(JdbcModelBuilder.scala:91)
at slick.dbio.DBIOAction$$anonfun$map$1.apply(DBIOAction.scala:43)
at slick.dbio.DBIOAction$$anonfun$map$1.apply(DBIOAction.scala:43)
at slick.backend.DatabaseComponent$DatabaseDef$$anonfun$runInContext$1.apply(DatabaseComponent.scala:146)
at slick.backend.DatabaseComponent$DatabaseDef$$anonfun$runInContext$1.apply(DatabaseComponent.scala:146)
at scala.concurrent.Future$$anonfun$flatMap$1.apply(Future.scala:251)
at scala.concurrent.Future$$anonfun$flatMap$1.apply(Future.scala:249)
at scala.concurrent.impl.CallbackRunnable.run(Promise.scala:32)
at scala.concurrent.impl.ExecutionContextImpl$AdaptedForkJoinTask.exec(ExecutionContextImpl.scala:121)
at scala.concurrent.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
at scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.pollAndExecAll(ForkJoinPool.java:1253)
at scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1346)
at scala.concurrent.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
at scala.concurrent.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)
java.lang.RuntimeException: Nonzero exit code: 1
at scala.sys.package$.error(package.scala:27)
at sbt.BuildCommon$$anonfun$toError$1.apply(Defaults.scala:1943)
at sbt.BuildCommon$$anonfun$toError$1.apply(Defaults.scala:1943)
at scala.Option.foreach(Option.scala:236)
at sbt.BuildCommon$class.toError(Defaults.scala:1943)
at sbt.package$.toError(package.scala:4)
at $4f4c3fbf06757612bb06$$anonfun$slickCodeGenTask$1.apply(build.sbt:27)
at $4f4c3fbf06757612bb06$$anonfun$slickCodeGenTask$1.apply(build.sbt:19)
at scala.Function4$$anonfun$tupled$1.apply(Function4.scala:35)
at scala.Function4$$anonfun$tupled$1.apply(Function4.scala:34)
at scala.Function1$$anonfun$compose$1.apply(Function1.scala:47)
at sbt.$tilde$greater$$anonfun$$u2219$1.apply(TypeFunctions.scala:40)
at sbt.std.Transform$$anon$4.work(System.scala:63)
at sbt.Execute$$anonfun$submit$1$$anonfun$apply$1.apply(Execute.scala:226)
at sbt.Execute$$anonfun$submit$1$$anonfun$apply$1.apply(Execute.scala:226)
at sbt.ErrorHandling$.wideConvert(ErrorHandling.scala:17)
at sbt.Execute.work(Execute.scala:235)
at sbt.Execute$$anonfun$submit$1.apply(Execute.scala:226)
at sbt.Execute$$anonfun$submit$1.apply(Execute.scala:226)
at sbt.ConcurrentRestrictions$$anon$4$$anonfun$1.apply(ConcurrentRestrictions.scala:159)
at sbt.CompletionService$$anon$2.call(CompletionService.scala:28)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
[error] (compile:managedSources) Nonzero exit code: 1
Here are the details of my sbt project ...
project/build.properties
sbt.version = 0.13.8
project/plugins.sbt
logLevel := Level.Warn
build.sbt
name := "slick_gen_schema_test"
version := "1.0"
scalaVersion := "2.11.7"
libraryDependencies ++= List(
  "org.postgresql" % "postgresql" % "9.3-1103-jdbc4",
  "com.typesafe.slick" % "slick_2.11" % "3.0.2",
  "com.typesafe.slick" % "slick-codegen_2.11" % "3.0.2"
)

slick <<= slickCodeGenTask

sourceGenerators in Compile <+= slickCodeGenTask

lazy val slick = TaskKey[Seq[File]]("gen-tables")

lazy val slickCodeGenTask = (sourceManaged, dependencyClasspath in Compile, runner in Compile, streams) map { (dir, cp, r, s) =>
  val outputDir = (dir / "main/slick").getPath
  val username = "postgres"
  val password = "postgres"
  val url = "jdbc:postgresql://10.0.11.13:5432/example"
  val jdbcDriver = "org.postgresql.Driver"
  val slickDriver = "slick.driver.PostgresDriver"
  val pkg = "dao"
  toError(r.run("slick.codegen.SourceCodeGenerator", cp.files, Array(slickDriver, jdbcDriver, url, outputDir, pkg, username, password), s.log))
  val fname = outputDir + "/dao/Tables.scala"
  Seq(file(fname))
}
What is the right way to use the slick code generator in this scenario?
This is a Slick bug, which was fixed in Slick master (https://github.com/slick/slick/pull/1472).
Until Slick 3.2 comes out, you could work around the issue by changing your schema to not have a functional default.