I am migrating a Play 2.3.4 app to Play 2.5.4. Along the way I also had to upgrade to Scala 2.11.8 and Kafka 0.9.0+ to support the updated Play version.
Most of the issues I have worked out, but I cannot figure out a Kafka issue with some code that manages Kafka topics through AdminUtils. The trouble is all centered around kafka.utils.ZKStringSerializer.
I am using the org.I0Itec.zkclient package to instantiate the ZkClient object that is passed in the construction of the ZkUtils object, but it fails because it cannot resolve my ZKStringSerializer.
Related code is:
import kafka.admin.AdminUtils
import kafka.utils.ZkUtils
import kafka.utils.ZKStringSerializer
import org.I0Itec.zkclient.{ZkClient, ZkConnection}
object Topic {
def CreateKafkaTopic(topic: String, zookeeperHosts: String, partitionSize: Int, replicationCount: Int, connectionTimeoutMs: Int = 10000, sessionTimeoutMs: Int = 10000): Boolean = {
var zkSerializer: ZKStringSerializer = ZKStringSerializer
val zkClient: ZkClient= new ZkClient(zookeeperHosts, connectionTimeoutMs, sessionTimeoutMs, zkSerializer)
val topicConfig: Properties = new Properties()
val isSecureKafkaCluster: Boolean = false
val zkUtils: ZkUtils = new ZkUtils(zkClient, new ZkConnection(zookeeperHosts), isSecureKafkaCluster)
AdminUtils.createTopic(zkUtils, topic, partitionSize, replicationCount, topicConfig)
zkClient.close()
}
}
The above code results in the error that ZKStringSerializer is inaccessible from this place.
I found several related posts about creating topics (mostly in Java and before Kafka 0.9.0):
Creating a topic for Apache Kafka 0.9 Using Java
How create Kafka ZKStringSerializer in Java?
How Can we create a topic in Kafka from the IDE using API
And finally:
Creating a Kafka topic results in no leader
Based on these I updated my code as follows:
import kafka.admin.AdminUtils
import kafka.utils.ZkUtils
import kafka.utils.ZKStringSerializer$
import org.I0Itec.zkclient.{ZkClient, ZkConnection}
object Topic {
def CreateKafkaTopic(topic: String, zookeeperHosts: String, partitionSize: Int, replicationCount: Int, connectionTimeoutMs: Int = 10000, sessionTimeoutMs: Int = 10000): Boolean = {
var zkSerializer: ZKStringSerializer = ZKStringSerializer$.MODULE$
val zkClient: ZkClient= new ZkClient(zookeeperHosts, connectionTimeoutMs, sessionTimeoutMs, zkSerializer)
val topicConfig: Properties = new Properties()
val isSecureKafkaCluster: Boolean = false
val zkUtils: ZkUtils = new ZkUtils(zkClient, new ZkConnection(zookeeperHosts), isSecureKafkaCluster)
AdminUtils.createTopic(zkUtils, topic, partitionSize, replicationCount, topicConfig)
zkClient.close()
}
}
And then I just get "unable to resolve symbol ZKStringSerializer$" errors.
I also tried this with the org.I0Itec.zkclient.serialize.ZkSerializer object and it did not make a difference.
So my question is actually two-fold:
1. What is the significance of the '$' character in import and declaration statements in Scala? I have used it in string interpolation (e.g. s"var value is $var") to reference variables, but this seems different.
2. What is wrong with my code? Is it the way I am importing, declaring, or something else?
I am new to Scala and Play, but I am feeling like quite an idiot at the moment, so any advice/help is appreciated.
~Dave
P.S.
In case it helps, here are the relevant bits from the project files.
build.sbt:
lazy val `api` = (project in file(".")).enablePlugins(PlayScala)
scalaVersion := "2.11.8"
libraryDependencies ++= Seq(
"org.apache.kafka" % "kafka_2.11" % "0.9.0.1",
jdbc,
cache,
ws,
specs2 % Test
)
plugins.sbt:
resolvers += "Typesafe repository" at "http://repo.typesafe.com/typesafe/releases/"
addSbtPlugin("com.typesafe.play" % "sbt-plugin" % "2.5.4")
addSbtPlugin("com.typesafe.sbt" % "sbt-coffeescript" % "1.0.0")
addSbtPlugin("com.typesafe.sbt" % "sbt-less" % "1.0.0")
addSbtPlugin("com.typesafe.sbt" % "sbt-jshint" % "1.0.1")
addSbtPlugin("com.typesafe.sbt" % "sbt-rjs" % "1.0.1")
addSbtPlugin("com.typesafe.sbt" % "sbt-digest" % "1.0.0")
addSbtPlugin("com.typesafe.sbt" % "sbt-mocha" % "1.0.0")
build.properties:
sbt.version=0.13.5
After fighting this issue over the weekend I gave up on the ZkClient package that had been used previously and simply used Kafka directly, which was actually much cleaner than trying to use the I0Itec ZkClient.
New implementation goes like this:
import java.util.Properties
import kafka.admin.AdminUtils
import kafka.utils.ZkUtils
class Topic {
def CreateKafkaTopic(topic: String, zookeeperHosts: String, partitionSize: Int, replicationCount: Int, connectionTimeoutMs: Int = 10000, sessionTimeoutMs: Int = 10000): Boolean = {
if (ListKafkaTopics(zookeeperHosts).contains(topic) ) {
return false
}
val zkUtils = ZkUtils.apply(zookeeperHosts, sessionTimeoutMs, connectionTimeoutMs, false)
AdminUtils.createTopic( zkUtils, topic, partitionSize, replicationCount, new Properties())
zkUtils.close()
true
}
}
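(ListKafkaTopics above is just a small helper that is not shown here; a minimal sketch, assuming it simply wraps ZkUtils.getAllTopics, could look like this:)
def ListKafkaTopics(zookeeperHosts: String,
                    connectionTimeoutMs: Int = 10000,
                    sessionTimeoutMs: Int = 10000): Seq[String] = {
  // same ZkUtils factory as above; the final argument disables ZooKeeper security
  val zkUtils = ZkUtils.apply(zookeeperHosts, sessionTimeoutMs, connectionTimeoutMs, false)
  try zkUtils.getAllTopics()
  finally zkUtils.close()
}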
In the end I removed a dependency and made the code cleaner, so a double win I suppose.
~Dave
The reason for this problem is that ZKStringSerializer is declared as private; just use ZkUtils.createZkClient instead, as follows:
ZkUtils.createZkClient(zookeeperHosts, sessionTimeoutMs, connectionTimeoutMs)
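Plugged into the code from the question, that would look roughly like this (same names and parameters as above):
import java.util.Properties

import kafka.admin.AdminUtils
import kafka.utils.ZkUtils
import org.I0Itec.zkclient.ZkConnection

// createZkClient wires in Kafka's (private) ZKStringSerializer for you
val zkClient = ZkUtils.createZkClient(zookeeperHosts, sessionTimeoutMs, connectionTimeoutMs)
val zkUtils = new ZkUtils(zkClient, new ZkConnection(zookeeperHosts), false)
AdminUtils.createTopic(zkUtils, topic, partitionSize, replicationCount, new Properties())
zkUtils.close()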
For weeks I have been trying to get Slick running with evolutions for Postgres (and eventually with codegen for the Tables.scala file), and it is extremely frustrating because the official documentation stops after the essential basic setup, and anything found elsewhere on the internet that goes a bit deeper is outdated. It seems the framework changes a lot with every release, so you cannot reuse older code snippets.
To keep things simple, I ended up cloning the official play-samples repo and changing it step by step to work with Postgres instead of the H2 in-memory DB it uses.
Here is a link to the play-scala-slick-example
The only thing I changed is in the build.sbt#L11:
I replaced the h2 dependency with a postgres dependency:
"org.postgresql" % "postgresql" % "42.2.19",
and I added the jdbc dependency (according to their documentation) - but this causes a binding issue, so I'm not sure whether this dependency should really be used.
Then I changed the config to connect to my local dev DB, at application.conf#L66-L68:
slick.dbs.default.profile="slick.jdbc.PostgresProfile$"
slick.dbs.default.db.dataSourceClass = "slick.jdbc.DatabaseUrlDataSource"
slick.dbs.default.db.driver="org.postgresql.Driver"
slick.dbs.default.db.url="jdbc:postgresql://localhost:5432/my_local_test_db?currentSchema=play_example&user=postgres&password="
When I run the application with the jdbc dependency and try to access it in the browser, it reports this exception:
CreationException: Unable to create injector, see the following errors:
1) A binding to play.api.db.DBApi was already configured at play.api.db.DBModule$$anonfun$$lessinit$greater$1.apply(DBModule.scala:39):
Binding(interface play.api.db.DBApi to ProviderConstructionTarget(class play.api.db.DBApiProvider)) (via modules: com.google.inject.util.Modules$OverrideModule -> play.api.inject.guice.GuiceableModuleConversions$$anon$4).
at play.api.db.slick.evolutions.EvolutionsModule.bindings(EvolutionsModule.scala:15):
Binding(interface play.api.db.DBApi to ConstructionTarget(class play.api.db.slick.evolutions.internal.DBApiAdapter) in interface javax.inject.Singleton) (via modules: com.google.inject.util.Modules$OverrideModule -> play.api.inject.guice.GuiceableModuleConversions$$anon$4)
And when I run it without the jdbc dependency (I saw some hints on Stack Overflow that this might cause the exception above), it displays a warning and eventually runs into a timeout.
[warn] c.z.h.HikariConfig - db - using dataSourceClassName and ignoring jdbcUrl.
Does anyone know what is missing here to tell it to use the JDBC URL?
I finally found a solution. Thanks to the user who pointed me in the right direction; that is where I found what the actual two problems were.
First, my application.conf had the wrong parameters, and second, I indeed needed to get rid of the jdbc dependency.
When the Slick DB config looks like this, it works:
slick.dbs.default.profile="slick.jdbc.PostgresProfile$"
slick.dbs.default.db.dataSourceClass = "slick.jdbc.DatabaseUrlDataSource"
slick.dbs.default.db.properties.driver="org.postgresql.Driver"
slick.dbs.default.db.properties.url="jdbc:postgresql://localhost:5432/fugu_test?currentSchema=play_example&user=postgres&password="
The "...properties..." was missing in both 2 last lines for the driver and url. It seems that what I had was from an older slick version.
I went through similar problems getting Play to work with Postgres.
Connections to the database should be configured with a connection pool from HikariCP. The configuration pasted below, as can be inferred from the build file, gets Slick to use HikariCP.
Here is my base setup; let me know if it works:
Contents of build.sbt:
import com.typesafe.sbt.SbtScalariform._
import scalariform.formatter.preferences._
val SlickVersion = "3.3.2"
name := "play-slick"
version := "6.0.0"
//val PlayVersion = "2.8.5"
scalaVersion := "2.13.1"
resolvers += Resolver.jcenterRepo
resolvers += "Sonatype snapshots" at "https://oss.sonatype.org/content/repositories/snapshots/"
libraryDependencies ++= Seq(
jdbc,
"org.postgresql" % "postgresql" % "42.2.18",
"com.typesafe.slick" %% "slick-hikaricp" % SlickVersion,
"org.webjars" %% "webjars-play" % "2.8.0",
"org.webjars" % "bootstrap" % "4.4.1" exclude("org.webjars", "jquery"),
"org.webjars" % "jquery" % "3.2.1",
"net.codingwell" %% "scala-guice" % "4.2.6",
"com.iheart" %% "ficus" % "1.4.7",
"com.typesafe.play" %% "play-mailer" % "8.0.1",
"com.typesafe.play" %% "play-mailer-guice" % "8.0.1",
//"com.enragedginger" %% "akka-quartz-scheduler" % "1.8.2-akka-2.6.x",
"com.enragedginger" %% "akka-quartz-scheduler" % "1.8.3-akka-2.6.x",
"com.adrianhurt" %% "play-bootstrap" % "1.5.1-P27-B4",
specs2 % Test,
ehcache,
guice,
jdbc,
filters
)
lazy val root = (project in file(".")).enablePlugins(PlayScala)
routesImport += "utils.route.Binders._"
// https://github.com/playframework/twirl/issues/105
TwirlKeys.templateImports := Seq()
scalacOptions ++= Seq(
"-deprecation", // Emit warning and location for usages of deprecated APIs.
"-feature", // Emit warning and location for usages of features that should be imported explicitly.
"-unchecked", // Enable additional warnings where generated code depends on assumptions.
"-Xfatal-warnings", // Fail the compilation if there are any warnings.
//"-Xlint", // Enable recommended additional warnings.
"-Ywarn-dead-code", // Warn when dead code is identified.
"-Ywarn-numeric-widen", // Warn when numerics are widened.
// Play has a lot of issues with unused imports and unused params
// https://github.com/playframework/playframework/issues/6690
// https://github.com/playframework/twirl/issues/105
"-Xlint:-unused,_"
)
//********************************************************
// Scalariform settings
//********************************************************
scalariformAutoformat := true
ScalariformKeys.preferences := ScalariformKeys.preferences.value
.setPreference(FormatXml, false)
.setPreference(DoubleIndentConstructorArguments, false)
.setPreference(DanglingCloseParenthesis, Preserve)
Here is the database configuration, which should be added to application.conf, provided that Postgres is configured to use an encrypted connection. I used Let's Encrypt.
#include "database.conf"
db.default.driver=org.postgresql.Driver
db.default.url="jdbc:postgresql://localhost/users_db?ssl=true&sslmode=require"
db.default.username=${?DATABASE_USER}
db.default.password=${?DATABASE_PASSWORD}
db.default.hikaricp.connectionTestQuery = "SELECT 1"
fixedConnectionPool = 5
database.dispatcher {
executor = "thread-pool-executor"
throughput = 1
thread-pool-executor {
fixed-pool-size = ${fixedConnectionPool}
}
}
Sample from UserDAOImpl, for the signature and the imports:
package models.daos
import java.util.UUID
import javax.inject.Inject
import models.User
import play.api.db.Database
import scala.concurrent.{ ExecutionContext, Future }
import scala.util.{ Failure, Success }
/**
* Give access to the user object.
*/
class UserDAOImpl @Inject() (db: Database)(implicit executionContext: DatabaseExecutionContext) extends UserDAO {
/**
* Finds a user by its user info.
*
* @param userInfo The user info of the user to find.
* @return The found user or None if no user for the given user info could be found.
*/
def find(userInfo: UserInfo) =
Future {
val c = db.getConnection()
val statement = c.prepareStatement("SELECT * FROM users WHERE email = ?;")
statement.setString(1, userInfo.email)
if (statement.execute()) {
val resultSet = statement.getResultSet
if (resultSet.next()) {
val userID = resultSet.getString("userid")
val firstName = resultSet.getString("firstName")
val lastName = resultSet.getString("lastName")
val affiliation = resultSet.getString("affiliation")
val roleTitle = resultSet.getString("roleTitle")
val fullName = resultSet.getString("fullName")
val email = resultSet.getString("email")
val avatarURL = resultSet.getString("avatarURL")
val activatedStr = resultSet.getString("activated")
val activated: Boolean = activatedStr match {
case "f" => false
case "t" => true
case _ => false
}
statement.close()
c.close()
Some(
User(
UUID.fromString(userID),
firstName = Some(firstName),
lastName = Some(lastName),
affiliation = Some(affiliation),
roleTitle = Some(roleTitle),
fullName = Some(fullName),
email = Some(email),
avatarURL = Some(avatarURL),
activated))
} else {
statement.close()
c.close()
None
}
} else {
statement.close()
c.close()
None
}
}
}
Place this DatabaseExecutionContext file in the same directory as your dao files.
package models.daos
import javax.inject._
import akka.actor.ActorSystem
import play.api.libs.concurrent.CustomExecutionContext
/**
* This class is a pointer to an execution context configured to point to "database.dispatcher"
* in the "application.conf" file.
*/
@Singleton
class DatabaseExecutionContext #Inject() (system: ActorSystem) extends CustomExecutionContext(system, "database.dispatcher")
In this project, I'm trying to consume data from a Kafka topic using Flink and then process the stream to detect a pattern using Flink CEP.
The Kafka connector part works and data is being fetched, but the CEP part doesn't work for some reason.
I'm using Scala in this project.
build.sbt:
version := "0.1"
scalaVersion := "2.11.12"
libraryDependencies += "org.apache.flink" %% "flink-streaming-scala" % "1.12.2"
libraryDependencies += "org.apache.kafka" %% "kafka" % "2.3.0"
libraryDependencies += "org.apache.flink" %% "flink-connector-kafka" % "1.12.2"
libraryDependencies += "org.apache.flink" %% "flink-cep-scala" % "1.12.2"
the main code:
import org.apache.flink.api.common.serialization.SimpleStringSchema
import java.util
import java.util.Properties
import org.apache.flink.cep.PatternSelectFunction
import org.apache.flink.cep.scala.CEP
import org.apache.flink.streaming.api.scala._
import org.apache.flink.cep.scala.pattern.Pattern
import org.apache.flink.streaming.api.scala.{DataStream, StreamExecutionEnvironment}
import org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumer
import org.apache.flink.cep.pattern.conditions.IterativeCondition
object flinkExample {
def main(args: Array[String]): Unit = {
val CLOSE_THRESHOLD: Double = 140.00
val properties = new Properties()
properties.setProperty("bootstrap.servers", "localhost:9092")
properties.setProperty("zookeeper.connect", "localhost:2181")
properties.setProperty("group.id", "test")
val consumer = new FlinkKafkaConsumer[String]("test", new SimpleStringSchema(), properties)
consumer.setStartFromEarliest
val see: StreamExecutionEnvironment = StreamExecutionEnvironment.getExecutionEnvironment
val src: DataStream[String] = see.addSource(consumer)
val keyedStream: DataStream[Stock] = src.map(v => v)
.map {
v =>
val data = v.split(":")
val date = data(0)
val close = data(1).toDouble
Stock(date,close)
}
val pat = Pattern
.begin[Stock]("start")
.where(_.Adj_Close > CLOSE_THRESHOLD)
val patternStream = CEP.pattern(keyedStream, pat)
val result = patternStream.select(
patternSelectFunction = new PatternSelectFunction[Stock, String]() {
override def select(pattern: util.Map[String, util.List[Stock]]): String = {
val data = pattern.get("first").get(0)
data.toString
}
}
)
result.print()
see.execute("ASK Flink Kafka")
}
case class Stock(date: String,
Adj_Close: Double)
{
override def toString: String = s"Stock date: $date, Adj Close: $Adj_Close"
}
}
Data coming from Kafka is in the string format "date:value".
Scala version: 2.11.12
Flink version: 1.12.2
Kafka version: 2.3.0
I'm building the project using sbt assembly and then deploying the jar through the Flink dashboard.
With pattern.get("first") you are selecting a pattern named "first" from the pattern sequence, but the pattern sequence only has one pattern, which is named "start". Trying changing "first" to "start".
Also, CEP has to be able to sort the stream into temporal order in order to do pattern matching. You should define a watermark strategy. For processing time semantics you can use WatermarkStrategy.noWatermarks().
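A minimal sketch of both changes against the code in the question (Stock, keyedStream, pat and the util import are the ones already defined there; this assumes processing-time semantics are acceptable):
import org.apache.flink.api.common.eventtime.WatermarkStrategy

// attach an explicit (empty) watermark strategy before handing the stream to CEP
val withWatermarks = keyedStream
  .assignTimestampsAndWatermarks(WatermarkStrategy.noWatermarks[Stock]())

val patternStream = CEP.pattern(withWatermarks, pat)

val result = patternStream.select(
  patternSelectFunction = new PatternSelectFunction[Stock, String]() {
    override def select(pattern: util.Map[String, util.List[Stock]]): String = {
      // use the name given in Pattern.begin(...): "start", not "first"
      pattern.get("start").get(0).toString
    }
  }
)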
I am trying to learn a Scala-Spark JDBC program on IntelliJ IDEA. In order to do that, I have created a Scala SBT Project and the project structure looks like:
Before writing the JDBC connection parameters in the class, I first tried loading a properties file which contains all my connection properties, and displaying whether they load properly, as below:
connection.properties content:
devUserName=username
devPassword=password
gpDriverClass=org.postgresql.Driver
gpDevUrl=jdbc:url
Code:
package com.yearpartition.obj
import java.io.FileInputStream
import java.util.Properties
import org.apache.spark.sql.SparkSession
import org.apache.log4j.{Level, LogManager, Logger}
import org.apache.spark.SparkConf
object PartitionRetrieval {
var conf = new SparkConf().setAppName("Spark-JDBC")
val properties = new Properties()
properties.load(new FileInputStream("connection.properties"))
val connectionUrl = properties.getProperty("gpDevUrl")
val devUserName=properties.getProperty("devUserName")
val devPassword=properties.getProperty("devPassword")
val gpDriverClass=properties.getProperty("gpDriverClass")
println("connectionUrl: " + connectionUrl)
Class.forName(gpDriverClass).newInstance()
def main(args: Array[String]): Unit = {
val spark = SparkSession.builder().enableHiveSupport().config(conf).master("local[2]").getOrCreate()
println("connectionUrl: " + connectionUrl)
}
}
Content of build.sbt:
name := "YearPartition"
version := "0.1"
scalaVersion := "2.11.8"
libraryDependencies ++= {
val sparkCoreVer = "2.2.0"
val sparkSqlVer = "2.2.0"
Seq(
"org.apache.spark" %% "spark-core" % sparkCoreVer % "provided" withSources(),
"org.apache.spark" %% "spark-sql" % sparkSqlVer % "provided" withSources(),
"org.json4s" %% "json4s-jackson" % "3.2.11" % "provided",
"org.apache.httpcomponents" % "httpclient" % "4.5.3"
)
}
Since I am not writing or saving data to any file, and am only trying to display the values from the properties file, I executed the code using the following:
SPARK_MAJOR_VERSION=2 spark-submit --class com.yearpartition.obj.PartitionRetrieval yearpartition_2.11-0.1.jar
But I am getting file not found exception as below:
Caused by: java.io.FileNotFoundException: connection.properties (No such file or directory)
I tried to fix it, in vain. Could anyone let me know what mistake I am making here and how I can correct it?
You must give the full path of your connection.properties file (file:///full_path/connection.properties). With this option, when you submit a job to a cluster and want to read the file from local disk, you must save the connection.properties file at the same path on every server in the cluster. Alternatively, you can read the file from HDFS. Here is a little example for reading files from HDFS:
@throws[java.io.IOException]
def readFileFromHdfs(file: String): org.apache.hadoop.fs.FSDataInputStream = {
val conf = new org.apache.hadoop.conf.Configuration
conf.set("fs.default.name", "HDFS_HOST")
val fileSystem = org.apache.hadoop.fs.FileSystem.get(conf)
val path = new org.apache.hadoop.fs.Path(file)
if (!fileSystem.exists(path)) {
println("File (" + path + ") does not exists.")
null
} else {
val in = fileSystem.open(path)
in
}
}
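For completeness, the returned FSDataInputStream can be fed straight into java.util.Properties; a small usage sketch (the HDFS path here is hypothetical):
import java.util.Properties

val in = readFileFromHdfs("/config/connection.properties") // hypothetical HDFS path
val properties = new Properties()
if (in != null) {
  properties.load(in) // FSDataInputStream is a java.io.InputStream, so Properties.load accepts it
  in.close()
}
println("connectionUrl: " + properties.getProperty("gpDevUrl"))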
I'm trying out the word count problem with Kafka Streams. I am using Kafka 1.1.0 with Scala version 2.11.12 and sbt version 1.1.4. I am getting the following error:
Exception in thread "wordcount-application-d81ee069-9307-46f1-8e71-c9f777d2db64-StreamThread-1" java.lang.UnsatisfiedLinkError: C:\Users\user\AppData\Local\Temp\librocksdbjni5439068356048679315.dll: À¦¥Y
at java.lang.ClassLoader$NativeLibrary.load(Native Method)
at java.lang.ClassLoader.loadLibrary0(ClassLoader.java:1941)
at java.lang.ClassLoader.loadLibrary(ClassLoader.java:1824)
at java.lang.Runtime.load0(Runtime.java:809)
at java.lang.System.load(System.java:1086)
at org.rocksdb.NativeLibraryLoader.loadLibraryFromJar(NativeLibraryLoader.java:78)
at org.rocksdb.NativeLibraryLoader.loadLibrary(NativeLibraryLoader.java:56)
at org.rocksdb.RocksDB.loadLibrary(RocksDB.java:64)
at org.rocksdb.RocksDB.<clinit>(RocksDB.java:35)
at org.rocksdb.Options.<clinit>(Options.java:25)
at org.apache.kafka.streams.state.internals.RocksDBStore.openDB(RocksDBStore.java:116)
at org.apache.kafka.streams.state.internals.RocksDBStore.init(RocksDBStore.java:167)
at org.apache.kafka.streams.state.internals.ChangeLoggingKeyValueBytesStore.init(ChangeLoggingKeyValueBytesStore.java:40)
at org.apache.kafka.streams.state.internals.CachingKeyValueStore.init(CachingKeyValueStore.java:63)
at org.apache.kafka.streams.state.internals.InnerMeteredKeyValueStore.init(InnerMeteredKeyValueStore.java:160)
at org.apache.kafka.streams.state.internals.MeteredKeyValueBytesStore.init(MeteredKeyValueBytesStore.java:102)
at org.apache.kafka.streams.processor.internals.AbstractTask.registerStateStores(AbstractTask.java:225)
at org.apache.kafka.streams.processor.internals.StreamTask.initializeStateStores(StreamTask.java:162)
at org.apache.kafka.streams.processor.internals.AssignedTasks.initializeNewTasks(AssignedTasks.java:88)
at org.apache.kafka.streams.processor.internals.TaskManager.updateNewAndRestoringTasks(TaskManager.java:316)
at org.apache.kafka.streams.processor.internals.StreamThread.runOnce(StreamThread.java:789)
at org.apache.kafka.streams.processor.internals.StreamThread.runLoop(StreamThread.java:750)
at org.apache.kafka.streams.processor.internals.StreamThread.run(StreamThread.java:720)
I already tried the solution given here: UnsatisfiedLinkError on Lib rocks DB dll when developing with Kafka Streams.
Here is the code that I am trying out in Scala:
import java.lang
import java.util.Properties
import java.util.concurrent.TimeUnit

import org.apache.kafka.common.serialization.Serdes
import org.apache.kafka.common.utils.Bytes
import org.apache.kafka.streams.kstream._
import org.apache.kafka.streams.state.KeyValueStore
import org.apache.kafka.streams.{KafkaStreams, StreamsBuilder, StreamsConfig}

import scala.collection.JavaConverters._

object WordCountApplication {
def main(args: Array[String]) {
val config: Properties = {
val p = new Properties()
p.put(StreamsConfig.APPLICATION_ID_CONFIG, "wordcount-application")
p.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092")
p.put(StreamsConfig.DEFAULT_KEY_SERDE_CLASS_CONFIG, Serdes.String().getClass)
p.put(StreamsConfig.DEFAULT_VALUE_SERDE_CLASS_CONFIG, Serdes.String().getClass)
p
}
val builder: StreamsBuilder = new StreamsBuilder()
val textLines: KStream[String, String] = builder.stream("streams-plaintext-input")
val afterFlatMap: KStream[String, String] = textLines.flatMapValues(new ValueMapper[String,java.lang.Iterable[String]] {
override def apply(value: String): lang.Iterable[String] = value.split("\\W+").toIterable.asJava
})
val afterGroupBy: KGroupedStream[String, String] = afterFlatMap.groupBy(new KeyValueMapper[String,String,String] {
override def apply(key: String, value: String): String = value
})
val wordCounts: KTable[String, Long] = afterGroupBy
.count(Materialized.as("counts-store").asInstanceOf[Materialized[String, Long, KeyValueStore[Bytes, Array[Byte]]]])
wordCounts.toStream().to("streams-wordcount-output ", Produced.`with`(Serdes.String(), Serdes.Long()))
val streams: KafkaStreams = new KafkaStreams(builder.build(), config)
streams.start()
Runtime.getRuntime.addShutdownHook(new Thread(
new Runnable{
override def run() = streams.close(10, TimeUnit.SECONDS)}
))
}
}
Build.sbt
name := "KafkaStreamDemo"
version := "0.1"
scalaVersion := "2.11.12"
libraryDependencies ++= Seq(
"org.apache.kafka" %% "kafka" % "1.1.0",
"org.apache.kafka" % "kafka-clients" % "1.1.0",
"org.apache.kafka" % "kafka-streams" % "1.1.0",
"ch.qos.logback" % "logback-classic" % "1.2.3"
)
If anyone has faced such problem, kindly help.
Finally, I found an answer which works. I followed Unable to load rocksdbjni.
I did two things which worked for me.
1) I installed Visual C++ Redistributable for Visual Studio 2015.
2) Earlier, I was using rocksdb 5.7.3 with kafka-streams 1.1.0 (rocksdb 5.7.3 comes by default with kafka-streams 1.1.0). I excluded the rocksdb dependency from the kafka-streams dependency and installed rocksdb 5.3.6. For reference, below is my build.sbt right now.
name := "KafkaStreamDemo"
version := "0.1"
scalaVersion := "2.12.5"
libraryDependencies ++= Seq(
"org.apache.kafka" %% "kafka" % "1.1.0",
"org.apache.kafka" % "kafka-clients" % "1.1.0",
"org.apache.kafka" % "kafka-streams" % "1.1.0" exclude("org.rocksdb","rocksdbjni"),
"ch.qos.logback" % "logback-classic" % "1.2.3",
"org.rocksdb" % "rocksdbjni" % "5.3.6"
)
Hope it helps someone.
Thanks
In my case I was using kafka-streams:1.0.2. Changing the base docker image from alpine-jdk8:latest to openjdk:8-jre worked.
This link - https://github.com/docker-flink/docker-flink/pull/22 helped me arrive at this solution.
I've read the jackson-module-scala page on enumeration handling (https://github.com/FasterXML/jackson-module-scala/wiki/Enumerations). Still I'm not getting it to work. The essential code goes like this:
#Path("/v1/admin")
#Produces(Array(MediaType.APPLICATION_JSON + ";charset=utf-8"))
#Consumes(Array(MediaType.APPLICATION_JSON + ";charset=utf-8"))
class RestService {
#POST
#Path("{type}/abort")
def abortUpload(#PathParam("type") typeName: ResourceTypeHolder) {
...
}
}
object ResourceType extends Enumeration {
type ResourceType = Value
val ssr, roadsegments, tmc, gab, tne = Value
}
class ResourceTypeType extends TypeReference[ResourceType.type]
case class ResourceTypeHolder(
@JsonScalaEnumeration(classOf[ResourceTypeType])
resourceType:ResourceType.ResourceType
)
This is how it's supposed to work, right? Still I get these errors:
Following issues have been detected:
WARNING: No injection source found for a parameter of type public void no.tull.RestService.abortUpload(no.tull.ResourceTypeHolder) at index 0.
unavailable
org.glassfish.jersey.server.model.ModelValidationException: Validation of the application resource model has failed during application initialization.
[[FATAL] No injection source found for a parameter of type public void no.tull.RestService.abortUpload(no.tull.ResourceTypeHolder) at index 0.; source='ResourceMethod{httpMethod=POST, consumedTypes=[application/json; charset=utf-8], producedTypes=[application/json; charset=utf-8], suspended=false, suspendTimeout=0, suspendTimeoutUnit=MILLISECONDS, invocable=Invocable{handler=ClassBasedMethodHandler{handlerClass=class no.tull.RestService, handlerConstructors=[org.glassfish.jersey.server.model.HandlerConstructor#7ffe609f]}, definitionMethod=public void no.tull.RestService.abortUpload(no.tull.ResourceTypeHolder), parameters=[Parameter [type=class no.tull.ResourceTypeHolder, source=type, defaultValue=null]], responseType=void}, nameBindings=[]}']
at org.glassfish.jersey.server.ApplicationHandler.initialize(ApplicationHandler.java:467)
at org.glassfish.jersey.server.ApplicationHandler.access$500(ApplicationHandler.java:163)
at org.glassfish.jersey.server.ApplicationHandler$3.run(ApplicationHandler.java:323)
at org.glassfish.jersey.internal.Errors$2.call(Errors.java:289)
at org.glassfish.jersey.internal.Errors$2.call(Errors.java:286)
I have also assembled a tiny runnable project (while trying to eliminate any other complications) that demonstrates the problem: project.tgz
Update: I created an sbt build to see if Gradle was producing a strange build. Got the same result, but this is the build.sbt:
name := "project"
version := "1.0"
scalaVersion := "2.10.4"
val jacksonVersion = "2.4.1"
val jerseyVersion = "2.13"
libraryDependencies ++= Seq(
"com.fasterxml.jackson.core" % "jackson-annotations" % jacksonVersion,
"com.fasterxml.jackson.core" % "jackson-databind" % jacksonVersion,
"com.fasterxml.jackson.jaxrs" % "jackson-jaxrs-json-provider" % jacksonVersion,
"com.fasterxml.jackson.jaxrs" % "jackson-jaxrs-base" % jacksonVersion,
"com.fasterxml.jackson.module" % "jackson-module-scala_2.10" % jacksonVersion,
"org.glassfish.jersey.containers" % "jersey-container-servlet-core" % jerseyVersion
)
seq(webSettings :_*)
libraryDependencies ++= Seq(
"org.eclipse.jetty" % "jetty-webapp" % "9.1.0.v20131115" % "container",
"org.eclipse.jetty" % "jetty-plus" % "9.1.0.v20131115" % "container"
)
... and this is the project/plugins.sbt:
addSbtPlugin("com.earldouglas" % "xsbt-web-plugin" % "0.9.0")
You seem to have a few problems with your tarball.
You need to add some Scala modules to Jackson to be able to use any Scala functionality. That can be done by doing this:
val jsonObjectMapper = new ObjectMapper()
jsonObjectMapper.registerModule(DefaultScalaModule)
val jsonProvider: JacksonJsonProvider = new JacksonJsonProvider(jsonObjectMapper)
This is according to this working jersey-jackson example. You also need to register org.glassfish.jersey.jackson.JacksonFeature with Jersey, which is found in jersey-media-json-jackson. My RestApplication.scala came out like this:
import javax.ws.rs.core.Application
import javax.ws.rs.ext.{ContextResolver, Provider}
import com.fasterxml.jackson.databind.ObjectMapper
import com.fasterxml.jackson.module.scala.DefaultScalaModule
import com.google.common.collect.ImmutableSet
import org.glassfish.jersey.jackson.JacksonFeature
@Provider
class ObjectMapperProvider extends ContextResolver[ObjectMapper] {
val defaultObjectMapper = {
val jsonObjectMapper = new ObjectMapper()
jsonObjectMapper.registerModule(DefaultScalaModule)
jsonObjectMapper
}
override def getContext(typ: Class[_]): ObjectMapper = {
defaultObjectMapper
}
}
class RestApplication extends Application {
override def getSingletons: java.util.Set[AnyRef] = {
ImmutableSet.of(
new RestService,
new ObjectMapperProvider,
new JacksonFeature
)
}
}
The real issue, though, is the @PathParam annotation. This code path doesn't invoke Jackson at all. However, what's interesting is that Jersey appears to generically support parsing into any type that has a constructor taking a single String. So if you modify your ResourceTypeHolder you can get the functionality you want after all.
case class ResourceTypeHolder(@JsonScalaEnumeration(classOf[ResourceTypeType]) resourceType: ResourceType.ResourceType) {
  def this(name: String) = this(ResourceType.withName(name))
}
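To illustrate what that single-String constructor buys you: for a request to /v1/admin/ssr/abort ("ssr" being one of the enum values defined above), Jersey effectively does something like this, so unknown names fail fast:
// Jersey builds the holder from the raw path segment via the auxiliary constructor
val holder = new ResourceTypeHolder("ssr") // delegates to ResourceType.withName("ssr")

// an unknown segment makes ResourceType.withName throw java.util.NoSuchElementException,
// which Jersey typically surfaces as a 404 for path parameters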
}
You might be able to add generic support for enum holders to Jersey as an injectable provider. However, that hasn't come up in dropwizard-scala, a project that would suffer the same fate as it uses Jersey too. Thus I imagine it's either impossible, or simply just not common enough for anyone to have done the work. When it comes to enum's, I tend to keep mine in Java.