Hazelcast server with Scala client issue

I am trying to set up a Hazelcast server and client on my local machine, and to connect to the local Hazelcast server with the Scala client.
For the server I used the code below:
import com.hazelcast.config._
import com.hazelcast.Scala._

object HazelcastServer {
  def main(args: Array[String]): Unit = {
    val conf = new Config
    serialization.Defaults.register(conf.getSerializationConfig)
    serialization.DynamicExecution.register(conf.getSerializationConfig)
    val hz = conf.newInstance()
    val cmap = hz.getMap[String, String]("test")
    cmap.put("a", "A")
    cmap.put("b", "B")
  }
}
and the Hazelcast client as:
import com.hazelcast.Scala._
import client._
import com.hazelcast.client._
import com.hazelcast.config._

object Hazelcast_Client {
  def main(args: Array[String]): Unit = {
    val conf = new Config
    serialization.Defaults.register(conf.getSerializationConfig)
    serialization.DynamicExecution.register(conf.getSerializationConfig)
    val hz = conf.newClient()
    val cmap = hz.getMap("test")
    println(cmap.size())
  }
}
In my build.sbt,
libraryDependencies += "com.hazelcast" % "hazelcast" % "3.7.2"
libraryDependencies += "com.hazelcast" %% "hazelcast-scala" % "3.7.2"
I am getting the error below and am stuck on dependency issues.
Symbol 'type <none>.config.ClientConfig' is missing from the classpath.
[error] This symbol is required by 'value com.hazelcast.Scala.client.package.conf'.
[error] Make sure that type ClientConfig is in your classpath and check for conflicting dependencies with `-Ylog-classpath`.
[error] A full rebuild may help if 'package.class' was compiled against an incompatible version of <none>.config.
[error] val conf = new Config
I referred to the Hazelcast documentation, but I am not able to find any good Hazelcast Scala examples to understand the setup and start playing with. If anybody can help solve this issue, or share really good Scala examples, that would be helpful.

I've done a Scala+Akka Hazelcast project before. My build.sbt included:
libraryDependencies += "com.hazelcast" % "hazelcast-all" % "3.7.2"
I seem to remember that hazelcast-all was required.
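The missing symbol is the clue: in Hazelcast 3.x the client classes (such as com.hazelcast.client.config.ClientConfig) ship in the separate hazelcast-client artifact, which hazelcast-all bundles. A minimal sketch of the dependency block (versions copied from the question; the exclude is an assumption, to avoid having the plain hazelcast classes on the classpath twice):

libraryDependencies ++= Seq(
  "com.hazelcast" % "hazelcast-all" % "3.7.2",
  ("com.hazelcast" %% "hazelcast-scala" % "3.7.2")
    .exclude("com.hazelcast", "hazelcast") // hazelcast-all already contains these classes (assumption)
)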

Related

How to use Flink's KafkaSource with Scala in 2022

I've checked out this similar but 7-year-old question, but it does not apply to newer Flink versions.
I'm trying to get a simple Flink Kafka job running and have tried various versions, getting different compile errors for each. I'm using sbt to manage my dependencies:
val flinkDependencies = Seq(
  "org.apache.flink" %% "flink-clients" % flinkVersion,
  "org.apache.flink" %% "flink-scala" % flinkVersion,
  "org.apache.flink" %% "flink-streaming-scala" % flinkVersion,
  "org.apache.flink" %% "flink-connector-kafka" % flinkVersion
)
Versions tried:
Scala 2.11.12 and 2.12.15
Flink 1.14.6
The code I'm trying to compile (relevant bits):
import org.apache.flink.streaming.util.serialization.SimpleStringSchema
import org.apache.flink.connector.kafka.source.KafkaSource
...
val env = ExecutionEnvironment.getExecutionEnvironment
val kafkaConsumer = new KafkaSource.builder[String]
  .setBootstrapservers("localhost:9092")
  .setGroupId("flink")
  .setTopics("test")
  .build()
val text = env.fromSource(kafkaConsumer)
I did not find an official example confirming that this is indeed how one is supposed to use KafkaSource, but I found this setup here and here. To my still very new Java eyes this looks aligned with the API docs. But I can't get it to work with either Scala version:
[error] somepathwithmyfile: type builder is not a member of object org.apache.flink.connector.kafka.source.KafkaSource
[error] val kafkaConsumer = new KafkaSource.builder[String]
[error] ^
[error] somepathwithmyfile: value fromSource is not a member of org.apache.flink.api.scala.ExecutionEnvironment
[error] val text = env.fromSource(kafkaConsumer)
[error] ^
[error] two errors found
For the first problem, drop the new:
val kafkaConsumer = KafkaSource.builder[String]
...
For the second problem, fromSource requires three arguments:
/** Create a DataStream using a [[Source]]. */
@Experimental
def fromSource[T: TypeInformation](
    source: Source[T, _ <: SourceSplit, _],
    watermarkStrategy: WatermarkStrategy[T],
    sourceName: String): DataStream[T] = {
  val typeInfo = implicitly[TypeInformation[T]]
  asScalaStream(javaEnv.fromSource(source, watermarkStrategy, sourceName, typeInfo))
}
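Putting the two fixes together, a minimal sketch of the job setup (a sketch under assumptions: Flink 1.14's KafkaSource builder API, and a StreamExecutionEnvironment rather than the batch ExecutionEnvironment from the question, since fromSource is only defined on the streaming environment):

import org.apache.flink.api.common.eventtime.WatermarkStrategy
import org.apache.flink.api.common.serialization.SimpleStringSchema
import org.apache.flink.connector.kafka.source.KafkaSource
import org.apache.flink.streaming.api.scala._

val env = StreamExecutionEnvironment.getExecutionEnvironment

// builder is a static factory method, so no new; a deserializer is also required
val source = KafkaSource.builder[String]()
  .setBootstrapServers("localhost:9092")
  .setGroupId("flink")
  .setTopics("test")
  .setValueOnlyDeserializer(new SimpleStringSchema())
  .build()

// fromSource takes the source, a watermark strategy, and a source name
val text = env.fromSource(source, WatermarkStrategy.noWatermarks[String](), "kafka-source")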
Also, note that Flink does not (yet) support Scala 2.12.15; see https://issues.apache.org/jira/browse/FLINK-20969. However, Flink 1.15 can be used with newer versions of Scala (including Scala 3) if you exclude Flink's built-in Scala API support. See https://flink.apache.org/2022/02/22/scala-free.html for more on this.

How to use a standalone WSClient in a Scala application

I would like to use ws in a standalone application. I am trying this code, copied from https://gist.github.com/cdimascio/46b2b7d2986636c1189c:
import com.ning.http.client.AsyncHttpClientConfig
import play.api.libs.ws.ning._
import play.api.libs.ws._
// provide an execution context
import scala.concurrent.ExecutionContext.Implicits.global

object WSStandaloneTest {
  def main(args: Array[String]) {
    // set up the client
    val config = new NingAsyncHttpClientConfigBuilder(DefaultWSClientConfig()).build
    val builder = new AsyncHttpClientConfig.Builder(config)
    val client = new NingWSClient(builder.build)

    // execute a GET request
    val response = client.url("http://www.example.com").get

    // print the response body
    response.foreach(r => {
      println(r.body)
      // not the best place to close the client,
      // but it ensures we don't close the threads before the response arrives
      // Good enough for the gist :-D
      client.close()
    })
  }
}
Results in the following error:
[error] object ning is not a member of package play.api.libs.ws
[error] import play.api.libs.ws.ning._
In my build.sbt I have this:
libraryDependencies += "com.typesafe.play" %% "play-json" % "2.6.1"
libraryDependencies += "com.typesafe.play" %% "play-ws" % "2.6.1"
What am I doing wrong?
NingWSClient is deprecated in Play 2.5.x.
In 2.6.x, the ning package has been replaced by the ahc package, and the Ning* classes by AHC* classes.
There is a migration guide available in the official docs.
So you can either downgrade to 2.5.x and keep using ning, or update the code.
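For the update path, a minimal sketch against the 2.6-era standalone API (assumptions: the standalone client lives in the separate play-ahc-ws-standalone artifact, and the version below may need adjusting):

// build.sbt (assumed artifact name and version):
// libraryDependencies += "com.typesafe.play" %% "play-ahc-ws-standalone" % "1.0.0"

import akka.actor.ActorSystem
import akka.stream.ActorMaterializer
import play.api.libs.ws.ahc.StandaloneAhcWSClient
import scala.concurrent.ExecutionContext.Implicits.global

object WSStandaloneTest {
  def main(args: Array[String]): Unit = {
    // the standalone AHC client is built on Akka Streams, so it needs a materializer
    implicit val system = ActorSystem()
    implicit val materializer = ActorMaterializer()
    val client = StandaloneAhcWSClient()

    client.url("http://www.example.com").get().foreach { r =>
      println(r.body)
      // as in the original gist: close only after the response arrives
      client.close()
      system.terminate()
    }
  }
}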

Spark not working with pureconfig

I'm trying to use PureConfig and ConfigFactory for my Spark application configuration.
Here is my code:
import com.typesafe.config.Config
import pureconfig.loadConfigOrThrow

object Source {
  def apply(keyName: String, configArguments: Config): Source = {
    keyName.toLowerCase match {
      case "mysql" =>
        val properties = loadConfigOrThrow[DBConnectionProperties](configArguments)
        new MysqlSource(None, properties)
      case "files" =>
        val properties = loadConfigOrThrow[FilesSourceProperties](configArguments)
        new Files(properties)
      case _ => throw new NoSuchElementException(s"Unknown Source ${keyName.toLowerCase}")
    }
  }
}
import Source
val config = ConfigFactory.parseString(result.mkString("\n"))
val source = Source("mysql",config.getConfig("source.mysql"))
When I run it from the IDE (IntelliJ) or directly from Java (i.e. java -jar ...), it works fine.
But when I run it with spark-submit, it fails with the following error:
Exception in thread "main" java.lang.NoSuchMethodError: shapeless.Witness$.mkWitness(Ljava/lang/Object;)Lshapeless/Witness;
From a quick search I found a question similar to this one, which suggests the reason is that both Spark and PureConfig depend on shapeless, but with different versions. I tried to shade it as suggested in the answer:
assemblyShadeRules in assembly := Seq(
  ShadeRule.rename("shapeless.**" -> "shadeshapless.@1")
    .inLibrary("com.github.pureconfig" %% "pureconfig" % "0.7.0").inProject
)
but that didn't work either.
Could it be for a different reason? Any idea what might work?
Thanks
You also have to shade shapeless inside its own JAR, in addition to pureconfig:
assemblyShadeRules in assembly := Seq(
  ShadeRule.rename("shapeless.**" -> "shadeshapless.@1")
    .inLibrary("com.chuusai" % "shapeless_2.11" % "2.3.2")
    .inLibrary("com.github.pureconfig" %% "pureconfig" % "0.7.0")
    .inProject
)
Make sure to add the correct shapeless version.
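To see which shapeless version actually ends up on the classpath (so the shade rule targets the right coordinates), one option is the sbt-dependency-graph plugin; a sketch (the plugin version is an assumption, check its README):

// project/plugins.sbt
addSbtPlugin("net.virtual-void" % "sbt-dependency-graph" % "0.9.2")

Then run sbt dependencyTree and look for the com.chuusai:shapeless entries pulled in by Spark and by PureConfig.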

How to do Slick configuration via application.conf from within custom sbt task?

I want to create an sbt task which creates a database schema with Slick. For that, I have a task object like the following in my project:
object CreateSchema {
  val instance = Database.forConfig("localDb")

  def main(args: Array[String]) {
    val createFuture = instance.run(createActions)
    ...
    Await.ready(createFuture, Duration.Inf)
  }
}
and in my build.sbt I define a task:
lazy val createSchema = taskKey[Unit]("CREATE database schema")
fullRunTask(createSchema, Runtime, "sbt.CreateSchema")
which gets executed as expected when I run sbt createSchema from the command line.
However, the problem is that application.conf doesn't seem to be taken into account (I've also tried different scopes like Compile or Test). As a result, the task fails with com.typesafe.config.ConfigException$Missing: No configuration setting found for key 'localDb'.
How can I fix this so the configuration is available?
I found a lot of questions here that deal with using the application.conf inside the build.sbt itself, but that is not what I need.
I have set up a little demo using SBT 0.13.8 and Slick 3.0.0, which is working as expected (even without modifying -Dconfig.resource).
Files
./build.sbt
name := "SO_20150915"
version := "1.0"
scalaVersion := "2.11.7"
libraryDependencies ++= Seq(
"com.typesafe" % "config" % "1.3.0" withSources() withJavadoc(),
"com.typesafe.slick" %% "slick" % "3.0.0",
"org.slf4j" % "slf4j-nop" % "1.6.4",
"com.h2database" % "h2" % "1.3.175"
)
lazy val createSchema = taskKey[Unit]("CREATE database schema")
fullRunTask(createSchema, Runtime, "somefun.CallMe")
./project/build.properties
sbt.version = 0.13.8
./src/main/resources/reference.conf
hello {
world = "buuh."
}
h2mem1 = {
url = "jdbc:h2:mem:test1"
driver = org.h2.Driver
connectionPool = disabled
keepAliveConnection = true
}
./src/main/scala/somefun/CallMe.scala
package somefun
import com.typesafe.config.Config
import com.typesafe.config.ConfigFactory
import slick.driver.H2Driver.api._
/**
* SO_20150915
* Created by martin on 15.09.15.
*/
object CallMe {
def main(args: Array[String]) : Unit = {
println("Hello")
val settings = new Settings()
println(s"Settings read from hello.world: ${settings.hw}" )
val db = Database.forConfig("h2mem1")
try {
// ...
println("Do something with your database.")
} finally db.close
}
}
class Settings(val config: Config) {
// This verifies that the Config is sane and has our
// reference config. Importantly, we specify the "di3"
// path so we only validate settings that belong to this
// library. Otherwise, we might throw mistaken errors about
// settings we know nothing about.
config.checkValid(ConfigFactory.defaultReference(), "hello")
// This uses the standard default Config, if none is provided,
// which simplifies apps willing to use the defaults
def this() {
this(ConfigFactory.load())
}
val hw = config.getString("hello.world")
}
Result
If running sbt createSchema from Console I obtain the output
[info] Loading project definition from /home/.../SO_20150915/project
[info] Set current project to SO_20150915 (in build file:/home/.../SO_20150915/)
[info] Running somefun.CallMe
Hello
Settings read from hello.world: buuh.
Do something with your database.
[success] Total time: 1 s, completed 15.09.2015 10:42:20
Ideas
Please verify that this unmodified demo project also works for you.
Then try changing the SBT version in the demo project and see if that changes anything.
Alternatively, recheck your project setup and try using a higher version of SBT.
Answer
So, even if your code resides in your src folder, it is called from within SBT. That means you are trying to load your application.conf from within the classpath context of SBT.
Slick uses Typesafe Config internally, so the approach described below under background information is not applicable, as you cannot modify the Config loading mechanism itself.
Instead, try setting the path to your application.conf explicitly via config.resource; see the Typesafe Config documentation (search for config.resource).
Option 1
Either set config.resource (via -Dconfig.resource=...) before starting sbt
Option 2
Or from within build.sbt as Scala code
sys.props("config.resource") = "./src/main/resources/application.conf"
Option 3
Or create a Task in SBT via
lazy val configPath = TaskKey[Unit]("configPath", "Set path for application.conf")
and add
configPath := {
  sys.props("config.resource") = "./src/main/resources/application.conf"
}
to your sequence of settings.
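For Option 3 to have any effect, the property must be set before the schema task runs; one way to wire that up (a sketch in sbt 0.13 syntax, reusing the createSchema key from the question) is:

createSchema := (createSchema dependsOn configPath).value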
Please let me know if that worked.
Background information
Recently, I was writing a custom plugin for SBT, where I also tried to access a reference.conf. Unfortunately, I was not able to access any .conf placed within the project subfolder using the default ClassLoader.
In the end I created a testenvironment.conf in the project folder and used the following code to load the (Typesafe) config:
def getConfig: Config = {
  val classLoader = new java.net.URLClassLoader(Array(new File("./project/").toURI.toURL))
  ConfigFactory.load(classLoader, "testenvironment")
}
or, for loading a general application.conf from ./src/main/resources:
def getConfig: Config = {
  val classLoader = new java.net.URLClassLoader(Array(new File("./src/main/resources/").toURI.toURL))
  // no .conf basename given, so look for reference.conf and application.conf
  // use the specific classLoader
  ConfigFactory.load(classLoader)
}
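A small usage sketch for the helper above (hypothetical keys, assuming getConfig is in scope of your plugin or build code):

// e.g. read a value from testenvironment.conf in ./project/
val config = getConfig
val dbUrl = config.getString("db.url") // hypothetical key, for illustration only
println(s"Using database at $dbUrl")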

No RowReaderFactory can be found for this type error when trying to map Cassandra row to case object using spark-cassandra-connector

I am trying to get a simple example working that maps rows from Cassandra to a Scala case class using Apache Spark 1.1.1, Cassandra 2.0.11, and the spark-cassandra-connector (v1.1.0). I have reviewed the documentation at the spark-cassandra-connector GitHub page, planetcassandra.org, and datastax, and generally searched around, but have not found anyone else encountering this issue. So here goes...
I am building a tiny Spark application using sbt (0.13.5), Scala 2.10.4, and Spark 1.1.1 against Cassandra 2.0.11. Modelled on the example from the spark-cassandra-connector docs, the following two lines produce an error in my IDE and fail to compile:
case class SubHuman(id:String, firstname:String, lastname:String, isGoodPerson:Boolean)
val foo = sc.cassandraTable[SubHuman]("nicecase", "human").select("id","firstname","lastname","isGoodPerson").toArray
The simple error presented by Eclipse is:
No RowReaderFactory can be found for this type
The compile error is only slightly more verbose:
> compile
[info] Compiling 1 Scala source to /home/bkarels/dev/simple-case/target/scala-2.10/classes...
[error] /home/bkarels/dev/simple-case/src/main/scala/com/bradkarels/simple/SimpleApp.scala:82: No RowReaderFactory can be found for this type
[error] val foo = sc.cassandraTable[SubHuman]("nicecase", "human").select("id","firstname","lastname","isGoodPerson").toArray
[error] ^
[error] one error found
[error] (compile:compile) Compilation failed
[error] Total time: 1 s, completed Dec 10, 2014 9:01:30 AM
>
Scala source:
package com.bradkarels.simple

import org.apache.spark.SparkContext
import org.apache.spark.SparkContext._
import org.apache.spark.SparkConf
import com.datastax.spark.connector._
import com.datastax.spark.connector.rdd._
// Likely don't need this import - but throwing darts hits the bullseye once in a while...
import com.datastax.spark.connector.rdd.reader.RowReaderFactory

object CaseStudy {
  def main(args: Array[String]) {
    val conf = new SparkConf(true)
      .set("spark.cassandra.connection.host", "127.0.0.1")
    val sc = new SparkContext("spark://127.0.0.1:7077", "simple", conf)

    case class SubHuman(id: String, firstname: String, lastname: String, isGoodPerson: Boolean)
    val foo = sc.cassandraTable[SubHuman]("nicecase", "human").select("id", "firstname", "lastname", "isGoodPerson").toArray
  }
}
With the bothersome lines removed, everything compiles fine, assembly works, and I can perform other Spark operations normally. For example, if I remove the problem lines and drop in:
val rdd: CassandraRDD[CassandraRow] = sc.cassandraTable("nicecase", "human")
I get back the RDD and can work with it as expected. That said, I suspect that my sbt project, assembly plugin, etc. are not contributing to the issue. The working source (minus the new attempt to map to a case class as the connector intended) can be found on github here.
But, to be more thorough, my build.sbt:
name := "Simple Case"
version := "0.0.1"
scalaVersion := "2.10.4"
libraryDependencies ++= Seq(
"org.apache.spark" %% "spark-core" % "1.1.1",
"org.apache.spark" %% "spark-sql" % "1.1.1",
"com.datastax.spark" %% "spark-cassandra-connector" % "1.1.0" withSources() withJavadoc()
)
So the question is: what have I missed? I'm hoping this is something silly, but if you have encountered this and can help me get past this puzzling little issue, I would very much appreciate it. Please let me know if there are any other details that would be helpful in troubleshooting.
Thank you.
This may be my newness with Scala in general, but I resolved this issue by moving the case class declaration out of the main method. (Presumably the connector's implicit RowReaderFactory cannot be derived for a case class that is local to a method.) The simplified source now looks like this:
package com.bradkarels.simple

import org.apache.spark.SparkContext
import org.apache.spark.SparkContext._
import org.apache.spark.SparkConf
import com.datastax.spark.connector._
import com.datastax.spark.connector.rdd._

object CaseStudy {
  case class SubHuman(id: String, firstname: String, lastname: String, isGoodPerson: Boolean)

  def main(args: Array[String]) {
    val conf = new SparkConf(true)
      .set("spark.cassandra.connection.host", "127.0.0.1")
    val sc = new SparkContext("spark://127.0.0.1:7077", "simple", conf)

    val foo = sc.cassandraTable[SubHuman]("nicecase", "human").select("id", "firstname", "lastname", "isGoodPerson").toArray
  }
}
The complete source (updated & fixed) can be found on github https://github.com/bradkarels/spark-cassandra-to-scala-case-class