I'm using Play 2.6 and Scala 2.12 in my app, with Elasticsearch via elastic4s.
In my build.sbt:
"com.sksamuel.elastic4s" %% "elastic4s-core" % "5.6.0",
"com.sksamuel.elastic4s" %% "elastic4s-http" % "5.6.0",
and I keep getting this exception in my log:
Exception in thread "I/O dispatcher 16" java.lang.NoClassDefFoundError: Could not initialize class com.sksamuel.elastic4s.http.JacksonSupport$
at com.sksamuel.elastic4s.http.ResponseHandler$.fromEntity(ResponseHandler.scala:47)
at com.sksamuel.elastic4s.http.DefaultResponseHandler.$anonfun$onResponse$1(ResponseHandler.scala:55)
at scala.util.Try$.apply(Try.scala:209)
at com.sksamuel.elastic4s.http.DefaultResponseHandler.onResponse(ResponseHandler.scala:55)
at com.sksamuel.elastic4s.http.HttpExecutable$RichRestClient$$anon$1.onSuccess(HttpExecutable.scala:27)
at org.elasticsearch.client.RestClient$FailureTrackingResponseListener.onSuccess(RestClient.java:597)
at org.elasticsearch.client.RestClient$1.completed(RestClient.java:352)
at org.elasticsearch.client.RestClient$1.completed(RestClient.java:343)
at org.apache.http.concurrent.BasicFuture.completed(BasicFuture.java:119)
at org.apache.http.impl.nio.client.DefaultClientExchangeHandlerImpl.responseCompleted(DefaultClientExchangeHandlerImpl.java:177)
at org.apache.http.nio.protocol.HttpAsyncRequestExecutor.processResponse(HttpAsyncRequestExecutor.java:436)
at org.apache.http.nio.protocol.HttpAsyncRequestExecutor.inputReady(HttpAsyncRequestExecutor.java:326)
at org.apache.http.impl.nio.DefaultNHttpClientConnection.consumeInput(DefaultNHttpClientConnection.java:265)
at org.apache.http.impl.nio.client.InternalIODispatch.onInputReady(InternalIODispatch.java:81)
at org.apache.http.impl.nio.client.InternalIODispatch.onInputReady(InternalIODispatch.java:39)
at org.apache.http.impl.nio.reactor.AbstractIODispatch.inputReady(AbstractIODispatch.java:114)
at org.apache.http.impl.nio.reactor.BaseIOReactor.readable(BaseIOReactor.java:162)
at org.apache.http.impl.nio.reactor.AbstractIOReactor.processEvent(AbstractIOReactor.java:337)
at org.apache.http.impl.nio.reactor.AbstractIOReactor.processEvents(AbstractIOReactor.java:315)
at org.apache.http.impl.nio.reactor.AbstractIOReactor.execute(AbstractIOReactor.java:276)
at org.apache.http.impl.nio.reactor.BaseIOReactor.execute(BaseIOReactor.java:104)
at org.apache.http.impl.nio.reactor.AbstractMultiworkerIOReactor$Worker.run(AbstractMultiworkerIOReactor.java:588)
at java.lang.Thread.run(Thread.java:748)
I'm trying to figure out what could cause this.
In another log I also saw:
Caused by: com.fasterxml.jackson.databind.JsonMappingException: Scala module 2.8.8 requires Jackson Databind version >= 2.8.0 and < 2.9.0
so it looks like a Jackson version conflict, but I'm not sure where to update it or where exactly it's failing...
Has anyone ever had this issue?
thanks!
Double-check that you are importing elastic4s-jackson:
libraryDependencies += "com.sksamuel.elastic4s" %% "elastic4s-jackson" % "5.6.0"
I had the exact same issue with Play 2.8 and elastic4s 6.7.3 using the Jackson module too.
What helped me was running sbt's evicted task, looking at the dependencies each library pulled in, and figuring out why it couldn't load that particular class. Ultimately, upgrading to 6.7.8 resolved the issue.
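If evicted shows a newer jackson-databind being selected over the 2.8.x line that the Scala module expects, one option is to force the version in build.sbt. This is just a sketch, not something I've verified against elastic4s 5.6.0:

// Sketch: pin jackson-databind to the range jackson-module-scala 2.8.8 accepts
// (>= 2.8.0 and < 2.9.0, per the error in the question)
dependencyOverrides += "com.fasterxml.jackson.core" % "jackson-databind" % "2.8.8"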
Also, if you look in the build.sbt file for the elastic4s project, it lists all the versions it is using: https://github.com/sksamuel/elastic4s/blob/master/build.sbt
You can also use jackson-module-scala, exclude the com.fasterxml.jackson artifacts referenced by elastic4s-http, and verify that com.fasterxml.jackson is available on the classpath when the application is running.
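For example, something along these lines in build.sbt (a sketch; I haven't verified these exact exclusions against 5.6.0):

libraryDependencies ++= Seq(
  "com.sksamuel.elastic4s" %% "elastic4s-http" % "5.6.0" exclude("com.fasterxml.jackson.core", "jackson-databind"),
  // pin a databind version in the range the Scala module 2.8.8 accepts
  "com.fasterxml.jackson.core" % "jackson-databind" % "2.8.8",
  "com.fasterxml.jackson.module" %% "jackson-module-scala" % "2.8.8"
)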
I am trying to use the Mongo Scala client, but when I initialize the client I get the following exception:
java.lang.NoSuchMethodError: com.mongodb.MongoClientSettings.getUuidRepresentation()Lorg/bson/UuidRepresentation;
In my project, I also use the Mongo Java client for other purposes.
I saw this question and its solution, so I guess it is probably the same issue (library conflicts).
The problem is that I prefer to keep the design of two clients: one Java and one Scala (BTW, each client is for a different Mongo cluster).
I wonder how I can achieve that.
build.sbt:
"org.mongodb" % "mongo-java-driver" % "3.11.1",
"org.mongodb.scala" %% "mongo-scala-driver" % "2.9.0",
code:
val mongoClient: MongoClient = org.mongodb.scala.MongoClient(server)
Thanks a lot
Seems like there is no solution for having two clients. Eventually, I used one Java client :(
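For anyone who still wants two clients: the usual root cause here is that mongo-java-driver is an uber jar bundling its own copies of the core com.mongodb classes, while mongo-scala-driver pulls in the modular mongodb-driver-core, so two different versions of the same classes end up on the classpath. A hedged sketch of one thing that may be worth trying (the artifact coordinates are real, but I haven't verified that this resolves the getUuidRepresentation mismatch):

// Sketch: use the modular sync driver instead of the uber mongo-java-driver jar,
// so both the Java and Scala clients resolve to a single mongodb-driver-core
libraryDependencies ++= Seq(
  "org.mongodb" % "mongodb-driver-sync" % "3.11.1",
  "org.mongodb.scala" %% "mongo-scala-driver" % "2.9.0"
)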
I've been trying to solve this issue for a long time and still can't wrap my head around it; it seems the problem goes deeper and is a massive issue within the library.
val cloudStorage = "com.google.cloud" % "google-cloud-storage" % googleCloudV exclude ("com.google.guava", "guava")
val cloudHadoop = "com.google.cloud.bigdataoss" % "gcs-connector" % googleHadoopV exclude ("org.apache.hadoop", "hadoop-common") exclude ("org.apache.hadoop", "hadoop-mapreduce-client-core") exclude ("com.google.guava", "guava")
val guava = "com.google.guava" % "guava" % guavaV
After reading tons of posts, I still can't make it run when deploying within a Dataproc cluster, where it crashes with the following error.
Exception in thread "main" java.lang.NoSuchMethodError: com.google.common.base.Preconditions.checkArgument(ZLjava/lang/String;Ljava/lang/Object;)V
at com.google.cloud.storage.StorageImpl.optionMap(StorageImpl.java:1480)
at com.google.cloud.storage.StorageImpl.optionMap(StorageImpl.java:1469)
at com.google.cloud.storage.StorageImpl.optionMap(StorageImpl.java:1502)
at com.google.cloud.storage.StorageImpl.list(StorageImpl.java:326)
I've also tried shading...
assemblyShadeRules in assembly := Seq(
  // note: sbt-assembly's rename placeholder is @1, not #1
  ShadeRule.rename("com.google.common.**" -> "repackaged.com.google.common.@1").inAll
)
And still, no results when it comes to the Guava dependency issues.
The versions of the GCP dependencies that I'm running (together with Spark 2.3.0) are the following:
val googleCloudV = "1.98.0"
val googleHadoopV = "hadoop3-2.0.0"
val guavaV = "28.0-jre"
These are the latest possible versions.
I hope someone can shed some light on this, because it's something really odd that nobody seems able to solve.
This is a well-known problem with Hadoop: dependencies like Guava are not shaded, so when you depend on a much later Guava than Hadoop's, the much older version from Hadoop's jars can win on the classpath, and that version does not have the method.
The solution is to shade Guava (and any other conflicting dependencies) within your job jar.
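Concretely, with sbt-assembly that looks something like the following (a sketch: the plugin version and rename prefix are illustrative):

// project/plugins.sbt
addSbtPlugin("com.eed3si9n" % "sbt-assembly" % "0.14.10")

// build.sbt: rename Guava's packages inside the job jar so the older Guava
// that Hadoop/Spark put on the cluster classpath can no longer shadow yours
assemblyShadeRules in assembly := Seq(
  ShadeRule.rename("com.google.common.**" -> "shaded.com.google.common.@1").inAll
)

Then submit the jar produced by sbt assembly rather than the unshaded one.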
I was using Play Framework 2.5.
My Scala version is 2.11.8.
When I migrated to Play 2.6, I got the following error:
RuntimeException: java.lang.NoClassDefFoundError: play/api/libs/ws/WSRequest$class
WS was extracted to its own library, so you need to add this in build.sbt:
libraryDependencies += ws
You can find more details in the official documentation: https://www.playframework.com/documentation/2.6.x/ScalaWS
And do not forget the migration guide:
https://www.playframework.com/documentation/2.6.x/WSMigration26
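For reference, a minimal Play 2.6 sketch of the injected client (the class and method names here are illustrative, not from your code):

import javax.inject.Inject
import play.api.libs.ws.WSClient
import scala.concurrent.{ExecutionContext, Future}

// In Play 2.6, WSClient is injected; the old WS object (and the removed
// WSRequest$class from the error above) are gone
class MyService @Inject()(ws: WSClient)(implicit ec: ExecutionContext) {
  def fetchBody(url: String): Future[String] =
    ws.url(url).get().map(_.body)
}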
I have created a new Play Framework app in Scala. If I run the application, it runs as expected and I am able to open the URL localhost:9000. But after I created a lib folder and added some jars (Spark jars) to it, when I try to run the application using the command play run, it shows this message:
[error] p.nettyException - Exception caught in Netty
java.lang.NoSuchMethodError: scala.collection.immutable.HashSet$.empty()Lscala/collection/immutable/HashSet;
at akka.actor.ActorCell$.<init>(ActorCell.scala:305) ~[akka-actor_2.10.jar:2.2.0]
at akka.actor.ActorCell$.<clinit>(ActorCell.scala) ~[akka-actor_2.10.jar:2.2.0]
at akka.actor.RootActorPath.$div(ActorPath.scala:152) ~[akka-actor_2.10.jar:2.2.0]
at akka.actor.LocalActorRefProvider.<init>(ActorRefProvider.scala:465) ~[akka-actor_2.10.jar:2.2.0]
at akka.actor.LocalActorRefProvider.<init>(ActorRefProvider.scala:453) ~[akka-actor_2.10.jar:2.2.0]
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) ~[na:1.8.0_91]
[error] p.nettyException - Exception caught in Netty
java.lang.NoClassDefFoundError: Could not initialize class play.api.libs.concurrent.Execution$
When I try to open the URL localhost:9000, it shows "localhost page is not working".
Spark version: 2.0
Scala version: 2.10 (I have also tested with Scala 2.11 and 2.12; the same error reproduces)
Play Framework version: 2.2.6
First, if you created a new Play! project, why use version 2.2? The newest one is 2.5 and that may already fix your issue.
Second, to ensure you don't have any other version conflicts, I suggest you use managed dependencies instead of putting them manually into the lib folder.
For Apache Spark, you would add this line to your build.sbt file:
libraryDependencies += "org.apache.spark" %% "spark-core" % "2.1.0"
as described in Spark's Quick Start Guide. You need to use Scala 2.10 or 2.11 (suggested) in your project.
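As a side note, the NoSuchMethodError on scala.collection.immutable.HashSet$.empty is the classic symptom of mixing Scala binary versions: here, the akka-actor_2.10 jar in your lib folder against a different Scala runtime. With managed dependencies, pinning one scalaVersion and using %% keeps every artifact on the same binary version. A sketch (versions illustrative):

// build.sbt: %% appends the Scala binary version automatically, so you
// can't end up with a _2.10 jar on a 2.11 classpath
scalaVersion := "2.11.8"
libraryDependencies += "org.apache.spark" %% "spark-core" % "2.1.0"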
If that doesn't resolve your issue, please post the contents of your build.sbt and the controller code. Thanks.
I was following this tutorial to start using Play Framework 2.1 RC1 + Slick.
When I try to launch the SoftwareSpec test (which tests a Model-like class):
play test
it fails with an error like this:
[error] SQLException: No suitable driver found for jdbc:h2:mem:test1 (DriverManager.java:190)
[error] SoftwareSpec$$anonfun$1$$anonfun$apply$3.apply(SoftwareSpec.scala:25)
[error] SoftwareSpec$$anonfun$1$$anonfun$apply$3.apply(SoftwareSpec.scala:25)
But I've enabled the H2 db in application.conf as was mentioned in that article.
I've used the latest Slick dependency in my Build.scala file:
"com.typesafe" % "slick_2.10.0-RC1" % "1.0.0-RC1"
Once, when I changed the Slick dependency to this version:
"com.typesafe" % "slick_2.10.0-RC1" % "0.11.2"
then the test passed successfully. But after I changed the test to make it fail, the "No suitable driver" error came back, and after that all my attempts (play clean and the like) were unsuccessful.
I got the same error when running tests on Travis CI.
A workaround is to load the JDBC driver class explicitly, e.g. Class.forName("org.h2.Driver").
https://github.com/seratch/scalikejdbc/blob/564cc07505d7a9f217945a7f2c07dc2c7460ed87/scalikejdbc-play-plugin/src/test/scala/scalikejdbc/PlayPluginSpec.scala#L15
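For completeness, a minimal sketch of that workaround in a test setup (the JDBC URL is copied from the error above):

// Force-load the H2 driver so DriverManager can find it before the test runs
Class.forName("org.h2.Driver")
val conn = java.sql.DriverManager.getConnection("jdbc:h2:mem:test1")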
Of course, I should investigate the reason for this issue and report it to the Play team, but I haven't done so yet.
I'm not using Play at the moment, so I can't test the following, but it should be fine. For the latest Slick, you can use:
"com.typesafe" % "slick_2.10" % "1.0.0-RC1"
For h2, the following should work:
"com.h2database" % "h2" % "1.3.166"