Can someone please suggest a version of Cassandra 3.x that actually works with Spark 1.6 and Scala 2.10.5?
Specifically, I am looking for the right versions of the following jar files:
Cassandra Core
Cassandra Spark Connector
Thanks,
Sai
Visit the link below to check version compatibility.
The correct connector version is 1.6 for Cassandra 3.x, Spark 1.6, and Scala 2.10.5.
Check the versions against the compatibility table in the connector's README:
https://github.com/datastax/spark-cassandra-connector
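If it helps, here is a minimal build.sbt sketch of the coordinates I would expect to work for Spark 1.6 / Scala 2.10.5 against Cassandra 3.x. The exact patch numbers (1.6.0 connector, 3.0.0 driver, 1.6.3 Spark) are assumptions, so verify them against the compatibility table in the repo above:

// build.sbt -- sketch only; patch versions are assumptions, check the connector's compatibility table
scalaVersion := "2.10.5"

libraryDependencies ++= Seq(
  "org.apache.spark"       %% "spark-core"                 % "1.6.3" % "provided",
  "com.datastax.spark"     %% "spark-cassandra-connector"  % "1.6.0",
  // the "Cassandra core" driver jar; connector 1.6.x normally pulls a compatible driver in transitively
  "com.datastax.cassandra" %  "cassandra-driver-core"      % "3.0.0"
)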
I am getting this warning when I execute my PySpark code. I am writing from S3 to Snowflake.
My Snowflake/PySpark packages are:
net.snowflake:snowflake-jdbc:3.13.10,
net.snowflake:spark-snowflake_2.12:2.9.2-spark_3.1
My local PySpark environment versions are:
Spark version 3.2.1
Hadoop version 3.3.1
warning:
WARN SnowflakeConnectorUtils$: Query pushdown is not supported because you are using Spark 3.2.1 with a connector designed to support Spark 3.1. Either use the version of Spark supported by the connector or install a version of the connector that supports your version of Spark.
Is this the right package, or is there another one I should use?
My program works as expected, reading from S3 and storing results to Snowflake. How can I remove this warning?
For Spark 3.2 you need to use Snowflake Spark connector 2.10:
For Scala 2.12:
https://search.maven.org/search?q=a:spark-snowflake_2.12
For Scala 2.13:
https://search.maven.org/search?q=a:spark-snowflake_2.13
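As a rough Scala sketch of wiring in a matching connector build (the 2.10.0-spark_3.2 patch level is an assumption, so check Maven Central for the latest; with PySpark you would pass the same coordinates to --packages instead):

// Sketch only: pin a connector build that targets Spark 3.2 / Scala 2.12.
import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder()
  .appName("s3-to-snowflake")
  .config("spark.jars.packages",
    "net.snowflake:snowflake-jdbc:3.13.10," +
    "net.snowflake:spark-snowflake_2.12:2.10.0-spark_3.2")  // assumed patch version
  .getOrCreate()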
We have several Spark applications running on production developed using Spark 2.4.1 (Scala 2.11.12).
For a couple of our new Spark jobs, we are considering using features of Delta Lake. For this we need to use Spark 2.4.2 (or higher).
My questions are:
If we upgrade our Spark cluster to 3.0.0, can our 2.4.1 applications still run on the new cluster (without recompile)?
If we need to recompile our previous Spark jobs with Spark 3, are they source compatible or do they need any migration?
There are some breaking changes in Spark 3.0.0, including source-incompatible and binary-incompatible changes. See https://spark.apache.org/releases/spark-release-3-0-0.html. There are also some source- and binary-incompatible changes between Scala 2.11 and 2.12, so you may also need to update your code because of the Scala version change.
However, only Delta Lake 0.7.0 and above require Spark 3.0.0. If upgrading to Spark 3.0.0 requires a lot of work, you can use Delta Lake 0.6.x or below; you just need to upgrade Spark to 2.4.2 or above within the 2.4.x line. Those releases should be source and binary compatible.
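As a rough sketch of that option (the exact patch versions are assumptions; check the Delta Lake release notes for the pairing you need):

// build.sbt -- staying on the Spark 2.4 line with Delta Lake 0.6.x
scalaVersion := "2.11.12"

libraryDependencies ++= Seq(
  "org.apache.spark" %% "spark-sql"  % "2.4.5" % "provided",  // must be 2.4.2 or newer
  "io.delta"         %% "delta-core" % "0.6.1"                // last release line that supports Spark 2.4.x
)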
You can cross compile Spark 2.4 projects with Scala 2.11 and Scala 2.12. The Scala 2.12 JARs should generally work for Spark 3 applications, but there are edge cases where a Spark 2.4/Scala 2.12 JAR won't work properly on a Spark 3 cluster.
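In sbt, cross building can look roughly like this (patch versions are assumptions; adjust to your cluster):

// build.sbt -- cross-building one Spark 2.4 code base for Scala 2.11 and 2.12
crossScalaVersions := Seq("2.11.12", "2.12.10")

libraryDependencies += "org.apache.spark" %% "spark-sql" % "2.4.5" % "provided"

// `sbt +package` then produces a _2.11 jar for the old cluster and a _2.12 jar usable with Spark 3.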
It's best to make a clean migration to Spark 3/Scala 2.12 and cut the cord with Spark 2/Scala 2.11.
Upgrading can be a big pain, especially if your project has a lot of dependencies. For example, suppose your project depends on spark-google-spreadsheets, a project that's not built with Scala 2.12. With this dependency, you won't be able to easily upgrade your project to Scala 2.12. You'll need to either compile spark-google-spreadsheets with Scala 2.12 yourself or drop the dependency. See here for more details on how to migrate to Spark 3.
I got this error. I'm not sure why, because there is a coalesce method in org.apache.spark.rdd.RDD.
Any ideas?
Am I running an incompatible version of Spark and org.apache.spark.rdd.RDD?
Exception in thread "main" java.lang.NoSuchMethodError: org.apache.spark.rdd.RDD.coalesce$default$3(IZ)Lscala/math/Ordering;
This happens because some part of your code, or one of your project dependencies, calls the Spark RDD 'coalesce' API as it was compiled against an old Spark version (before 2.0.0), while the signature of that method changed in Spark 2.0.0, so the compiled call no longer matches at runtime.
To fix this, you can either downgrade your Spark runtime environment to a version below 2.0.0, or upgrade the Spark version your project builds against to 2.0.0 or above and update your project dependencies so they are also compatible with Spark 2.0.0 or above.
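For the upgrade route, a sketch of what that usually means in the build (the 2.0.1 value here is just an example; use whatever version your runtime actually ships):

// build.sbt -- keep every Spark module pinned to the runtime's version and marked "provided"
val sparkVersion = "2.0.1"  // assumed: match your cluster / run environment
scalaVersion := "2.11.8"

libraryDependencies ++= Seq(
  "org.apache.spark" %% "spark-core" % sparkVersion % "provided",
  "org.apache.spark" %% "spark-sql"  % sparkVersion % "provided"
)
// Any third-party dependency built against Spark 1.x also has to be upgraded or recompiled.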
For more details please see these threads:
https://github.com/twitter/algebird/issues/549
https://github.com/EugenCepoi/algebird/commit/0dc7d314cba3be588897915c8dcfb14964933c31
As I suspected, this is a library compatibility issue. Everything works (no code change) after downgrading Spark alone.
Before:
scala 2.11.8
spark 2.0.1
Java 1.8.0_92
After:
scala 2.11.8
spark 1.6.2
Java 1.8.0_92
OS: OSX 10.11.6
I currently have a CDH with Spark 1.6.0 and Scala 2.10.5. I would like to upgrade the Spark version to 2.0.0 and the Scala version to 2.11.x and make these the defaults.
I am currently trying this on a CDH Quickstart VM but would like to extend this to a Spark cluster with CDH distribution.
Could someone advise on how to go about these two upgrades?
Thank you.
I am experimenting with the Spark Kafka integration, and I want to test the code from my Eclipse IDE. However, I got the error below:
java.lang.NoClassDefFoundError: scala/collection/GenTraversableOnce$class
at kafka.utils.Pool.<init>(Pool.scala:28)
at kafka.consumer.FetchRequestAndResponseStatsRegistry$.<init>(FetchRequestAndResponseStats.scala:60)
at kafka.consumer.FetchRequestAndResponseStatsRegistry$.<clinit>(FetchRequestAndResponseStats.scala)
at kafka.consumer.SimpleConsumer.<init>(SimpleConsumer.scala:39)
at org.apache.spark.streaming.kafka.KafkaCluster.connect(KafkaCluster.scala:52)
at org.apache.spark.streaming.kafka.KafkaCluster$$anonfun$org$apache$spark$streaming$kafka$KafkaCluster$$withBrokers$1.apply(KafkaCluster.scala:345)
at org.apache.spark.streaming.kafka.KafkaCluster$$anonfun$org$apache$spark$streaming$kafka$KafkaCluster$$withBrokers$1.apply(KafkaCluster.scala:342)
at scala.collection.IndexedSeqOptimized$class.foreach(IndexedSeqOptimized.scala:33)
at scala.collection.mutable.WrappedArray.foreach(WrappedArray.scala:35)
at org.apache.spark.streaming.kafka.KafkaCluster.org$apache$spark$streaming$kafka$KafkaCluster$$withBrokers(KafkaCluster.scala:342)
at org.apache.spark.streaming.kafka.KafkaCluster.getPartitionMetadata(KafkaCluster.scala:125)
at org.apache.spark.streaming.kafka.KafkaCluster.getPartitions(KafkaCluster.scala:112)
at org.apache.spark.streaming.kafka.KafkaUtils$.createDirectStream(KafkaUtils.scala:403)
at org.apache.spark.streaming.kafka.KafkaUtils$.createDirectStream(KafkaUtils.scala:532)
at org.apache.spark.streaming.kafka.KafkaUtils.createDirectStream(KafkaUtils.scala)
at com.capiot.platform.spark.SparkTelemetryReceiverFromKafkaStream.executeStreamingCalculations(SparkTelemetryReceiverFromKafkaStream.java:248)
at com.capiot.platform.spark.SparkTelemetryReceiverFromKafkaStream.main(SparkTelemetryReceiverFromKafkaStream.java:84)
UPDATE:
The versions that I am using are:
scala - 2.11
spark-streaming-kafka - 1.4.1
spark - 1.4.1
Can anyone help resolve the issue? Thanks in advance.
You have the wrong version of Scala. You need 2.10.x per
https://spark.apache.org/docs/1.4.1/
"For the Scala API, Spark 1.4.1 uses Scala 2.10."
Might be late to help the OP, but when using Kafka streaming with Spark, you need to make sure that you use the right jar file.
For example, in my case I have Scala 2.11 (the minimum required for Spark 2.0, which I'm using), and since the Spark Kafka integration has to match that Spark 2.0.0 build, I have to use the artifact spark-streaming-kafka-0-8-assembly_2.11-2.0.0-preview.jar.
Notice that my Scala version and the Spark version can both be read from the 2.11-2.0.0 suffix of the artifact.
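Expressed as a build dependency instead of a hand-picked assembly jar, that pairing would look roughly like this (I'm using the final 2.0.0 release here rather than the preview; treat the exact numbers as an illustration):

// build.sbt -- the Scala version and Spark version must both match the _2.11 / 2.0.0 suffix
scalaVersion := "2.11.8"

libraryDependencies += "org.apache.spark" %% "spark-streaming-kafka-0-8" % "2.0.0"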
Hope this helps someone.