I downloaded confluent-2.0.0-2.10.5.tar.gz because I want the Scala 2.10 package,
but the Kafka jar in /share/java/schema-registry is still kafka_2.11-0.9.0.0-cp1.jar.
Is there any way I can get a clean Scala 2.10 Confluent package?
The 2.10 refers to the Scala version the Kafka subpackage is built with, but other subpackages may use a different version.
The tar.gz packages use the 2.11 builds wherever another subpackage needs the core Kafka jar, which has a Scala dependency. (Strictly, the version they depend on is whichever Scala version is supported by Kafka and considered most stable and well supported upstream.) This is necessary because Scala libraries aren't necessarily binary compatible between different Scala versions, so not doing this would require shipping multiple versions of every service that uses the Kafka libraries, especially on platforms like Debian and RPM-based distros; i.e. we'd need a schema-registry-2.10 and a schema-registry-2.11. Instead, we effectively vendor the entire Kafka library for the services that depend on it.
Note that the files under /share/java/kafka use only Scala 2.10, and if you need to pull in the clients you can safely add them to your classpath. Whether 2.10 or 2.11 is used for any of the other services shouldn't matter, since they are just that: services that you execute. Any libraries you might need to put on your classpath (e.g. serializers) depend only on the pure Java libraries in Kafka and are therefore safe to use alongside Kafka libraries compiled against any Scala version.
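As a rough illustration of that split, here is a build.sbt sketch (the artifact names and versions are illustrative, not taken from the Confluent package layout):

// build.sbt -- sketch of the dependency split described above.
// kafka-clients is pure Java: no Scala suffix, so it is safe with any Scala version.
libraryDependencies += "org.apache.kafka" % "kafka-clients" % "0.9.0.0"

// The core broker jar is Scala-specific: note the explicit _2.10 suffix.
// (Using %% instead lets sbt append the suffix for your project's own Scala version.)
libraryDependencies += "org.apache.kafka" % "kafka_2.10" % "0.9.0.0"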
Related
I have a jar file compiled with Scala 2.12, and I want to run it on EMR 5.29.0. How do I run it, given that the default Scala version on EMR 5.29.0 is 2.11?
As per this thread in the AWS Forum, all Spark versions on EMR are built with Scala 2.11, as it's the stable version:
On EMR, Spark is built with Scala-2.11.x, which is currently the stable version. As per https://spark.apache.org/releases/spark-release-2-4-0.html, Scala-2.12 is still under experimental support. Our service team is already aware of this feature request, and they shall be adding Scala-2.12.0 support in coming releases, once it becomes stable.
So you'll have to wait until they add support in future EMR releases, or you may want to build Spark with Scala 2.12 yourself and install it on EMR. See Building and Deploying Custom Applications with Apache Bigtop and Amazon EMR and Building a Spark Distribution for EMR.
UPDATE:
Since Release 6.0.0, Scala 2.12 can be used with Spark on EMR:
Changes, Enhancements, and Resolved Issues
Scala
Scala 2.12 is used with Apache Spark and Apache Livy.
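One practical option, assuming your application code itself compiles under both Scala versions (an assumption on my part, not something stated in the question), is to cross-build the jar so you have an artifact for each EMR line:

// build.sbt -- cross-build sketch: Scala 2.11 for EMR 5.x, Scala 2.12 for EMR 6.x.
// Version numbers are illustrative.
scalaVersion := "2.12.10"
crossScalaVersions := Seq("2.11.12", "2.12.10")

// Spark is already installed on the cluster, so mark it provided.
libraryDependencies += "org.apache.spark" %% "spark-core" % "2.4.4" % "provided"

Running sbt +package (or sbt ++2.11.12 package to target a single version) then produces one jar per Scala version, and you deploy whichever matches the EMR release.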
Just an idea, if waiting is not an option!
Is it possible to package the required Scala jars with the application, with the appropriate Maven scope defined, and point Spark at those packages with the spark.jars.repositories property?
Maybe you'll have to figure out a way to transfer the jars to the driver node; if S3 is an option, it can be used as intermediate storage.
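Purely to illustrate the properties that idea refers to (this sketch does not change the Scala version Spark itself was built with, which is the real constraint; the paths and URLs are placeholders):

// Sketch only: spark.jars ships extra jars to the driver and executors,
// while spark.jars.repositories adds Maven repositories used to resolve
// coordinates given via spark.jars.packages.
import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder()
  .appName("extra-jars-example")
  .config("spark.jars", "s3://my-bucket/libs/my-lib.jar")              // hypothetical S3 path
  .config("spark.jars.repositories", "https://repo.example.com/maven") // hypothetical repository
  .getOrCreate()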
I am using JDT to get ASTs and type resolvers for Java sources in Eclipse. Is there a way to achieve the same for Scala sources in Scala projects?
No, it is not possible to parse Scala with Java Development Tools. Scala and Java are two completely different languages. They have different syntax, different semantics, different type systems.
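If the goal is only to obtain a syntax tree for Scala sources, one alternative outside JDT (my suggestion, not part of the answer above) is Scalameta, which parses Scala code into an AST; a minimal sketch:

import scala.meta._

object ParseExample extends App {
  val code = """object Hello { def greet(name: String): String = s"Hi, $name" }"""

  // Parse the string into a scala.meta Source tree and walk its method definitions.
  val tree: Source = code.parse[Source].get
  tree.collect { case d: Defn.Def => println(s"found method: ${d.name.value}") }
}

Note that this gives syntax only; for type information along the lines of JDT's type resolvers you would need something like SemanticDB or the Scala presentation compiler.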
I would like to use an application developed with Apache Spark 2.0.0 (GitHub repo here), but I only have Spark 2.3.1 installed on my iMac (it seems to be the only version supported by Homebrew at the moment). I can compile it successfully with sbt assembly, but when I run the first example given here I get the following error:
java.lang.NoSuchMethodError: breeze.linalg.DenseVector$.canDotD()Lbreeze/generic/UFunc$UImpl2;
Is this a compatibility issue between the different versions of Breeze used by Spark 2.0.0 and Spark 2.3.1? Is there a way to easily change the code so that it works with Spark 2.3.1? (I have never used Scala before.)
It probably is.
You can always manually install the required version of Apache Spark (not via Homebrew, but by downloading the tar.gz archive from the official page and just extracting it).
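If you would rather rebuild the application against the Spark you already have, the usual first step (assuming the project uses sbt and you can edit its build, which I have not verified) is to bump the Spark version it compiles against, so the matching Breeze comes in transitively:

// build.sbt -- sketch: align the project's Spark dependency with the installed 2.3.1.
scalaVersion := "2.11.12"

libraryDependencies ++= Seq(
  "org.apache.spark" %% "spark-core"  % "2.3.1" % "provided",
  "org.apache.spark" %% "spark-mllib" % "2.3.1" % "provided"  // pulls in the Breeze version Spark 2.3.1 uses
)

Then re-run sbt assembly; API changes between Spark 2.0 and 2.3 may still require small source fixes.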
I understand that cross-building across different Scala versions is easy with SBT: you just put the files that fail to compile in scala-2.10 and scala-2.11 directories instead of scala. However, if I want to cross-build for different versions of Scala and for different versions of a dependency (say, Spark 1.6 and 2.1), how can that be done?
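A minimal build.sbt sketch of the cross-building setup described above, extended with one common way to vary the Spark dependency as well (driving it from a system property is my own convention here, not a built-in sbt feature):

// build.sbt -- cross-build over Scala versions; the Spark version is passed in from outside,
// e.g.  sbt -DsparkVersion=2.1.1 +package   or   sbt -DsparkVersion=1.6.3 +package
scalaVersion := "2.11.8"
crossScalaVersions := Seq("2.10.6", "2.11.8")

// sbt automatically adds src/main/scala-2.10 or src/main/scala-2.11
// to the sources for whichever Scala version is being compiled.
val sparkVersion = sys.props.getOrElse("sparkVersion", "1.6.3")

libraryDependencies += "org.apache.spark" %% "spark-core" % sparkVersion % "provided"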
I am new to Spark. I am trying to run a simple Spark project on my local system.
Based on the tutorials, I have run 'sbt/sbt assembly'. The jar file is now created at core/target/scala-2.9.2/spark-core-assembly-0.7.0.jar. To run the samples, could you please tell me where and how I have to add this jar to the classpath?
Regards,
Dinesh
The Spark documentation's quick start guide covers developing standalone applications using Spark with Scala and Java. Those instructions show how to add a Spark dependency to your Maven or SBT project.
If you're not using Maven or SBT to build your project, you'll have to pass the appropriate flags to javac and java to add the Spark assembly JAR to your classpath, the same as you'd do for any other JAR dependency.
As an aside, 0.7.0 is a pretty old version of Spark (it was released almost a year ago); I'd recommend using a newer version, such as 0.9.0.
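For example, a minimal sbt build along those lines, using the 0.9.0 release suggested above (this mirrors the standard quick start layout rather than anything specific to your project):

// build.sbt -- sketch of a standalone app depending on Spark 0.9.0
name := "simple-spark-app"

scalaVersion := "2.10.3"

libraryDependencies += "org.apache.spark" %% "spark-core" % "0.9.0-incubating"

resolvers += "Akka Repository" at "http://repo.akka.io/releases/"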