Not able to find ./bin/spark-class for launching spark cluster on standalone mode - scala

While launching a standalone cluster for Spark Streaming, I am not able to find the ./bin/spark-class command.
Please let me know if I need to do any additional configuration to get "spark-class".

Which version of Spark are you using? Starting with Spark 0.9.0, spark-class is located in the bin folder, but in earlier versions it was at the root of SPARK_HOME.
Perhaps you're following instructions for Spark 0.9.0+ even though you've installed an earlier version of Spark? You can find documentation for older releases of Spark on the Spark documentation overview page.
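For reference, a hedged way to confirm where the script lives in your install (assuming SPARK_HOME points at the unpacked distribution) is to look for it directly:
cd $SPARK_HOME
ls bin/spark-class      # Spark 0.9.0 and later keep the scripts under bin/
ls spark-class          # earlier releases keep them at the root of SPARK_HOME
Once you know the layout, launching a standalone worker looks roughly like ./bin/spark-class org.apache.spark.deploy.worker.Worker spark://IP:PORT (adjust the path prefix to match your version).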

Related

How to integrate Eclipse IDE with Databricks Cluster

I am trying to integrate my Scala Eclipse IDE with my Azure Databricks Cluster so that I can directly run my Spark program through Eclipse IDE on my Databricks Cluster.
I followed the official Databricks Connect documentation (https://docs.databricks.com/dev-tools/databricks-connect.html).
I have:
Installed Anaconda.
Installed Python 3.7 and the Databricks Connect library 6.0.1.
Completed the Databricks Connect configuration (the CLI part).
Added the client libraries to the Eclipse IDE.
Set the SPARK_HOME environment variable to the path returned by running 'databricks-connect get-jar-dir' in Anaconda.
I have not set any other environment variables apart from the one mentioned above.
I need help figuring out what else must be done to complete this integration, for example how the connection-related environment variables work when running through the IDE.
If someone has already done this successfully, please guide me.
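For what it's worth, the connection configuration is usually sanity-checked from the command line before involving the IDE; a minimal sketch, assuming databricks-connect 6.x is installed in the active Anaconda environment:
databricks-connect configure   # prompts for workspace URL, token, cluster ID, org ID and port
databricks-connect test        # runs a small job against the cluster to verify connectivity
If the test passes, the remaining issue is typically limited to how the IDE picks up SPARK_HOME and the client jars.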

How to start confluent platform?

I have the following problem.
I am using Java 8 for this, and ZooKeeper is not working.
Downloads$ cd confluent-5.2.2/
roshni#roshni-HP-Pavilion-15-Notebook-PC:~/Downloads/confluent-5.2.2$
/home/roshni/Downloads/confluent-5.2.2/bin/confluent start
This CLI is intended for development only, not for production
https://docs.confluent.io/current/cli/index.html
WARNING: Java version 1.8 or 1.11 is recommended.
See https://docs.confluent.io/current/installation/versions-interoperability.html
What you did is correct. The warning suggests you're not actually running Java 8 or 11, though, so check your JAVA_HOME variable, for example.
You could try running confluent logs to see if you get more information, and just do confluent start kafka if you're truly only trying to run Kafka.
Otherwise, you could start ZooKeeper and Kafka manually, as well as the other Confluent services, using the other scripts in the bin folder, just as with any other Kafka install.
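For example, a rough sketch of starting the core services by hand from the confluent-5.2.2 directory (paths assume the stock layout of the download):
cd ~/Downloads/confluent-5.2.2
./bin/zookeeper-server-start ./etc/kafka/zookeeper.properties
./bin/kafka-server-start ./etc/kafka/server.properties   # in a second terminal, once ZooKeeper is up
java -version; echo $JAVA_HOME                            # double-check which Java the shell actually picks up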

Where is scala on node with spark-shell installed?

I have Apache Spark installed on a cluster. I can run spark-shell on the cluster's master node, so it seems Scala is installed on this machine. However, I cannot start either sbt or scalac. Is it possible to use Spark's Scala, and if so, how?
No, it's not. Spark bundles the Scala runtime it needs for spark-shell, but it does not give you a standalone scalac or sbt; you have to install those manually.
Go through these links:
https://www.scala-lang.org/download/
https://www.scala-sbt.org/1.0/docs/Installing-sbt-on-Linux.html
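If it helps, one hedged way to see which Scala version your spark-shell bundles (so you can install a matching scalac and sbt) is to ask the REPL itself:
spark-shell
scala> util.Properties.versionString   // prints something like "version 2.xx.x", depending on your Spark build
scala> :quit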

Spark setup on Windows

I am trying to setup Spark on my Windows 10 PC. After executing the spark-shell command, I got the following error:
java.lang.IllegalArgumentException: Error while instantiating 'org.apache.spark.sql.hive.HiveSessionState':
at org.apache.spark.sql.SparkSession$.org$apache$spark$sql$SparkSession$$reflect
Spark installation on a Windows machine is not very difficult. You need to take care of some permissions and configuration during the installation. Please follow the link below for step-by-step Spark and Scala installation and configuration on a Windows machine.
Apache Spark Installation on windows10
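One of the permission steps this usually refers to for the HiveSessionState error (a sketch, assuming you have downloaded a winutils.exe matching your Hadoop build and set HADOOP_HOME to its folder; C:\hadoop is only an example path) is granting Spark write access to the Hive scratch directory:
set HADOOP_HOME=C:\hadoop
%HADOOP_HOME%\bin\winutils.exe chmod 777 C:\tmp\hive
After that, restart spark-shell from a fresh command prompt.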

Spark job server for spark 1.6.0

Is there any specific Spark Job Server version matching with Spark 1.6.0 ?
As per the version information in https://github.com/spark-jobserver/spark-jobserver, I see SJS is available only for 1.6.1 not for 1.6.0.
Our Cloudera-hosted Spark is running on 1.6.0.
I deployed SJS by configuring the Spark home to point at 1.6.1. When I submit jobs, I see job IDs being generated, but I can't see the job results.
Any inputs?
No, there is no SJS version tied to Spark 1.6.0, but it should be easy for you to compile against 1.6.0. Maybe you could modify this https://github.com/spark-jobserver/spark-jobserver/blob/master/project/Versions.scala#L10 and try.
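For illustration, a rough sketch of the kind of one-line change meant here (the exact variable name and default in project/Versions.scala differ between SJS releases, so treat this as an assumption):
// project/Versions.scala
lazy val spark = sys.env.getOrElse("SPARK_VERSION", "1.6.0")   // pin the Spark dependency, overridable via SPARK_VERSION
Then rebuild and redeploy the job server so its Spark client libraries match the 1.6.0 cluster.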