I am trying to set up Spark on my Windows 10 PC. After executing the spark-shell command, I got the following error:
java.lang.IllegalArgumentException: Error while instantiating 'org.apache.spark.sql.hive.HiveSessionState':
at org.apache.spark.sql.SparkSession$.org$apache$spark$sql$SparkSession$$reflect
Installing Spark on a Windows machine is not very difficult, but you do need to take care of a few permissions and configurations during the installation. Please follow the link below for a step-by-step Spark and Scala installation and configuration on a Windows machine.
Apache Spark Installation on Windows 10
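In my experience, this particular HiveSessionState error on Windows is usually a permissions problem on the \tmp\hive directory. A commonly reported fix (assuming HADOOP_HOME points to a folder containing bin\winutils.exe) is to run winutils.exe chmod -R 777 \tmp\hive before starting spark-shell. Another is to point the Hive warehouse at a writable location, e.g. spark-shell --conf spark.sql.warehouse.dir=file:///C:/tmp/spark-warehouse. The same override from application code, as a minimal sketch (the local path is illustrative):

    // Minimal sketch: overriding the warehouse directory so Spark SQL does
    // not need write access to \tmp\hive. The path below is illustrative.
    import org.apache.spark.sql.SparkSession

    object WindowsSmokeTest {
      def main(args: Array[String]): Unit = {
        val spark = SparkSession.builder()
          .appName("WindowsSmokeTest")
          .master("local[*]")
          .config("spark.sql.warehouse.dir", "file:///C:/tmp/spark-warehouse")
          .getOrCreate()
        spark.range(10).show() // quick check that the session works
        spark.stop()
      }
    }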
I'm having some issues getting JProfiler connected to a remote WebSphere 8.5.5 instance that is running on Linux. When I start JProfiler on my Windows 10 machine, I select "Profile an application server, locally or remotely" and choose the option to integrate with IBM WebSphere 8.x Application Server.
The part I'm having an issue with is the "Specify the remote address" section of setting up the profile. The setup says I need the profiling agent running on the target JVM. I downloaded the tar file from the JProfiler website, extracted it on the Linux machine, and ran jpenable as instructed, but I get this message:
"No suitable Java Virtual Machine could be found on your system. The version of the JVM must be at least 1.6 and at most 11. Please define INSTALL4J_JAVA_HOME to point to a suitable JVM."
I have made edits to the arguments file that came with JProfiler in order to remedy this issue, but I just can't seem to get JProfiler to see the IBM Java that WebSphere is using on this machine. I have tried the INSTALL_JAVA_HOME_OVERRIDE variable in the arguments file, pointing it at the full path of the WebSphere Java install. I have also tried the INSTALL4J_JAVA_PREFIX variable, and I have created an INSTALL4J_JAVA_HOME variable in the arguments file set to the full path of the WebSphere Java.
Any help in getting around this issue would be greatly appreciated. I have verified that WebSphere is using Java version 1.8.0_171.
but I just can't seem to get JProfiler to see the IBM Java that WebSphere is using on this machine.
That's because IBM JVMs are not supported for attach mode.
The setup says I need the profiling agent running on the target JVM.
Generally, this is achieved by adding an -agentpath VM parameter to the profiled VM. The remote address that you are asked for in the wizard will be added as an option to that parameter. The wizard will then modify the server config file and add the complete VM parameter, so you don't have to do it manually.
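For example, on Linux the parameter typically looks something like -agentpath:/opt/jprofiler/bin/linux-x64/libjprofilerti.so=port=8849,nowait, where the installation path and port are illustrative and the nowait option lets the server start without waiting for the JProfiler GUI to connect.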
More information is available at
https://www.ej-technologies.com/resources/jprofiler/help/doc/main/profiling.html
I am trying to integrate my Scala Eclipse IDE with my Azure Databricks cluster so that I can run my Spark programs directly on the Databricks cluster from the Eclipse IDE.
I followed the official documentation for Databricks Connect (https://docs.databricks.com/dev-tools/databricks-connect.html).
I have:
Installed Anaconda.
Installed Python 3.7 and the Databricks Connect library 6.0.1.
Completed the Databricks Connect configuration (the CLI part).
Added the client libraries to the Eclipse IDE.
Set the SPARK_HOME environment variable to the path returned by running databricks-connect get-jar-dir in Anaconda.
I have not set any other environment variables apart from the one mentioned above.
I need help figuring out what else must be done to accomplish this integration, for example how the connection-related environment variables work when running through the IDE.
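For reference, the kind of program I am trying to run from Eclipse looks like this (a minimal sketch; the object name is just illustrative):

    // With databricks-connect configured, SparkSession.builder().getOrCreate()
    // is supposed to route execution to the remote Databricks cluster instead
    // of starting a local master.
    import org.apache.spark.sql.SparkSession

    object DatabricksConnectSmokeTest {
      def main(args: Array[String]): Unit = {
        val spark = SparkSession.builder().getOrCreate()
        println(spark.range(100).count()) // should execute on the cluster
        spark.stop()
      }
    }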
If someone has already done this successfully, guide me please.
I have Apache Spark installed on a cluster. I can run spark-shell on the cluster's master node, so it seems Scala is installed on that machine. However, I can start neither sbt nor scalac. Is it possible to use the Scala that ships with Spark, and if so, how?
No, it's not. Spark bundles the Scala jars it needs (the library, plus the compiler jars that back spark-shell's REPL) inside its own distribution, but it does not ship the scalac or sbt command-line tools, so you have to install those manually.
Go through these links:
https://www.scala-lang.org/download/
https://www.scala-sbt.org/1.0/docs/Installing-sbt-on-Linux.html
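You can check what Spark actually bundles by listing the Scala jars in its distribution, for example with ls $SPARK_HOME/jars/scala-* (on older Spark 1.x layouts they live under lib/ instead): you will find scala-library and related jars there, but no scalac or sbt launcher. After installing the tools from the links above, scalac -version and sbt about should both work from the shell.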
I downloaded the Cloudera QuickStart VM and I want to run some Scala code. I found a Cloudera tutorial on running Scala code in the shell.
Now I want to use an IDE (like IntelliJ IDEA), connect to the Spark server, and run my code there. I found this tutorial, but it didn't work for me: I cannot find the same interface as described.
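Roughly what I am trying to do from the IDE is this (a sketch; the master URL is my guess based on the QuickStart VM's default hostname, and getting it wrong may well be part of my problem):

    // Sketch of the driver I try to run from IntelliJ against the VM.
    // "quickstart.cloudera" is the QuickStart VM's default hostname; the
    // standalone-master port 7077 is an assumption on my part.
    import org.apache.spark.{SparkConf, SparkContext}

    object QuickStartTest {
      def main(args: Array[String]): Unit = {
        val conf = new SparkConf()
          .setAppName("QuickStartTest")
          .setMaster("spark://quickstart.cloudera:7077")
        val sc = new SparkContext(conf)
        println(sc.parallelize(1 to 100).sum()) // should print 5050.0
        sc.stop()
      }
    }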
Thanks for any suggestions!
While launching a standalone cluster for Spark Streaming, I am not able to find the ./bin/spark-class command.
Please let me know if I need to do any additional configuration to get spark-class.
Which version of Spark are you using? Starting with Spark 0.9.0, spark-class is located in the bin folder, but in earlier versions it was at the root of SPARK_HOME.
Perhaps you're following instructions for Spark 0.9.0+ even though you've installed an earlier version of Spark? You can find documentation for older releases of Spark on the Spark documentation overview page.
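For example, on 0.9.0 and later a standalone master is started with ./bin/spark-class org.apache.spark.deploy.master.Master, whereas pre-0.9.0 documentation invokes ./spark-class org.apache.spark.deploy.master.Master from the root of SPARK_HOME.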