1: error: ';' expected but 'import' found - pyspark

I am running this code in Zeppelin and I am getting the following error message:
from pyspark import SparkContext
from pyspark.sql import HiveContext
sc = SparkContext(appName="PythonSQL")
hive_context = HiveContext(sc)
bank = hive_context.table("default.invites_orc")
bank.show()
bank.registerTempTable("bank_temp")
hive_context.sql("select * from bank_temp").show()
sc.stop()
:1: error: ';' expected but 'import' found.
from pyspark import SparkContext
^

The Spark interpreter group currently has 4 interpreters, as listed here...
https://zeppelin.incubator.apache.org/docs/0.5.0-incubating/interpreter/spark.html
The default interpreter is %spark, and the default interpreter is selected based on the order of the interpreters listed in the zeppelin.interpreters property in the zeppelin-site.xml config file.
The current order of interpreters in your zeppelin-site.xml (zeppelin.interpreters property) will be this...
org.apache.zeppelin.spark.SparkInterpreter,org.apache.zeppelin.spark.PySparkInterpreter
Modify this to ...
org.apache.zeppelin.spark.PySparkInterpreter, org.apache.zeppelin.spark.SparkInterpreter
and restart Zeppelin (zeppelin-daemon.sh restart).
This will make %pyspark the default interpreter.
OR
You can write it like this:
%pyspark
from pyspark import SparkContext
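For reference, the original snippet as a %pyspark paragraph might then look like the sketch below. It assumes Zeppelin's pyspark interpreter already provides a SparkContext as sc, so no new context is created and sc.stop() is left out because the context is shared by the notebook:
%pyspark
# Sketch: reuse the sc that Zeppelin's pyspark interpreter provides
from pyspark.sql import HiveContext

hive_context = HiveContext(sc)
bank = hive_context.table("default.invites_orc")
bank.show()
bank.registerTempTable("bank_temp")
hive_context.sql("select * from bank_temp").show()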

Related

Pyspark ModuleNotFound when importing custom package

Context: I'm running a script on Azure Databricks, and I'm using imports to import functions from a given file.
Let's say we have something like this in a file called "new_file":
from old_file import x
from pyspark.sql import SparkSession
from pyspark.context import SparkContext
from pyspark.sql.types import *
spark = SparkSession.builder.appName('workflow').config(
"spark.driver.memory", "32g").getOrCreate()
The imported function "x" will take as an argument a string that is read as a PySpark dataframe, as follows:
new_df_spark = spark.read.parquet(new_file)
new_df = ps.DataFrame(new_df_spark)
new_df is then passed as an argument to a function that calls the function x.
I then get an error like:
ModuleNotFoundError: No module named "old_file"
Does this mean I can't use imports? Or do I need to install old_file on the cluster for this to work? If so, how would that work, and will the package update if I change old_file again?
Thanks
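One common approach (an assumption on my part, not from the original post) is to ship the custom module to the cluster explicitly via the SparkContext, as in this hypothetical sketch; the path is illustrative only:
# Hypothetical sketch: make old_file.py importable on a Databricks cluster
# by distributing it with the SparkContext. The path below is illustrative only.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName('workflow').getOrCreate()
spark.sparkContext.addPyFile("/dbfs/path/to/old_file.py")  # illustrative path

from old_file import x  # resolves on the driver and the workers after addPyFile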

Error in Pycharm when linking to pyspark: name 'spark' is not defined

When I run the example code in cmd, everything is ok.
>>> import pyspark
>>> l = [('Alice', 1)]
>>> spark.createDataFrame(l).collect()
[Row(_1='Alice', _2=1)]
But when I execute the code in PyCharm, I get an error.
spark.createDataFrame(l).collect()
NameError: name 'spark' is not defined
Maybe something went wrong when I linked PyCharm to pyspark.
(Screenshots in the original post: Environment Variable, Project Structure, Project Interpreter)
When you start pyspark from the command line, you have a SparkSession object and a SparkContext available to you as spark and sc respectively.
To use them in PyCharm, you should create these variables yourself first:
from pyspark.sql import SparkSession
spark = SparkSession.builder.getOrCreate()
sc = spark.sparkContext
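Putting this together with the question's example, a complete stand-alone script might look like this minimal sketch:
from pyspark.sql import SparkSession

# Create the session and context yourself when running outside the pyspark shell
spark = SparkSession.builder.getOrCreate()
sc = spark.sparkContext

l = [('Alice', 1)]
print(spark.createDataFrame(l).collect())  # [Row(_1='Alice', _2=1)]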
EDIT:
Please have a look at: Failed to locate the winutils binary in the hadoop binary path

No module named 'pyspark' in Zeppelin

I am new to Spark and have just started using it. I am trying to import SparkSession from pyspark, but it throws an error: No module named 'pyspark'. Please see my code below.
# Import our SparkSession so we can use it
from pyspark.sql import SparkSession
# Create our SparkSession, this can take a couple minutes locally
spark = SparkSession.builder.appName("basics").getOrCreate()```
Error:
---------------------------------------------------------------------------
ModuleNotFoundError Traceback (most recent call last)
<ipython-input-2-6ce0f5f13dc0> in <module>
1 # Import our SparkSession so we can use it
----> 2 from pyspark.sql import SparkSession
3 # Create our SparkSession, this can take a couple minutes locally
4 spark = SparkSession.builder.appName("basics").getOrCreate()
ModuleNotFoundError: No module named 'pyspark'
I am in my conda env and I tried pip install pyspark, but I already have it.
If you are using Zepl, it has its own specific way of importing. This makes sense: since it runs in the cloud, it needs its own syntax to distinguish its directives from Python itself, for instance %spark.pyspark.
%spark.pyspark
from pyspark.sql import SparkSession
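Combining this with the snippet from the question, the full paragraph might look like the following sketch:
%spark.pyspark
# Import our SparkSession so we can use it
from pyspark.sql import SparkSession
# Create our SparkSession, this can take a couple minutes locally
spark = SparkSession.builder.appName("basics").getOrCreate()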

Error in running Scala in terminal: "object apache is not a member of package org"

I'm using Sublime to write my first Scala program, and I'm using the terminal to run it.
First I use the scalac assignment2.scala command to compile it, but it shows the error message: "error: object apache is not a member of package org".
How can I fix it?
This is my code:
import org.apache.spark.SparkConf
import org.apache.spark.SparkContext
import org.apache.spark.SparkContext._
object assignment2 {
  def main(args: Array[String]) {
    val conf = new SparkConf().setAppName("assignment2")
    val sc = new SparkContext(conf)
    val input = sc.parallelize(List(1, 2, 3, 4))
    val result = input.map(x => x * x)
    println(result.collect().mkString(","))
  }
}
Where are you trying to submit the job? To run any Spark application you need to submit it from bin/spark-submit in your Spark installation directory, or you need to have SPARK_HOME set in your environment, which you can refer to while submitting.
Actually, you can't compile a Spark Scala file directly, because compiling your Scala class requires the Spark library. So to execute the Scala file you need spark-shell. To execute your Spark Scala file inside spark-shell, follow the steps below:
Open your spark-shell using the following command:
'spark-shell --master yarn-client'
Load your file with its exact location:
':load File_Name_With_Absolute_Path'
Run your main method using the class name: 'ClassName.main(null)'

Reading a csv file in pyspark (1.6.0)

Maybe the question is trivial, but I am getting issues while reading a CSV from a local directory in PySpark.
I tried:
from pyspark.sql.types import *
from pyspark.sql import Row
from pyspark import SparkContext as sc
mydata = sc.textFile("/home/documents/mydata.csv")
newdata = mydata.map(lambda line: line.split(","))
But I am getting an error like:
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: unbound method textFile() must be called with SparkContext instance as first argument (got str instance instead)
Now my question is: I have called SparkContext just before that, so why am I getting such an error? Please guide me on where I am going wrong.
You do not get sc by importing SparkContext as sc:
In interactive usage (i.e. pyspark shell), sc is already initialized, so sc.textFile() should work fine
In self-contained applications, you should initialize sc first:
from pyspark import SparkContext
sc = SparkContext("local", "Simple App")
where the arguments in SparkContext() matter - see the provided links for more details.
Finally, Spark 1.x cannot natively read CSV files into dataframes - you will need the Spark CSV external package. You may find a relevant blog post I wrote some time ago for Spark 1.5 useful...
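For reference, a corrected, self-contained version of the question's snippet would be the following minimal sketch (the file path is the one from the question):
from pyspark import SparkContext

# Initialize an actual SparkContext instance instead of aliasing the class as sc
sc = SparkContext("local", "Simple App")

mydata = sc.textFile("/home/documents/mydata.csv")
newdata = mydata.map(lambda line: line.split(","))
print(newdata.take(5))  # inspect the first few parsed rows
Reading the same file directly into a DataFrame on Spark 1.x would additionally require the external spark-csv package mentioned above.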