I am setting up a SparkSession using
from pyspark.sql import SparkSession
spark = SparkSession.builder.appName('nlp').getOrCreate()
But I am getting an error:
# This SparkContext may be an existing one.
I have a SparkContext sc with a highly customised SparkConf(). How do I use that SparkContext to create a SparkSession? I found this post, https://stackoverflow.com/a/53633430/201657, which shows how to do it in Scala:
val spark = SparkSession.builder.config(sc.getConf).getOrCreate()
but when I try to apply the same technique in PySpark:
from pyspark.sql import SparkSession
spark = SparkSession.builder.config(sc.getConf()).enableHiveSupport().getOrCreate()
it fails with the error:
AttributeError: 'SparkConf' object has no attribute '_get_object_id'
As I said, I want my SparkSession to use the same SparkConf as the SparkContext. How do I do that?
UPDATE
I've done a bit of fiddling about:
from pyspark.sql import SparkSession
spark = SparkSession.builder.enableHiveSupport().getOrCreate()
sc.getConf().getAll() == spark.sparkContext.getConf().getAll()
returns
True
so the SparkConf of the SparkContext and that of the SparkSession are the same. My assumption is that SparkSession.builder.getOrCreate() will reuse an existing SparkContext if one exists. Am I correct?
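For what it's worth, the PySpark builder also accepts a SparkConf through its conf keyword argument, so the Scala one-liner translates roughly as below. A minimal sketch, assuming an existing, customised SparkContext sc (the app name and master here are placeholders):
from pyspark import SparkConf, SparkContext
from pyspark.sql import SparkSession

# Stand-in for the existing, highly customised SparkContext.
conf = SparkConf().setAppName('nlp').setMaster('local[*]')
sc = SparkContext(conf=conf)

# Pass the SparkConf via the conf= keyword (not positionally); getOrCreate()
# then wraps the already-running SparkContext instead of launching a new one.
spark = SparkSession.builder \
    .config(conf=sc.getConf()) \
    .getOrCreate()

print(sc.getConf().getAll() == spark.sparkContext.getConf().getAll())  # matches the check in the UPDATE above
And yes, SparkSession.builder.getOrCreate() reuses the active SparkContext when one already exists, which is consistent with what the UPDATE observed.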
I am trying to connect to MySQL using PySpark in the PyDev environment of the Eclipse IDE.
I am getting the error below:
Exception: Java gateway process exited before sending its port number
I have checked that Java is properly installed, and I have also set PYSPARK_SUBMIT_ARGS to --master local[*] --jars path\mysql-connector-java-5.1.44-bin.jar pyspark-shell in Window -> Preferences -> PyDev -> Python Interpreter -> Environment.
The Java path is also set. I tried setting it via code as well, but with no luck.
# import os
from pyspark import SparkConf, SparkContext
from pyspark.sql import SQLContext

# Environment overrides I tried setting in code (left commented out here):
# os.environ['JAVA_HOME'] = 'C:/Program Files/Java/jdk1.8.0_141/'
# os.environ['PYSPARK_SUBMIT_ARGS'] = '--jars D:/Softwares/mysql-connector-java-5.1.44.tar/mysql-connector-java-5.1.44/mysql-connector-java-5.1.44-bin.jar pyspark-shell'

conf = SparkConf().setMaster('local').setAppName('MySQLdataread')
sc = SparkContext(conf=conf)
sqlContext = SQLContext(sc)

# Read the MySQL table over JDBC
dataframe_mysql = sqlContext.read.format("jdbc") \
    .option("url", "jdbc:mysql://localhost:3306/") \
    .option("driver", "com.mysql.jdbc.Driver") \
    .option("dbtable", "XXXXX") \
    .option("user", "root") \
    .option("password", "XXXX") \
    .load()
dataframe_mysql.show()
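For reference, a common gotcha with the in-code route is that PYSPARK_SUBMIT_ARGS is only read when the JVM gateway is launched, i.e. it must be set before the first SparkContext is created. A minimal sketch under that assumption (the jar path is a placeholder):
import os

# Must be set before SparkContext() is constructed; the trailing
# 'pyspark-shell' token is required. The jar path is a placeholder.
os.environ['PYSPARK_SUBMIT_ARGS'] = (
    '--master local[*] '
    '--jars path/to/mysql-connector-java-5.1.44-bin.jar '
    'pyspark-shell'
)

from pyspark import SparkConf, SparkContext

conf = SparkConf().setMaster('local').setAppName('MySQLdataread')
sc = SparkContext(conf=conf)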
My problem was slightly different: I am running Spark in Spyder on Windows.
When I was using
from pyspark.sql import SQLContext, SparkSession
I had the same issue, followed various Google search results, and was not able to solve the problem.
Then I changed the import to:
from pyspark.sql import SparkSession
from pyspark import SQLContext
and the error message disappeared.
I am running on Windows with Anaconda3, Python 3.7 and Spyder. I hope this is helpful to someone.
Edit:
Later, I found the real problem: when any of the configuration values is invalid, the same exception shows up. Previously, I had used 28gb and 4gb instead of 28g and 4g, and that caused all the problems I had.
from pyspark.sql import SparkSession
from pyspark import SQLContext
spark = SparkSession.builder \
    .master('local') \
    .appName('muthootSample1') \
    .config('spark.executor.memory', '28g') \
    .config('spark.driver.memory', '4g') \
    .config('spark.cores.max', '6') \
    .getOrCreate()
I am trying to upgrade to Spark 2.2 from Spark 1.6. The existing unit tests depend on a HiveContext that was initialised using TestHiveContext.
val conf = new SparkConf().set("spark.driver.allowMultipleContexts", "true")
val sc = new SparkContext("local", "sc", conf)
sc.setLogLevel("WARN")
val sqlContext = new TestHiveContext(sc)
In Spark 2.2, HiveContext is deprecated and SparkSession.builder.enableHiveSupport is the advised replacement. I tried to create a new SparkSession using SparkSession.builder, but I couldn't find a way to initialise a SparkSession that uses TestHiveContext.
Is it possible to do that, or should I change my approach?
HiveContext and SQLContext have been replaced by SparkSession, as stated in the migration guide:
SparkSession is now the new entry point of Spark that replaces the old
SQLContext and
HiveContext. Note that the old SQLContext and HiveContext are kept for
backward compatibility. A new catalog interface is accessible from
SparkSession - existing API on databases and tables access such as
listTables, createExternalTable, dropTempView, cacheTable are moved
here.
https://spark.apache.org/docs/latest/sql-migration-guide-upgrade.html#upgrading-from-spark-sql-16-to-20
So you create a SparkSession instance with your test configuration and use it instead of HiveContext.
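For example, a minimal sketch of that builder pattern (shown here in PySpark; the Scala builder is the same apart from syntax, and the local master, app name and warehouse directory are hypothetical test settings, not anything the migration guide mandates):
from pyspark.sql import SparkSession

# A Hive-enabled session for tests; spark.sql.warehouse.dir is a
# hypothetical per-test location.
spark = SparkSession.builder \
    .master('local[2]') \
    .appName('unit-tests') \
    .enableHiveSupport() \
    .config('spark.sql.warehouse.dir', '/tmp/spark-warehouse') \
    .getOrCreate()

# The old entry points are reachable from the session:
# spark.sparkContext replaces the hand-built SparkContext, and
# spark.sql / spark.table / spark.catalog cover the former HiveContext API.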
I am trying to access S3 files from Spark locally, but I am getting this error:
Exception in thread "main" org.apache.hadoop.security.AccessControlException: Permission denied: s3n://bucketname/folder
I am also passing the jars hadoop-aws-2.7.3.jar, aws-java-sdk-1.7.4.jar and hadoop-auth-2.7.1.jar when submitting the Spark job from cmd.
Below is my code:
package org.test.snow
import org.apache.spark._
import org.apache.spark.SparkContext._
import org.apache.log4j._
import org.apache.spark.storage.StorageLevel
import org.apache.spark.sql.SparkSession
import org.apache.spark.util.Utils
import org.apache.spark.sql._
import org.apache.hadoop.fs.FileSystem
import org.apache.hadoop.fs.Path
object SnowS3 {
  def main(args: Array[String]) {
    val conf = new SparkConf().setAppName("IDV4")
    val sc = new SparkContext(conf)
    val spark = new org.apache.spark.sql.SQLContext(sc)
    import spark.implicits._

    sc.hadoopConfiguration.set("fs.s3a.impl", "org.apache.hadoop.fs.s3native.NativeS3FileSystem")
    sc.hadoopConfiguration.set("fs.s3a.awsAccessKeyId", "A*******************A")
    sc.hadoopConfiguration.set("fs.s3a.awsSecretAccessKey", "A********************A")

    val cus_1 = spark.read.format("com.databricks.spark.csv")
      .option("header", "true")
      .option("inferSchema", "true")
      .load("s3a://tb-us-east/working/customer.csv")
    cus_1.show()
  }
}
Any help would be appreciated.
FYI: I am using spark 2.1
You shouldn't set that fs.s3a.impl option; that's a superstition which seems to persist in spark examples.
Instead, use the S3A connector just by using the s3a:// prefix, with consistent versions of the hadoop-* jars (yes, hadoop-aws-2.7.3 needs hadoop-common-2.7.3) and with the S3A-specific authentication options, fs.s3a.access.key and fs.s3a.secret.key.
If that doesn't work, look at the S3A troubleshooting docs.
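A minimal sketch of those settings (written in PySpark here; the property names are ordinary Hadoop configuration keys, so the same values can be set from the Scala code above via sc.hadoopConfiguration.set, and the credential values are placeholders):
from pyspark.sql import SparkSession

# No fs.s3a.impl setting: the s3a:// prefix alone selects the S3A connector.
# spark.hadoop.* properties are copied into the Hadoop configuration;
# the credential values are placeholders.
spark = SparkSession.builder \
    .appName("s3a-read") \
    .config("spark.hadoop.fs.s3a.access.key", "YOUR_ACCESS_KEY") \
    .config("spark.hadoop.fs.s3a.secret.key", "YOUR_SECRET_KEY") \
    .getOrCreate()

cus_1 = spark.read \
    .option("header", "true") \
    .option("inferSchema", "true") \
    .csv("s3a://tb-us-east/working/customer.csv")
cus_1.show()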
I'm trying to connect PySpark to MongoDB with this (running on Databricks):
from pyspark import SparkConf, SparkContext
from pyspark.mllib.recommendation import ALS
from pyspark.sql import SQLContext
df = spark.read.format("com.mongodb.spark.sql.DefaultSource").load()
but I get this error:
java.lang.NoClassDefFoundError: org/apache/spark/sql/DataFrame
I am using Spark 2.0 and the mongo-spark-connector 2.11, and I have defined spark.mongodb.input.uri and spark.mongodb.output.uri.
You are using spark.read.format before you have defined spark.
As you can see in the Spark 2.1.0 documentation:
A SparkSession can be used to create DataFrames, register DataFrames as tables, execute SQL over tables, cache tables, and read parquet files. To create a SparkSession, use the following builder pattern:
spark = SparkSession.builder \
.master("local") \
.appName("Word Count") \
.config("spark.some.config.option", "some-value") \
.getOrCreate()
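So spark has to be defined before spark.read is called; a minimal sketch with the connector options mentioned above (the Mongo URIs are placeholders):
from pyspark.sql import SparkSession

# Define the session first; the URIs below are placeholders for the
# spark.mongodb.input.uri / spark.mongodb.output.uri values already configured.
spark = SparkSession.builder \
    .appName("mongo-read") \
    .config("spark.mongodb.input.uri", "mongodb://host:27017/db.collection") \
    .config("spark.mongodb.output.uri", "mongodb://host:27017/db.collection") \
    .getOrCreate()

df = spark.read.format("com.mongodb.spark.sql.DefaultSource").load()
df.show()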
I managed to make it work; the problem was that I was using mongo-spark-connector_2.10-1.0.0 instead of mongo-spark-connector_2.10-2.0.0.