ClassNotFoundException when connecting to Snowflake from PySpark on a local machine - pyspark

I am trying to connect to Snowflake from PySpark on my local machine.
My code is below.
from pyspark import SparkConf, SparkContext
from pyspark.sql import SQLContext
from pyspark.sql.types import *

sc = SparkContext("local", "sf_test")
spark = SQLContext(sc)
spark_conf = SparkConf().setMaster('local').setAppName('sf_test')

sfOptions = {
    "sfURL": "someaccount.some.address",
    "sfAccount": "someaccount",
    "sfUser": "someuser",
    "sfPassword": "somepassword",
    "sfDatabase": "somedb",
    "sfSchema": "someschema",
    "sfWarehouse": "somedw",
    "sfRole": "somerole",
}

SNOWFLAKE_SOURCE_NAME = "net.snowflake.spark.snowflake"
I get an error when I run this particular chunk of code.
df = spark.read.format(SNOWFLAKE_SOURCE_NAME).options(**sfOptions) \
    .option("query", """select * from "PRED_ORDER_DEV"."SALES"."V_PosAnalysis" pos
                        ORDER BY pos."SAPAccountNumber", pos."SAPMaterialNumber" """) \
    .load()
Py4JJavaError: An error occurred while calling o115.load. :
java.lang.ClassNotFoundException: Failed to find data source: net.snowflake.spark.snowflake.
Please find packages at http://spark.apache.org/third-party-projects.html
    at org.apache.spark.sql.execution.datasources.DataSource$.lookupDataSource(DataSource.scala:657)
    at org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:194)
    at org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:167)
I have loaded the connector and JDBC jar files and added them to the CLASSPATH:
pyspark --packages net.snowflake:snowflake-jdbc:3.11.1,net.snowflake:spark-snowflake_2.11:2.5.7-spark_2.4
CLASSPATH = C:\Program Files\Java\jre1.8.0_241\bin;C:\snowflake_jar
I want to be able to connect to Snowflake and read data with PySpark. Any help would be much appreciated!

To run a PySpark application you can use spark-submit and pass the JARs with the --packages option. I'm assuming you'd like to run in client mode, so pass that to the --deploy-mode option, and finally add the name of your PySpark program.
Something like below:
spark-submit --packages net.snowflake:snowflake-jdbc:3.11.1,net.snowflake:spark-snowflake_2.11:2.5.7-spark_2.4 --deploy-mode client spark-snowflake.py
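Alternatively, if you want the script itself to pull the connector (for example when launching it from an IDE instead of spark-submit), a minimal sketch is to set spark.jars.packages on the SparkSession builder before anything touches the JVM. The Maven coordinates below are the ones from the question; the app name is just a placeholder and sfOptions is the dict from the question:
from pyspark.sql import SparkSession

# Sketch: resolve the Snowflake connector and JDBC driver at session startup.
# Adjust the coordinates to your Spark/Scala version.
spark = SparkSession.builder \
    .master("local") \
    .appName("sf_test") \
    .config("spark.jars.packages",
            "net.snowflake:snowflake-jdbc:3.11.1,net.snowflake:spark-snowflake_2.11:2.5.7-spark_2.4") \
    .getOrCreate()

# sfOptions is the same dict as in the question.
df = spark.read.format("net.snowflake.spark.snowflake") \
    .options(**sfOptions) \
    .option("query", "select 1") \
    .load()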

Below is a working script.
Create a jar directory in the root of your project and add two jars:
snowflake-jdbc-3.13.4.jar (JDBC driver)
spark-snowflake_2.12-2.9.0-spark_3.1.jar (Spark connector)
Next you need to find out your Scala compiler version. I'm using PyCharm, so I double-press Shift and search for 'scala'; you will see something like scala-compiler-2.12.10.jar. The first digits of the scala-compiler version (in our case 2.12) must match the first digits of the Spark connector (spark-snowflake_2.12-2.9.0-spark_3.1.jar).
Driver - https://repo1.maven.org/maven2/net/snowflake/snowflake-jdbc/
Connector - https://docs.snowflake.com/en/user-guide/spark-connector-install.html#downloading-and-installing-the-connector
CHECK THE SCALA COMPILER VERSION BEFORE DOWNLOADING THE CONNECTOR
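If you'd rather not dig through the IDE's libraries, one way to check which Scala version your local Spark build uses (a quick sketch, assuming a bare local SparkSession starts without any extra jars; spark-submit --version prints the same information) is to ask the JVM directly:
from pyspark.sql import SparkSession

# Sketch: print the Scala version the local Spark build was compiled against.
# This works without any connector jars on the classpath.
spark = SparkSession.builder.master("local").appName("scala-version-check").getOrCreate()
print(spark.sparkContext._jvm.scala.util.Properties.versionString())  # e.g. "version 2.12.10"
spark.stop()
With the jars in place, the working script itself follows.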
from pyspark.sql import SparkSession

sfOptions = {
    "sfURL": "sfURL",
    "sfUser": "sfUser",
    "sfPassword": "sfPassword",
    "sfDatabase": "sfDatabase",
    "sfSchema": "sfSchema",
    "sfWarehouse": "sfWarehouse",
    "sfRole": "sfRole",
}

spark = SparkSession.builder \
    .master("local") \
    .appName("snowflake-test") \
    .config('spark.jars', 'jar/snowflake-jdbc-3.13.4.jar,jar/spark-snowflake_2.12-2.9.0-spark_3.1.jar') \
    .getOrCreate()

SNOWFLAKE_SOURCE_NAME = "net.snowflake.spark.snowflake"

df = spark.read.format(SNOWFLAKE_SOURCE_NAME) \
    .options(**sfOptions) \
    .option("query", "select * from some_table") \
    .load()

df.show()
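For completeness, writing a DataFrame back through the same connector should look roughly like the sketch below; the target table name is a placeholder, and the dbtable option plus append mode follow the standard Spark write API rather than anything shown in the question:
# Sketch: write the DataFrame back to Snowflake with the same options.
# "SOME_TARGET_TABLE" is a placeholder table name.
df.write.format(SNOWFLAKE_SOURCE_NAME) \
    .options(**sfOptions) \
    .option("dbtable", "SOME_TARGET_TABLE") \
    .mode("append") \
    .save()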

Related

Reading avro messages from Kafka in spark streaming/structured streaming

I am using pyspark for the first time.
Spark Version : 2.3.0
Kafka Version : 2.2.0
I have a Kafka producer which sends nested data in Avro format, and I am trying to write code in Spark Streaming / Structured Streaming in PySpark that will deserialize the Avro coming from Kafka into a DataFrame, do transformations, and write it in Parquet format to S3.
I was able to find Avro converters in Spark/Scala, but support in PySpark has not yet been added. How do I do the same conversion in PySpark?
Thanks.
As you mentioned, there are no direct libraries for reading and parsing Avro messages from Kafka in PySpark. But we can read and parse the Avro messages by writing a small wrapper and calling that function as a UDF in your PySpark streaming code, as below.
Reference:
Pyspark 2.4.0, read avro from kafka with read stream - Python
Note: Since Spark 2.4, Avro is a built-in but external data source module. Please deploy the application as per the deployment section of the "Apache Avro Data Source Guide".
Reference: https://spark-test.github.io/pyspark-coverage-site/pyspark_sql_avro_functions_py.html
Spark-Submit:
[adjust the package versions to match your Spark/Avro installation]
/usr/hdp/2.6.1.0-129/spark2/bin/pyspark --packages org.apache.spark:spark-avro_2.11:2.4.3 --conf spark.ui.port=4064
PySpark Streaming Code:
from pyspark import SparkContext
from pyspark.sql import SparkSession
from pyspark.sql.functions import *
from pyspark.sql.types import *
from pyspark.streaming import StreamingContext
from pyspark.sql.column import Column, _to_java_column
from pyspark.sql.functions import col, struct
from pyspark.sql.functions import udf
import json
import csv
import time
import os

# Spark Streaming context:
spark = SparkSession.builder.appName('streamingdata').getOrCreate()
sc = spark.sparkContext
ssc = StreamingContext(sc, 20)

# Kafka topic details:
KAFKA_TOPIC_NAME_CONS = "topicname"
KAFKA_OUTPUT_TOPIC_NAME_CONS = "topic_to_hdfs"
KAFKA_BOOTSTRAP_SERVERS_CONS = 'localhost.com:9093'

# Creating the readStream DataFrame:
df = spark.readStream \
    .format("kafka") \
    .option("kafka.bootstrap.servers", KAFKA_BOOTSTRAP_SERVERS_CONS) \
    .option("subscribe", KAFKA_TOPIC_NAME_CONS) \
    .option("startingOffsets", "latest") \
    .option("failOnDataLoss", "false") \
    .option("kafka.security.protocol", "SASL_SSL") \
    .option("kafka.client.id", "MCI-CIL") \
    .option("kafka.sasl.kerberos.service.name", "kafka") \
    .option("kafka.ssl.truststore.location", "/path/kafka_trust.jks") \
    .option("kafka.ssl.truststore.password", "changeit") \
    .option("kafka.sasl.kerberos.keytab", "/path/bdpda.headless.keytab") \
    .option("kafka.sasl.kerberos.principal", "bdpda") \
    .load()

df1 = df.selectExpr("CAST(value AS STRING)")
df1.registerTempTable("test")

# Deserializing the Avro payload: wrap the JVM from_avro function for use from Python.
def from_avro(col):
    jsonFormatSchema = """
    {
        "type": "record",
        "name": "struct",
        "fields": [
            {"name": "col1", "type": "long"},
            {"name": "col2", "type": "string"}
        ]
    }"""
    sc = SparkContext._active_spark_context
    avro = sc._jvm.org.apache.spark.sql.avro
    f = getattr(getattr(avro, "package$"), "MODULE$").from_avro
    return Column(f(_to_java_column(col), jsonFormatSchema))

spark.udf.register("JsonformatterWithPython", from_avro)
squared_udf = udf(from_avro)
df1 = spark.table("test")
df2 = df1.select(squared_udf("value"))

# Writing the stream out as Parquet:
df2.coalesce(1).writeStream \
    .format("parquet") \
    .option("checkpointLocation", "/path/chk31") \
    .outputMode("append") \
    .start("/path/stream/tgt31")

ssc.awaitTermination()
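Side note: on Spark 3.0 and later the Avro helpers are exposed to Python directly, so the JVM wrapper above is not needed. A minimal sketch, still assuming the job is launched with the matching spark-avro package and reusing the readStream DataFrame df and the schema from above:
from pyspark.sql.avro.functions import from_avro
from pyspark.sql.functions import col

# Sketch for Spark 3.0+: decode the raw binary Kafka value with the built-in from_avro.
jsonFormatSchema = """
{
    "type": "record",
    "name": "struct",
    "fields": [
        {"name": "col1", "type": "long"},
        {"name": "col2", "type": "string"}
    ]
}"""

decoded = df.select(from_avro(col("value"), jsonFormatSchema).alias("data")).select("data.*")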

How to write a pyspark-dataframe to redshift?

I am trying to write a PySpark DataFrame to Redshift but it results in the following error:
java.util.ServiceConfigurationError: org.apache.spark.sql.sources.DataSourceRegister: Provider org.apache.spark.sql.avro.AvroFileFormat could not be instantiated
Caused by: java.lang.NoSuchMethodError: org.apache.spark.sql.execution.datasources.FileFormat.$init$(Lorg/apache/spark/sql/execution/datasources/FileFormat;)V
Spark Version: 2.4.1
Spark-submit command: spark-submit --master local[*] --jars ~/Downloads/spark-avro_2.12-2.4.0.jar,~/Downloads/aws-java-sdk-1.7.4.jar,~/Downloads/RedshiftJDBC42-no-awssdk-1.2.20.1043.jar,~/Downloads/hadoop-aws-2.7.3.jar,~/Downloads/hadoop-common-2.7.3.jar --packages com.databricks:spark-redshift_2.11:2.0.1,com.amazonaws:aws-java-sdk:1.7.4,org.apache.hadoop:hadoop-aws:2.7.3,org.apache.hadoop:hadoop-common:2.7.3,org.apache.spark:spark-avro_2.12:2.4.0 script.py
from pyspark.sql import DataFrameReader
from pyspark.context import SparkContext
from pyspark.sql.session import SparkSession
from pyspark.sql import SQLContext
from pyspark.sql.functions import pandas_udf, PandasUDFType
from pyspark.sql.types import *
import sys
import os

pe_dl_dbname = os.environ.get("REDSHIFT_DL_DBNAME")
pe_dl_host = os.environ.get("REDSHIFT_DL_HOST")
pe_dl_port = os.environ.get("REDSHIFT_DL_PORT")
pe_dl_user = os.environ.get("REDSHIFT_DL_USER")
pe_dl_password = os.environ.get("REDSHIFT_DL_PASSWORD")

s3_bucket_path = "s3-bucket-name/sub-folder/sub-sub-folder"
tempdir = "s3a://{}".format(s3_bucket_path)
driver = "com.databricks.spark.redshift"

sc = SparkContext.getOrCreate()
sqlContext = SQLContext(sc)
spark = SparkSession(sc)
spark.conf.set("spark.sql.execution.arrow.enabled", "true")
sc._jsc.hadoopConfiguration().set("fs.s3.impl", "org.apache.hadoop.fs.s3native.NativeS3FileSystem")

datalake_jdbc_url = 'jdbc:redshift://{}:{}/{}?user={}&password={}'.format(pe_dl_host, pe_dl_port, pe_dl_dbname, pe_dl_user, pe_dl_password)

"""
The table is created in Redshift as follows:
create table adhoc_analytics.testing (name varchar(255), age integer);
"""

l = [('Alice', 1)]
df = spark.createDataFrame(l, ['name', 'age'])
df.show()

df.write \
    .format("com.databricks.spark.redshift") \
    .option("url", datalake_jdbc_url) \
    .option("dbtable", "adhoc_analytics.testing") \
    .option("tempdir", tempdir) \
    .option("tempformat", "CSV") \
    .save()
Databricks spark-redshift doesn't work with Spark version 2.4.1.
Here is the fork that I maintain to make it work with Spark 2.4.1:
https://github.com/goibibo/spark-redshift
How to use it:
pyspark --packages "com.github.goibibo:spark-redshift:v4.1.0" --repositories "https://jitpack.io"

Failed to find data source: com.mongodb.spark.sql.DefaultSource

I'm trying to connect spark (pyspark) to mongodb as follows:
conf = SparkConf()
conf.set('spark.mongodb.input.uri', default_mongo_uri)
conf.set('spark.mongodb.output.uri', default_mongo_uri)
sc = SparkContext(conf=conf)
sqlContext = SQLContext(sc)

spark = SparkSession \
    .builder \
    .appName("my-app") \
    .config("spark.mongodb.input.uri", default_mongo_uri) \
    .config("spark.mongodb.output.uri", default_mongo_uri) \
    .getOrCreate()
But when I do the following:
users = spark.read.format("com.mongodb.spark.sql.DefaultSource") \
    .option("uri", '{uri}.{col}'.format(uri=mongo_uri, col='users')) \
    .load()
I get this error:
java.lang.ClassNotFoundException: Failed to find data source:
com.mongodb.spark.sql.DefaultSource
I did the same thing from pyspark shell and I was able to retrieve data. This is the command I ran:
pyspark --conf "spark.mongodb.input.uri=mongodb_uri" --conf "spark.mongodb.output.uri=mongodburi" --packages org.mongodb.spark:mongo-spark-connector_2.11:2.2.2
But here we have the option to specify the package we need to use. What about standalone apps and scripts? How can I configure mongo-spark-connector there?
Any ideas?
Here is how I did it in a Jupyter notebook:
1. Download the jars from Maven Central or any other repository and put them in a directory called "jars":
mongo-spark-connector_2.11-2.4.0
mongo-java-driver-3.9.0
2. Create a session and write/read data:
from pyspark import SparkConf
from pyspark.sql import SparkSession
from pyspark.sql.functions import *
from pyspark.sql.types import *

working_directory = 'jars/*'

my_spark = SparkSession \
    .builder \
    .appName("myApp") \
    .config("spark.mongodb.input.uri", "mongodb://127.0.0.1/test.myCollection") \
    .config("spark.mongodb.output.uri", "mongodb://127.0.0.1/test.myCollection") \
    .config('spark.driver.extraClassPath', working_directory) \
    .getOrCreate()

people = my_spark.createDataFrame([("JULIA", 50), ("Gandalf", 1000), ("Thorin", 195), ("Balin", 178), ("Kili", 77),
                                   ("Dwalin", 169), ("Oin", 167), ("Gloin", 158), ("Fili", 82), ("Bombur", 22)],
                                  ["name", "age"])

people.write.format("com.mongodb.spark.sql.DefaultSource").mode("append").save()

df = my_spark.read.format("com.mongodb.spark.sql.DefaultSource").load()
df.select('*').where(col("name") == "JULIA").show()
As a result you will see the matching row printed.
If you are using SparkContext & SparkSession and have specified the connector jar packages in SparkConf, check the following code:
from pyspark import SparkContext, SparkConf

conf = SparkConf().set("spark.jars.packages", "org.mongodb.spark:mongo-spark-connector_2.11:2.3.2")
sc = SparkContext(conf=conf)

from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("myApp") \
    .config("spark.mongodb.input.uri", "mongodb://xxx.xxx.xxx.xxx:27017/sample1.zips") \
    .config("spark.mongodb.output.uri", "mongodb://xxx.xxx.xxx.xxx:27017/sample1.zips") \
    .getOrCreate()

df = spark.read.format("com.mongodb.spark.sql.DefaultSource").load()
df.printSchema()
If you are using only SparkSession, then use the following code:
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("myApp") \
    .config("spark.mongodb.input.uri", "mongodb://xxx.xxx.xxx.xxx:27017/sample1.zips") \
    .config("spark.mongodb.output.uri", "mongodb://xxx.xxx.xxx.xxx:27017/sample1.zips") \
    .config('spark.jars.packages', 'org.mongodb.spark:mongo-spark-connector_2.11:2.3.2') \
    .getOrCreate()

df = spark.read.format("com.mongodb.spark.sql.DefaultSource").load()
df.printSchema()
If you're using the newest version of mongo-spark-connector, i.e. v10.0.1 at the time of writing, you need to use a SparkConf object, as stated in the MongoDB documentation (https://www.mongodb.com/docs/spark-connector/current/configuration/).
Besides, you don't need to manually download anything; it will do that for you.
Below is the solution I came up with, for:
mongo-spark-connector: 10.0.1
mongo server : 5.0.8
spark : 3.2.0
import os
from pyspark import SparkConf, SparkContext
from pyspark.sql import SparkSession
from pyspark.sql.functions import mean

def init_spark():
    password = os.environ["MONGODB_PASSWORD"]
    user = os.environ["MONGODB_USER"]
    host = os.environ["MONGODB_HOST"]
    db_auth = os.environ["MONGODB_DB_AUTH"]
    mongo_conn = f"mongodb://{user}:{password}@{host}:27017/{db_auth}"

    conf = SparkConf()
    # Download mongo-spark-connector and its dependencies.
    # This will download all the necessary jars and put them in $HOME/.ivy2/jars,
    # no need to manually download them:
    conf.set("spark.jars.packages", "org.mongodb.spark:mongo-spark-connector:10.0.1")

    # Set up the read connection:
    conf.set("spark.mongodb.read.connection.uri", mongo_conn)
    conf.set("spark.mongodb.read.database", "<my-read-database>")
    conf.set("spark.mongodb.read.collection", "<my-read-collection>")

    # Set up the write connection:
    conf.set("spark.mongodb.write.connection.uri", mongo_conn)
    conf.set("spark.mongodb.write.database", "<my-write-database>")
    conf.set("spark.mongodb.write.collection", "<my-write-collection>")
    # If you need to update instead of inserting:
    conf.set("spark.mongodb.write.operationType", "update")

    SparkContext(conf=conf)

    return SparkSession \
        .builder \
        .appName('<my-app-name>') \
        .getOrCreate()

spark = init_spark()
df = spark.read.format("mongodb").load()
df_grouped = df.groupBy("<some-column>").agg(mean("<some-other-column>"))
df_grouped.write.format("mongodb").mode("append").save()
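If you need a one-off read from a different collection than the one wired into the session config, the 10.x connector also accepts the settings as per-read options; a small sketch, with placeholder names in angle brackets:
# Sketch: override the database/collection for a single read with connector 10.x.
other = spark.read.format("mongodb") \
    .option("database", "<another-database>") \
    .option("collection", "<another-collection>") \
    .load()
other.printSchema()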
I was also facing the same error, "java.lang.ClassNotFoundException: Failed to find data source: com.mongodb.spark.sql.DefaultSource", while trying to connect to MongoDB from Spark (2.3).
I had to download and copy the mongo-spark-connector_2.11 JAR file(s) into the jars directory of the Spark installation.
That resolved my issue and I was successfully able to call my Spark code via spark-submit.
Hope it helps.
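If PySpark was installed with pip and you're not sure where that jars directory lives, here is a small sketch (assuming a pip install; a full Spark distribution keeps its jars under $SPARK_HOME/jars instead):
import os
import pyspark

# Sketch: the jars directory of a pip-installed PySpark; copy the
# mongo-spark-connector JAR(s) here so spark-submit can find them.
jars_dir = os.path.join(os.path.dirname(pyspark.__file__), "jars")
print(jars_dir)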
Here is how this error got resolved by downloading the jar files below. (Used the solution of this question.)
1. Download the jar files below:
mongo-spark-connector_2.11-2.4.1 from here
mongo-java-driver-3.9.0 from here
2. Copy and paste both these jar files into the 'jars' location in the Spark directory.
PySpark code in a Jupyter notebook:
import pyspark
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("mongo"). \
    config("spark.mongodb.input.uri", "mongodb://127.0.0.1:27017/$database.$table_name"). \
    config("spark.mongodb.output.uri", "mongodb://127.0.0.1:27017/$database.$table_name"). \
    getOrCreate()

df = spark.read.format('com.mongodb.spark.sql.DefaultSource') \
    .option("uri", "mongodb://127.0.0.1:27017/$database.$table_name") \
    .load()
df.printSchema()

# Create a temp view of df to view the data
table = df.createOrReplaceTempView("df")

# To read the table present in MongoDB
query1 = spark.sql("SELECT * FROM df")
query1.show(10)
You are not using sc to create the SparkSession. Maybe this code can help you:
conf.set('spark.mongodb.input.uri', mongodb_input_uri)
conf.set('spark.mongodb.input.collection', 'collection_name')
conf.set('spark.mongodb.output.uri', mongodb_output_uri)
sc = SparkContext(conf=conf)
spark = SparkSession(sc) # Using the context (conf) to create the session

PySpark sqlContext read Postgres 9.6 NullPointerException

Trying to read a table with PySpark from a Postgres DB. I have set up the following code and verified SparkContext exists:
import os
os.environ['PYSPARK_SUBMIT_ARGS'] = '--driver-class-path /tmp/jars/postgresql-42.0.0.jar --jars /tmp/jars/postgresql-42.0.0.jar pyspark-shell'
from pyspark import SparkContext, SparkConf
conf = SparkConf()
conf.setMaster("local[*]")
conf.setAppName('pyspark')
sc = SparkContext(conf=conf)
from pyspark.sql import SQLContext
properties = {
    "driver": "org.postgresql.Driver"
}
url = 'jdbc:postgresql://tom:#localhost/gqp'
sqlContext = SQLContext(sc)

sqlContext.read \
    .format("jdbc") \
    .option("url", url) \
    .option("driver", properties["driver"]) \
    .option("dbtable", "specimen") \
    .load()
I get the following error:
Py4JJavaError: An error occurred while calling o812.load. : java.lang.NullPointerException
The name of my database is gqp, the table is specimen, and I have verified it is running on localhost using the Postgres.app macOS app.
The URL was the problem!
Originally it was: url = 'jdbc:postgresql://tom:#localhost/gqp'
I removed the tom:# part, and it worked. The URL must follow the pattern: jdbc:postgresql://ip_address:port/db_name, whereas mine was directly copied from a Flask project.
If you're reading this, hope you didn't make this same mistake :)
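For reference, a minimal sketch of the corrected read, with the credentials passed as options instead of being embedded in the URL (the user and database names come from the question; the port is the Postgres default and the password is a placeholder):
# Sketch: correct URL format, credentials passed as separate options.
url = 'jdbc:postgresql://localhost:5432/gqp'

df = sqlContext.read \
    .format("jdbc") \
    .option("url", url) \
    .option("driver", "org.postgresql.Driver") \
    .option("dbtable", "specimen") \
    .option("user", "tom") \
    .option("password", "your_password") \
    .load()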

Using pyspark to connect to PostgreSQL

I am trying to connect to a database with pyspark and I am using the following code:
sqlctx = SQLContext(sc)
df = sqlctx.load(
    url="jdbc:postgresql://[hostname]/[database]",
    dbtable="(SELECT * FROM talent LIMIT 1000) as blah",
    password="MichaelJordan",
    user="ScottyPippen",
    source="jdbc",
    driver="org.postgresql.Driver"
)
and I am getting an error.
Any idea why this is happening?
Edit: I am trying to run the code locally on my computer.
Download the PostgreSQL JDBC Driver from https://jdbc.postgresql.org/download/
Then replace the database configuration values with your own.
from pyspark.sql import SparkSession

spark = SparkSession \
    .builder \
    .appName("Python Spark SQL basic example") \
    .config("spark.jars", "/path_to_postgresDriver/postgresql-42.2.5.jar") \
    .getOrCreate()

df = spark.read \
    .format("jdbc") \
    .option("url", "jdbc:postgresql://localhost:5432/databasename") \
    .option("dbtable", "tablename") \
    .option("user", "username") \
    .option("password", "password") \
    .option("driver", "org.postgresql.Driver") \
    .load()

df.printSchema()
More info: https://spark.apache.org/docs/latest/sql-data-sources-jdbc.html
The following worked for me with postgres on localhost:
Download the PostgreSQL JDBC Driver from https://jdbc.postgresql.org/download.html.
For the pyspark shell you use the SPARK_CLASSPATH environment variable:
$ export SPARK_CLASSPATH=/path/to/downloaded/jar
$ pyspark
For submitting a script via spark-submit use the --driver-class-path flag:
$ spark-submit --driver-class-path /path/to/downloaded/jar script.py
In the python script load the tables as a DataFrame as follows:
from pyspark.sql import DataFrameReader

url = 'postgresql://localhost:5432/dbname'
properties = {'user': 'username', 'password': 'password'}
df = DataFrameReader(sqlContext).jdbc(
    url='jdbc:%s' % url, table='tablename', properties=properties
)
or alternatively:
df = sqlContext.read.format('jdbc'). \
    options(url='jdbc:%s' % url, dbtable='tablename'). \
    load()
Note that when submitting the script via spark-submit, you need to define the sqlContext.
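For example, a minimal sketch of defining it at the top of the script (the app name is arbitrary; on Spark 2+ you could equally build a SparkSession and use it directly):
from pyspark import SparkContext
from pyspark.sql import SQLContext

# Sketch: create the sqlContext yourself when the script runs under spark-submit.
sc = SparkContext(appName="postgres-read")
sqlContext = SQLContext(sc)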
It is necessary to copy postgresql-42.1.4.jar to all nodes; in my case, I copied it to the path /opt/spark-2.2.0-bin-hadoop2.7/jars.
Also, I set the classpath in ~/.bashrc (export SPARK_CLASSPATH="/opt/spark-2.2.0-bin-hadoop2.7/jars"),
and it works fine in the pyspark console and Jupyter.
You normally need either:
to install the Postgres driver on your cluster,
to provide the Postgres driver jar from your client with the --jars option,
or to provide the Maven coordinates of the Postgres driver with the --packages option.
If you detail how you are launching pyspark, we may be able to give you more details.
Some clues/ideas:
spark-cannot-find-the-postgres-jdbc-driver
Not able to connect to postgres using jdbc in pyspark shell
One approach, building on the example per the quick start guide, is this blog post which shows how to add the --packages org.postgresql:postgresql:9.4.1211 argument to the spark-submit command.
This downloads the driver into the ~/.ivy2/jars directory, in my case /Users/derekhill/.ivy2/jars/org.postgresql_postgresql-9.4.1211.jar. Passing this as the --driver-class-path option gives the full spark-submit command of:
/usr/local/Cellar/apache-spark/2.0.2/bin/spark-submit\
--packages org.postgresql:postgresql:9.4.1211\
--driver-class-path /Users/derekhill/.ivy2/jars/org.postgresql_postgresql-9.4.1211.jar\
--master local[4] main.py
And in main.py:
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

dataframe = spark.read.format('jdbc').options(
    url="jdbc:postgresql://localhost/my_db?user=derekhill&password=''",
    database='my_db',
    dbtable='my_table'
).load()

dataframe.show()
To use pyspark with a Jupyter notebook, first open pyspark with:
pyspark --driver-class-path /spark_drivers/postgresql-42.2.12.jar --jars /spark_drivers/postgresql-42.2.12.jar
Then in the Jupyter notebook:
import os
jardrv = "~/spark_drivers/postgresql-42.2.12.jar"
from pyspark.sql import SparkSession
spark = SparkSession.builder.config('spark.driver.extraClassPath', jardrv).getOrCreate()
url = 'jdbc:postgresql://127.0.0.1/dbname'
properties = {'user': 'usr', 'password': 'pswd'}
df = spark.read.jdbc(url=url, table='tablename', properties=properties)
I had trouble getting a connection to the Postgres DB with the jars I had on my computer.
This code solved my problem with the driver:
from pyspark.sql import SparkSession
import os

sparkClassPath = os.environ['PYSPARK_SUBMIT_ARGS'] = '--packages org.postgresql:postgresql:42.1.1 pyspark-shell'

spark = SparkSession \
    .builder \
    .config("spark.driver.extraClassPath", sparkClassPath) \
    .getOrCreate()

df = spark.read \
    .format("jdbc") \
    .option("url", "jdbc:postgresql://localhost:5432/yourDBname") \
    .option("driver", "org.postgresql.Driver") \
    .option("dbtable", "yourtablename") \
    .option("user", "postgres") \
    .option("password", "***") \
    .load()

df.show()
I also got this error:
java.sql.SQLException: No suitable driver
    at java.sql.DriverManager.getDriver(Unknown Source)
Adding .config('spark.driver.extraClassPath', './postgresql-42.2.18.jar') to the SparkSession worked.
e.g.:
from pyspark import SparkContext, SparkConf
import os
from pyspark.sql.session import SparkSession

spark = SparkSession \
    .builder \
    .appName('Python Spark Postgresql') \
    .config("spark.jars", "./postgresql-42.2.18.jar") \
    .config('spark.driver.extraClassPath', './postgresql-42.2.18.jar') \
    .getOrCreate()

df = spark.read \
    .format("jdbc") \
    .option("url", "jdbc:postgresql://localhost:5432/abc") \
    .option("dbtable", 'tablename') \
    .option("user", "postgres") \
    .option("password", "1") \
    .load()

df.printSchema()
This exception means the JDBC driver is not on the driver classpath.
You can submit the JDBC jars with the --jars parameter of spark-submit, and also add them to the driver classpath using spark.driver.extraClassPath.
Download the postgresql jar from here:
Add it to the Spark jars/ folder.
Restart your kernel.
It should work.
Just initialize pyspark with --jars <path/to/your/jdbc.jar>
E.g.: pyspark --jars /path/Downloads/postgresql-42.2.16.jar
Then create a dataframe as suggested in the other answers above, e.g.:
df2 = spark.read.format("jdbc") \
    .option("url", "jdbc:postgresql://localhost:5432/db") \
    .option("dbtable", "yourTableHere") \
    .option("user", "postgres") \
    .option("password", "postgres") \
    .option("driver", "org.postgresql.Driver") \
    .load()
Download postgres JDBC driver from https://jdbc.postgresql.org/download.html
and use the script below.
Changes to make:
Edit PATH_TO_JAR_FILE
Save your DB credentials in an environment file and load them
Query the DB using the query option and set the JDBC fetch size with the fetchsize option
import os
from pyspark.sql import SparkSession

PATH_TO_JAR_FILE = "/home/user/Downloads/postgresql-42.3.3.jar"

spark = SparkSession \
    .builder \
    .appName("Example") \
    .config("spark.jars", PATH_TO_JAR_FILE) \
    .getOrCreate()

DB_HOST = os.environ.get("PG_HOST")
DB_PORT = os.environ.get("PG_PORT")
DB_NAME = os.environ.get("PG_DB_CLEAN")
DB_PASSWORD = os.environ.get("PG_PASSWORD")
DB_USER = os.environ.get("PG_USERNAME")

df = spark.read \
    .format("jdbc") \
    .option("url", f"jdbc:postgresql://{DB_HOST}:{DB_PORT}/{DB_NAME}") \
    .option("user", DB_USER) \
    .option("password", DB_PASSWORD) \
    .option("driver", "org.postgresql.Driver") \
    .option("query", "select * from your_table") \
    .option('fetchsize', "1000") \
    .load()

df.printSchema()
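If the table is large, Spark's JDBC source can also read it in parallel. A rough sketch follows; note the partitioning options require dbtable plus a numeric partitionColumn, so they can't be combined with the query option used above, and the column name and bounds here are placeholders:
# Sketch: parallel JDBC read with 8 partitions over a numeric id column.
# Replace "your_table", "id", and the bounds with values from your schema.
df_parallel = spark.read \
    .format("jdbc") \
    .option("url", f"jdbc:postgresql://{DB_HOST}:{DB_PORT}/{DB_NAME}") \
    .option("user", DB_USER) \
    .option("password", DB_PASSWORD) \
    .option("driver", "org.postgresql.Driver") \
    .option("dbtable", "your_table") \
    .option("partitionColumn", "id") \
    .option("lowerBound", "1") \
    .option("upperBound", "1000000") \
    .option("numPartitions", "8") \
    .load()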