writing psycopg2 query result to pyspark dataframe - postgresql

Is there a way to directly fetch the contents of a table from a PostgreSQL database into a pyspark dataframe using the psycopg2 library?
The solutions I have found online only talk about going through a pandas dataframe. That is not practical for a very large dataset in Spark, since it would load all the data onto the driver node.
The code I am using is as follows:
conn = psycopg2.connect(database="databasename", user='user', password='pass',
                        host='postgres.host', port='5432')
cur = conn.cursor()
cur.execute("select * from database.table limit 10")
data = cur.fetchall()
The resulting data is a list of tuples, which is awkward to convert into a dataframe.
Any suggestions would be greatly appreciated.

Use Spark's JDBC data source to connect to PostgreSQL directly and read the table; it returns a dataframe, and the data is read by the executors rather than being collected through the driver.
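A minimal sketch of that approach (the URL, credentials, and table name are placeholders carried over from the question, and the PostgreSQL JDBC driver jar is assumed to be on the Spark classpath, e.g. via spark.jars.packages):
# sketch: read the table through Spark's JDBC source instead of psycopg2;
# url, user, password and dbtable below are placeholders
df = (spark.read
      .format("jdbc")
      .option("url", "jdbc:postgresql://postgres.host:5432/databasename")
      .option("dbtable", "database.table")
      .option("user", "user")
      .option("password", "pass")
      .option("driver", "org.postgresql.Driver")
      .load())
df.show(10)
If the table is large, the numPartitions, partitionColumn, lowerBound and upperBound options let Spark split the read across parallel JDBC connections.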

Related

How to create a temporary table in snowflake based on pyspark dataframe

I can read a Snowflake table into a pyspark dataframe using sqlContext:
sql = "select * from table1"
df = (sqlContext.read
      .format(SNOWFLAKE_SOURCE_NAME)
      .options(**snowflake_options)
      .option("query", sql)
      .load())
How do I create a temporary table in snowflake (using pyspark code) and insert values from this pyspark dataframe (df)?
Just save as usual, using the Snowflake format:
snowflake_options = {
    ...
    'sfDatabase': 'dbabc',
    'dbtable': 'tablexyz',
    ...
}

(df
    .write
    .format(SNOWFLAKE_SOURCE_NAME)
    .options(**snowflake_options)
    .save())
I don't believe this can be done. At least not the way you want.
You can, technically, create a temporary table; but persisting it across subsequent statements is something I have not been able to find a way to do. If you run the following:
spark.sparkContext._jvm.net.snowflake.spark.snowflake.Utils.runQuery(snowflake_options, 'create temporary table tmp_table (id int, value text)')
you'll notice that it successfully returns a Java object indicating the temporary table was created; but once you try to run any further statements against it, you'll get nasty errors meaning it no longer exists. Somehow we mere mortals would need a way to access and persist the Snowflake session through the JVM API. That said, I also think that would run contrary to the Spark paradigm.
If you really need the special-case performance boost of running transformations on Snowflake instead of bringing it all into Spark, just keep everything in Snowflake to begin with by either
Using CTEs in the query, or
Using the runQuery API described above to create "temporary" permanent/transient tables, designing Snowflake queries that insert directly into those, and then cleaning them up (DROP them) when you are done; a rough sketch of this follows.
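A rough sketch of that second option from PySpark; SNOWFLAKE_SOURCE_NAME and snowflake_options are the same objects used above, tmp_table is just the example name from the runQuery call earlier, and the append mode is an assumption about how you want to load it:
# sketch: stage data in a transient table via runQuery, then drop it when done
sf_utils = spark.sparkContext._jvm.net.snowflake.spark.snowflake.Utils

sf_utils.runQuery(snowflake_options, "create transient table tmp_table (id int, value text)")

# write the pyspark dataframe into the staging table through the connector
(df
    .write
    .format(SNOWFLAKE_SOURCE_NAME)
    .options(**snowflake_options)
    .option("dbtable", "tmp_table")
    .mode("append")
    .save())

# ... run whatever Snowflake-side queries read from tmp_table here ...

sf_utils.runQuery(snowflake_options, "drop table if exists tmp_table")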

Using loop to create spark SQL queries

I am trying to create some spark SQL queries for different tables which I have collected as a list. I want to create SQL queries for all the tables present in the hive database. The hive context has been initialized. Following is my approach:
tables = spark.sql("show tables in survey_db")
# registering dataframe as temp view with 2 columns - tableName and db name
tables.createOrReplaceTempView("table_list")
# collecting my table names in a list
table_array= spark.sql("select collect_list(tableName) from table_list").collect()[0][0]
# array values(list)
table_array= [u'survey',u'market',u'customer']
I want to create spark SQL queries for the table names stored in table_array. For example:
for i in table_array:
    spark.sql("select * from survey_db.'i'")
I can't use shell scripting as I have to write a pyspark script for this. Please advise whether spark.sql queries can be created using loop/map. Thanks everyone.
You can achieve the same as follows:
sql_list = [f"select * from survey_db.{table}" for table in table_array]
for sql in sql_list:
    df = spark.sql(sql)
    df.show()
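If you need to keep the results around rather than just display them, one option (a sketch) is to hold the dataframes in a dict keyed by table name:
# sketch: one dataframe per table, addressable by name
dfs = {table: spark.sql(f"select * from survey_db.{table}") for table in table_array}
dfs["survey"].show()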

How to get Create Statement of Table in some other database in Spark using JDBC

Problem statement:
I have an Impala database where multiple tables are present.
I am creating a Spark JDBC connection to Impala and loading these tables into a spark dataframe for my validations, like this, which works fine:
val df = spark.read.format("jdbc")
  .option("url", "url")
  .option("dbtable", "tablename")
  .load()
Now the next step, and my actual problem, is that I need to find the create statement that was used to create each table in Impala itself.
Since I cannot run a command like the one below (it gives an error), is there any way I can fetch the show create table statement for tables present in Impala?
val df = spark.read.format("jdbc")
  .option("url", "url")
  .option("dbtable", "show create table tablename")
  .load()
Perhaps you can use Spark SQL "natively" to execute something like
val createstmt = spark.sql("show create table <tablename>")
The resulting dataframe will have a single column (type string) which contains a complete CREATE TABLE statement.
But if you still choose to go the JDBC route, there is always the option of using the good old JDBC interface directly. Scala understands everything written in Java, after all...
import java.sql._
val conn = DriverManager.getConnection("url")
val stmt = conn.createStatement()
val rs = stmt.executeQuery("show create table <tablename>")
...etc...

How to determine if source is empty?

I have an ETL process that is using an Athena source. I cannot figure out how to create a data frame if there is no data yet in the source. I was using the GlueContext:
trans_ddf = glueContext.create_dynamic_frame.from_catalog(
    database=my_db, table_name=my_table, transformation_ctx="trans_ddf")
This fails if there is no data in the source db, because it can't infer the schema.
I also tried using the sql function on the spark session:
has_rows_df = spark.sql("select cast(count(*) as boolean) as hasRows from my_table limit 1")
has_rows = has_rows_df.collect()[0].hasRows
This also fails because it can't infer the schema.
How can I create a data frame so I can determine if the source has any data?
len(has_rows_df.head(1)) == 0
should do the job robustly (in PySpark, head(1) returns a plain list, so check whether it is empty via its length).
See How to check if spark dataframe is empty?

How to execute a spark sql query from a map function (Python)?

How does one execute spark sql queries from routines that are not the driver portion of the program?
from pyspark import SparkContext
from pyspark.sql import SQLContext
from pyspark.sql.types import *
def doWork(rec):
    data = SQLContext.sql("select * from zip_data where STATEFP ='{sfp}' and COUNTYFP = '{cfp}' ".format(sfp=rec[0], cfp=rec[1]))
    for item in data.collect():
        print(item)
    # do something
    return (rec[0], rec[1])

if __name__ == "__main__":
    sc = SparkContext(appName="Some app")
    print("Starting some app")
    SQLContext = SQLContext(sc)
    parquetFile = SQLContext.read.parquet("/path/to/data/")
    parquetFile.registerTempTable("zip_data")
    df = SQLContext.sql("select distinct STATEFP,COUNTYFP from zip_data where STATEFP IN ('12') ")
    rslts = df.map(doWork)
    for rslt in rslts.collect():
        print(rslt)
In this example I'm attempting to query the same table but would like to query other tables registered in Spark SQL too.
Nested operations on a distributed data structure are simply not supported in Spark. You have to use joins, local (optionally broadcast) data structures, or access external data directly instead.
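For the question's concrete case, the nested lookup can usually be rewritten as a single join against the same registered table. A minimal sketch, assuming a SparkSession named spark (the question's SQLContext instance would accept the same .sql call), with the column names taken from the question:
# sketch: replace the per-record SQL lookups with one join over zip_data
joined = spark.sql("""
    select z.*
    from zip_data z
    join (select distinct STATEFP, COUNTYFP
          from zip_data
          where STATEFP in ('12')) k
      on z.STATEFP = k.STATEFP
     and z.COUNTYFP = k.COUNTYFP
""")
for row in joined.collect():
    print(row)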
If you can't accomplish your task with joins and want to run the SQL queries in memory:
You can consider using an in-memory database such as H2, Apache Derby, or Redis to execute fast, parallel SQL queries without losing the benefits of in-memory computation.
In-memory databases provide faster access than disk-backed databases such as MySQL or PostgreSQL.