How to save dataframe as shp/geojson in PySpark/Databricks?

I have a DataFrame that has WKT in one of the columns. That column can be transformed to geojson if needed.
Is there a way to save (output to storage) this data as a geojson or shapefile in Databricks/PySpark?
Example of a DataFrame:
Id  Color   Wkt
1   Green   POINT (3 7)
2   Yellow  POLYGON ((30 10, 40 40, 20 40, 10 20, 30 10))
The DataFrame can have ~100K rows and more.
I've tried using the GeoPandas library, but it doesn't work:
import os
import geopandas as gpd
from shapely import wkt

# df is a PySpark DataFrame
# Convert it to a pandas dataframe
pd_df = df.toPandas()
pd_df['geometry'] = pd_df['point_wkt'].apply(wkt.loads)
# Convert it to a GeoPandas dataframe
gdf = gpd.GeoDataFrame(pd_df, geometry='geometry')
# The following fails:
gdf.to_file(os.path.join(MOUNT_POINT, output_folder, "shapefile.shp"))
The error is:
Failed to create file /mnt/traces/output_folder/shapefile.shp: No such file or directory
The error makes no sense as the folder /mnt/traces/output_folder/ does exist, and I've successfully saved the PySpark dataframe as CSV to it.
df.write.csv(os.path.join(MOUNT_POINT,output_folder), sep='\t')
I'm able to save the GeoPandas dataframe to a shapefile with the above code when running locally, but not on Spark (Databricks).

If you are using Databricks, install the libraries with dbutils:
dbutils.library.installPyPI("geopandas")
dbutils.library.installPyPI("shapely")
dbutils.library.installPyPI("geojsonio")
If you are using plain PySpark, install them into your Python environment:
pip3 install shapely
pip3 install geopandas
pip3 install geojsonio
Before writing to the path, check that it is mounted in Databricks:
display(dbutils.fs.ls('/mnt/traces'))

If you run dbutils.fs.ls("/mnt/traces/output_folder/") you'll see the path listed as
dbfs:/mnt/traces/output_folder/shapefile.shp, which points to the solution:
Solution: when writing with local file APIs, use the /dbfs/mnt/ prefix for the path instead of /mnt/:
gdf.to_file("/dbfs/mnt/traces/output_folder/shapefile.shp")
Good luck!

Related

Reading Excel(xlsx) with Pyspark does not work above a certain medium size

I have the following cluster configuration in Databricks: 64 GB, 8 cores.
The tests were carried out with this as the only notebook on the cluster; no other notebooks were running at the time.
I find that reading a simple 30 MB Excel file in Spark keeps loading and never finishes. I am using the following code:
sdf = spark.read.format("com.crealytics.spark.excel") \
    .option("header", True) \
    .option("inferSchema", "true") \
    .load(my_path)
display(sdf)
I have tried reducing the Excel file, and it works fine up to 15 MB.
As a workaround I am going to export the Excel file to CSV and read that instead, but I find it surprising that Spark can't even read 30 MB of Excel.
Or am I doing something wrong in the configuration?
You need to install these two libraries on your Databricks cluster to read Excel files. Follow these paths to install them:
Clusters -> select your cluster -> Libraries -> Install New -> Maven -> in Coordinates: com.crealytics:spark-excel_2.12:0.13.5
Clusters -> select your cluster -> Libraries -> Install New -> PyPI-> in Package: xlrd
Now, you will be able to read your excel as follows:
sdf = spark.read.format("com.crealytics.spark.excel") \
    .option("header", "true") \
    .option("inferSchema", "true") \
    .option("dataAddress", "'NameOfYourExcelSheet'!A1") \
    .load(filePath)
Can you try the option below, as shown in the spark-excel GitHub project?
You can adjust the number of rows based on your input; the value 20 is just a sample.
.option("maxRowsInMemory", 20) // Optional, default None. If set, uses a streaming reader which can help with big files (will fail if used with xls format files)
As mentioned above, the option does not work for .xls files.
If the files are really big, consider the options discussed in the linked issue #590.
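For example, here is a sketch of how the streaming option could be added to the PySpark reader from the first answer (the path and sheet address are placeholders):
sdf = spark.read.format("com.crealytics.spark.excel") \
    .option("header", "true") \
    .option("inferSchema", "true") \
    .option("dataAddress", "'Sheet1'!A1") \
    .option("maxRowsInMemory", 20) \
    .load(filePath)  # maxRowsInMemory enables the streaming reader (xlsx only, not xls)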
Please validate before using any of the options specified.
Cheers...

Sequence files from Sqoop import

I have imported a table using sqoop and saved it as a sequence file.
How do I read this file into an RDD or Dataframe?
I have tried sc.sequenceFile(), but I'm not sure what to pass as keyClass and valueClass. I tried using org.apache.hadoop.io.Text and org.apache.hadoop.io.LongWritable for keyClass and valueClass,
but it did not work. I am using pyspark for reading the files.
This does not work in Python, but it does work in Scala:
You need to do following steps:
step1:
If you import a table from Sqoop as a sequence file, a jar file is generated, and you need to use the class in that jar as the valueClass when reading the sequence file. This jar is usually placed in the /tmp folder, but you can redirect it to a specific folder (i.e. a local folder, not HDFS) with the --bindir option.
example:
sqoop import --connect jdbc:mysql://ms.itversity.com/retail_export \
  --username retail_user --password itversity --table customers -m 1 \
  --target-dir '/user/srikarthik/udemy/practice4/problem2/outputseq' \
  --as-sequencefile --delete-target-dir --bindir /home/srikarthik/sqoopjars/
step2:
Also, you need to download the Sqoop jar file from the link below:
http://www.java2s.com/Code/Jar/s/Downloadsqoop144hadoop200jar.htm
step3:
Suppose the customers table has been imported as a sequence file using Sqoop.
Run spark-shell --jars path-to-customers.jar,sqoop-1.4.4-hadoop200.jar
example:
spark-shell --master yarn --jars /home/srikarthik/sqoopjars/customers.jar,/home/srikarthik/tejdata/kjar/sqoop-1.4.4-hadoop200.jar
step4: Now run the commands below inside the spark-shell:
scala> import org.apache.hadoop.io.LongWritable
scala> val data = sc.sequenceFile[LongWritable,customers]("/user/srikarthik/udemy/practice4/problem2/outputseq")
scala> data.map(tup => (tup._1.get(), tup._2.toString())).collect.foreach(println)
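For reference, PySpark's sc.sequenceFile accepts fully qualified Writable class names as strings; a minimal sketch that works when both key and value are standard Hadoop Writables (the path is a placeholder, and this will not handle the Sqoop-generated record class, which is why the Scala route above is used):
rdd = sc.sequenceFile(
    "/path/to/standard_writable_seqfile",          # placeholder path
    keyClass="org.apache.hadoop.io.LongWritable",  # key Writable class
    valueClass="org.apache.hadoop.io.Text",        # value Writable class
)
rdd.take(5)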
You can use the SeqDataSourceV2 package to read the sequence file with the DataFrame API without any prior knowledge of the schema (i.e. the keyClass and valueClass).
Note that the current version is only compatible with Spark 2.4.
$ pyspark --jars seq-datasource-v2-0.2.0.jar
df = spark.read.format("seq").load("data.seq")
df.show()

How to refer to Delta Lake tables in a Jupyter notebook using PySpark

I'm trying to start using Delta Lake with PySpark.
To be able to use Delta Lake, I invoke pyspark from the Anaconda shell prompt as:
pyspark --packages io.delta:delta-core_2.11:0.3.0
Here is the reference from Delta Lake: https://docs.delta.io/latest/quick-start.html
All Delta Lake commands work fine from the Anaconda shell prompt.
In a Jupyter notebook, a reference to a Delta Lake table gives an error. Here is the code I am running in the notebook:
df_advisorMetrics.write.mode("overwrite").format("delta").save("/DeltaLake/METRICS_F_DELTA")
spark.sql("create table METRICS_F_DELTA using delta location '/DeltaLake/METRICS_F_DELTA'")
Below is the code I am using at the start of the notebook to connect to PySpark:
import findspark
findspark.init()
findspark.find()
import pyspark
findspark.find()
Below is the error I get:
Py4JJavaError: An error occurred while calling o116.save.
: java.lang.ClassNotFoundException: Failed to find data source: delta. Please find packages at http://spark.apache.org/third-party-projects.html
Any suggestions?
I have created a Google Colab/Jupyter Notebook example that shows how to run Delta Lake.
https://github.com/prasannakumar2012/spark_experiments/blob/master/examples/Delta_Lake.ipynb
It has all the steps needed to run. It uses the latest Spark and Delta versions; please change the versions accordingly.
A potential solution is to follow the techniques noted in Import PySpark packages with a regular Jupyter notebook.
Another potential solution is to download the delta-core JAR and place it in the $SPARK_HOME/jars folder so when you run jupyter notebook it automatically includes the Delta Lake JAR.
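As a minimal sketch of the first approach (the package coordinates come from the question; setting PYSPARK_SUBMIT_ARGS before findspark.init() is one common way to do this, not the only one):
import os
import findspark

# Ask the PySpark launcher to pull in the Delta Lake package before Spark starts
os.environ["PYSPARK_SUBMIT_ARGS"] = "--packages io.delta:delta-core_2.11:0.3.0 pyspark-shell"

findspark.init()

from pyspark.sql import SparkSession
spark = SparkSession.builder.getOrCreate()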
I use DeltaLake all the time from a Jupyter notebook.
Try the following in your Jupyter notebook running Python 3.x.
### import Spark libraries
from pyspark.sql import SparkSession
import pyspark.sql.functions as F
### spark package maven coordinates - in case you are loading more than just delta
spark_packages_list = [
'io.delta:delta-core_2.11:0.6.1',
]
spark_packages = ",".join(spark_packages_list)
### SparkSession
spark = (
SparkSession.builder
.config("spark.jars.packages", spark_packages)
.config("spark.delta.logStore.class", "org.apache.spark.sql.delta.storage.S3SingleDriverLogStore")
.config("spark.sql.extensions", "io.delta.sql.DeltaSparkSessionExtension")
.config("spark.sql.catalog.spark_catalog", "org.apache.spark.sql.delta.catalog.DeltaCatalog")
.getOrCreate()
)
sc = spark.sparkContext
### Python library in delta jar.
### Must create sparkSession before import
from delta.tables import *
Assuming you have a Spark dataframe df.
HDFS
Save
### overwrite, change mode="append" if you prefer
(df.write.format("delta")
.save("my_delta_file", mode="overwrite", partitionBy="partition_column_name")
)
Load
df_delta = spark.read.format("delta").load("my_delta_file")
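Since delta.tables was imported above, you can also work with the same path through the DeltaTable API; a small sketch (the path matches the save above):
from delta.tables import DeltaTable  # already in scope via the import above

delta_table = DeltaTable.forPath(spark, "my_delta_file")
delta_table.history().show()   # audit log of the writes to this table
df_delta = delta_table.toDF()  # same data as spark.read.format("delta").load(...)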
AWS S3 ObjectStore
Initial S3 setup
### Spark S3 access
hdpConf = sc._jsc.hadoopConfiguration()
user = os.getenv("USER")
### Assuming you have your AWS credentials in a jceks keystore.
hdpConf.set("hadoop.security.credential.provider.path", f"jceks://hdfs/user/{user}/awskeyfile.jceks")
hdpConf.set("fs.s3a.fast.upload", "true")
### optimize s3 bucket-level parquet column selection
### un-comment to use
# hdpConf.set("fs.s3a.experimental.fadvise", "random")
### Pick one upload buffer option
hdpConf.set("fs.s3a.fast.upload.buffer", "bytebuffer") # JVM off-heap memory
# hdpConf.set("fs.s3a.fast.upload.buffer", "array") # JVM on-heap memory
# hdpConf.set("fs.s3a.fast.upload.buffer", "disk") # DEFAULT - directories listed in fs.s3a.buffer.dir
s3_bucket_path = "s3a://your-bucket-name"
s3_delta_prefix = "delta" # or whatever
Save
### overwrite, change mode="append" if you prefer
(df.write.format("delta")
.save(f"{s3_bucket_path}/{s3_delta_prefix}/", mode="overwrite", partitionBy="partition_column_name")
)
Load
df_delta = spark.read.format("delta").load(f"{s3_bucket_path}/{s3_delta_prefix}/")
Spark Submit
Not directly answering the original question, but for completeness, you can do the following as well.
Add the following to your spark-defaults.conf file
spark.jars.packages io.delta:delta-core_2.11:0.6.1
spark.delta.logStore.class org.apache.spark.sql.delta.storage.S3SingleDriverLogStore
spark.sql.extensions io.delta.sql.DeltaSparkSessionExtension
spark.sql.catalog.spark_catalog org.apache.spark.sql.delta.catalog.DeltaCatalog
Reference the conf file in your spark-submit command:
spark-submit \
    --properties-file /path/to/your/spark-defaults.conf \
    --name your_spark_delta_app \
    --py-files /path/to/your/supporting_pyspark_files.zip \
    /path/to/your/pyspark_script.py

I am able to create a .csv file using a Talend job; how do I convert the .csv to .parquet using a tSystem component?

I have a Talend job that creates a .csv file, and now I want to convert it to .parquet format using Talend v6.5.1. The only option I can think of is a tSystem component that calls a Python script from the local directory where the .csv lands temporarily. I know I can convert the file easily using pandas or PySpark, but I am not sure the same code will work from tSystem in Talend. Can you please provide suggestions or instructions?
Code:
import pandas as pd

DF = pd.read_csv("Path")        # path to the .csv produced by the Talend job
DF.to_parquet("Path.parquet")   # to_parquet is a DataFrame method; requires pyarrow or fastparquet
If you have an external script on your file system, you can try the following command in tSystem:
"python \"myscript.py\" "
Here is a link on the Talend forum regarding this problem:
https://community.talend.com/t5/Design-and-Development/how-to-execute-a-python-script-file-with-an-argument-using/m-p/23975#M3722
I was able to resolve the problem with the steps below:
import pandas as pd
import pyarrow as pa
import numpy as np
import sys
filename = sys.argv[1]
test = pd.read_csv(r"C:\\Users\\your desktop\\Downloads\\TestXML\\" + filename + ".csv")
test.to_parquet(r"C:\\Users\\your desktop\\Downloads\\TestXML\\" + filename + ".parquet")

pyspark: AnalysisException when reading csv file

I am new to PySpark and am migrating my project to it. I am trying to read a CSV file from S3 and create a dataframe from it. The file name is assigned to the variable cfg_file, and I am using the key variable for reading from S3. I am able to do the same using pandas, but I get an AnalysisException when I read using Spark. I am using the boto library for the S3 connection.
df = spark.read.csv(StringIO.StringIO(Key(bucket,cfg_file).get_contents_as_string()), sep=',')
AnalysisException: u'Path does not exist: file: