How to speed up spark df.write jdbc to postgres database? - postgresql

I am new to Spark and am attempting to speed up appending the contents of a dataframe (which can have between 200k and 2M rows) to a Postgres database using df.write:
df.write.format('jdbc').options(
    url=psql_url_spark,
    driver=spark_env['PSQL_DRIVER'],
    dbtable="{schema}.{table}".format(schema=schema, table=table),
    user=spark_env['PSQL_USER'],
    password=spark_env['PSQL_PASS'],
    batchsize=2000000,
    queryTimeout=690
).mode(mode).save()
I tried increasing the batchsize, but that didn't help: completing this task still took ~4 hours. I've also included some snapshots below from AWS EMR showing more details about how the job ran. The task that saves the dataframe to the Postgres table was assigned to only one executor (which I found strange). Would speeding this up involve dividing this task between executors?
Also, I have read Spark's performance tuning docs, but increasing batchsize and queryTimeout has not seemed to improve performance. (I tried calling df.cache() in my script before df.write, but the runtime was still 4 hours.)
Additionally, my AWS EMR hardware setup and spark-submit command are:
Master Node (1): m4.xlarge
Core Nodes (2): m5.xlarge
spark-submit --deploy-mode client --executor-cores 4 --num-executors 4 ...

Spark is a distributed data processing engine, so when you process your data or save it to a file system it uses all of its executors to perform the task.
Spark JDBC writes are slow because, when you establish a JDBC connection this way, only one of the executors opens a link to the target database, resulting in slow speeds and failures.
To overcome this problem and speed up data writes to the database you need to use one of the following approaches:
Approach 1:
In this approach you use the Postgres COPY command to speed up the write operation. This requires you to have the psycopg2 library on your EMR cluster.
The documentation for the COPY command is here.
If you want to know the benchmark differences and why COPY is faster, visit here!
Postgres also suggests using the COPY command for bulk inserts. Now, how do you bulk insert a Spark dataframe?
To implement faster writes, first save your Spark dataframe to the EMR file system in CSV format, and repartition the output so that no file contains more than 100k rows.
# Repartition your dataframe dynamically based on the number of rows in df
df.repartition(10).write.option("maxRecordsPerFile", 100000).mode("overwrite").csv("path/to/save/data")
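If you want to pick the partition count dynamically from the row count rather than hard-coding 10, here is a minimal sketch (the ~100k-rows-per-file target and output path are the same assumptions as above; note that df.count() triggers an extra pass over the data):
import math

rows_per_file = 100000
# df.count() runs a Spark job, so this adds one extra pass over the data
num_partitions = max(1, math.ceil(df.count() / rows_per_file))

(df.repartition(num_partitions)
   .write.option("maxRecordsPerFile", rows_per_file)
   .mode("overwrite")
   .csv("path/to/save/data"))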
Now open the files using Python and execute the COPY command for each file.
import psycopg2

# Iterate over your files here and create file objects; you can also get the file list using the os module
file = open('path/to/save/data/part-00000_0.csv')
file1 = open('path/to/save/data/part-00000_1.csv')

# Define a function that copies one open CSV file into the target table
def execute_copy(file_obj):
    con = psycopg2.connect(database=dbname, user=user, password=password, host=host, port=port)
    cursor = con.cursor()
    cursor.copy_from(file_obj, 'table_name', sep=",")
    con.commit()
    cursor.close()
    con.close()
To gain an additional speed boost, since you are using an EMR cluster, you can leverage Python multiprocessing to copy more than one file at once.
from multiprocessing import Pool, cpu_count

with Pool(cpu_count()) as p:
    print(p.map(execute_copy, [file, file1]))
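For example, rather than opening each part file by hand, you could gather all of them with glob and copy them in parallel. A sketch built on the execute_copy function above; the output path is the one assumed earlier:
from glob import glob
from multiprocessing import Pool, cpu_count

# Gather every CSV part file Spark wrote out
csv_files = sorted(glob('path/to/save/data/part-*.csv'))

def copy_one(path):
    # Open the file inside the worker process and reuse execute_copy from above
    with open(path) as f:
        execute_copy(f)

with Pool(cpu_count()) as p:
    p.map(copy_one, csv_files)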
This is the recommended approach, as Spark JDBC can't be tuned to gain much higher write speeds due to connection constraints.
Approach 2:
Since you are already using an AWS EMR cluster, you can leverage the Hadoop ecosystem to perform your table writes faster.
Here we will use Sqoop export to export our data from EMRFS to the Postgres DB.
# If you are using S3 as your source path
sqoop export --connect jdbc:postgresql://hostname:port/postgresDB --table target_table --export-dir s3://mybucket/myinputfiles/ --driver org.postgresql.Driver --username master --password password --input-null-string '\\N' --input-null-non-string '\\N' --direct -m 16

# If you are using EMRFS as your source path
sqoop export --connect jdbc:postgresql://hostname:port/postgresDB --table target_table --export-dir /path/to/save/data/ --driver org.postgresql.Driver --username master --password password --input-null-string '\\N' --input-null-non-string '\\N' --direct -m 16
Why Sqoop?
Because Sqoop opens multiple connections to the database based on the number of mappers specified. So if you specify -m 8, there will be 8 concurrent connection streams writing data to Postgres.
Also, for more information on using sqoop go through this AWS Blog, SQOOP Considerations and SQOOP Documentation.
If you can hack your way around with code, then Approach 1 will definitely give you the performance boost you seek; if you are comfortable with Hadoop components like Sqoop, then go with the second approach.
Hope it helps!

Spark side tuning => Repartition the dataframe so that multiple executors write to the DB in parallel
(df
    .repartition(10)  # number of concurrent connections from Spark to PostgreSQL
    .write.format('jdbc').options(
        url=psql_url_spark,
        driver=spark_env['PSQL_DRIVER'],
        dbtable="{schema}.{table}".format(schema=schema, table=table),
        user=spark_env['PSQL_USER'],
        password=spark_env['PSQL_PASS'],
        batchsize=2000000,
        queryTimeout=690
    ).mode(mode).save())
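Note that the Spark JDBC data source also documents a numPartitions option, which caps the number of concurrent JDBC connections used for the write (Spark coalesces the dataframe down to that many partitions before writing). A minimal sketch, reusing the same options as above:
df.write.format('jdbc').options(
    url=psql_url_spark,
    driver=spark_env['PSQL_DRIVER'],
    dbtable="{schema}.{table}".format(schema=schema, table=table),
    user=spark_env['PSQL_USER'],
    password=spark_env['PSQL_PASS'],
    numPartitions=10,   # maximum concurrent JDBC connections for this write
    batchsize=2000000,
    queryTimeout=690
).mode(mode).save()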
PostgreSQL side tuning =>
You will need to bump up the parameters below on PostgreSQL accordingly.
max_connections determines the maximum number of concurrent connections to the database server. The default is typically 100 connections.
shared_buffers determines how much memory is dedicated to PostgreSQL for caching data.
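To check the current values before changing anything, a quick sketch using psycopg2 (connection parameters are placeholders):
import psycopg2

# Placeholders: fill in your own connection details
con = psycopg2.connect(database=dbname, user=user, password=password, host=host, port=port)
cur = con.cursor()
for setting in ('max_connections', 'shared_buffers'):
    cur.execute("SHOW " + setting)
    print(setting, '=', cur.fetchone()[0])
cur.close()
con.close()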

To solve the performance issue, you generally need to resolve the below 2 bottlenecks:
Make sure the Spark job is writing the data to the DB in parallel -
To achieve this, make sure you have a partitioned dataframe: use "df.repartition(n)" to partition the dataframe so that each partition is written to the DB in parallel.
Note - A large number of executors will also lead to slow inserts. So start with 5 partitions and increase the number of partitions by 5 until you get optimal performance (see the sketch after this list).
Make sure the DB has enough compute, memory and storage required for ingesting bulk data.
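A rough sketch of that tuning loop, assuming you time each write against a disposable staging table so repeated runs are safe (the staging table name and batchsize here are placeholders):
import time

# Try increasing partition counts and time each write to a throwaway staging table
for n in (5, 10, 15, 20):
    start = time.time()
    (df.repartition(n)
       .write.format('jdbc').options(
           url=psql_url_spark,
           driver=spark_env['PSQL_DRIVER'],
           dbtable="{schema}.{table}_staging".format(schema=schema, table=table),
           user=spark_env['PSQL_USER'],
           password=spark_env['PSQL_PASS'],
           batchsize=100000
       ).mode("overwrite").save())
    print(n, "partitions ->", round(time.time() - start, 1), "seconds")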

That repartitioning the dataframe improves write performance is a known answer, but there is an optimal way of repartitioning your dataframe.
Since you are running this process on an EMR cluster, first find out the instance type and the number of cores running on each of your slave instances, then specify the number of partitions accordingly.
In your case you are using m5.xlarge (2 slaves), which have 4 vCPUs each, meaning 4 threads per instance. So 8 partitions will give you an optimal result when you are dealing with huge data.
Note: The number of partitions should be increased or decreased based on your data size.
Note: Batch size is also something you should consider in your writes; the bigger the batch size, the better the performance.
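A minimal sketch of deriving that partition count from the cluster shape described above (the 2-node / 4-vCPU figures come from this answer; the batchsize is a placeholder):
# Core nodes and vCPUs per node, per the m5.xlarge setup described above
num_core_nodes = 2
vcpus_per_node = 4

# One write partition (and hence one JDBC connection) per available core
num_partitions = num_core_nodes * vcpus_per_node  # -> 8

df.repartition(num_partitions).write.format('jdbc').options(
    url=psql_url_spark,
    driver=spark_env['PSQL_DRIVER'],
    dbtable="{schema}.{table}".format(schema=schema, table=table),
    user=spark_env['PSQL_USER'],
    password=spark_env['PSQL_PASS'],
    batchsize=100000  # placeholder; tune per the note above
).mode(mode).save()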

Related

Using AWS Glue Python jobs to run ETL on redshift

We have a setup to sync rds postgres changes into s3 using DMS. Now, I want to run ETL on this s3 data(in parquet) using Glue as scheduler.
My plan is to build SQL queries to do the transformation, execute them on Redshift Spectrum, and unload the data back into S3 in Parquet format. I don't want to use Glue Spark as my data loads do not require that kind of capacity.
However, I am facing some problems connecting to Redshift from Glue, primarily library version issues and the right whl files to be used for pg8000/psycopg2. Wondering if anyone has experience with such an implementation and how you were able to manage the DB connections from a Glue Python Shell job.
I'm doing something similar in a Python Shell Job but with Postgres instead of Redshift.
This is the whl file I use
psycopg2_binary-2.9.2-cp36-cp36m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl
An updated version can be found here.
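For reference, a minimal sketch of how the connection can be managed inside the Python Shell job once the wheel is attached (host and credentials are placeholders; in practice they would come from job parameters or Secrets Manager):
import psycopg2

# Placeholder connection details
conn = psycopg2.connect(
    host="my-db-host", port=5432, dbname="mydb",
    user="etl_user", password="..."
)
try:
    with conn, conn.cursor() as cur:
        # psycopg2 commits the transaction when the `with conn` block exits cleanly
        cur.execute("SELECT 1")
        print(cur.fetchone())
finally:
    conn.close()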

faster mongoimport, in parallel in airflow?

tl;dr: there seems to be a limit on how fast data is inserted into our mongodb atlas cluster. Inserting data in parallel does not speed this up. How can we speed this up? Is our only option to get a larger mongodb atlas cluster with more Write IOPS? What even are write IOPS?
We replace and re-insert 10GB+ of data daily into our MongoDB Atlas cluster. We have the following 2 bash commands, wrapped in Python functions to help parameterize the commands, which we use with the BashOperator in Airflow:
upload single JSON to mongo cluster
def mongoimport_file(mongo_table, file_name):
    # upload single file from /tmp directory into Mongo cluster
    # cleanup: remove .json in /tmp at the end
    uri = 'mongodb+srv://<user>:<pass>@our-cluster.dwxnd.gcp.mongodb.net/ourdb'
    return f"""
    echo INSERT \
    && mongoimport --uri "{uri}" --collection {mongo_table} --drop --file /tmp/{file_name}.json \
    && echo AND REMOVE LOCAL FILEs... \
    && rm /tmp/{file_name}.json
    """
upload directory of JSONs to mongo cluster
def mongoimport_dir(mongo_table, dir_name):
    # upload directory of JSONs into mongo cluster
    # cleanup: remove directory at the end
    uri = 'mongodb+srv://<user>:<pass>@our-cluster.dwxnd.gcp.mongodb.net/ourdb'
    return f"""
    echo INSERT \
    && cat /tmp/{dir_name}/*.json | mongoimport --uri "{uri}" --collection {mongo_table} --drop \
    && echo AND REMOVE LOCAL FILEs... \
    && rm -rf /tmp/{dir_name}
    """
These are called in Airflow using the BashOperator:
import_to_mongo = BashOperator(
    task_id=f'mongo_import_v0__{this_table}',
    bash_command=mongoimport_file(mongo_table='tname', file_name='fname')
)
Both of these work, although with varying performance:
mongoimport_file with 1 5GB file: takes ~30 minutes to mongoimport
mongoimport_dir with 100 50MB files: takes ~1 hour to mongoimport
There is currently no parallelization with mongoimport_dir, and in fact it is slower than importing just a single file.
Within airflow, is it possible to parallelize the mongoimport of our directory of 100 JSONs, to achieve a major speedup? If there's a parallel solution using python's pymongo that cannot be done with mongoimport, we're happy to switch (although we'd strongly prefer to avoid loading these JSONs into memory).
What is the current bottleneck with importing to mongo? Is it (a) CPUs in our server / docker container, or (b) something with our mongo cluster configuration (cluster RAM, or cluster vCPU, or cluster max connections, or cluster read / write IOPS (what are these even?)). For reference, here is our mongo config. I assume we can speed up our import by getting a much bigger cluster but mongodb atlas becomes very expensive very fast. 0.5 vCPUs doesn't sound like much, but this already runs us $150 / month...
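For concreteness, the kind of parallelization we have in mind is one BashOperator per file, generated in a loop, something like this sketch (the directory, task names, and the Airflow 2.x import path are hypothetical):
import os
from airflow.operators.bash import BashOperator  # Airflow 2.x import path

# Hypothetical: one mongoimport task per JSON file, so Airflow can run them
# concurrently, subject to the scheduler's parallelism/pool settings.
json_dir = '/tmp/our_dir'
for i, fname in enumerate(sorted(os.listdir(json_dir))):
    if not fname.endswith('.json'):
        continue
    BashOperator(
        task_id=f'mongo_import_v0__tname__{i}',
        # mongoimport_file (defined above) expects the file name without .json, relative to /tmp
        bash_command=mongoimport_file(mongo_table='tname',
                                      file_name=f'our_dir/{fname[:-len(".json")]}'),
    )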
First of all, "What is the current bottleneck with importing to mongo?" and "Is it (a) CPUs in our server / docker container?" - don't believe anyone who tells you the answer from the screenshot you provided.
Atlas has monitoring tools that will tell you if the bottleneck is in CPU, RAM, disk or network or any combination of those on db side:
On the client side (Airflow) - please use the system monitor of your host OS to answer the question. Test disk I/O inside Docker. Some combinations of host OS and Docker storage drivers performed quite poorly in the past.
Next, "What even are write IOPS" - random
write operations per second
https://cloud.google.com/compute/docs/disks/performance
IOPS calculation differs depending on cloud provider. Try AWS and Azure to compare cost vs speed. M10 on AWS gives you 2 vCPU, yet again I doubt you can compare them 1:1 between vendors. The good thing is it's on-demand and will cost you less than a cup of coffee to test and delete the cluster.
Finally, "If there's a parallel solution using python's pymongo" - I doubt so. mongoimport uses batches of 100,000 documents, so essentially it sends it as fast as the stream is consumed on the receiver. The limitations on the client side could be: network, disk, CPU. If it is network or disk, parallel import won't improve a thing. Multi-core systems could benefit from parallel import if mongoimport was using a single CPU and it was the limiting factor. By default mongoimport uses all CPUs available: https://github.com/mongodb/mongo-tools/blob/cac1bfbae193d6ba68abb764e613b08285c6f62d/common/options/options.go#L302. You can hardly beat it with pymongo.

How to import 700+ million rows into MongoDB in minutes

We have a 32-core Windows Server with 96 GB RAM and a 5 TB HDD.
Approach 1 (Using Oracle SQLLDR)
We fetched the input data from the Oracle database.
We processed it and generated multiple TSV files.
Using threading, we import the data into the Oracle database using SQL Loader.
This requires approximately 66 hours.
Approach 2 (Using mongoimport)
We fetched the input data from the Oracle database.
We processed it and generated multiple TSV files.
Using threading, we import the data into a MongoDB database using the mongoimport command-line utility.
This requires approximately 65 hours.
No considerable difference in performance was observed.
We need to process 700+ million records; please suggest a better approach for optimized performance.
We are fetching from the Oracle database, processing in our application, and storing the output in another database. This is an existing process which we run on the Oracle database, but it is time-consuming, so we decided to try MongoDB for a performance improvement.
We did one POC where we did not get any considerable difference. We thought it might work better on the server because of the hardware, so we ran the POC on the server, where we got the above-mentioned result.
We think that MongoDB is more robust than the Oracle database but failed to get the desired result after comparing the stats.
Please find MongoDB related details of production server:
MongoImport Command
mongoimport --db abcDB --collection abcCollection --type tsv --file abc.tsv --headerline --numInsertionWorkers 8 --bypassDocumentValidation
Wired Tiger Configuration
storage:
  dbPath: C:\data\db
  journal:
    enabled: false
  wiredTiger:
    engineConfig:
      cacheSizeGB: 40
The approximate computation time is calculated from the process log details for the runs using Oracle and the runs using MongoDB.
The POC outlined above, carried out on the production server, compares the performance of Oracle (SQL Loader) vs MongoDB (mongoimport).
As we are using a standalone MongoDB instance for our POC, we have not set up any sharding on the production server.
If we get the desired result using MongoDB, then we will come to a conclusion about migration.
Thanking you in advance.

Neo4J: Importing a large Cypher dump

I have a large dump (millions of nodes and relationships) from a Neo4J 2.2.5 database in Cypher format (produced with neo4j-sh -c dump), that I'm trying to import into a 3.0.3 instance.
However, the import process (neo4j-sh < dump.cypher) slows down drastically after a few minutes, down to a couple records per second.
Is there any way to speed up this process? I tried upgrading the database as described in the manual, but the new instance crashes with an exception about a version mismatch in the store format.
Neo4j 3.0 comes with a bin/neo4j-admin tool for exactly this purpose.
try bin/neo4j-admin import --mode database --from /path/to/db
see: http://neo4j.com/docs/operations-manual/current/deployment/upgrade/#upgrade-instructions
The Cypher dump is not useful for large databases; it's only for smaller setups (a few thousand nodes), for demos etc.
FYI: In Neo4j 3.0 the cypher export procedure from APOC is much more suited for large scale cypher dumps.
Update
You can also try to upgrade from 2.2 to 2.3 first, e.g. by using neo4j-shell:
add allow_store_upgrade=true to your neo4j.properties in 2.3
and then do: bin/neo4j-shell -path /path/to/db -config conf/neo4j.properties -c quit
Once it has finished, that backup of your db is on version 2.3.
Then you should be able to use neo4j-admin import ...
I recently had this same symptom, with my CSV import slowing to a crawl.
My LOAD CSV Cypher script had too many rels.
So I divided my load in two: first create the nodes, then the relations and most connected nodes. HIH.
Back to your issue
First, try to increase the memory for the JVM. In NEO/conf, there is a wrapper file. At the beginning are the memory settings.
Lastly, from an instance with your data, export to multiple CSV files and import them into your new server.

Postgres master / slave based on table

Currently I have one Postgres instance which is starting to receive too much load, and I want to create a cluster of two Postgres nodes.
From reading the documentation for postgres and pgpool, it seems like I can only write to a master and read from a slave or run parallel queries.
What I'm looking for is a simple replication of a database, but with master/slave roles decided by which table is being updated. Is this possible? Am I missing it somewhere in the documentation?
e.g.
update users will be executed on server1 and replicated to server2
update big_table will be executed on server2 and replicated back to server1
What you are looking for is called MASTER/MASTER replication. This is supported natively (without PgPool) since 9.5. Note that it's an "eventually consistent" architecture, so your application should be aware of possible temporary differences between the two servers.
See PG documentation for more details and setup instructions.