Sqoop PSQLException "Sorry, too many clients already" on large exports - postgresql

I'm seeing Sqoop throw PSQLException "Sorry, too many clients already" when exporting large (200+ million row) tables from HDFS to Postgres. A few smaller tables (~3 million rows) seem to be working fine.
Even when the large tables fail, I still get ~2 million rows into my Postgres table, but I'm guessing that's just from the workers that didn't die because they grabbed one of the connections first. My Postgres server is configured with max_connections = 300, and there are about 70 connections always active from other applications, so Sqoop should have ~230 to use.
I've tried toggling --num-mappers between 2 and 8 in my Sqoop export command, but that hasn't seemed to make much of a difference. Looking at the failed Hadoop job in the job tracker shows "Num tasks" as 3,660 in the map stage and "Failed/Killed Task Attempts" as 184/273, if that helps at all.
Is there any way to set a maximum number of connections to use? Anything else I can do here? Happy to provide additional info if needed.
Thanks.

Figured it out for my specific scenario. Thought I'd share my findings:
The core of the problem was the number of map tasks running at a single time.
The large tables had 280 map tasks running simultaneously (3,660 total), while the small tables had 180 map tasks running simultaneously (180 total).
Because of this, the tasks were running out of connections: each of the 280 tried to open its own connection, and 280 plus the existing 70 is > 300. So I had two options: (1) jack up the Postgres max_connections limit a bit, or (2) reduce the number of map tasks running at a single time.
I went with (1) since I control the database, jacked up max_connections to 400, and moved on with life.
FWIW, it looks like (2) is doable with the following setting, but I couldn't test it since I don't control the HDFS cluster:
https://hadoop.apache.org/docs/r1.0.4/mapred-default.html
mapred.jobtracker.maxtasks.per.job
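Separately, if anyone wants to sanity-check connection headroom before kicking off an export, here is a minimal sketch using psycopg2 (the DSN is made up; any SQL client works just as well):

import psycopg2

# Hypothetical DSN; compares connections in use against the server limit so you
# know roughly how many mappers can safely open a connection at the same time.
conn = psycopg2.connect("dbname=target host=pg-host")
with conn.cursor() as cur:
    cur.execute("SHOW max_connections;")
    max_conn = int(cur.fetchone()[0])
    cur.execute("SELECT count(*) FROM pg_stat_activity;")
    in_use = cur.fetchone()[0]
    print(f"{in_use} of {max_conn} connections in use; "
          f"~{max_conn - in_use} left for Sqoop mappers")
conn.close()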

Related

Postgres Replication Slots Checking Lag

I'm attempting to detect whether the three logical replication slots on my AWS RDS Aurora Postgres 11.9 instance are backing up. I'm using the wal2json plugin to read off of them continuously. Two of the slots are being read by Python processes; the third is a Kafka Connect consumer.
I'm using the query below, but am getting odd results. It says two of my slots are several GB behind, even in the middle of the night when we have very little load. Am I misinterpreting what the query is saying?
SELECT redo_lsn, slot_name, restart_lsn,
       round((redo_lsn - restart_lsn) / 1024 / 1024 / 1024, 2) AS GB_behind
FROM pg_control_checkpoint(), pg_replication_slots;
Things I've checked:
I've checked that the consumers are still running.
I have also looked at the logs: rows are coming off the database within 0-2 seconds of their insert timestamps, so it doesn't appear that I'm lagging behind.
I've performed an end-to-end test and the data is making it through my pipeline in a few seconds, so it is definitely consuming data relatively fast.
Both of the slots I'm using for my Python processes have the same value for GB_behind, currently 12.40, even though the two slots are on different logical databases with dramatically different load (one is ~1000x higher).
I have a third replication slot being read by a different program (Kafka Connect). It shows 0 GB_behind.
There is just no way, even at peak load, that my workloads could generate 12.4 GB of data in a few seconds (not even in a few minutes). Am I misinterpreting something? Is there a better way to check how far a replication slot is behind?
Thanks much!
Here is a small snippet of my code (Python 3.6) in case it helps; I've been using it for a while and data has been flowing:
def consume(msg):
    print(msg.payload)
    try:
        kinesis_client.put_record(StreamName=STREAM_NAME, Data=msg.payload, PartitionKey=partition_key)
    except:
        logger.exception('PG ETL: Failed to send load to kinesis. Likely too large.')

with con.cursor() as cur:
    cur.start_replication(slot_name=replication_slot, options={'pretty-print': 1}, decode=True)
    cur.consume_stream(consume)
I wasn't properly calling send_feedback in my consume function. So I was consuming the records, but I was never telling the Postgres replication slot that I had consumed them.
Here is my complete consume function in case others are interested:
def consume(msg):
    print(msg.payload)
    try:
        kinesis_client.put_record(StreamName=STREAM_NAME, Data=msg.payload, PartitionKey=partition_key)
    except:
        logger.exception('PG ETL: Failed to send load to kinesis. Likely too large.')
    msg.cursor.send_feedback(flush_lsn=msg.data_start)

with con.cursor() as cur:
    cur.start_replication(slot_name=replication_slot, options={'pretty-print': 1}, decode=True)
    cur.consume_stream(consume)
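For what it's worth, restart_lsn only advances once the consumer sends feedback, which is exactly why the query above reported 12+ GB "behind" even though the pipeline was keeping up. A hedged alternative for day-to-day monitoring is to compare the slots' positions against the server's current WAL write position instead of the last checkpoint; a minimal psycopg2 sketch (the DSN is made up, and pg_current_wal_lsn() assumes PostgreSQL 10+):

import psycopg2

# Reports how far each logical slot's confirmed flush position trails the
# server's current WAL write position.
conn = psycopg2.connect("dbname=mydb")
with conn.cursor() as cur:
    cur.execute("""
        SELECT slot_name,
               pg_size_pretty(pg_current_wal_lsn() - confirmed_flush_lsn) AS flush_lag,
               pg_size_pretty(pg_current_wal_lsn() - restart_lsn)         AS restart_lag
        FROM pg_replication_slots
        WHERE slot_type = 'logical';
    """)
    for slot_name, flush_lag, restart_lag in cur.fetchall():
        print(slot_name, "flush lag:", flush_lag, "restart lag:", restart_lag)
conn.close()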

Why do SELECT queries on partitioned tables take locks and get stuck?

We designed a database so that it can accept lots of data.
For that, we use partitioned tables quite a lot (because of the way we handle traffic information, the database can take advantage of the partitioning system).
To be more precise, we have tables with partitions, and partitions that themselves have partitions (4 levels deep):
main table
-> sub-tables partitioned by column 1 (list)
   -> sub-tables partitioned by column 2 (list)
...
There are 4 main partitioned tables. Each one has from 40 to 120 (sub-)partitions beneath it.
The query that takes locks and gets blocked by others is a SELECT that joins these 4 tables (so, counting partitions, it works over about 250 tables).
We had no problem until now, possibly due to a traffic increase. SELECT queries that use these tables, which normally execute in 20 ms, can now wait up to 10 seconds, locked and waiting.
When querying pg_stat_activity, I see that these queries have:
wait_event_type : LWLock
wait_event : lock_manager
I asked the dev team and also confirmed by reading the logs (primary and replica): nothing else was running except SELECT and INSERT/UPDATE queries on these tables.
The SELECT queries in question are running on the replica servers.
I tried to research this beforehand, but everything I found says: yes, there are exclusive locks on partitioned tables, but only during operations like DROP or ATTACH/DETACH PARTITION, and nothing like that is happening on my server while the problem occurs.
Server is version 12.4, running on AWS Aurora.
What can make these queries sit locked, waiting on this LWLock?
What are my options to improve this behaviour (i.e. keep my queries from being blocked)?
EDIT:
Adding some details I've been asked for, or that could be interesting:
Number of connections:
usually: ~10 connections opened per second
at peak (when the problem appears): 40 to 100 connections opened per second
during the problem, the number of open connections varies from 100 to 200.
Size of the database: 30 GB; currently lots of partitions are empty.
You are probably suffering from internal contention on database resources caused by too many connections that all compete to use the same shared data structures. It is very hard to pinpoint the exact resource with the little information provided, but the high number of connections is a strong indication.
What you need is a connection pool that maintains a small number of persistent database connections. That will both reduce the problematic contention and do away with the performance wasted on opening lots of short-lived database connections. Your overall throughput will increase.
If your application has no connection pool built in, use pgBouncer.
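If the application happens to be in Python, here is a minimal sketch of an application-side pool with psycopg2 (the DSN and pool sizes are made up; pgBouncer achieves the same effect outside the application):

from psycopg2.pool import ThreadedConnectionPool

# Keep a handful of persistent connections and reuse them; size the pool so
# that all application instances together stay well below max_connections.
pool = ThreadedConnectionPool(minconn=2, maxconn=10,
                              dsn="dbname=traffic host=replica-host")

def run_query(sql, params=None):
    conn = pool.getconn()            # borrow a persistent connection
    try:
        with conn.cursor() as cur:
            cur.execute(sql, params)
            return cur.fetchall()
    finally:
        pool.putconn(conn)           # return it instead of closing it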

PostgreSQL Large Table Logical Replication Infinite Sync

I have a large and fast-growing PostgreSQL table (166 GB of indexes and 72 GB of data), and I want to set up logical replication of this table. PostgreSQL 11.4 on both sides.
I've been trying for 2 weeks, but the only thing I have is infinite syncing and a growing table size on the replica (already 293 GB of indexes and 88 GB of data, more than the original), and there are no errors in the log.
I have also tried taking a dump, restoring it and then starting the sync, but got errors about existing primary keys.
The backend_xmin value in the replication stats changes about once a week, but the sync state is still "startup". The network connection between the servers is barely used (they are in the same datacenter); the actual speed is something like 300-400 Kb (it looks like that is mostly the streaming part of the replication process).
So the question is: how do I set up logical replication of a large, fast-growing table properly? Is it possible somehow? Thank you.
I've been trying for 2 weeks, but the only thing I have is infinite syncing and a growing table size on the replica (already 293 GB of indexes and 88 GB of data, more than the original), and there are no errors in the log.
Drop the non-identity indexes on the replica until after the sync is done.
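If it helps, here is a rough psycopg2 sketch of doing that on the replica (the table name and DSN are made up): it saves the definitions of indexes that are neither the primary key nor the replica identity, drops them for the initial copy, and keeps the CREATE INDEX statements to replay once the sync is done.

import psycopg2

conn = psycopg2.connect("dbname=replica")  # hypothetical DSN, on the subscriber
with conn.cursor() as cur:
    # Collect droppable indexes for the big table, keeping their definitions.
    cur.execute("""
        SELECT indexrelid::regclass::text, pg_get_indexdef(indexrelid)
        FROM pg_index
        WHERE indrelid = 'my_big_table'::regclass
          AND NOT indisprimary
          AND NOT indisreplident;
    """)
    saved = cur.fetchall()
    for index_name, definition in saved:
        cur.execute(f"DROP INDEX {index_name};")
conn.commit()

# After the table sync reaches the "ready" state, recreate them:
# with conn.cursor() as cur:
#     for _, definition in saved:
#         cur.execute(definition)
# conn.commit()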
I had exactly the same problem.
Checking the logs, I found the following error:
ERROR: could not receive data from WAL stream: ERROR: canceling statement due to statement timeout
Because of the large tables, replication was being cut off by the timeout.
By increasing the timeouts, the problem went away.
PS: Ideally, it would be nicer to be able to set separate timeouts for replication and for the main database.
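One way to approximate that, assuming the timeout being hit is the publisher's statement_timeout cancelling the initial table copy, is to clear the timeout only for the role the subscription connects as (the role name and DSN below are made up), leaving the database-wide default untouched:

import psycopg2

# On the publisher: give the replication role its own statement_timeout
# (0 = disabled) so the initial COPY is not cancelled; everything else keeps
# the normal database-level timeout.
conn = psycopg2.connect("dbname=sourcedb")  # hypothetical DSN
with conn.cursor() as cur:
    cur.execute("ALTER ROLE replication_user SET statement_timeout = 0;")
conn.commit()
conn.close()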

Loading data to Postgres RDS is still slow after tuning parameters

We have created an RDS Postgres instance (m4.xlarge) with 200 GB of storage (Provisioned IOPS). We are trying to upload data from our company data mart into 23 tables in RDS using DataStage. However, the uploads are quite slow: it takes about 6 hours to load 400K records.
Then I started tuning the following parameters according to Best Practices for Working with PostgreSQL:
autovacuum 0
checkpoint_completion_target 0.9
checkpoint_timeout 3600
maintenance_work_mem {DBInstanceClassMemory/16384}
max_wal_size 3145728
synchronous_commit off
Other than these, I also turned off Multi-AZ and backups. SSL is still enabled, though; I'm not sure whether that changes anything. However, after all these changes there is still not much improvement. DataStage is already uploading data in parallel with ~12 threads. Write IOPS is around 40/sec. Is this value normal? Is there anything else I can do to speed up the data transfer?
In PostgreSQL, you're going to have to wait one full round trip (latency) for each INSERT statement written. That latency is between the database and the machine the data is being loaded from.
In AWS you have many options to improve performance.
For starters, you can load your raw data onto an EC2 instance and import from there; however, you will likely not be able to use your DataStage tool unless it can be installed directly on the EC2 instance.
You can configure DataStage to use batch processing, where each INSERT statement actually contains many rows; generally, the more rows per statement, the faster.
Disable data compression and make sure you've done everything you can to minimize latency between the two endpoints.
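To illustrate the batching point outside of DataStage, here is a minimal psycopg2 sketch (table, columns, and DSN are made up) that folds many rows into each INSERT, so you pay one round trip per batch instead of one per row:

import psycopg2
from psycopg2.extras import execute_values

# 400K hypothetical rows to load.
rows = [(i, f"name-{i}") for i in range(400_000)]

conn = psycopg2.connect("dbname=target host=my-rds-endpoint")
with conn.cursor() as cur:
    execute_values(
        cur,
        "INSERT INTO my_table (id, name) VALUES %s",
        rows,
        page_size=1000,  # rows folded into each INSERT statement
    )
conn.commit()
conn.close()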

Mongostat says the DB has a high lock percentage but there are no inserts/updates going on

I am trying to interpret the results from mongostat.
We are running a stress test that only performs read operations on the DB. This is confirmed by the first columns of mongostat, which report around 6K queries per second, 0 inserts, 0 updates, 0 deletes.
Still, the "locked db" field reports the DB being locked about 40% of the time, with about 130 queued reads and 0 queued writes.
The Mongo version is 2.2, running on a set of Linux boxes (a replica set with 2 nodes + 1 arbiter).
Can you help me understand what's going on? I thought the lock was due to writes, but there are no writes in my test scenario.
I think MongoDB uses a readers-writer lock, which means reads also take the lock: it allows either a group of read requests or a single write request to hold the lock at a time. Hope this helps.
Here is the Wikipedia page on the readers-writer lock:
http://en.wikipedia.org/wiki/Readers%E2%80%93writer_lock
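For intuition only, here is a minimal reader-preference readers-writer lock sketched in Python. It is not MongoDB's implementation, just an illustration of the "many readers or one writer" rule described above:

import threading

class ReadersWriterLock:
    # Many readers may hold the lock at once; a writer needs exclusive access.
    def __init__(self):
        self._readers = 0
        self._readers_lock = threading.Lock()  # protects the reader count
        self._writer_lock = threading.Lock()   # held while readers or a writer are active

    def acquire_read(self):
        with self._readers_lock:
            self._readers += 1
            if self._readers == 1:
                self._writer_lock.acquire()    # first reader blocks writers

    def release_read(self):
        with self._readers_lock:
            self._readers -= 1
            if self._readers == 0:
                self._writer_lock.release()    # last reader lets writers in

    def acquire_write(self):
        self._writer_lock.acquire()

    def release_write(self):
        self._writer_lock.release()

The sketch shows why read requests still interact with the lock even when no writes are happening.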