I am stuck on a problem where PostgreSQL data writes are very slow.
I developed my application in Java (using JDBC) to insert data into a PostgreSQL database. It works well on our remote development server. However, after I deployed it to the production server, a problem appeared.
The insert speed on the production server is only ~150 records/s for 200000K records, while it is ~1000 records/s for the same data set on the development server.
Firstly, I tried to change the configuration in postgresql.conf as follows:
effective_cache_size = 4GB
max_wal_size = 2GB
work_mem = 128MB
shared_buffers = 512MB
After I changed the configuration and restarted, only the query speed improved; the insert speed did not change (~150 records/s).
I have checked the server's memory: there is plenty of free memory (~4GB), and the inserting process uses only 0.5% of the 8GB (~40MB).
So my questions are:
Is this a storage problem (SSD vs. HDD, virtual vs. physical disk, etc.)?
Why is the insert speed still very slow even though I have changed the configuration?
Is there any way to increase the insert speed?
Note: the problem is not related to the structure of the insert query. I use the same query under the same conditions on both servers (both environments were set up in the same way). I do not understand why the development server (4GB RAM) performs better than the production server (8GB RAM).
The only one of your parameters that has an influence on INSERT performance is max_wal_size. High values prevent frequent checkpoints.
Use iostat -x 1 on the database server to see how busy your disks are. If they are quite busy, you are probably I/O bottlenecked. Maybe the I/O subsystem on your test server is better?
If you are running the INSERTs in many small transactions, you may be bottlenecked by fsync to the WAL. The symptom is a busy disk with not much I/O being performed.
In that case, batch the INSERTs into larger transactions. The difference you observe could also be due to configuration: maybe synchronous_commit or (horribile dictu!) fsync is set to off on the development server.
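For example, with JDBC (which the question uses) you can turn off auto-commit and use the batch API, so that many rows share one transaction and one WAL flush. This is only a minimal sketch; the records table, its columns, and the connection details are placeholders for illustration:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;

public class BatchInsert {
    public static void main(String[] args) throws Exception {
        // Placeholder connection details and table name; adapt to your schema.
        try (Connection conn = DriverManager.getConnection(
                "jdbc:postgresql://dbhost/mydb", "user", "password")) {
            conn.setAutoCommit(false); // one big transaction instead of one per row
            try (PreparedStatement ps = conn.prepareStatement(
                    "INSERT INTO records (id, payload) VALUES (?, ?)")) {
                for (int i = 0; i < 200_000; i++) {
                    ps.setInt(1, i);
                    ps.setString(2, "row " + i);
                    ps.addBatch();
                    if (i % 1000 == 999) {
                        ps.executeBatch(); // send 1000 rows per round trip
                    }
                }
                ps.executeBatch(); // flush the remainder
            }
            conn.commit(); // the WAL is flushed once here, not once per row
        }
    }
}

The PostgreSQL JDBC driver also has a reWriteBatchedInserts=true connection property that rewrites such batches into multi-row INSERT statements, which usually helps further.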
Related
I have the same 5000 key/value pairs being read/written continuously (every 150ms or so) on a Debian system equivalent to a Raspberry Pi 3.
I don't care about persisting this data, it's recreated whenever my application server is launched.
Initially I used SQLite for this, using an in-memory table. However, I now want to access the data from multiple processes (using a tmpfs didn't work out great) and even from a remote client, add an HTTP API, and use LISTEN/NOTIFY for change notifications, so I'd like to switch to PG, which is more appropriate for all of this.
Given these circumstances:
small dataset that fits in RAM
no need for persistence
low power PC
running 24/7 forever
don't want to thrash the flash storage
...what would be a good approach to configuring PG?
I found this 10-year-old question, and the last update, from five years ago, said to use a third-party extension, which I'm not too excited about.
You should create few indexes apart from the primary keys and keep the fillfactor of all your tables low, perhaps around 50. That should get you HOT updates, which will reduce the need for VACUUM and the amount of data written.
You may want to reduce shared_buffers to conserve memory, but keep it big enough to contain the database.
Set synchronous_commit to off to have less disk I/O. If you are ready to ditch the database after an unclean shutdown or system crash, you can set fsync = off, but then you have to remove the cluster after each crash. If you take it that far, you could reduce the write load further by using unlogged tables.
Set checkpoint_timeout high for fewer writes.
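A minimal sketch of what the table-level suggestions above could look like from the application side; the kv table, its columns, and the connection details are placeholders, not anything from the original posts:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class KvSetup {
    public static void main(String[] args) throws Exception {
        // Placeholder connection details; adapt to your environment.
        try (Connection conn = DriverManager.getConnection(
                "jdbc:postgresql://pi-host/kvdb", "user", "password");
             Statement st = conn.createStatement()) {
            // UNLOGGED: no WAL is written for this table; its contents are lost
            // after a crash, which is fine since the data is recreated at startup.
            // fillfactor 50 leaves free space in each page so that updates that
            // don't touch an indexed column can be HOT updates.
            st.execute("CREATE UNLOGGED TABLE IF NOT EXISTS kv ("
                     + " k text PRIMARY KEY,"
                     + " v text NOT NULL"
                     + ") WITH (fillfactor = 50)");
            // Asynchronous commit for this session only; set it in postgresql.conf
            // instead if you want it for all sessions.
            st.execute("SET synchronous_commit = off");
        }
    }
}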
I am having some trouble understanding Linux huge pages within PostgreSQL. From what I have googled:
Configured huge pages will be allocated and not swapped out of RAM.
Huge pages may improve performance due to a lower number of pages to be managed by the kernel.
Individual huge pages are contiguous blocks of memory.
My Postgres cluster conf:
shared_buffers set to 4GB
max_connections 30
all other conf is set as default
Huge pages were configured, but on startup PostgreSQL defaulted to not using them. After setting huge_pages = on in PostgreSQL and vm.nr_hugepages to 4096/2 (2048 pages) in Linux with sysctl -p, PostgreSQL wouldn't start because it couldn't allocate enough memory. After trying several vm.nr_hugepages values, the lowest that seemed to work was 3500.
My questions are:
How does PostgreSQL calculate the amount of memory it will need in advance?
I read that a memory limit should be set for PostgreSQL after setting huge pages. If it is only going to use huge pages, wouldn't this be a limit already (assuming no other process uses them)?
Related to the previous question: what would happen if it ran out of huge pages?
Will huge pages be used by all PostgreSQL processes, for a sort operation for instance? Having to allocate 3500 pages for PostgreSQL to start is a bit confusing, because I thought huge pages would mainly be used for shared_buffers.
Thanks in advance!
Edit:
The system I am testing on is an 8-core Intel machine with 32GB of RAM. The main purpose of the setup is ETL: receive files that are loaded with COPY (several GBs per file), transform them with more or less complex SQL, and persist the results in several tables with a design similar to a DWH (star schemas). There is 40TB of total storage for PostgreSQL (4x10TB drives) and a 512GB SSD for Ubuntu Server 18.04. Some of the transformations will require table scans, or scans of a large part of the tables; in DB2 I have the option of using a block-based buffer pool (https://www.ibm.com/support/knowledgecenter/SSEPGG_11.1.0/com.ibm.db2.luw.admin.perf.doc/doc/c0009651.html). I thought I could achieve something "similar" with huge pages in PostgreSQL. Would this be possible?
I'm trying to get the fastest possible queries out of PostgreSQL. I'm going to test this out, but I want to know what kinds of issues I could run into.
Servers
1x PostgreSQL master, with all of its data on a 20GB RAM disk (leaving ~12GB of RAM for the OS and programs)
2x PostgreSQL replicas (hot standby), with all of their data on a RAID 10 of SSDs
Config
Synchronous commit is disabled
wal_buffers is set to 16MB
wal_writer_delay is 400ms
checkpoint_segments is 64
shared_buffers is 3GB
Loss of data that has not been committed yet is acceptable in this setup. But once the data has been committed (after the 400ms), it needs to survive the failure of any single machine in this setup.
If the master fails, that is okay, and losing the last ~400ms is fine. But one of the other two nodes should then pick up where the master left off, albeit without the RAM disk.
We want to be able to query and insert data as fast as absolutely possible, and we have contingencies built into our application to handle the master failing.
What problems would this configuration cause, or what problems or difficulties might we face?
Any other information that might be needed I can provide.
I've got a task to do and some limited hardware resources, as always.
I need to set up a Postgres server with a single database, containing a table of large objects (3TB+) and a few small, heavily accessed tables (<10GB).
I've got an old physical server with ~5TB of hard disk space but limited CPU and RAM. I can also use a much faster (in CPU and RAM) virtual server, but its storage is limited.
I won't have many DELETE statements, and most SELECT statements will target recent data. There will be a single connection doing all the work, from a client on one host only.
I see a few scenarios:
Postgres on virtual machine with remote storage (single instance)
Postgres on old hardware with local storage (single instance)
Postgres on both, with some kind of replication (high speed virtual machine for new data, low speed for older data on the old hardware)
Any other ideas?
Is it even possible to replicate just the most recent part of the postgres database?
90% of SELECT queries will go to the most recent ~5-10 gigabytes of data, but I need seamless access to the remaining ~2,990 GB.
What should I do? (except buying appropriate hardware;)
It doesn't really matter as long as you have enough RAM to buffer the 10GB of heavily accessed data.
You'll need some additional RAM to read large objects without pushing the 10GB out of the cache, but that shouldn't be a problem on today's machines.
If all the work is done over a single connection, it sounds like the database will not be under high load.
So I wouldn't really worry about scaling with requirements like that.
Your biggest worry should probably be how to backup 3TB of data in a reasonable time.
Edit: If you have much less memory, you should take the machine with the faster storage.
In the end I tested several different scenarios and decided not to keep the files/large objects in the database.
Postgres with its data directory mounted over NFS (v4) had some lag: it was faster, but it would choke for a few seconds periodically, so I decided to store plain files over NFS instead, which is significantly slower but more stable.
I'm sure there was a way to tune it, but this solution is fine too.
Postgres is used for the file index, and its own data is kept on a local hard disk.
We have created an RDS PostgreSQL instance (m4.xlarge) with 200GB of storage (Provisioned IOPS). We are trying to upload data from the company data mart into 23 tables in RDS using DataStage. However, the uploads are quite slow: it takes about 6 hours to load 400K records.
Then I started tuning the following parameters according to Best Practices for Working with PostgreSQL:
autovacuum 0
checkpoint_completion_target 0.9
checkpoint_timeout 3600
maintenance_work_mem {DBInstanceClassMemory/16384}
max_wal_size 3145728
synchronous_commit off
Other than these, I also turned off Multi-AZ and backups. SSL is still enabled, though I'm not sure that changes anything. However, after all these changes there is still not much improvement. DataStage is already uploading data in parallel with ~12 threads. Write IOPS is around 40/sec; is this value normal? Is there anything else I can do to speed up the data transfer?
In PostgreSQL, you have to wait one full round trip (network latency) for each INSERT statement executed. This is the latency between the database and the machine the data is being loaded from.
In AWS you have many options to improve performance.
For starters, you can load your raw data onto an EC2 instance and import from there; however, you will likely not be able to use your DataStage tool unless it can be installed directly on the EC2 instance.
You can configure DataStage to use batch processing, where each INSERT statement actually contains many rows; generally, the more rows per statement, the faster (see the sketch below).
Disable data compression and make sure you've done everything you can to minimize latency between the two endpoints.
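If the tool cannot be configured that way, the same idea from plain JDBC is to put many rows into each INSERT so that one round trip carries a whole chunk. A rough sketch, where the staging table and its column are placeholders for illustration:

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.util.List;

public class MultiRowInsert {
    // Inserts a whole chunk of values with a single INSERT ... VALUES (?),(?),...
    // The "staging" table and "val" column are placeholders; adapt to your schema.
    static void insertChunk(Connection conn, List<String> chunk) throws Exception {
        StringBuilder sql = new StringBuilder("INSERT INTO staging (val) VALUES ");
        for (int i = 0; i < chunk.size(); i++) {
            sql.append(i == 0 ? "(?)" : ",(?)");
        }
        try (PreparedStatement ps = conn.prepareStatement(sql.toString())) {
            for (int i = 0; i < chunk.size(); i++) {
                ps.setString(i + 1, chunk.get(i));
            }
            ps.executeUpdate(); // one round trip for the whole chunk, not one per row
        }
    }
}

For pure bulk loads, COPY (for example via the JDBC driver's CopyManager) from a host close to the RDS instance is usually faster still.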