PostgreSQL update query performance issue

I am not familiar with PostgreSQL. I found this query:
UPDATE theater_visitor
SET viewer_total_count = viewer_total_count + 1
WHERE movie_id = $1
Consider this scenario: 100+ users start watching at the same time, so 100+ requests arrive at once. How will the PostgreSQL server behave?
I personally think this design can be improved, perhaps with some queuing mechanism to reduce the write rate. What other issues should I be concerned about?

If the 100 users watch different movies, they will be updated in parallel. If they all watch the same movie, the updates will be serialized automatically. Since locks are held until the end of the database transaction, it is important that you keep that transaction as short as possible.
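As an illustration of keeping that transaction short, here is a minimal sketch using the table and parameter from the question, assuming the counter update is the only statement in the transaction:

-- Minimal sketch: give the counter update its own short transaction so the
-- row lock on the movie's counter row is released as soon as possible.
BEGIN;
UPDATE theater_visitor
SET viewer_total_count = viewer_total_count + 1
WHERE movie_id = $1;   -- $1: the movie being watched (parameter from the question)
COMMIT;

If the increment is instead wrapped in a longer transaction that also calls external services, every other viewer of the same movie has to wait for that whole transaction to commit.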

Related

PostgreSQL: Backend processes are active for a long time

Now I am hitting a very big roadblock.
I use PostgreSQL 10 and its new table partitioning.
Sometimes many queries don't return, and at those times many backend processes are active when I check them with pg_stat_activity.
At first I thought these processes were just waiting for locks, but the transactions contain only SELECT statements, and no other backend runs any query that requires an ACCESS EXCLUSIVE lock. The queries themselves are fine in terms of their plans and usually run well, and computer resources (CPU, memory, IO, network) are not a problem either, so these transactions should never conflict. I thoroughly checked the locks of these transactions with pg_locks and pg_blocking_pids() and could not find any lock that would make the queries slower. Many of the active backends hold only ACCESS SHARE locks because they run only SELECTs.
Now I think this phenomenon is not caused by locking but by something related to the new table partitioning.
So, why are so many backends active?
Could anyone help me?
Any comments are highly appreciated.
The figure below shows part of the result of pg_stat_activity.
If you want any additional information, please tell me.
EDIT
My query doesn't handle large data. The return type is like this:
uuid UUID
,number BIGINT
,title TEXT
,type1 TEXT
,data_json JSONB
,type2 TEXT
,uuid_array UUID[]
,count BIGINT
Because it has a JSONB column I cannot calculate the exact size, but it is not large JSON.
Normally these queries are moderately fast (around 1.5 s), so they are absolutely no problem; however, when other processes are working, the phenomenon happens.
If the statistics were wrong, the queries would always be slow.
EDIT2
This is the stat output. There are almost 100 connections, so I couldn't show all of it.
To me this looks like an application problem, not PostgreSQL's. The active status means that your transaction has not been committed yet.
So why might your application not be sending a commit to the database?
Review where in your application code you open a transaction, read data, commit, and roll back.
EDIT:
By the way, to be sure, check resource usage both before the problem appears and while your queries are hanging. Run top and iotop to check whether postgres really starts eating your CPU or disk like crazy when the problem appears. If not, I suggest looking for the problem in your application.
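For example, one way to spot transactions that were opened but never committed is to look for backends sitting idle in transaction in pg_stat_activity (the 5-minute threshold here is just an arbitrary example):

-- Sessions that opened a transaction but have not committed for a while.
SELECT pid, state, xact_start, query_start, query
FROM pg_stat_activity
WHERE state = 'idle in transaction'
  AND xact_start < now() - interval '5 minutes'
ORDER BY xact_start;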
Thank you everyone.
I finally solved this problem.
I noticed that one backend process held far too many locks: when I executed SELECT COUNT(*) FROM pg_locks WHERE pid = <pid>, the result was about 10000.
The parameter max_locks_per_transaction is 64 and max_connections is about 800, so the shared lock table only has room for roughly max_locks_per_transaction * (max_connections + max_prepared_transactions) entries.
So if many queries each hold a large number of locks, the shared memory for the lock table runs out (see the shared-memory size calculation inside PostgreSQL if you are interested).
The excessive locks were caused by queries like SELECT * FROM (partitioned table). Imagine a table foo that is partitioned into 1000 partitions. If you execute SELECT * FROM foo WHERE partition_id = <id>, the backend process will take about 1000 table locks (plus index locks). So I changed the query from SELECT * FROM foo WHERE partition_id = <id> to SELECT * FROM foo_(partition_id), and as a result the problem appears to be solved.
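To illustrate the two points above (the values here are placeholders; foo_42 stands for the partition holding partition_id = 42):

-- How many locks a single backend currently holds (12345 is a placeholder pid):
SELECT count(*) FROM pg_locks WHERE pid = 12345;

-- In PostgreSQL 10, querying the parent of a table with ~1000 partitions can
-- take a lock on every partition; querying the child table directly does not.
SELECT * FROM foo WHERE partition_id = 42;   -- locks roughly all 1000 partitions
SELECT * FROM foo_42;                        -- locks only that one partition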
You say
Sometimes many queries don't return
...however, when other processes are working, the phenomenon happens. If the statistics
were wrong, the queries would always be slow.
Do they fail to return / run slowly when you connect directly to the Postgres instance and run the query you need, or only when running the queries from an application? The backend processes that are running, are you able to kill them successfully with pg_terminate_backend($PID), or does that have issues? To rule out issues with the statement itself, make sure statement_timeout is set to a reasonable amount so long-running queries are killed off. After that is ruled out, perhaps you are running into a case of an application hanging and never allowing the send calls from PostgreSQL to finish. To avoid a situation like that, if you are able to (depending on the OS), you can tune the TCP keep-alive time: https://www.postgresql.org/docs/current/runtime-config-connection.html#GUC-TCP-KEEPALIVES-IDLE (the default is 2 hours).
Let us know if playing with any of that gives any more insight into your issue.
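For reference, the commands mentioned above look roughly like this (12345 is a placeholder pid; the timeout value is just an example):

-- Terminate one stuck backend:
SELECT pg_terminate_backend(12345);

-- Cap how long any single statement in this session may run
-- (can also be set per role or in postgresql.conf):
SET statement_timeout = '30s';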
Sorry for the late post. As Konstantin pointed out, this might be because of your application (which is why I asked for your EDIT2). Adding a few points:
Table partitioning has no effect on these locks; that is a totally different concept and does not hold up locks in your case.
In your application, check that the connection is properly close()d after read(), and that this happens in a finally block (from a Java perspective). I am not sure what your application tier is.
Check whether a SELECT .. FOR UPDATE or any similar statement was written erroneously recently and is causing this.
Check whether any table has grown in size recently and the filtered column is not indexed. This is a very important and frequent cause of SELECT statements running for minutes. I'd also suggest using timeouts for SELECT statements in your application. https://www.postgresql.org/docs/9.5/gin-intro.html can give you a head start (see the sketch after this list).
Another thing that looks fishy to me is the JSONB column: maybe your JSONB values are pretty long, or the queries are unnecessarily selecting the JSONB value even when it is not required?
Finally, if you don't need the special features of the JSONB data type, you can use the JSON data type instead, which is faster (sometimes dramatically, up to 50x!)
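For the indexing point above, a minimal sketch; the table and column names are hypothetical (foo_42 borrows the naming from the accepted answer), and a GIN index is the usual choice when filtering on a JSONB column:

-- In PostgreSQL 10 indexes are created on each partition, not on the parent:
CREATE INDEX idx_foo_42_created_at ON foo_42 (created_at);
-- GIN index if you filter the JSONB column with containment operators:
CREATE INDEX idx_foo_42_data_json ON foo_42 USING GIN (data_json);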
It looks like pooled connections are not getting closed properly and a few queries might be taking a huge time to respond. As pointed out in other answers, it is a problem with the application and could be a connection leak. Most likely it is caused by transactions piling up on top of already pending and unresolved transactions, leading to a number of unclosed transactions.
In addition, PostgreSQL generally has one or more "helper" processes like the stats collector, background writer, autovacuum daemon, walsender, etc., all of which show up as "postgres" instances.
One thing I would suggest is to check in which part of the code you initiate the queries. Try a dry run of your queries outside the application and benchmark their performance.
Secondly, you can set a timeout for certain queries, if not all of them.
Thirdly, you can kill idle transactions after a certain timeout by using:
SET SESSION idle_in_transaction_session_timeout = '5min';
I hope it might work. Cheers!

I would like to cache new data in Redis before inserting it directly into Postgres

I am dealing with a great number of inserts per second into a Postgres DB (and a lot of reads too).
A few days ago I heard about Redis and started to think about sending all these INSERTs to Redis first, to avoid lots of open/insert/close cycles in Postgres every second.
Then, after some short period, I could group that data from Redis into a single INSERT statement and run it against Postgres with only one connection opened.
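A rough sketch of that batched INSERT (the table and column names here are made up for illustration):

-- One multi-row INSERT instead of one round trip per GPS point:
INSERT INTO gps_position (device_id, latitude, longitude, recorded_at)
VALUES
    (1, -23.5505, -46.6333, '2024-01-01 12:00:00'),
    (1, -23.5506, -46.6334, '2024-01-01 12:00:01'),
    (2, -22.9068, -43.1729, '2024-01-01 12:00:00');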
The system stores GPS data, and an online map reads it in real time.
Any suggestions for that scenario? Thanks !!
I do not know how important it is in your case to have the data available to your users in near real time. But from what you list above, I do not see anything that cannot be solved by configuration/replication on the PostgreSQL side.
You have a lot of writes to your database; before going for a different technology, note that PostgreSQL is battle-tested, and I am sure you can get more out of it by configuring and optimizing it to handle more writes.
You have a lot of reads to your database; a master-slave replication setup can direct all your read traffic to the slaves, and you can scale horizontally as much as you need.

Database for long running transactions with huge updates

I am building a tool for data extraction and transformation. The typical use case is transactionally processing lots of data.
The numbers are: about 10 sec to 5 min duration, 200-10000 rows updated (the long duration is caused not by the database itself but by outside services that are used during the transaction).
There are two types of agents that access the database: multiple read agents, and only one write agent (so there are never multiple concurrent writes).
During the transaction:
Read agents should be able to read the database and see it in its current (committed) state.
The write agent should be able to read the database (it both reads and writes during the transaction) and see it in the new (not yet committed) state.
Is PostgreSQL a good choice for that type of load? I know it uses MVCC - so it should be ok in general, but is it ok to use long and big transactions extensively?
What other open-source transactional databases may be a good choice (I am not limited to SQL)?
P.S.
I do not know whether sharding may affect performance. The database will be sharded: for every shard there will be multiple readers and only one writer, but different shards can be written to at the same time.
I know that it's better not to use outside services during a transaction, but in this case that is the goal. The database is used as a reliable and consistent index for a heavy, huge, slow and eventually consistent data processing tool.
Huge disclaimer: as always, only real life test can tell you the truth.
But, I think PostgreSQL will not let you down, if you use most recent version (at least 9.1, better 9.2) and tune it properly.
I have a somewhat similar load on my server, but with a slightly worse R/W ratio: about 10:1. Transactions range from a few milliseconds up to 1 hour (and sometimes even more), and one transaction can insert or update up to 100k rows. The total number of concurrent writers with long transactions can reach 10 or more.
So far so good - I don't really have any serious issues, performance is great (certainly not worse than I expected).
What really helps is that my hot working data set almost fits into available memory.
So, give it a try, it should work great for your load.
Have a look at this question: Maximum transaction size in PostgreSQL
Basically there can be some technical limits on the software side to how large your transaction can be.

What is better in terms of sqlite3 performance: delete unneeded row or set it as not needed?

I am writing an iPhone application where the user receives multiple messages from different users. These messages are stored in an sqlite3 database. With time the user might like to delete received messages from one user, but for sure he will continue to receive new messages from that user after deleting the old ones.
Since retrieving the messages will be done using a SELECT statement, which scenario is better to use when the user would like to delete the messages (in terms of performance):
DELETE all the old messages normally and continue to retrieve the new ones using a statement like: SELECT Messages FROM TableName WHERE UserID = (?)
Add a field to the table of type INTEGER and upon the DELETE request set this field to 1 and after that retrieve the new messages using a statement like: SELECT Messages FROM TableName WHERE UserID = (?) AND IsDeleted = 0
One more thing, if scenario 1 is used (normal DELETE) will this cause any fragmentation of the database file on the disk?
Many thanks in advance.
Using scenario 1 is much better, since SELECT and DELETE in SQL operate at a comparable speed, and scenario 1 guarantees you won't have dangling tuples (unwanted rows) in your database.
If you wish to keep the data around after a deletion (for backup, say), then scenario 2 is a must, but you have to take into consideration the growing size of your database, which leads to slower performance in the future.
Finally, I would like to add that performing delete operations on a database should not cause any fragmentation issues, since most databases have defragmentation and optimization tools in their engines.
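In SQLite's case the tool alluded to is VACUUM (or the auto_vacuum pragma); a minimal sketch, reusing the table from the question:

-- Delete the old messages, then compact the database file and reclaim free pages:
DELETE FROM TableName WHERE UserID = ?;
VACUUM;
-- Alternatively, auto_vacuum can be enabled, but it must be set before any
-- tables are created (or be followed by a full VACUUM to take effect):
-- PRAGMA auto_vacuum = FULL;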
It would be a pretty lousy database if DELETE didn't work well. In absence of evidence to the contrary, I'd assume you are safe to delete as normal. The database's entire raison d'ĂȘtre is to make these sorts of operations efficient.
IMHO if you don't use DELETE, after a while the db will get bigger and bigger, thus making each SELECT less and less efficient.
Therefore I figure that deleting rows that will never be used again is more efficient.

Queue implementation

What is the best way to implement a fast queue where multiple users try to access about 100,000 records? Only one user may get any given row. Currently I'm using an SQL database (Firebird), but there are a lot of problems: deadlocks and high database load.
Most of the time, deadlocks are caused by bad transaction logic.
In general, the transactions have to be short (the shorter the better); see the sketch after the links below.
You can start by reading some doc:
http://www.firebirdsql.org/doc/whitepapers/fb_vs_ibm_vs_oracle.htm
http://www.ibphoenix.com/main.nfs?a=ibphoenix&page=ibp_expert4
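As an illustration of keeping the claim transaction short, here is how a "claim one row" step often looks in PostgreSQL (the table and column names are hypothetical, and Firebird may need a different locking construct than SKIP LOCKED):

-- Claim exactly one unclaimed row; concurrent workers skip rows already locked
-- by someone else, so no two users get the same row and nobody blocks.
BEGIN;
UPDATE queue_item
SET claimed = true
WHERE id = (
    SELECT id
    FROM queue_item
    WHERE claimed = false
    ORDER BY id
    LIMIT 1
    FOR UPDATE SKIP LOCKED
)
RETURNING id;
COMMIT;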