Is it possible to increase pgAdmin 4 speed? - postgresql

I have installed pgAdmin 4 on a Windows 10 x64 system with 16 GB of RAM.
The problem: if I send a simple query to a PostgreSQL 14 server, such as extracting some data from a 10-row table, and I make a syntax error (for example, misspelling an attribute or a comma), the query sometimes runs forever and I can't even kill it.
If I make the same mistake on the command line, it is caught in milliseconds.
I have tried modifying these values in postgresql.conf:
shared_buffers = 4096MB
max_wal_size = 4GB
Furthermore, the query sometimes runs for many minutes even when it is correct and touches very little data.
Is this behaviour normal for pgAdmin 4?
If not, how can I fix it?
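A syntax error is reported by the server immediately, so a hang like this usually points at the client rather than the server, and shared_buffers/max_wal_size will not change it. One hedged way to confirm and recover, from psql or a second query tab, is to look the session up in pg_stat_activity and cancel it server-side (the pid 12345 below is a placeholder for whatever pid you actually find):

SELECT pid, state, wait_event_type, query_start, query
FROM pg_stat_activity
WHERE state <> 'idle'
ORDER BY query_start;

-- cancel just the running statement:
SELECT pg_cancel_backend(12345);
-- or terminate the whole session if cancelling has no effect:
SELECT pg_terminate_backend(12345);

If pg_stat_activity shows the query already failed or finished while pgAdmin 4 still spins, the problem is in the client, not the server configuration.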

Related

PostgreSQL create database statement takes longer than usual

Creating a new empty database (using pgAdmin 4) usually takes 5-15 seconds, but recently it started to take much longer (2-3 minutes).
When creating a new database I use a (postgis) template database. We recently switched from PostgreSQL 9 to 10.
Anticipating potential questions:
the template database is empty (only template tables etc.)
I ran VACUUM FULL
the database server (Windows Server 2012, Intel Xeon 3.2 GHz, 12 cores, 64 GB RAM) is not heavily loaded - on average 7-10% CPU, 30-35% memory usage
Any hints as to what might be the reason? What else should I check?
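Two things may be worth ruling out before blaming load. CREATE DATABASE copies the template's files and forces a checkpoint first, so a bigger-than-expected template or heavy concurrent WAL activity can stretch it out; note that a postgis template is not really empty, since the extension itself installs a lot of objects. A hedged pair of checks, assuming the template is literally named postgis:

-- actual on-disk size of the template:
SELECT pg_size_pretty(pg_database_size('postgis'));

-- checkpoint pressure (column names as of PostgreSQL 10):
SELECT checkpoints_timed, checkpoints_req, checkpoint_write_time
FROM pg_stat_bgwriter;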

Stymied by idle_in_transaction_session_timeout

Immediate problem: when I do a pgAdmin 4 restore, it fails with an idle_in_transaction_session_timeout error.
I am on a MacBook Pro running macOS Mojave version 10.14.5, using Java and PostgreSQL. I use the pgAdmin 4 GUI, as I am not proficient in psql, bash, etc. I have a test database named pg2. As you can see from the attachment, PostgreSQL servers 9.4 and 10 have identical databases. If I make a change in a database on one server, it also shows in the other server's database. There is a third server, 11, which contains only the postgres database.
I have tried psql and get errors (including timeout errors).
I have tried to delete/drop server 11; it disappears, but when I sign out of pgAdmin 4 and go back in, server 11 is there again.
See the attached screenshots.
I expect the backup/restore to work: back up, make a change to the database, then correctly restore to the previous state.
I would like to have just one server, preferably 11, with only pg1 and the test db tempdb running in it. I thought I could live with the three, for I am aware of my current capabilities and did not want to screw things up further. However, I suspect that the two servers 9.4 and 10 are the source of my current problem: receiving the idle_in_transaction_session_timeout error while doing a restore. Note: I did the backup using server 10's pg1. Did it create two backups, one for 9.4 and one for 10?
I tried to attach the screenshots before; they will help make sense of my problem.
The two servers have the same database; is this causing the idle_in_transaction_session_timeout?
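Having the same database on two servers does not by itself trigger this timeout; it fires when a session sits idle inside an open transaction longer than the configured limit, which can happen during a slow restore. As a stop-gap, the timeout can be turned off on the target server and put back afterwards. A minimal sketch, assuming the restore target is the 10 or 11 server (the parameter does not exist on 9.4) and you have superuser rights:

-- see the current value:
SHOW idle_in_transaction_session_timeout;

-- disable it, then reload; affects new sessions, so reconnect before restoring:
ALTER SYSTEM SET idle_in_transaction_session_timeout = 0;
SELECT pg_reload_conf();

-- afterwards, go back to the server default:
ALTER SYSTEM RESET idle_in_transaction_session_timeout;
SELECT pg_reload_conf();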

psql runs out of memory when restoring dump

I have a PostgreSQL text dump file, approximately 4.5 GB uncompressed, that I am trying to restore, but it always fails due to running out of memory.
Interestingly enough, no matter what I try it always fails at the exact same line number of the dump file, which leads me to believe the changes I have attempted have had no effect. (I did look at that line in the file, and it is just another row of data; nothing significant is occurring at that point in the file.)
I am using psql with the -f option, as I read that it can be better than piping the file to standard input. Both methods fail, however.
I have tried the following:
increased work_mem from 4MB to 128MB
increased shared_buffers from 128MB to 2GB
increased the VM's memory from 8GB to 16GB
Using both top and pg_top I can see (what I believe shows) that both the OS and the database still have memory available when psql fails. I'm not doubting that something somewhere is running out of memory; I just wish I had a better way of telling what exactly it is.
Other information that may be helpful:
PostgreSQL 10.5
Ubuntu 16.04 LTS running on VMWare Workstation
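One detail that fits the symptoms: work_mem and shared_buffers are server-side settings, but with a text dump it is often the psql client that runs out of memory, since psql buffers each complete line of input in client memory before processing it. A single enormous line at that spot (for instance a row with a very large bytea value) would fail at the same line every time, regardless of server tuning. A hedged check from inside psql (\! runs a shell command; 1234567 and dump.sql are placeholders for the failing line number and your file):

\! awk 'NR >= 1234560 && NR <= 1234575 {print NR ": " length($0) " bytes"}' dump.sql

If one of those lines turns out to be enormous, a custom-format dump restored with pg_restore may be worth trying, since it does not line-buffer the data the way psql does.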

MySQL Workbench 6.3 not returning results with more than 2000 rows

I am running MySQL Workbench 6.3 on a 64-bit Windows 7 laptop. When I do a simple query to get all the data in a single table with ~400 rows of data, the query stays in "running . . ." status and eventually returns Error Code: 2013 Lost connection to MySQL server at "waiting for initial communication". If I limit the results to 1000 rows the query works fine; it is only when I allow more than 2000 rows that this occurs.
I do have "Use compression protocol" enabled, which I had hoped would fix the issue.
The other thing I noticed is that if I run the query on my Mac I do not have this issue; I get more than 10,000 rows with no problem.
Has anyone else had this issue and resolved it?
~michemali
Seems like a timeout issue. Please refer to this post to see if it resolves your issue:
MySQL Workbench: How to keep the connection alive
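Besides the keep-alive link above, the relevant knobs are the Workbench client timeout (Edit > Preferences > SQL Editor > "DBMS connection read timeout") and the server-side network timeouts. A hedged look at the server side, run from Workbench or the mysql command line (600 below is an illustrative value, not a recommendation):

SHOW VARIABLES LIKE '%timeout%';

-- if large result sets are being cut off mid-transfer:
SET GLOBAL net_write_timeout = 600;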

Timeout of remote connection to Postgresql

I have two EC2 instances; one of them needs to insert large amounts of data into a PostgreSQL database that lives on the other. Incidentally, it is OpenStreetMap data and I am using the osm2pgsql utility to insert it; not sure if that's relevant, but I don't think so.
For smaller inserts everything is fine, but for very large inserts something times out after around 15 minutes and the operation fails with:
COPY_END for planet_osm_point failed: SSL SYSCALL error: Connection timed out
Is this timeout enforced by PostgreSQL, Ubuntu, or AWS? Not too sure where to start troubleshooting.
Thanks
Could be caused by SSL renegotiation. Check the log, and maybe tweak
ssl_renegotiation_limit = 512MB (the default)
Setting it to zero will disable renegotiation.
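Note that ssl_renegotiation_limit was removed in PostgreSQL 9.5, so on a current server renegotiation cannot be the culprit. A failure at roughly 15 minutes also matches an idle-connection timeout in a firewall or NAT between the instances: after COPY_END the client just waits while the server processes, so the connection looks idle on the wire. Enabling TCP keepalives on the database server is a hedged way to rule that out (values below are illustrative):

ALTER SYSTEM SET tcp_keepalives_idle = 60;      -- seconds before the first probe
ALTER SYSTEM SET tcp_keepalives_interval = 10;  -- seconds between probes
ALTER SYSTEM SET tcp_keepalives_count = 6;      -- failed probes before dropping
SELECT pg_reload_conf();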