Safe to kill postgres processes with SIGTERM?

I was debugging a PostgreSQL 9.2 database corruption issue (on Solaris, but I doubt it matters) recently, and I found that we could reproduce it reliably if the client died in the middle of a transaction and then I shut down PostgreSQL by doing pkill postgres (which basically sends SIGTERM to every running postgres process). If instead we did pkill -QUIT postgres to send SIGQUIT, the database would shut down cleanly and no corruption would occur.
Based on the PostgreSQL 9.2 docs, SIGTERM should be a fully supported way to shut the server down, so why is it not safe to do so? Is this a bug in PostgreSQL, or could I be doing something (configuration, etc.) that would allow the corruption to occur?

I don't think SIGTERM is what is causing your problem. Again, I recommend you ask on dba.stackexchange instead.
If the client dies in the middle of a transaction, is the problem that the network connection hangs? And then when you kill it, you get corruption during WAL replay?
This is a complicated area to troubleshoot but here are some places to begin:
What is going on concurrently when this happens? What sort of transaction commit load?
How often do WAL logs normally get rotated?
It is possible you are running into a rare, obscure bug in PostgreSQL (possibly in the interaction between the database, kernel, and filesystem), but if so, please start by upgrading to the latest 9.2 point release and trying to reproduce it again. SIGTERM and even SIGKILL are supposed to be safe on PostgreSQL, in the sense that they must never corrupt data (SIGKILL does force crash recovery), so if you are seeing database corruption, that is not expected.
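For reference, the documented mapping between pg_ctl shutdown modes and the signals sent to the postmaster looks like this (a sketch; the data directory path is a placeholder):

pg_ctl stop -D /path/to/data -m smart      # SIGTERM: wait for all clients to disconnect
pg_ctl stop -D /path/to/data -m fast       # SIGINT: roll back transactions, disconnect clients, shut down cleanly
pg_ctl stop -D /path/to/data -m immediate  # SIGQUIT: quit without cleanup; WAL recovery runs on next startup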

Related

How can I get the host name and the query executed on the slave Postgres database that caused the out-of-memory crash?

My slave database crashed with an out-of-memory error and is in the recovery stage.
I want to know the query that caused this issue.
I have checked the logs and found one query just before the system went into recovery mode, but I want to confirm it.
I am using Postgres 9.4.
Does anyone have an idea?
If you set log_min_error_statement to error (the default) or lower, you will get the statement that caused the out-of-memory error in the database log.
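For example, from a superuser session (ALTER SYSTEM is available on 9.4; the user name is illustrative, and 'error' is already the default, shown here for completeness):

psql -U postgres -c "ALTER SYSTEM SET log_min_error_statement = 'error'"
psql -U postgres -c "SELECT pg_reload_conf()"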
But I assume that you got hit by the Linux OOM killer, that would cause a PostgreSQL process to be killed with signal 9, whereupon the database goes into recovery mode.
The correct solution here is to disable memory overcommit by setting vm.overcommit_memory to 2 in /etc/sysctl.conf and activating the setting with sysctl -p (you should then also tune vm.overcommit_ratio appropriately).
Then you will get an error rather than a killed process, which is easier to debug.
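A minimal sketch of that change (the ratio value is illustrative and should be tuned to your RAM and swap):

# /etc/sysctl.conf
vm.overcommit_memory = 2
vm.overcommit_ratio = 80

# apply without rebooting
sysctl -p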

MongoDB WiredTiger error: WiredTiger.turtle: handle-open: open: operation not permitted

MongoDB was working beautifully for me for several months until I had an unexpected shutdown a week or two ago. Since then, I've been getting the error in the title that snowballs into an invalid argument, then a library panic, then some fatal assertions which cause MongoDB to crash.
Now, I've done my research: the normal answers are to run the repair function and to make sure SELinux isn't screwing up the process. Neither of those has worked. The error gets thrown during WiredTiger's checkpoint process, so reads/writes to the database aren't the issue, and because it happens during the checkpoint process, it guarantees that MongoDB won't stay up for more than a day.
To be clear: all the files in the database are owned by mongod:mongod, have permissions set to 600 (default, and I tried setting them to 755 to see if that fixed it, and it didn't). I'm running mongodb as a service on a CentOS 7 box, and the service file specifies that it should run as user mongod. The mongod.conf file specifies a mounted filesystem as the database, and it was happy with that until the unexpected shutdown. I'm running MongoDB version 4.0.1, so WiredTiger really doesn't like it if I disable Journaling either (disregarding the fact that I shouldn't disable it in the first place).
I feel like I've exhausted all my options, and that the only thing I can do is back up my data and reinstall MongoDB. Are there any options I've missed?
After creating a backup of my data via mongodump, shutting down mongo, removing the entire database with rm -rf 'path-to-database', rebooting mongo (without the replication config), and restoring the data with mongorestore, mongodb still crashes. This time, however, it's with an Invariant failure after the open: operation not permitted. The only conclusion I can think of is that the data itself has become corrupted in some way. Thankfully, this isn't "mission critical" data, so to speak, and I can easily obtain new data.
Unfortunately, this doesn't answer my original question of "what other options do I have?". However, I'm still posting this in case others run into this same kind of issue.
EDIT: invariant issue was caused by me forgetting to re-initialize my replication set. After fixing that, it's clean. Because of this, I no longer believe it was a data corruption issue, but a checkpoint corruption issue.
EDIT 2: So the issue arose again after about a week, and after another week of trying various debugging methods, I tried simply moving the mongo process to another server. So far, that's been working. The previous server was acting up (I couldn't even run top at one point - another process had a lock on a library file top needed), so here's hoping that the current server doesn't follow suit.
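For reference, the dump-and-restore cycle described above was roughly the following (a sketch; the paths and service name are assumptions):

mongodump --out /backups/dump      # back up all databases
systemctl stop mongod              # stop the server before touching its files
rm -rf /path/to/database/*         # remove the damaged data files
systemctl start mongod             # start with a fresh data directory
mongorestore /backups/dump         # reload the dumped data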

Cannot stop postgres despite immediate stop

I have this issue that is driving me nuts. Despite all my efforts, I am not able to force my postgres server to shut down. I have followed these instructions: http://www.question-defense.com/2008/10/17/pg_ctl-server-does-not-shut-down-force-postgres-to-shutdown
but still nothing happens, and all I get in the shell is
waiting for server to shut down............................................................... failed
pg_ctl: server does not shut down
Any help much appreciated.
Update: checking the logs, I see this recurring error:
LOG: checkpoints are occurring too frequently (25 seconds apart)
HINT: Consider increasing the configuration parameter "checkpoint_segments".
After giving it a lot of thought, especially about the way I installed it in the first place, I realized that I had set up the install so that a daemon would launch postgres when my machine starts. Thus, any manual kill simply resulted in the daemon recreating those processes.
To resolve this problem you need to stop the daemon using launchctl and remove the .plist file in your postgres directory.
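On macOS that looks something like this (the .plist name and paths vary by installer, so treat these as guesses and check both ~/Library/LaunchAgents and /Library/LaunchDaemons):

launchctl unload -w ~/Library/LaunchAgents/org.postgresql.postgres.plist
pg_ctl stop -D /usr/local/var/postgres -m fast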
Good luck if you face the same problem.
You are probably running with the default setting of "checkpoint_segments = 3", which produces those warnings. Your database does many writes, right? It takes some time to write all of this to disk, and your database is quite busy rotating WAL files instead of doing real work.
If you increase checkpoint_segments, you will see performance improvements and less I/O.
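Something like this in postgresql.conf, followed by a reload, is a reasonable starting point (the value is illustrative; each segment is 16 MB of WAL):

# postgresql.conf
checkpoint_segments = 16    # up from the default of 3

# pick up the change without a restart
pg_ctl reload -D /path/to/data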
For further reading: https://wiki.postgresql.org/wiki/Tuning_Your_PostgreSQL_Server

Postgres SIGKILL crash

FYI only; this does not need an answer.
I was working on a Postgres server under heavy load, and issued a GRANT command that hung. It was not blocked by any other commands.
I had a few open connections and was able to kill several of the processes with a normal pg_cancel_backend (SIGINT) command, but my GRANT command didn't respond to either that or pg_terminate_backend (SIGTERM). I finally tried "kill -9 (pid)" (SIGKILL) and the server crashed.
Issuing SIGKILL to the database server process or the postmaster can cause crashes--that's well documented. Running SIGKILL against a child process can also crash the database.
"Running SIGKILL against a child process can also crash the database"
Any fatal signal that terminates any backend without a chance to clean up, such as SIGSEGV, SIGABRT, SIGKILL, etc., will cause the postmaster to assume that shared memory may be corrupt. It will roll back all transactions, terminate all running backends, and restart.
PostgreSQL does that to protect your data. If something went wrong before a backend crashed that caused it to scribble on shared memory, then shared_buffers could contain invalid data that'd get flushed to disk and replace good pages.
I was pretty sure that was in the docs, but all I can find is what I think you were referring to, in the section on shutting down the server.
Anyway, if you SIGKILL a backend you'll see something like:
WARNING: terminating connection because of crash of another server process
DETAIL: The postmaster has commanded this server process to roll back the
current transaction and exit, because another server process exited
abnormally and possibly corrupted shared memory.
HINT: In a moment you should be able to reconnect to the database and
repeat your command.
server closed the connection unexpectedly
This probably means the server terminated abnormally
before or while processing the request.
The connection to the server was lost. Attempting reset: Succeeded.
This also happens if the OOM killer kills a backend, which is why you should turn off memory overcommit on Linux.
I wrote some guidance on things to do and not to do with PostgreSQL on my blog. Worth a look.
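For anyone landing here later: the safe shell-level equivalents of those functions are ordinary signals aimed at the single stuck backend (the pid is illustrative):

kill -INT 12345     # what pg_cancel_backend() sends: cancel the current query
kill -TERM 12345    # what pg_terminate_backend() sends: end the session
# kill -9 12345 forces the cluster-wide crash-recovery cycle described above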

psql seems to timeout with long queries

I am performing a bulk copy into postgres with about 80GB of data.
\copy my_table FROM '/path/csv_file.csv' csv DELIMITER ','
Before the transaction is committed, I get the following error.
server closed the connection unexpectedly
This probably means the server terminated abnormally
before or while processing the request.
In the PostgreSQL logs:
LOG:  server process (PID 21122) was terminated by signal 9: Killed
LOG:  terminating any other active server processes
WARNING:  terminating connection because of crash of another server process
DETAIL:  The postmaster has commanded this server process to roll back the current transaction and exit, because another server process exited abnormally and possibly corrupted shared memory.
HINT:  In a moment you should be able to reconnect to the database and repeat your command.
Your backend process received a signal 9 (SIGKILL). This can happen if:
Somebody sends a kill -9 manually;
A cron job is set up to send kill -9 under some circumstances (very unsafe, do not do this); or
The Linux out-of-memory (OOM) killer triggers and terminates the process.
In the latter case you will see reports of OOM killer activity in the kernel's dmesg output. I expect this is what you'll see in your case.
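A quick way to check (the exact message wording varies across kernel versions):

dmesg | grep -i -E 'out of memory|oom-killer|killed process'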
PostgreSQL servers should be configured without virtual memory overcommit so that the OOM killer does not run and PostgreSQL can handle out-of-memory conditions itself. See the PostgreSQL documentation on Linux memory overcommit.
The separate question of "why is this using so much memory" remains. Answering that requires more knowledge of your setup: how much RAM the server has, how much of it is free, what your work_mem and maintenance_work_mem settings are, and so on. It isn't a very interesting problem to look into until you upgrade to the current PostgreSQL 8.4 patch release, to make sure the problem isn't one that's already fixed.