ERROR: child process failed, exited with error number 4 - mongodb

I was trying to run this command:
# mongod --fork --logpath /var/log/mongod.log
but I got this message:
about to fork child process, waiting until server is ready for connections.
forked process: 10750
all output going to: /var/log/mongod.log
log file [/var/log/mongod.log] exists; copied to temporary file [/var/log/mongod.log.2017-05-23T14-57-31]
ERROR: child process failed, exited with error number 4
Could you please help me resolve the problem?

Per the MongoDB docs, exit code 4 means:
The version of the database is different from the version supported by the mongod (or mongod.exe) instance. The instance exits cleanly.
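A quick way to confirm the mismatch, assuming the log path from the question, is to compare the installed binary's version with what the log says about the data files:

# version of the mongod binary currently on PATH
mongod --version
# the tail of the log usually names the incompatible data-file version
tail -n 50 /var/log/mongod.log

If the binary is older than the data files, install the matching mongod version (or point mongod at a dbPath created by the version you have).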

How to initial sync mongo replica

My mongo slave is dead: it stopped unexpectedly after running out of disk space, and it won't start, failing with
mongodb.service: Main process exited, code=exited, status=14/n/a
I tried to fix the error with the following suggestions:
https://askubuntu.com/questions/823288/mongodb-loads-but-breaks-returning-status-14
but that led to the next error code:
mongodb.service: Main process exited, code=exited, status=100/n/a
which I tried to fix with the following:
https://dba.stackexchange.com/questions/220411/sudo-service-mongod-start-returns-error-100
This is its log output:
2021-05-01T18:25:30.987+0000 I - [initandlisten] Fatal assertion 28579 UnsupportedFormat: Unable to find metadata for table:index-3-848131710157586571 Index: {name: _id_, ns: local.me} - version too new for this mongod. See http://dochub.mongodb.org/core/3.4-index-downgrade for detailed instructions on how to handle this error. at src/mongo/db/storage/wiredtiger/wiredtiger_index.cpp 241
The command sudo service mongodb start won't work; the status command shows that the service is dead.
I figured out that it would be easier to resync the data from scratch. I found the documentation
https://docs.mongodb.com/manual/tutorial/resync-replica-set-member/#resync-a-member-of-a-replica-set
but I am not sure which commands to run to execute this operation.
My dbPath is "/mnt/mongo/mongodb", my MongoDB shell version is v3.4.14, and my database holds about 2.5 TB. Could you give me some guidance on how to execute the initial sync of a mongo replica?
From my understanding I should run:
sudo rm -r /mnt/mongo/mongodb/*
sudo service mongodb start
After some time everything should get back to normal(?)
Correct me if I am wrong...
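That is essentially the procedure the linked docs describe: stop the member, empty its dbPath, and restart it so it performs an automatic initial sync. A slightly safer sketch, assuming the service user is mongodb (move the old files aside rather than deleting them, so you can roll back):

sudo service mongodb stop
sudo mv /mnt/mongo/mongodb /mnt/mongo/mongodb.old
sudo mkdir /mnt/mongo/mongodb
sudo chown -R mongodb:mongodb /mnt/mongo/mongodb
sudo service mongodb start

With ~2.5 TB of data the initial sync can take many hours; you can watch its progress with rs.status() from another member.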

mongoDB: child process failed, exited with 1 & 14

I have already tried the suggestions in the following thread, but they didn't help me:
ERROR: child process failed, exited with error number 1,mongodb
Tried that:
sudo mongod --fork --config /etc/mongod.conf --logpath /var/log/mongodb/mongod.log
Result:
forked process: 2887
ERROR: child process failed, exited with 48
To see additional information in this output, start without the "--fork" option.
Tried the following as well:
sudo systemctl start mongod
One of the lines in the error message says:
ERROR: child process failed, exited with 14
Would appreciate any help. mongod was working pretty well until I started trying to implement a replica set.
Use this command on the terminal:
sudo mongod --dbpath /System/Volumes/Data/data/db
It should start the MongoDB server and bind it to a port. You can then exit the server, run the command again, and it should work.
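Exit code 48 usually means mongod could not bind its port, often because another mongod is already listening on it; a quick check, assuming the default port 27017:

# see what is already listening on the port
sudo lsof -iTCP:27017 -sTCP:LISTEN
# or, as the error output suggests, run in the foreground to see the real cause
sudo mongod --config /etc/mongod.conf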

supervisord unknown error making dispatchers for : ENOENT

The supervisord config is below; myserver is a Go executable placed in the directory /usr/tci/bin. It does exist in that directory, so why do I still get the ENOENT error? ENOENT means the entry can't be found.
[supervisord]
nodaemon=true
loglevel=debug
[program:myserver]
command=/usr/tci/bin/myserver
autostart=true
autorestart=true
Error msg:
2018-03-05 08:39:00,230 INFO spawnerr: unknown error making dispatchers for 'myserver': ENOENT
Make sure the directory that holds your log files exists.
Supervisor was running when I removed its log directory /var/log/supervisor.
I first noticed the issue when I tried to restart a process which resulted in
an unknown error making dispatchers for ENOENT error
I re-added the directory by running:
mkdir /var/log/supervisor
This fixed the issue and allowed me to restart my process successfully. I would also imagine that
sudo service supervisor restart
would fix it, since it might recreate the missing directory.
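Another option, sketched here with illustrative log paths, is to point the program's own log files at a directory you know exists, so the dispatchers always have somewhere to write:

[program:myserver]
command=/usr/tci/bin/myserver
autostart=true
autorestart=true
stdout_logfile=/var/log/supervisor/myserver.out.log
stderr_logfile=/var/log/supervisor/myserver.err.log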
Make sure you have logfile set, then restart the server:
sudo service supervisor restart
My logging config:
loglevel=debug
logfile=/var/log/supervisor/myserver.log
Laravel example config:
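A minimal sketch for a Laravel queue worker, assuming a hypothetical project path /var/www/myapp:

[program:laravel-worker]
command=php /var/www/myapp/artisan queue:work --tries=3
autostart=true
autorestart=true
user=www-data
stdout_logfile=/var/log/supervisor/laravel-worker.log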

Not able to start mongodb server as a long running process

I want to keep the MongoDB server running forever as a background process.
But while doing so, it throws the below error:
about to fork child process, waiting until server is ready for connections.
forked process: 12538
ERROR: child process failed, exited with error number 1.
The command I have used is:
mongod --dbpath="<location of data files>" --fork --logpath="/var/log/mongodb/mongodb.log"
Can anyone comment here?
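Exit code 1 at fork time very often just means mongod could not use the dbpath or logpath; a sketch of the usual checks (the data-files location is whatever you passed to --dbpath):

# make sure both directories exist and are writable by the user running mongod
ls -ld "<location of data files>" /var/log/mongodb
sudo chown -R mongodb:mongodb "<location of data files>" /var/log/mongodb
# the tail of the log names the actual failure
tail -n 50 /var/log/mongodb/mongodb.log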

PostgreSQL 9.1 streaming replication restore_command: special meaning of exit code 255?

I have a PostgreSQL 9.1.3 streaming replication setup on Ubuntu 10.04.2 LTS (primary and standby). Replication is initialized with a streamed base backup (pg_basebackup). The restore_command script tries to fetch the required WAL archives from a remote archive location with rsync.
Everything works like described in the documentation when the restore_command script fails with an exit code <> 255:
At startup, the standby begins by restoring all WAL available in the archive location, calling restore_command. Once it reaches the end of WAL available there and restore_command fails, it tries to restore any WAL available in the pg_xlog directory. If that fails, and streaming replication has been configured, the standby tries to connect to the primary server and start streaming WAL from the last valid record found in archive or pg_xlog. If that fails or streaming replication is not configured, or if the connection is later disconnected, the standby goes back to step 1 and tries to restore the file from the archive again. This loop of retries from the archive, pg_xlog, and via streaming replication goes on until the server is stopped or failover is triggered by a trigger file.
But when the restore_command script fails with exit code 255 (because the exit code from a failed rsync call is returned by the script) the server process dies with the following error:
2012-05-09 23:21:30 CEST - # LOG: database system was interrupted; last known up at 2012-05-09 23:21:25 CEST
2012-05-09 23:21:30 CEST - # LOG: entering standby mode
rsync: connection unexpectedly closed (0 bytes received so far) [Receiver]
rsync error: unexplained error (code 255) at io.c(601) [Receiver=3.0.7]
2012-05-09 23:21:30 CEST - # FATAL: could not restore file "00000001000000000000003D" from archive: return code 65280
2012-05-09 23:21:30 CEST - # LOG: startup process (PID 8184) exited with exit code 1
2012-05-09 23:21:30 CEST - # LOG: aborting startup due to startup process failure
So my question is: is this a bug, is there a special meaning of exit code 255 that is missing from the otherwise excellent documentation, or am I missing something else here?
On the primary server, you have WAL files sitting in the pg_xlog/ directory. While WAL files are there, PostgreSQL is able to deliver them to the standby should they be requested.
Typically, you also have a local archived-WAL location; once files are moved there by PostgreSQL, they can no longer be delivered to the standby on-line, and the standby expects them to come from the archived-WAL location via restore_command.
If the archived-WAL locations configured on the primary and on the standby differ, then there is no way for a WAL file to reach the standby, and you have a gap.
In your case this might mean, that:
00000001000000000000003D had been archived by the primary PostgreSQL;
standby's restore_command doesn't see it from the configured source location.
You might consider manually copying missing WAL files from the primary to the standby using scp or rsync. It might also be necessary to review your WAL locations and make sure both servers look at the same location.
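For example, to copy the single missing segment from the question's log from the primary's archive to the standby's (paths are illustrative):

rsync -a primary:/var/lib/postgresql/archive/00000001000000000000003D /var/lib/postgresql/archive/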
EDIT:
grep-ing for restore_command in the sources, only access/transam/xlog.c references it. In the function RestoreArchivedFile, almost at the end (around line 3115 in the 9.1.3 sources), there is a check whether restore_command exited normally or received a signal.
In the first case, the message is classified as DEBUG2. In case restore_command received a signal other than SIGTERM (and wasn't able to handle it properly, I guess), a FATAL error will be reported. This is true for all exit codes greater than 125.
I will not be able to tell you why though.
I recommend asking on the hackers list.
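Given that cutoff, one practical workaround is to wrap the rsync call in a small script that normalizes the exit status, so a transient failure (such as rsync's 255) is reported to PostgreSQL as an ordinary "file not available" rather than a fatal signal-like status. A sketch, with a hypothetical script name and archive location:

#!/bin/sh
# restore_wal.sh %f %p -- fetch a WAL segment, normalizing rsync's exit status
rsync -a "archive:/wal_archive/$1" "$2" && exit 0
exit 1  # any failure becomes a plain nonzero exit that recovery treats as "not found"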
This looks like an rsync problem I encountered temporarily using NFS (with rpcbind/rstatd on port 837):
$ rsync -avz /var/backup/* backup@storage:/data/backups
rsync: connection unexpectedly closed (0 bytes received so far) [sender]
rsync error: unexplained error (code 255) at io.c(600) [sender=3.0.6]
This fixed it for me:
service rpcbind stop
I had the same issue creating a hot standby (Postgres 9.5). Streaming was working (I seeded the standby via pg_basebackup, using the same credentials as would later be used in the standby's recovery.conf).
After taking the base backup, I set up the following recovery.conf:
standby_mode = 'on'
primary_conninfo = 'host=ip.of.master port=5432 user=pgstandby password=password'
recovery_target_timeline = 'latest'
restore_command = 'sftp -q user@ip.of.wal.archive.host:data/master_wal_archive/%f "%p"'
trigger_file = '/srv/pgsql/9.5/data/trigger'
Starting the server would yield:
2016-03-08 12:34:58.981 UTC (/)LOG: database system was interrupted; last known up at 2016-03-08 12:26:10 UTC
Couldn't read packet: Connection reset by peer
2016-03-08 12:34:59.525 UTC (/)FATAL: could not restore file "00000002.history" from archive: child process exited with exit code 255
2016-03-08 12:34:59.526 UTC (/)LOG: startup process (PID 26636) exited with exit code 1
2016-03-08 12:34:59.526 UTC (/)LOG: aborting startup due to startup process failure
If I removed the restore_command line from recovery.conf, the standby started up fine and began streaming WALs from the master.
I eventually traced the problem down to not having added the standby postgres user's public key to the authorized_keys file on the WAL archive host. I'd also forgotten to add the WAL archive host's server fingerprint to the known_hosts file of the standby postgres user.
These two mistakes were (I assume) causing the sftp restore_command to exit with code 255. As tscho says, the Postgres docs suggest that if the restore_command exits with ANY non-zero value, Postgres will simply move on to trying to stream from the master rather than refusing to start. In reality this doesn't seem to be the case if the exit code is higher than a certain number (maybe 125, as vyegorov's source code grepping suggests?).
Once I fixed the two SSH issues, the standby started fine with the restore_command present in recovery.conf.
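Sketched, the two fixes amount to the standard SSH key setup (user and host as in the recovery.conf above):

# on the WAL archive host: authorize the standby postgres user's public key
cat standby_id_rsa.pub >> ~user/.ssh/authorized_keys
# on the standby, as the postgres user: record the archive host's fingerprint
ssh user@ip.of.wal.archive.host true   # accept the host key when prompted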
Here is the comment describing why this behavior for high exit status from the command process was chosen, and the current code to implement it.
/*
* Remember, we rollforward UNTIL the restore fails so failure here is
* just part of the process... that makes it difficult to determine
* whether the restore failed because there isn't an archive to restore,
* or because the administrator has specified the restore program
* incorrectly. We have to assume the former.
*
* However, if the failure was due to any sort of signal, it's best to
* punt and abort recovery. (If we "return false" here, upper levels will
* assume that recovery is complete and start up the database!) It's
* essential to abort on child SIGINT and SIGQUIT, because per spec
* system() ignores SIGINT and SIGQUIT while waiting; if we see one of
* those it's a good bet we should have gotten it too.
*
* On SIGTERM, assume we have received a fast shutdown request, and exit
* cleanly. It's pure chance whether we receive the SIGTERM first, or the
* child process. If we receive it first, the signal handler will call
* proc_exit, otherwise we do it here. If we or the child process received
* SIGTERM for any other reason than a fast shutdown request, postmaster
* will perform an immediate shutdown when it sees us exiting
* unexpectedly.
*
* Per the Single Unix Spec, shells report exit status > 128 when a called
* command died on a signal. Also, 126 and 127 are used to report
* problems such as an unfindable command; treat those as fatal errors
* too.
*/
if (WIFSIGNALED(rc) && WTERMSIG(rc) == SIGTERM)
    proc_exit(1);

signaled = WIFSIGNALED(rc) || WEXITSTATUS(rc) > 125;

ereport(signaled ? FATAL : DEBUG2,
        (errmsg("could not restore file \"%s\" from archive: %s",
                xlogfname, wait_result_to_str(rc))));