I stopped the service with the command "systemctl stop postsgresql-14".
I checked with "systemctl status postsgresql-14" and it showed as stopped.
However, the postmaster.pid file still exists and the database is still accessible.
Why does this happen?
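One way to check what is actually running (a sketch; the PostgreSQL unit name and process names may differ on your system):

# List all PostgreSQL-related systemd units and their current states
systemctl list-units --all | grep -i postgres

# Show any postgres/postmaster processes that are still alive
ps aux | grep [p]ostgres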
I tried to shut down my mongo process on a Linux server by running either mongod.sh stop or mongo --shutdown.
From the logs, I can see it shutting down with code:0
However, after that line, it logged **** SERVER RESTARTED ****, and then started a new process logging MongoDB starting: pid=xxxxx port=xxxxx dbpath=xxxxxxxxx 64-bit host=xxxxxxxx, which is not expected.
I tried the same command on other instances, and all of them shut down successfully with only one line of **** SERVER RESTARTED **** and nothing logged after that.
Does anyone know what might have happened in this case? What can I do to disable this automatic restart after stopping the mongo process gracefully?
MongoDB is installed as a service.
To stop it and keep it from starting again, use:
systemctl stop mongod
systemctl disable mongod
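A full sequence to stop the service and confirm it stays down might look like this (a sketch, assuming the unit is named mongod as above):

sudo systemctl stop mongod        # stop the running instance
sudo systemctl disable mongod     # prevent it from starting again at boot
systemctl status mongod           # should report "inactive (dead)"
systemctl is-enabled mongod       # should report "disabled"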
I'm editing the mongod.conf file to add a specific IP address that should be able to access the database.
From what I've read, I just need to edit this one file and add a second entry to bindIp, like so:
net:
port: 27017
bindIp: 127.0.0.1, 11.222.333.44
Then I save, close, and run sudo systemctl restart mongod.
But when I run the restart, I get:
Job for mongod.service failed because the control process exited with error code. See "systemctl status mongod.service" and "journalctl -xe" for details.
However, when I run mongo I am able to connect to the mongo shell locally, but I am unable to connect remotely because the IP binding failed.
If you are able to connect locally, you are most likely connecting to an old process, which means one of the following:
the restart didn't work, in that it failed to terminate the old process
your configuration is invalid, and the restart process identified this and stopped
you have a mongod process running on the default port that you started manually and systemd can't stop it
Verify:
your configuration is correct
you don't have any mongod processes running
Then (re)start again.
Also, review the logs using the commands indicated.
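The checks might look like this (a sketch; the log path shown is the package default and may differ on your system):

sudo systemctl status mongod.service          # current state of the unit
sudo journalctl -u mongod -xe                 # recent journal entries for the unit
ps aux | grep [m]ongod                        # any mongod processes still running?
sudo tail -n 50 /var/log/mongodb/mongod.log   # assumed default log path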
I don't know what the issue is. The mongod command is not working now, although it was working before; it throws some errors when I run mongod. I am using a win32 operating system.
Within the error message it says "Detected unclean shutdown... mongod.lock is not empty"
This means that your mongod was not terminated gracefully - perhaps a hard kill.
In order to start your mongod process again, you'll have to delete this lock file. The path of the file from your error message should be:
C:\data\db\mongod.lock
Here is a link to the relevant documentation where they describe the process of recovering a mongod instance after it was shut down incorrectly.
https://docs.mongodb.com/manual/tutorial/recover-data-following-unexpected-shutdown/
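From a Windows command prompt, the recovery steps might look roughly like this (a sketch, assuming the default dbpath C:\data\db shown in the error message):

rem Remove the stale lock file left by the unclean shutdown
del C:\data\db\mongod.lock

rem Let mongod repair the data files, as described in the tutorial above
mongod --dbpath C:\data\db --repair

rem Then start mongod normally again
mongod --dbpath C:\data\db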
I updated the JDK from 1.8_131 to 1.8_151 for CDH5, so I needed to restart the cluster to make it take effect. In the beginning I used the Cloudera Manager web page to restart, but it failed when ZooKeeper started, which is the first step. Then I made a bad choice: I shut down Cloudera Manager from the terminal, including a kill -9 of the postgresql process. After that, I couldn't open the Cloudera Manager web page.
I used the following commands to start the cluster:
service cloudera-scm-server-db start
service cloudera-scm-server start
service cloudera-scm-agent start
All of them failed, because /var/log/cloudera-scm-server and /var/log/cloudera-scm-agent had disappeared.
So I created them manually, along with dg.log and cloudera-scm-agent.log.
After that, the server and agent could start, but server-db still could not. Here are some details:
Starting cloudera-scm-server-db (via systemctl): Job for cloudera-scm-server-db.service failed because the control process exited with error code. See "systemctl status cloudera-scm-server-db.service" and "journalctl -xe" for details.
journalctl -xe shows:
The CM is using external DB. Failed to start embedded DB service, giving up
I also checked service --status-all. That is everything I've done so far. So, what should I do now? Thank you very much!
The above problem has been solved.
If you open the /etc/cloudera-scm-server/db.properties file, it looks like this:
# cat /etc/cloudera-scm-server/db.properties
# Auto-generated by scm_prepare_database.sh
#
# Sat Oct 1 12:19:15 PDT 201
#
com.cloudera.cmf.db.type=postgresql
com.cloudera.cmf.db.host=localhost
com.cloudera.cmf.db.name=scm
com.cloudera.cmf.db.user=scm
com.cloudera.cmf.db.password=TXqEESuhj5
com.cloudera.cmf.db.setupType=EXTERNAL
EXTERNAL is the crux.
In my CDH cluster, I use the embedded PostgreSQL as the Cloudera Manager Server database, even though Cloudera officially recommends against it. I'm new to Cloudera, so I made a mistake.
I wrongly ran a command that is only meant for an external Cloudera Manager Server database:
/usr/share/cmf/schema/scm_prepare_database.sh postgresql scm scm scm_password
The above command configures db.properties.
Once you run it, com.cloudera.cmf.db.setupType is set to EXTERNAL (you can find more details about this in the Cloudera docs).
The most direct and effective fix is to reset the password of the scm user (a sketch of one way to do this is shown below).
Then, in db.properties:
update the password
set setupType to EMBEDDED
make sure port 7432 is listening (you can use netstat -nltp to check)
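One possible way to reset the scm password on the embedded PostgreSQL (a sketch; it assumes the embedded database is up on port 7432 and that the cloudera-scm superuser account can connect, which may differ on your system):

# Change the scm role's password on the embedded PostgreSQL (port 7432).
# The cloudera-scm superuser shown here is an assumption; adjust to your setup.
sudo -u cloudera-scm psql -h localhost -p 7432 -U cloudera-scm -d postgres \
  -c "ALTER USER scm WITH PASSWORD 'new_password';"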
# vim /etc/cloudera-scm-server/db.properties
# Auto-generated by scm_prepare_database.sh
# Sat Oct 1 12:19:15 PDT 201
com.cloudera.cmf.db.type=postgresql
com.cloudera.cmf.db.host=localhost:7432
com.cloudera.cmf.db.name=scm
com.cloudera.cmf.db.user=scm
com.cloudera.cmf.db.password=new_password
com.cloudera.cmf.db.setupType=EMBEDDED
Now stop all cloudera-scm services and restart them in order: server-db, server, agent.
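Using the same service commands as above, that order would be (a sketch; adjust if your system manages these units through systemctl directly):

service cloudera-scm-server-db restart    # embedded database first
service cloudera-scm-server restart       # then the Cloudera Manager server
service cloudera-scm-agent restart        # then the agent
netstat -nltp | grep 7432                 # confirm the embedded database is listening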
If /var/log was wrongly cleared, you can create directories such as /var/log/cloudera-scm-server and /var/log/cloudera-scm-agent manually.
It is worth noting that you should create them as the cloudera-scm user, otherwise the logs cannot be written and you won't be able to tell from the log files what error happened.
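For example (a sketch; the cloudera-scm user and group names are the packaged defaults and may differ on your installation):

mkdir -p /var/log/cloudera-scm-server /var/log/cloudera-scm-agent
chown -R cloudera-scm:cloudera-scm /var/log/cloudera-scm-server /var/log/cloudera-scm-agent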
I have stopped a virtual machine with CentOS running an instance of Context Broker. Upon relaunching the system with the enabler, the latter gives a Fatal Error. See the below log:
# contextBroker
INFO#13:18:32 contextBroker.cpp[1348]: Orion Context Broker is running
INFO#13:18:32 mongoGlobal.cpp[164]: Successful connection to database
INFO#13:18:32 contextBroker.cpp[1157]: Connected to mongo at localhost:orion
INFO#13:18:32 mongoGlobal.cpp[483]: Database Operation Successful ({ conditions.type: "ONTIMEINTERVAL" })
INFO#13:18:32 rest.cpp[901]: Fatal Error (error starting REST interface)
I'm working with version 4.1.2 of Orion on CentOS 6 running in VirtualBox. I'm running with su because otherwise I get a permission-denied error on a log file. For information, I enabled a bridged network connection just before the VM reboot.
Could it be that, because the broker was not shut down correctly, something is blocking its restart? (PS: yes, I know there is a nearly identical error message in the administration guide, but I don't see any solution there.)
Thank you!
EDIT: one solution that works is uninstalling the contextBroker package and installing it again. I wish there was a cleaner way!
EDIT: this problem reproduces every time I kill the contextBroker application; after that, restarts don't help, but reinstalling the package does.
Make sure there is no other instance of the broker running on the same port (ps aux | grep contextBroker).
If there is another instance of the broker running, then the port will be taken and the REST initialization will fail.
About running as root because of log-file permissions ... Why not simply change the owner of the log-file instead?
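For example (a sketch; the port and log path shown are assumptions based on common Orion defaults, so adjust them to the values reported in your error messages):

ps aux | grep [c]ontextBroker           # is another broker instance still running?
netstat -nltp | grep 1026               # assumed default Orion port; is it already in use?
chown youruser /tmp/contextBroker.log   # assumed default log path; use the file from your permission error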