systemctl status postgresql.service shows different output than expected - postgresql

The command does not return to the OS (Linux) prompt; I need to press Ctrl+C to get back.
The output is not what I expect (Active: activating (start) instead of Active: active (running)),
but the PostgreSQL service does start.
# systemctl status postgresql.service
● postgresql.service - PostgreSQL database server
Loaded: loaded (/etc/systemd/system/postgresql.service; enabled; vendor preset: disabled)
Active: activating (start) since Thu 2021-02-04 12:38:17 CET; 15min ago
Docs: man:postgres(1)
Main PID: 6688 (postgres)
CGroup: /system.slice/postgresql.service
├─6688 /opt/data/pgsql/10.15/bin/postgres -D /opt/data/postgres/data/10/data
├─6689 postgres: logger process
├─6696 postgres: checkpointer process
cat /etc/systemd/system/postgresql.service
[Unit]
Description=PostgreSQL database server
Documentation=man:postgres(1)
[Service]
Type=notify
User=postgres
ExecStart=/opt/data/pgsql/10.15/bin/postgres -D /opt/data/postgres/data/10/data
ExecReload=/bin/kill -HUP $MAINPID
KillMode=mixed
KillSignal=SIGINT
TimeoutSec=0
[Install]
WantedBy=multi-user.target
So can anyone please help me get the desired output, Active: active (running)?

I tried commenting out the Type=notify line and it works fine.
This was only needed for PostgreSQL 9.x.
With version 10 and above I have no problem.
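For reference, a minimal sketch of that change, based on the unit file from the question: with the Type= line commented out, systemd falls back to Type=simple and reports active (running) as soon as the postgres process has been forked off. Type=notify only reaches active (running) if the postgres binary was built with systemd support (configure --with-systemd); otherwise the readiness notification never arrives and the unit stays in activating (start).
[Service]
# Type=notify   # commented out: needs a postgres built with --with-systemd, otherwise the unit never leaves "activating"
User=postgres
ExecStart=/opt/data/pgsql/10.15/bin/postgres -D /opt/data/postgres/data/10/data
ExecReload=/bin/kill -HUP $MAINPID
KillMode=mixed
KillSignal=SIGINT
TimeoutSec=0
Apply it with sudo systemctl daemon-reload && sudo systemctl restart postgresql.service. The trade-off is that systemd no longer waits until PostgreSQL is ready to accept connections before declaring the unit started.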

Related

Postgres on Ubuntu, controlling the postgresql.conf location

I created a cluster as follows:
pg_createcluster -d /some_dir/pg_data/ -p 5432 \
--environment=/some_dir/pg_env.prod.conf \
--createclusterconf=/some_dir/pg.prod.conf \
13 apif
However, when the cluster is stood up, it has effectively copied pg.prod.conf into the default location (-c config_file=/etc/postgresql/13/apif/postgresql.conf) rather than referencing it. That means any changes I make to my file won't be seen. That's not what I wanted.
sudo systemctl status postgresql@13-apif
● postgresql@13-apif.service - PostgreSQL Cluster 13-apif
Loaded: loaded (/lib/systemd/system/postgresql@.service; indirect; vendor preset: enabled)
Active: active (running) since Wed 2021-12-15 21:15:25 UTC; 6s ago
Process: 6791 ExecStop=/usr/bin/pg_ctlcluster --skip-systemctl-redirect -m fast 13-apif stop (code=exited, status=2)
Process: 6656 ExecReload=/usr/bin/pg_ctlcluster --skip-systemctl-redirect 13-apif reload (code=exited, status=0/SUCCESS)
Process: 6868 ExecStart=/usr/bin/pg_ctlcluster --skip-systemctl-redirect 13-apif start (code=exited, status=0/SUCCESS)
Main PID: 6873 (postgres)
Tasks: 9 (limit: 4915)
CGroup: /system.slice/system-postgresql.slice/postgresql@13-apif.service
├─6873 /usr/lib/postgresql/13/bin/postgres -D /some_dir/apif -c config_file=/etc/postgresql/13/apif/postgresql.conf
├─6874 postgres: logger
├─6876 postgres: checkpointer
├─6877 postgres: background writer
├─6878 postgres: walwriter
├─6879 postgres: autovacuum launcher
├─6880 postgres: stats collector
├─6881 postgres: pg_cron launcher
└─6882 postgres: logical replication launcher
Dec 15 21:15:22 ip-172-33-2-19 systemd[1]: Starting PostgreSQL Cluster 13-apif...
Dec 15 21:15:25 ip-172-33-2-19 systemd[1]: Started PostgreSQL Cluster 13-apif.
...
How do I do this the way I want, so that the cluster is configured to reference a file of my choosing, permanently?
I tried to find an answer and couldn't.
But I did find an acceptable workaround. While this isn't quite what I was looking for, because it doesn't explicitly specify the location of the conf file, it does let me reference it in a replicable one-liner:
pg_createcluster -d /some_dir/pg_data/ -p 5432 \
--environment=/some_dir/pg_env.prod.conf \
--pgoption include_if_exists=/some_dir/pg.prod.conf \
13 apif
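A rough sketch of what this workaround does, assuming pg_createcluster writes --pgoption values verbatim into the generated config: /etc/postgresql/13/apif/postgresql.conf ends up with a line along the lines of include_if_exists = '/some_dir/pg.prod.conf', so whatever you keep in /some_dir/pg.prod.conf is read at startup and on reload. You can check this from the shell (paths here are the ones from the question):
# confirm the include made it into the generated config
grep include /etc/postgresql/13/apif/postgresql.conf

# confirm which files the running cluster actually reads settings from
sudo -u postgres psql -p 5432 -c "SELECT DISTINCT sourcefile FROM pg_settings WHERE sourcefile IS NOT NULL;"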

Mongod Process starts but Systemctl status shows failed

I have created a mongod configuration in /home/cluster1.conf with the same port I previously used for /etc/mongod.conf.
I used to run /etc/mongod.conf as a sudo user, but I expect to run cluster1.conf as a regular user.
When I start mongod with cluster1.conf as that user, the process starts. I can access it from the mongo shell and perform operations, but when I try to access it from another VM in the same VPC, it fails.
I checked systemctl status mongod and it shows the service is not running.
Why does systemctl status show it as not running when it really is running? How can I resolve this so I can create a replica set?
"when I run cluster1.conf as user, the process starts." does not make much sense. cluster1.conf is a configuration file, it does not start anything.
When you run systemctl status mongod, then it typically shows
● mongod.service - MongoDB Database Server
Loaded: loaded (/etc/systemd/system/mongod.service; enabled; vendor preset: disabled)
Active: active (running) since Fri 2021-08-27 09:44:26 CEST; 2 weeks 3 days ago
Docs: https://docs.mongodb.org/manual
Main PID: 38710 (mongod)
Tasks: 28
Memory: 270.8M
CGroup: /system.slice/mongod.service
└─38710 /usr/bin/mongod -f /etc/mongod.conf
Check the service file /etc/systemd/system/mongod.service; there you will see entries like
Environment="OPTIONS=-f /etc/mongod.conf"
EnvironmentFile=-/etc/sysconfig/mongod
ExecStart=/usr/bin/mongod $OPTIONS
which means the config file /etc/mongod.conf is used (rather than /home/cluster1.conf).
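If you want systemd to manage the second instance as well, a minimal sketch of a separate unit could look like the following. The unit name mongod-cluster1.service and the user name are assumptions for illustration; adjust paths, user, and ownership to your setup.
# /etc/systemd/system/mongod-cluster1.service (hypothetical example)
[Unit]
Description=MongoDB instance using /home/cluster1.conf
After=network.target

[Service]
User=clusteruser                      # assumption: the non-root user from the question
ExecStart=/usr/bin/mongod -f /home/cluster1.conf
Restart=on-failure

[Install]
WantedBy=multi-user.target
Reload and start it with sudo systemctl daemon-reload && sudo systemctl enable --now mongod-cluster1. Note that systemctl status mongod will still only report on the instance started from /etc/mongod.conf.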

Unable to start MongoDB as a service on Linux (Ubuntu) on DigitalOcean

When I try to start the MongoDB service using the "systemctl start mongodb" command, nothing happens.
But when I run the mongod command in the terminal, the server starts. So what should I do to start MongoDB as a service?
The output of the mongod and mongo commands is in this image:
https://i.stack.imgur.com/jsGZT.png
I also checked the status before running mongod, using the "systemctl status mongodb" command in the terminal; below is the output:
root@lc-1gb-blr1-01:~# systemctl status mongodb
● mongodb.service - High-performance, schema-free document-oriented database
Loaded: loaded (/etc/systemd/system/mongodb.service; enabled; vendor preset: enabled)
Active: failed (Result: exit-code) since Mon 2019-07-01 10:49:50 UTC; 3h 49min ago
Main PID: 8695 (code=exited, status=100)
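No answer was recorded here, but as a general diagnostic sketch (the paths and the mongodb service user are assumptions based on the stock Ubuntu packaging): an exit status of 100 from mongod usually means it hit an uncaught exception at startup, commonly a permissions problem on the data or log directory after mongod was run manually as root.
# see why the unit exited
journalctl -u mongodb -e --no-pager
tail -n 50 /var/log/mongodb/mongod.log        # assumption: default log path from mongod.conf

# if a manual root run left root-owned files behind, hand them back to the service user
sudo chown -R mongodb:mongodb /var/lib/mongodb /var/log/mongodb
sudo systemctl start mongodb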

postgresql could not start properly, exited with active (exited)

I installed PostgreSQL following a DigitalOcean guide, and at the end of the installation it printed the command below in the terminal:
/usr/lib/postgresql/10/bin/pg_ctl -D /var/lib/postgresql/10/main -l logfile start
I tried to run it with sudo as the root user and also after switching to the postgres user, but it gives me the error below:
waiting for server to start..../bin/sh: 1: cannot create logfile: Permission denied
 stopped waiting
pg_ctl: could not start server
but when I check the status it says
● postgresql.service - PostgreSQL RDBMS
Loaded: loaded (/lib/systemd/system/postgresql.service; enabled; vendor preset: enabled)
Active: active (exited) since Thu 2018-05-31 13:11:18 UTC; 56s ago
Main PID: 3698 (code=exited, status=0/SUCCESS)
Tasks: 0 (limit: 2362)
CGroup: /system.slice/postgresql.service
May 31 13:11:18 staging systemd[1]: Starting PostgreSQL RDBMS...
May 31 13:11:18 staging systemd[1]: Started PostgreSQL RDBMS.
The status is not running but exited. What does the above command do, and how can I run it? I haven't faced this in previous versions.
The idea is that you supply your actual log file instead of logfile, but I recommend that you configure logging properly in postgresql.conf and use pg_ctl without the -l option.
Set logging_collector to on.
Set log_filename to postgresql-%a.log.
Set log_rotation_size to 0.
Set log_truncate_on_rotation to on.
Then you'll get the log files in the log subdirectory of your PostgreSQL data directory, and they will be rotated on a weekly basis.
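Put together, the recommended postgresql.conf settings look like this (reload or restart PostgreSQL afterwards):
logging_collector = on
log_filename = 'postgresql-%a.log'      # one file per weekday, e.g. postgresql-Mon.log
log_rotation_size = 0                   # rotate only by time, never by size
log_truncate_on_rotation = on           # overwrite last week's file with the same name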

Converting chef upstart template to systemd

I have an upstart template in a Chef cookbook and want to convert it to systemd so that it is supported on Ubuntu 16.04.
I have already converted it, but I am facing an issue: my server is not starting properly.
Below is the upstart script:
#!upstart
description "Server nodejs"
start on (local-filesystems and net-device-up IFACE!=lo)
stop on [!12345]
console log
setuid root
setgid www-data
chdir /srv/
exec /usr/local/bin/node /srv/my_service/src/cli/index.js >>/var/log/my_service/my_service_nodejs.log 2>&1
My conversion to systemd is:
[Unit]
Description=Server nodejs
After=network.target
[Service]
User=root
Group=www-data
WorkingDirectory=/srv/
ExecStart=/usr/local/bin/node /srv/my_service/src/cli/index.js >>/var/log/my_service/my_service_nodejs.log 2>&1
[Install]
WantedBy=multi-user.target
The issue I am facing:
The Node.js server is not running.
my_nodejs.service - Server nodejs
Loaded: loaded (/etc/systemd/system/my_nodejs.service; enabled; vendor preset: enabled)
Active: failed (Result: exit-code) since Thu 2017-12-28 08:01:14 UTC; 6s ago
Main PID: 5842 (code=exited, status=64)
systemd[1]: my_nodejs.service: Main process exited, code=exited, status=64/n/a
systemd[1]: my_nodejs.service: Unit entered failed state.
systemd[1]: my_nodejs.service: Failed with result 'exit-code'.
Found the issue.
It is because of the >> I added to append to the log file. ExecStart= is not run through a shell, so systemd does not treat >> as a redirection operator; it is passed to the node process as a literal argument, and the service fails.
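A minimal sketch of a fixed unit, assuming the same paths as above: on systemd 240 or newer you can append to a file directly with StandardOutput=/StandardError=append:, while on Ubuntu 16.04 (systemd 229) you can either wrap the command in /bin/sh -c or simply drop the redirection and let journald capture the output.
[Unit]
Description=Server nodejs
After=network.target

[Service]
User=root
Group=www-data
WorkingDirectory=/srv/
# systemd >= 240:
#ExecStart=/usr/local/bin/node /srv/my_service/src/cli/index.js
#StandardOutput=append:/var/log/my_service/my_service_nodejs.log
#StandardError=append:/var/log/my_service/my_service_nodejs.log
# portable alternative for older systemd (e.g. Ubuntu 16.04):
ExecStart=/bin/sh -c 'exec /usr/local/bin/node /srv/my_service/src/cli/index.js >>/var/log/my_service/my_service_nodejs.log 2>&1'

[Install]
WantedBy=multi-user.target
After editing, run sudo systemctl daemon-reload && sudo systemctl restart my_nodejs.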