How to close a PostgreSQL process in GitLab?

[root@gitlab]# gitlab-ctl stop postgresql
timeout: run: postgresql: (pid 27881) 13974s
[root@gitlab]# kill -9 27881
[root@gitlab]# ps -aux | grep 27881
495      27881  0.0  0.0      0     0 ?   Zs   14:53   0:00 [postgres]
[root@gitlab]# /opt/gitlab/embedded/bin/pg_ctl status
pg_ctl: cannot be run as root
Please log in (using, e.g., "su") as the (unprivileged) user that will
own the server process.
[root@gitlab]# whoami
root
[root@gitlab]# /opt/gitlab/embedded/bin/psql
psql: could not connect to server: No such file or directory
        Is the server running locally and accepting
        connections on Unix domain socket "/tmp/.s.PGSQL.5432"?
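No answer was posted for this one, so the following is my reading of the output rather than the thread's. The Zs state in the ps listing means the postgres process is already a zombie: kill -9 has taken effect, and only the process-table entry remains until the parent (runit's runsv, which Omnibus GitLab uses to supervise its services) reaps it. Restarting the supervision tree usually clears the entry, e.g.:
[root@gitlab]# gitlab-ctl status postgresql
[root@gitlab]# systemctl restart gitlab-runsvdir    # assumes a systemd host
[root@gitlab]# gitlab-ctl start postgresql
The psql failure just means nothing is listening on /tmp/.s.PGSQL.5432; Omnibus normally puts its socket under /var/opt/gitlab/postgresql, so you would need -h to reach it even with the server up.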

Related

PostgreSQL on container throws an error chmod: /var/lib/postgresql/data: Operation not permitted

I am trying to run Postgres 12 in Docker and have created the files below. I do not understand where I made a mistake or what the PostgreSQL file-permission issue is.
Dockerfile:
FROM postgres:12.0-alpine
USER root
RUN chmod 0775 /var/lib/postgresql
RUN chown postgres /var/lib/postgresql
USER postgres
# RUN chmod 0775 /var/lib/postgresql
# RUN chown postgres /var/lib/postgresql
RUN ls -l /var/lib/postgresql
# RUN pgctl -D /usr/local/psql/data initdb &&\
RUN initdb -D /var/lib/postgresql/data &&\
echo "host all all 0.0.0.0/0 md5" >> /var/lib/postgresql/data/pg_hba.conf && \
echo "listen_addresses='*'" >> /var/lib/postgresql/data/postgresql.conf && \
pg_ctl start && \
psql -U postgres -tc "SELECT 1 FROM pg_database WHERE datname = 'test'" | grep -q 1 || psql -U postgres -c "CREATE DATABASE test" && \
psql --command "ALTER USER postgres WITH ENCRYPTED PASSWORD '123';"
# RUN initdb /var/lib/postgresql/data &&\
# echo "host all all 0.0.0.0/0 md5" >> /var/lib/postgresql/data/pg_hba.conf && \
# echo "listen_addresses='*'" >> /var/lib/postgresql/data/postgresql.conf && \
# pg_ctl start && \
# psql -U postgres -tc "SELECT 1 FROM pg_database WHERE datname = 'test'" | grep -q 1 || psql -U postgres -c "CREATE DATABASE test" && \
# psql --command "ALTER USER postgres WITH ENCRYPTED PASSWORD '123';"
# Add VOLUMEs to allow backup of config, logs and databases
VOLUME ["/etc/postgresql", "/var/log/postgresql", "/var/lib/postgresql"]
EXPOSE 5432
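Two side notes on this Dockerfile (my observations, not from the thread): a server started with pg_ctl start inside a RUN step only lives for the duration of that build step, so nothing is running when a container starts from the image; and the compose file below bind-mounts a host directory over /var/lib/postgresql/data, which shadows whatever initdb baked into the image layer.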
docker-compose.yml
version: "3.7"
services:
postgresql:
build:
context: .
dockerfile: ./../postgresql/Dockerfile
volumes:
- ../postgresql/db_data:/var/lib/postgresql/data
ports:
- 5432:5432
I ran the following command in a terminal:
docker-compose up
The following error occurs:
Building mydb_postgresql
Step 1/6 : FROM postgres:12-alpine
---> ecb176ff304a
Step 2/6 : USER postgres
---> Running in ee9fe8ed246f
Removing intermediate container ee9fe8ed246f
---> 8dfa002a5fab
Step 3/6 : RUN ls -l /var/lib/postgresql
---> Running in 08d98e1ea7b2
total 4
drwxrwxrwx 2 postgres postgres 4096 Mar 23 23:58 data
Removing intermediate container 08d98e1ea7b2
---> 548bb96ed8db
Step 4/6 : RUN initdb -D /var/lib/postgresql/data && echo "host all all 0.0.0.0/0 md5" >> /var/lib/postgresql/data/pg_hba.conf && echo "listen_addresses='*'" >> /var/lib/postgresql/data/postgresql.conf && pg_ctl start && psql -U postgres -tc "SELECT 1 FROM pg_database WHERE datname = 'mydb'" | grep -q 1 || psql -U postgres -c "CREATE DATABASE mydb" && psql --command "ALTER USER postgres WITH ENCRYPTED PASSWORD '123';"
---> Running in dbee0a2346df
The files belonging to this database system will be owned by user "postgres".
This user must also own the server process.
The database cluster will be initialized with locale "en_US.utf8".
The default database encoding has accordingly been set to "UTF8".
The default text search configuration will be set to "english".
Data page checksums are disabled.
fixing permissions on existing directory /var/lib/postgresql/data ... ok
creating subdirectories ... ok
selecting dynamic shared memory implementation ... posix
selecting default max_connections ... 100
selecting default shared_buffers ... 128MB
selecting default time zone ... UTC
creating configuration files ... ok
running bootstrap script ... ok
performing post-bootstrap initialization ... sh: locale: not found
2020-04-17 18:16:13.112 UTC [11] WARNING: no usable system locales were found
ok
syncing data to disk ... ok
Success. You can now start the database server using:
pg_ctl -D /var/lib/postgresql/data -l logfile start
initdb: warning: enabling "trust" authentication for local connections
You can change this by editing pg_hba.conf or using the option -A, or
--auth-local and --auth-host, the next time you run initdb.
waiting for server to start....2020-04-17 18:16:13.427 UTC [15] LOG: starting PostgreSQL 12.2 on x86_64-pc-linux-musl, compiled by gcc (Alpine 9.2.0) 9.2.0, 64-bit
2020-04-17 18:16:13.427 UTC [15] LOG: listening on IPv4 address "0.0.0.0", port 5432
2020-04-17 18:16:13.427 UTC [15] LOG: listening on IPv6 address "::", port 5432
2020-04-17 18:16:13.433 UTC [15] LOG: listening on Unix socket "/var/run/postgresql/.s.PGSQL.5432"
2020-04-17 18:16:13.447 UTC [16] LOG: database system was shut down at 2020-04-17 18:16:13 UTC
2020-04-17 18:16:13.451 UTC [15] LOG: database system is ready to accept connections
done
server started
CREATE DATABASE
ALTER ROLE
Removing intermediate container dbee0a2346df
---> 8fd449f2a9e2
Step 5/6 : VOLUME ["/etc/postgresql", "/var/log/postgresql", "/var/lib/postgresql"]
---> Running in 76f4b6ee235e
Removing intermediate container 76f4b6ee235e
---> 343b2ddca9d2
Step 6/6 : EXPOSE 5432
---> Running in 306d1fd18818
Removing intermediate container 306d1fd18818
---> aa732eb5b2b6
Successfully built aa732eb5b2b6
Successfully tagged services_mydb_postgresql:latest
WARNING: Image for service mydb_postgresql was built because it did not already exist. To rebuild this image you must use `docker-compose build` or `docker-compose up --build`.
Creating services_mydb_postgresql_1 ... done
Attaching to services_mydb_postgresql_1
mydb_postgresql_1 | chmod: /var/lib/postgresql/data: Operation not permitted
services_mydb_postgresql_1 exited with code 1
I don't know what is going on and why my process has stopped.
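No answer was posted in the thread. The chmod: /var/lib/postgresql/data: Operation not permitted line comes from the image's entrypoint: because of USER postgres it runs unprivileged, and it cannot change permissions on the bind-mounted host directory, which the postgres user does not own. A common fix (an assumption on my part, not from the thread) is to drop the custom initdb logic and let the stock image's entrypoint initialize the cluster on first start, driven by its documented environment variables:
# docker-compose.yml - sketch using the official image's first-run initialization
version: "3.7"
services:
  postgresql:
    image: postgres:12-alpine
    environment:
      POSTGRES_PASSWORD: "123"   # superuser password, replaces the ALTER USER step
      POSTGRES_DB: test          # created automatically on first start
    volumes:
      - ../postgresql/db_data:/var/lib/postgresql/data
    ports:
      - "5432:5432"
A named volume instead of the ../postgresql/db_data bind mount also sidesteps the ownership problem, since Docker creates it with the owner the image expects.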

Postgres No such interface 'org.freedesktop.DBus.Properties'

My Postgres database crashed after a restart; I have tried just about everything, including reinstalling Postgres. It will not start on Ubuntu 14.04:
$ systemctl status postgresql@9.6-main.service
Failed to issue method call: No such interface 'org.freedesktop.DBus.Properties' on object at path /org/freedesktop/systemd1/unit/postgresql_409_2e6_2dmain_2eservice
$ pg_lsclusters
Ver Cluster Port Status Owner    Data directory               Log file
9.6 main    5432 down   postgres /var/lib/postgresql/9.6/main /var/log/postgresql/postgresql-9.6-main.log
$ sudo service postgresql start
 * Starting PostgreSQL 9.6 database server
 * Failed to issue method call: Unit postgresql@9.6-main.service failed to
   load: No such file or directory. See system logs and 'systemctl status
   postgresql@9.6-main.service' for details.
$ ps uxa|grep dbus-daemon
message+ 751 0.0 0.0 40812 4064 ? Ss 18:39 0:03 dbus-daemon --system --fork
dominic 3058 0.0 0.0 40840 4252 ? Ss 18:40 0:02 dbus-daemon --fork --session --address=unix:abstract=/tmp/dbus-S1LhlCDwl2
dominic 3145 0.0 0.0 39400 3536 ? S 18:40 0:00 /bin/dbus-daemon --config-file=/etc/at-spi2/accessibility.conf --nofork --print-address 3
dominic 17462 0.0 0.0 15956 2244 pts/4 S+ 21:45 0:00 grep --color=auto dbus-daemon
Postgres log file is empty.
I had the same error after installing snap on Ubuntu 14.04. It installed some parts of systemd and broke the postgresql init script.
You need to add the parameter --skip-systemctl-redirect to the pg_ctlcluster calls in the file /usr/share/postgresql-common/init.d-functions.
The function you need to change:
do_ctl_all() {
    ...
    # --skip-systemctl-redirect fixes "No such interface 'org.freedesktop.DBus.Properties'"
    if [ "$1" = "stop" ] || [ "$1" = "restart" ]; then
        ERRMSG=$(pg_ctlcluster --skip-systemctl-redirect --force "$2" "$name" $1 2>&1)
    else
        ERRMSG=$(pg_ctlcluster --skip-systemctl-redirect "$2" "$name" $1 2>&1)
    fi
    ...
}
Ubuntu 14.04 had not yet switched to systemd. I highly recommend upgrading to 16.04 or, even better, 18.04.
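After patching init.d-functions, sudo service postgresql restart should behave again. The same flag also works when calling the cluster wrapper directly, which is a quick way to verify the fix (assuming the 9.6/main cluster shown by pg_lsclusters above):
sudo pg_ctlcluster --skip-systemctl-redirect 9.6 main start
pg_lsclusters    # 9.6/main should now report "online"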

Using Supervisord to manage mongos process

Background
I am trying to automate restarting the mongos process used in a MongoDB sharded setup in case of a crash or reboot.
Case 1: using a direct command, as the mongod user
supervisord config
[program:mongos_router]
command=/usr/bin/mongos -f /etc/mongos.conf --pidfilepath=/var/run/mongodb/mongos.pid
user=mongod
autostart=true
autorestart=true
startretries=10
Result
supervisord log
INFO spawned: 'mongos_router' with pid 19535
INFO exited: mongos_router (exit status 0; not expected)
INFO gave up: mongos_router entered FATAL state, too many start retries too quickly
mongodb log
2018-05-01T21:08:23.745+0000 I SHARDING [Balancer] balancer id: ip-address:27017 started
2018-05-01T21:08:23.745+0000 E NETWORK [mongosMain] listen(): bind() failed errno:98 Address already in use for socket: 0.0.0.0:27017
2018-05-01T21:08:23.745+0000 E NETWORK [mongosMain] addr already in use
2018-05-01T21:08:23.745+0000 I - [mongosMain] Invariant failure inShutdown() src/mongo/db/auth/user_cache_invalidator_job.cpp 114
2018-05-01T21:08:23.745+0000 I - [mongosMain]
***aborting after invariant() failure
2018-05-01T21:08:23.748+0000 F - [mongosMain] Got signal: 6 (Aborted).
The process is seen running, but if killed it does not restart automatically.
Case 2: using an init script
The slight change in this scenario is that some ulimit commands and the creation of the pid file must be done as root, and then the actual process should be started as the mongod user.
mongos script
start()
{
    # Make sure the default pidfile directory exists
    if [ ! -d $PID_PATH ]; then
        install -d -m 0755 -o $MONGO_USER -g $MONGO_GROUP $PIDDIR
    fi

    # Make sure the pidfile does not exist
    if [ -f $PID_FILE ]; then
        echo "Error starting mongos. $PID_FILE exists."
        RETVAL=1
        return
    fi

    ulimit -f unlimited
    ulimit -t unlimited
    ulimit -v unlimited
    ulimit -n 64000
    ulimit -m unlimited
    ulimit -u 64000
    ulimit -l unlimited

    echo -n $"Starting mongos: "
    #daemon --user "$MONGO_USER" --pidfile $PID_FILE $MONGO_BIN $OPTIONS --pidfilepath=$PID_FILE
    #su $MONGO_USER -c "$MONGO_BIN -f $CONFIGFILE --pidfilepath=$PID_FILE >> /home/mav/startup_log"
    su - mongod -c "/usr/bin/mongos -f /etc/mongos.conf --pidfilepath=/var/run/mongodb/mongos.pid"
    RETVAL=$?
    echo -n "Return value : "$RETVAL
    echo
    [ $RETVAL -eq 0 ] && touch $MONGO_LOCK_FILE
}
The commented-out daemon line represents the original script, but daemonizing under supervisord is not logical, so I use su to run the process in the foreground(?).
supervisord config
[program:mongos_router_script]
command=/etc/init.d/mongos start
user=root
autostart=true
autorestart=true
startretries=10
Result
supervisord log
INFO spawned: 'mongos_router_script' with pid 20367
INFO exited: mongos_router_script (exit status 1; not expected)
INFO gave up: mongos_router_script entered FATAL state, too many start retries too quickly
mongodb log
Nothing indicating an error; normal logs.
The process is seen running, but if killed it does not restart automatically.
Problem
How do I correctly configure the script / no-script option for running mongos under supervisord?
EDIT 1
Modified command:
sudo su -c "/usr/bin/mongos -f /etc/mongos.conf --pidfilepath=/var/run/mongodb/mongos.pid" -s /bin/bash mongod
This works when run individually on the command line, as well as from the script, but not with supervisord.
EDIT 2
Added the following option to the mongos config file to force it to run in the foreground:
processManagement:
  fork: false   # fork and run in background
Now the command line and the script properly run it in the foreground, but supervisord fails to launch it. At the same time, three processes show up when it is run from the command line or the script:
root sudo su -c /usr/bin/mongos -f /etc/mongos.conf --pidfilepath=/var/run/mongodb/mongos.pid -s /bin/bash mongod
root su -c /usr/bin/mongos -f /etc/mongos.conf --pidfilepath=/var/run/mongodb/mongos.pid -s /bin/bash mongod
mongod /usr/bin/mongos -f /etc/mongos.conf --pidfilepath=/var/run/mongodb/mongos.pid
EDIT 3
With the following supervisord config things are working fine, but I would still like to execute the script if possible, in order to set ulimits:
[program:mongos_router]
command=/usr/bin/mongos -f /etc/mongos.conf --pidfilepath=/var/run/mongodb/mongos.pid
user=mongod
autostart=true
autorestart=true
startretries=10
numprocs=1
For mongos to run in the foreground, set the following option:
# how the process runs
processManagement:
  fork: false   # fork and run in background
With that and the above supervisord.conf settings, mongos is launched and stays under supervisord's control.
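Supervisord can only manage a child that stays in the foreground, which is why Case 2 ends in FATAL: /etc/init.d/mongos start forks mongos off and exits, so supervisord sees its direct child terminate. If the init script exists only for the ulimit setup, one option (a sketch on my part, not something from the thread) is to raise the limits in a wrapper shell and then exec mongos, so the real process remains supervisord's direct child:
[program:mongos_router]
; exec replaces the wrapper shell with mongos, so supervisord's signals
; reach mongos directly rather than an intermediate shell
command=/bin/bash -c "ulimit -n 64000 && ulimit -u 64000 && exec /usr/bin/mongos -f /etc/mongos.conf --pidfilepath=/var/run/mongodb/mongos.pid"
user=mongod
autostart=true
autorestart=true
startretries=10
This only works if the hard limits for the mongod user allow those values (see /etc/security/limits.conf).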

Celery doesn't restart subprocesses

I have an issue with my Celery deployment: when I restart it, old subprocesses don't stop and keep processing some of the jobs. I use supervisord to run Celery. Here is my config:
$ cat /etc/supervisor/conf.d/celery.conf
[program:celery]
; Full path to use virtualenv, honcho to load .env
command=/home/ubuntu/venv/bin/honcho run celery -A stargeo worker -l info --no-color
directory=/home/ubuntu/app
environment=PATH="/home/ubuntu/venv/bin:%(ENV_PATH)s"
user=ubuntu
numprocs=1
stdout_logfile=/home/ubuntu/logs/celery.log
stderr_logfile=/home/ubuntu/logs/celery.err
autostart=true
autorestart=true
startsecs=10
; Need to wait for currently executing tasks to finish at shutdown.
; Increase this if you have very long running tasks.
stopwaitsecs = 600
; When resorting to send SIGKILL to the program to terminate it
; send SIGKILL to its whole process group instead,
; taking care of its children as well.
killasgroup=true
; if rabbitmq is supervised, set its priority higher
; so it starts first
priority=998
Here is how the celery processes look:
$ ps axwu | grep celery
ubuntu 983 0.0 0.1 47692 10064 ? S 11:47 0:00 /home/ubuntu/venv/bin/python /home/ubuntu/venv/bin/honcho run celery -A stargeo worker -l info --no-color
ubuntu 984 0.0 0.0 4440 652 ? S 11:47 0:00 /bin/sh -c celery -A stargeo worker -l info --no-color
ubuntu 985 0.0 0.5 168720 41356 ? S 11:47 0:01 /home/ubuntu/venv/bin/python /home/ubuntu/venv/bin/celery -A stargeo worker -l info --no-color
ubuntu 990 0.0 0.4 167936 36648 ? S 11:47 0:00 /home/ubuntu/venv/bin/python /home/ubuntu/venv/bin/celery -A stargeo worker -l info --no-color
ubuntu 991 0.0 0.4 167936 36648 ? S 11:47 0:00 /home/ubuntu/venv/bin/python /home/ubuntu/venv/bin/celery -A stargeo worker -l info --no-color
When I run sudo supervisorctl restart celery it only stops the first process (the python ... honcho one) and all the others keep running. If I try to kill them they survive a plain kill (kill -9 works).
This appeared to be a bug in honcho. I ended up with a workaround: starting this script from supervisord instead:
#!/bin/bash
source /home/ubuntu/venv/bin/activate
exec env $(cat .env | grep -v ^# | xargs) \
    celery -A stargeo worker -l info --no-color
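The exec is the important part of that workaround (my explanation, not the poster's): honcho launched the worker through /bin/sh -c, as the ps listing above shows, so supervisord's signals stopped at the intermediate shell and never reached the worker pool. With exec, the celery process replaces the shell and receives signals directly, and killasgroup=true then takes the pool children down with it.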

memcached restart starts a new memcached and doesn't kill the old one

I'm running my Rails app in production mode and in staging mode on the same server, in different folders. Both use memcache-client, which requires memcached to be running.
As yet I haven't set up a deploy script, so I deploy manually: sshing onto the server, going to the appropriate directory, updating the code, restarting memcached and then restarting unicorn (the processes which actually run the Rails app). I restart memcached thus:
sudo /etc/init.d/memcached restart &
This starts a new memcached, but it doesn't kill the old one: check it out:
ip-<an-ip>:test.millionaire[subjects]$ ps afx | grep memcache
11176 pts/2 S+ 0:00 | \_ grep --color=auto memcache
10939 pts/3 R 8:13 \_ sudo /etc/init.d/memcached restart
7453 ? Sl 0:00 /usr/bin/memcached -m 64 -p 11211 -u nobody -l 127.0.0.1
ip-<an-ip>:test.millionaire[subjects]$ sudo /etc/init.d/memcached restart &
[1] 11187
ip-<an-ip>:test.millionaire[subjects]$ ps afx | grep memcache
11187 pts/2 T 0:00 | \_ sudo /etc/init.d/memcached restart
11199 pts/2 S+ 0:00 | \_ grep --color=auto memcache
10939 pts/3 R 8:36 \_ sudo /etc/init.d/memcached restart
7453 ? Sl 0:00 /usr/bin/memcached -m 64 -p 11211 -u nobody -l 127.0.0.1
[1]+ Stopped sudo /etc/init.d/memcached restart
ip-<an-ip>:test.millionaire[subjects]$ sudo /etc/init.d/memcached restart &
[2] 11208
ip-<an-ip>:test.millionaire[subjects]$ ps afx | grep memcache
11187 pts/2 T 0:00 | \_ sudo /etc/init.d/memcached restart
11208 pts/2 R 0:01 | \_ sudo /etc/init.d/memcached restart
11218 pts/2 S+ 0:00 | \_ grep --color=auto memcache
10939 pts/3 R 8:42 \_ sudo /etc/init.d/memcached restart
7453 ? Sl 0:00 /usr/bin/memcached -m 64 -p 11211 -u nobody -l 127.0.0.1
What might be causing it is that there's another memcached running - see the bottom line. I'm mystified as to where this came from; my instinct is to kill it, but I thought I'd better check with someone who actually knows more about memcached than I do.
Grateful for any advice - max
EDIT - solution
I figured this out after a bit of detective work with a colleague. In the Rails console I typed CACHE.stats, which prints out a hash of values including "pid"; I could see it was set to the instance of memcached that wasn't started by memcached restart, i.e. this process:
7453 ?        Sl     0:00 /usr/bin/memcached -m 64 -p 11211 -u nobody -l 127.0.0.1
The memcached control script (i.e. the one that defines the start, stop and restart commands) is in /etc/init.d/memcached. A line in it says:
# Edit /etc/default/memcached to change this.
ENABLE_MEMCACHED=no
So I looked in /etc/default/memcached, which was also set to ENABLE_MEMCACHED=no.
This was basically preventing memcached from being stopped and started. I changed it to ENABLE_MEMCACHED=yes, after which it would stop and start fine. Now when I stop and start memcached, it's the above process, the in-use memcached, that's stopped and started.
Try using:
killall memcached
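To confirm which instance the init script now controls, you can ask the running server for its pid over the memcached text protocol (a quick check; assumes a netcat that supports -q and the default 127.0.0.1:11211 listener):
echo stats | nc -q 1 127.0.0.1 11211 | grep pid    # note the pid
sudo /etc/init.d/memcached restart
echo stats | nc -q 1 127.0.0.1 11211 | grep pid    # the pid should have changed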