Correct way to start mysqld_safe on CentOS

I've been searching around a lot but could not figure out how to start mysqld in "safe mode".
This is what I got so far:
[root@localhost bin]# service mysqld_safe start
mysqld_safe: unrecognized service
I'm running CentOS; this is my MySQL version:
[root@localhost ~]# mysql --version
mysql Ver 14.12 Distrib 5.0.95, for redhat-linux-gnu (i686) using readline 5.1
Any help would be appreciated!

Starting mysqld should do the trick:
[root@green-penny ~]# service mysqld start
Starting mysqld: [ OK ]
[root@green-penny ~]# ps axu | grep mysql
root 7540 0.8 0.0 5112 1380 pts/0 S 09:29 0:00 /bin/sh /usr/bin/mysqld_safe --datadir=/var/lib/mysql --socket=/var/lib/mysql/mysql.sock --pid-file=/var/run/mysqld/mysqld.pid --basedir=/usr --user=mysql
mysql 7642 1.5 0.7 135480 15344 pts/0 Sl 09:29 0:00 /usr/libexec/mysqld --basedir=/usr --datadir=/var/lib/mysql --user=mysql --log-error=/var/log/mysqld.log --pid-file=/var/run/mysqld/mysqld.pid --socket=/var/lib/mysql/mysql.sock
root 7660 0.0 0.0 4352 724 pts/0 S+ 09:29 0:00 grep mysql
(Note that mysqld_safe is running.)
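If you want to double-check that the wrapper is actually in use, something like this should do (assuming the stock CentOS init script and a root password being set):
service mysqld start             # the init script launches mysqld_safe for you
pgrep -lf mysqld_safe            # confirm the wrapper process is running
mysqladmin -u root -p status     # confirm the server itself responds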

Related

How to run a process in daemon mode with systemd service?

I've googled and read quite a few blogs and posts on this, and I've also been trying things out manually on my EC2 instance. However, I'm still not able to properly configure the systemd service unit so that it runs the process in the background as I expect. The process I'm running is the Nessus agent service. Here's my service unit definition:
$ cat /etc/systemd/system/nessusagent.service
[Unit]
Description=Nessus
[Service]
ExecStart=/opt/myorg/bin/init_nessus
Type=simple
[Install]
WantedBy=multi-user.target
and here is my script /opt/myorg/bin/init_nessus:
$ cat /opt/myorg/bin/init_nessus
#!/usr/bin/env bash
set -e
NESSUS_MANAGER_HOST=...
NESSUS_MANAGER_PORT=...
NESSUS_CLIENT_GROUP=...
NESSUS_LINKING_KEY=...
#-------------------------------------------------------------------------------
# link nessus agent with manager host
#-------------------------------------------------------------------------------
/opt/nessus_agent/sbin/nessuscli agent link --key=${NESSUS_LINKING_KEY} --host=${NESSUS_MANAGER_HOST} --port=${NESSUS_MANAGER_PORT} --groups=${NESSUS_CLIENT_GROUP}
if [ $? -ne 0 ]; then
echo "Cannot link the agent to the Nessus manager, quitting."
exit 1
fi
/opt/nessus_agent/sbin/nessus-service -q -D
When I run the service, I always get the following:
$ systemctl status nessusagent.service
● nessusagent.service - Nessus
Loaded: loaded (/etc/systemd/system/nessusagent.service; enabled; vendor preset: enabled)
Active: inactive (dead) since Mon 2020-08-24 06:40:40 UTC; 9min ago
Process: 27787 ExecStart=/opt/myorg/bin/init_nessus (code=exited, status=0/SUCCESS)
Main PID: 27787 (code=exited, status=0/SUCCESS)
...
Aug 24 06:40:40 ip-10-27-0-104 init_nessus[27787]: + /opt/nessus_agent/sbin/nessuscli agent link --key=... --host=... --port=8834 --groups=...
Aug 24 06:40:40 ip-10-27-0-104 init_nessus[27787]: [info] [agent] HostTag::getUnix: setting TAG value to '8596420322084e3ab97d3c39e5c92e00'
Aug 24 06:40:40 ip-10-27-0-104 init_nessus[27787]: [info] [agent] Successfully linked to <myorg.com>:8834
Aug 24 06:40:40 ip-10-27-0-104 init_nessus[27787]: + '[' 0 -ne 0 ']'
Aug 24 06:40:40 ip-10-27-0-104 init_nessus[28506]: + /opt/nessus_agent/sbin/nessus-service -q -D
However, I can't see the process that I expect to see:
$ ps faux | grep nessus
root 28565 0.0 0.0 12940 936 pts/0 S+ 06:54 0:00 \_ grep --color=auto nessus
If I run the last command manually, I can see it:
$ /opt/nessus_agent/sbin/nessus-service -q -D
$ ps faux | grep nessus
root 28959 0.0 0.0 12940 1016 pts/0 S+ 07:00 0:00 \_ grep --color=auto nessus
root 28952 0.0 0.0 6536 116 ? S 07:00 0:00 /opt/nessus_agent/sbin/nessus-service -q -D
root 28953 0.2 0.0 69440 9996 pts/0 Sl 07:00 0:00 \_ nessusd -q
What is it that I'm missing here?
Eventually I figured out that this was caused by the extra -D option in the last command. Removing the -D option fixed the issue: running the process in daemon mode inside a service manager is not the way to go. We need to run it in the foreground and let the service manager handle it.
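A minimal sketch of the fix, assuming (as the log above suggests) that nessus-service stays in the foreground once -D is dropped - the last line of the init script becomes:
exec /opt/nessus_agent/sbin/nessus-service -q
The exec is optional; it simply replaces the wrapper shell so that systemd tracks nessus-service as the main process.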

Why does the host's process list contain a Kubernetes pod's process?

When I list the processes on the host using this command:
[root@fat001 ~]# ps -o user,pid,pidns,%cpu,%mem,vsz,rss,tty,stat,start,time,args ax|grep "room"
root 3488 4026531836 0.0 0.0 107992 644 pts/11 S+ 20:06:01 00:00:00 tail -n 200 -f /data/logs/soa-room/spring.log
root 18114 4026534329 8.5 2.2 5721560 370032 ? Sl 23:17:51 00:01:53 java -jar /root/soa-room-service-1.0.0-SNAPSHOT.jar
root 19107 4026531836 0.0 0.0 107992 616 pts/8 S+ 19:14:10 00:00:00 tail -f -n 200 /data/logs/soa-room/spring.log
root 23264 4026531836 0.0 0.0 112684 1000 pts/13 S+ 23:39:57 00:00:00 grep --color=auto room
root 30416 4026531836 3.4 3.4 4122552 567232 ? Sl 19:52:03 00:07:53 /opt/dabai/tools/jdk1.8.0_211/bin/java -Xmx256M -Xms128M -jar -Xdebug -Xrunjdwp:transport=dt_socket,suspend=n,server=y,address=5011 -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/data/dump /data/jenkins/soa-room-service/soa-room-service-1.0.0-SNAPSHOT.jar
I am quite sure this process belongs to a Kubernetes pod:
root 18114 4026534329 8.5 2.2 5721560 370032 ? Sl 23:17:51 00:01:53 java -jar /root/soa-room-service-1.0.0-SNAPSHOT.jar
Why does the Kubernetes container's process show up on the host? It should be inside the Docker container!
This is perfectly normal. Containers are not VMs.
Every process run by Docker runs on the host kernel; there is no isolation at the kernel level.
There is, of course, isolation between containers in terms of processes, as each container's processes run in an isolated PID namespace.
In summary: container A can't see container B's processes (not by default, anyway), but since all container processes run on your host, you will always be able to see them from the host.
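If you ever want to map such a host PID back to its container, something along these lines usually works (assuming Docker is the runtime; 18114 is the PID from the question):
cat /proc/18114/cgroup    # the cgroup path normally embeds the container ID
docker ps -q | xargs docker inspect --format '{{.State.Pid}} {{.Name}}' | grep 18114    # matches if 18114 is a container's main process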

Postgres No such interface 'org.freedesktop.DBus.Properties'

The Postgres database crashed after a restart, and I have tried just about everything, including reinstalling Postgres. It will not start on Ubuntu 14.04.
$ systemctl status postgresql@9.6-main.service
Failed to issue method call: No such interface 'org.freedesktop.DBus.Properties' on object at path /org/freedesktop/systemd1/unit/postgresql_409_2e6_2dmain_2eservice
$ pg_lsclusters
Ver Cluster Port Status Owner Data directory Log file
9.6 main 5432 down postgres /var/lib/postgresql/9.6/main /var/log/postgresql/postgresql-9.6-main.log
$ sudo service postgresql start
* Starting PostgreSQL 9.6 database server
* Failed to issue method call: Unit postgresql@9.6-main.service failed to
load: No such file or directory. See system logs and 'systemctl status
postgresql@9.6-main.service' for details.
$ ps uxa|grep dbus-daemon
message+ 751 0.0 0.0 40812 4064 ? Ss 18:39 0:03 dbus-daemon --system --fork
dominic 3058 0.0 0.0 40840 4252 ? Ss 18:40 0:02 dbus-daemon --fork --session --address=unix:abstract=/tmp/dbus-S1LhlCDwl2
dominic 3145 0.0 0.0 39400 3536 ? S 18:40 0:00 /bin/dbus-daemon --config-file=/etc/at-spi2/accessibility.conf --nofork --print-address 3
dominic 17462 0.0 0.0 15956 2244 pts/4 S+ 21:45 0:00 grep --color=auto dbus-daemon
The Postgres log file is empty.
I had the same error after installing snap on Ubuntu 14.04. It installed some parts of systemd and broke the PostgreSQL init script.
You need to add the --skip-systemctl-redirect parameter to the pg_ctlcluster calls in the file /usr/share/postgresql-common/init.d-functions.
The function you need to change:
do_ctl_all() {
    ...
    # --skip-systemctl-redirect fixes the "No such interface 'org.freedesktop.DBus.Properties'" error
    if [ "$1" = "stop" ] || [ "$1" = "restart" ]; then
        ERRMSG=$(pg_ctlcluster --skip-systemctl-redirect --force "$2" "$name" $1 2>&1)
    else
        ERRMSG=$(pg_ctlcluster --skip-systemctl-redirect "$2" "$name" $1 2>&1)
    fi
    ...
}
Ubuntu 14.04 has not switched to systemd yet. I highly recommend upgrading to 16.04 or, even better, 18.04.
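A quick way to verify the fix afterwards (the expected output is an assumption based on the pg_lsclusters listing above):
sudo service postgresql restart
pg_lsclusters    # the 9.6/main cluster should now show "online" instead of "down"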

supervisorctl 3.3.1 not working, complaining about not finding .conf file

root@dev-demo-karl:~# supervisord -v
3.3.1
I'm getting the following error when trying to access supervisorctl:
Error: .ini file does not include supervisorctl section
For help, use /usr/bin/supervisorctl -h
Supervisor is not using the configuration file:
root@dev-demo-karl:/srv/www# /usr/bin/supervisorctl
Error: .ini file does not include supervisorctl section
For help, use /usr/bin/supervisorctl -h
root@dev-demo-karl:/srv/www# cd /etc/
root@dev-demo-karl:/etc# cat supervisor
supervisor/ supervisord/ supervisord.conf
root@dev-demo-karl:/etc# ls supervisord/conf.d
supervisord.conf
root@dev-demo-karl:/etc# ls supervisor/conf.d
supervisord.conf
root@dev-demo-karl:/etc# ls supervisord
conf.d supervisord.conf
root@dev-demo-karl:/etc# ls supervisor
conf.d supervisord.conf
All the supervisord.conf files have the following:
root@dev-demo-karl:/etc# cat supervisord.conf
[supervisord]
nodaemon=true
[program:node]
directory=/srv/www
command=npm run demo
autostart=true
autorestart=true
[program:mongod]
command=/usr/bin/mongod --auth --fork --smallfiles --logpath /var/log/mongodb.log
I KNOW supervisord is finding one of them because the services are up:
root@dev-demo-karl:~# ps aux
USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND
root 1 0.0 0.8 47624 17744 ? Ss 21:03 0:00 /usr/bin/python /usr/bin/supervisord
root 8 0.1 2.4 1003400 49580 ? Sl 21:03 0:00 npm
root 16 0.6 2.3 295224 48192 ? Sl 21:03 0:03 /usr/bin/mongod --auth --fork --smallfiles --logpath /var/log/mongodb.log
root 40 0.0 0.0 4512 844 ? S 21:03 0:00 sh -c npm run prod
root 41 0.1 2.4 1003412 49584 ? Sl 21:03 0:00 npm
root 52 0.0 0.0 4512 712 ? S 21:03 0:00 sh -c NODE_ENV=production NODE_PATH="$(pwd)" node src/index.js
root 54 0.4 8.1 1068568 166080 ? Sl 21:03 0:02 node src/index.js
root 79 0.0 0.1 18240 3248 ? Ss+ 21:04 0:00 bash
root 238 0.0 0.1 18248 3248 ? Ss 21:06 0:00 bash
root 501 0.0 0.1 34424 2884 ? R+ 21:12 0:00 ps aux
Why isn't supervisorctl working?
And last:
root@dev-demo-karl:~# cat /etc/supervisord.conf
[supervisord]
nodaemon=true
[program:node]
directory=/srv/www
command=npm run demo
autostart=true
autorestart=true
[program:mongod]
command=/usr/bin/mongod --auth --fork --smallfiles --logpath /var/log/mongodb.log
root@dev-demo-karl:~# supervisorctl -c /etc/supervisord.conf
Error: .ini file does not include supervisorctl section
For help, use /usr/bin/supervisorctl -h
Wth? Anyone know what I'm doing wrong? I'm starting it in a Docker container with:
CMD ["/usr/bin/supervisord"]
First, how to start supervisord:
# Start server
supervisord -c /path/to/supervisor.conf
# Then use client
supervisorctl -c /path/to/supervisor.conf status
Second, a typical configuration (works for me with supervisord -v 4.1.0):
[inet_http_server]
port = 127.0.0.1:9001
[supervisorctl]
serverurl = http://127.0.0.1:9001
[program:<your_program_name>]
or
[unix_http_server]
file=/tmp/supervisor.sock
[supervisord]
nodaemon=true
logfile=/var/log/supervisor/supervisord.log
pidfile=/var/run/supervisord.pid
childlogdir=/var/log/supervisor
[rpcinterface:supervisor]
supervisor.rpcinterface_factory = supervisor.rpcinterface:make_main_rpcinterface
[supervisorctl]
serverurl=unix:///tmp/supervisor.sock
[program:<your_program_name>]
...
P.S. I've answered this post because it is the first result in a Google search.
Supervisor 3.3.1 added a few more required sections; see http://supervisord.org/configuration.html
I added them and the configuration is now read!
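For illustration, a rough sketch of the asker's config with those sections added (the socket path here is an assumption; the program blocks are unchanged from the question):
[unix_http_server]
file=/var/run/supervisor.sock

[supervisord]
nodaemon=true

[rpcinterface:supervisor]
supervisor.rpcinterface_factory = supervisor.rpcinterface:make_main_rpcinterface

[supervisorctl]
serverurl=unix:///var/run/supervisor.sock

[program:node]
directory=/srv/www
command=npm run demo
autostart=true
autorestart=true

[program:mongod]
command=/usr/bin/mongod --auth --fork --smallfiles --logpath /var/log/mongodb.log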

memcached restart starts a new memcached and doesn't kill the old one

I'm running my rails app in production mode and in staging mode on the same server, in different folders. They both use memcache-client which requires memcached to be running.
As yet I haven't set up a deploy script, so I just deploy manually by sshing onto the server, going to the appropriate directory, updating the code, restarting memcached and then restarting Unicorn (the processes which actually run the Rails app). I restart memcached like this:
sudo /etc/init.d/memcached restart &
This starts a new memcached, but it doesn't kill the old one: check it out:
ip-<an-ip>:test.millionaire[subjects]$ ps afx | grep memcache
11176 pts/2 S+ 0:00 | \_ grep --color=auto memcache
10939 pts/3 R 8:13 \_ sudo /etc/init.d/memcached restart
7453 ? Sl 0:00 /usr/bin/memcached -m 64 -p 11211 -u nobody -l 127.0.0.1
ip-<an-ip>:test.millionaire[subjects]$ sudo /etc/init.d/memcached restart &
[1] 11187
ip-<an-ip>:test.millionaire[subjects]$ ps afx | grep memcache
11187 pts/2 T 0:00 | \_ sudo /etc/init.d/memcached restart
11199 pts/2 S+ 0:00 | \_ grep --color=auto memcache
10939 pts/3 R 8:36 \_ sudo /etc/init.d/memcached restart
7453 ? Sl 0:00 /usr/bin/memcached -m 64 -p 11211 -u nobody -l 127.0.0.1
[1]+ Stopped sudo /etc/init.d/memcached restart
ip-<an-ip>:test.millionaire[subjects]$ sudo /etc/init.d/memcached restart &
[2] 11208
ip-<an-ip>:test.millionaire[subjects]$ ps afx | grep memcache
11187 pts/2 T 0:00 | \_ sudo /etc/init.d/memcached restart
11208 pts/2 R 0:01 | \_ sudo /etc/init.d/memcached restart
11218 pts/2 S+ 0:00 | \_ grep --color=auto memcache
10939 pts/3 R 8:42 \_ sudo /etc/init.d/memcached restart
7453 ? Sl 0:00 /usr/bin/memcached -m 64 -p 11211 -u nobody -l 127.0.0.1
What might be causing it is that there's another memcached running - see the bottom line. I'm mystified as to where this came from, and my instinct is to kill it, but I thought I'd better check with someone who actually knows more about memcached than I do.
Grateful for any advice - max
EDIT - solution
I figured this out after a bit of detective work with a colleague. In the Rails console I typed CACHE.stats, which prints out a hash of values, including "pid", which I could see was set to the instance of memcached that wasn't started by memcached restart, i.e. this process:
7453 ? Sl 0:00 /usr/bin/memcached -m 64 -p 11211 -u nobody -l 127.0.0.1
The memcached control script (i.e. the one that defines the start, stop and restart commands) is in /etc/init.d/memcached.
A line in it says:
# Edit /etc/default/memcached to change this.
ENABLE_MEMCACHED=no
So I looked in /etc/default/memcached, which was also set to ENABLE_MEMCACHED=no.
This was basically preventing memcached from being stopped and started. I changed it to ENABLE_MEMCACHED=yes, and then it would stop and start fine. Now when I stop and start memcached, it's the above process - the in-use memcached - that's stopped and started.
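In command form, the fix was roughly this (assuming the Debian/Ubuntu-style init script shown above; sed is just one way to make the edit):
sudo sed -i 's/^ENABLE_MEMCACHED=no/ENABLE_MEMCACHED=yes/' /etc/default/memcached
sudo /etc/init.d/memcached restart
ps afx | grep '[m]emcached'    # only the freshly restarted memcached should remain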
Try using:
killall memcached