I'm having problems running Sphinx search on my Debian Wheezy server.
Currently, searchd is listening on two ports:
root@ns243216:~# netstat -tlpn | grep search
tcp 0 0 0.0.0.0:9306 0.0.0.0:* LISTEN 11266/searchd
tcp 0 0 0.0.0.0:9312 0.0.0.0:* LISTEN 11266/searchd
First Problem
When I execute this:
sudo /usr/bin/indexer -c /etc/sphinxsearch/sphinx.conf beta_jobs --rotate
it gives me this:
Sphinx 2.2.10-id64-release (2c212e0)
Copyright (c) 2001-2015, Andrew Aksyonoff
Copyright (c) 2008-2015, Sphinx Technologies Inc (http://sphinxsearch.com)
using config file '/etc/sphinxsearch/sphinx.conf'...
indexing index 'beta_jobs'...
collected 6 docs, 0.0 MB
collected 0 attr values
sorted 0.0 Mvalues, 100.0% done
sorted 0.0 Mhits, 100.0% done
total 6 docs, 867 bytes
total 0.046 sec, 18747 bytes/sec, 129.73 docs/sec
total 6 reads, 0.000 sec, 0.4 kb/call avg, 0.0 msec/call avg
total 12 writes, 0.000 sec, 0.9 kb/call avg, 0.0 msec/call avg
WARNING: failed to scanf pid from pid_file '/usr/local/sphinx/var/log/searchd/searchd.pid'.
WARNING: indices NOT rotated.
These are two warnings I can't get rid of...
Second Problem: when I try to stop searchd with searchd --stop, it tells me this:
Sphinx 2.2.10-id64-release (2c212e0)
Copyright (c) 2001-2015, Andrew Aksyonoff
Copyright (c) 2008-2015, Sphinx Technologies Inc (http://sphinxsearch.com)
using config file '/etc/sphinxsearch/sphinx.conf'...
FATAL: stop: failed to read valid pid from '/usr/local/sphinx/var/log/searchd/searchd.pid'
I tried chmod 755 on everything inside /usr/local/sphinx/var/log/searchd/, but it still doesn't work.
My sphinx.conf is here: Sphinx.conf on gist
EDIT (answer to @aeryaguzov's comment)
root@ns213646:~# sudo cat /usr/local/sphinx/var/log/searchd/searchd.pid
root@ns213646:~# ps aux | grep searchd
root 11265 0.0 0.0 79692 1228 ? S Nov30 0:00 /usr/bin/searchd
root 11266 0.1 0.0 91404 4696 ? Sl Nov30 26:54 /usr/bin/searchd
root 22783 0.0 0.0 8292 632 pts/1 S+ 15:32 0:00 grep searchd
Okay, it appears that for some unknown reason the searchd.pid file was badly created by searchd (which was still running). So I deleted the searchd.pid file and killed searchd, then re-indexed and started searchd again with no problems.
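For reference, the recovery steps were roughly the following; a minimal sketch assuming the paths shown above (re-indexing with --all and the explicit searchd start are illustrative, adjust to your setup):
sudo rm /usr/local/sphinx/var/log/searchd/searchd.pid           # remove the bad pid file
sudo pkill searchd                                              # stop the daemon, since searchd --stop can't read the pid
sudo /usr/bin/indexer -c /etc/sphinxsearch/sphinx.conf --all    # re-index everything
sudo /usr/bin/searchd -c /etc/sphinxsearch/sphinx.conf          # start searchd again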
I've googled and read quite a few blog posts on this, and I've been trying things out manually on my EC2 instance. However, I'm still not able to configure the systemd service unit to run the process in the background as I expect. The process I'm running is the Nessus agent service. Here's my service unit definition:
$ cat /etc/systemd/system/nessusagent.service
[Unit]
Description=Nessus
[Service]
ExecStart=/opt/myorg/bin/init_nessus
Type=simple
[Install]
WantedBy=multi-user.target
and here is my script /opt/myorg/bin/init_nessus:
$ cat /opt/myorg/bin/init_nessus
#!/usr/bin/env bash
set -e
NESSUS_MANAGER_HOST=...
NESSUS_MANAGER_PORT=...
NESSUS_CLIENT_GROUP=...
NESSUS_LINKING_KEY=...
#-------------------------------------------------------------------------------
# link nessus agent with manager host
#-------------------------------------------------------------------------------
/opt/nessus_agent/sbin/nessuscli agent link --key=${NESSUS_LINKING_KEY} --host=${NESSUS_MANAGER_HOST} --port=${NESSUS_MANAGER_PORT} --groups=${NESSUS_CLIENT_GROUP}
if [ $? -ne 0 ]; then
echo "Cannot link the agent to the Nessus manager, quitting."
exit 1
fi
/opt/nessus_agent/sbin/nessus-service -q -D
When I run the service, I always get the following:
$ systemctl status nessusagent.service
● nessusagent.service - Nessus
Loaded: loaded (/etc/systemd/system/nessusagent.service; enabled; vendor preset: enabled)
Active: inactive (dead) since Mon 2020-08-24 06:40:40 UTC; 9min ago
Process: 27787 ExecStart=/opt/myorg/bin/init_nessus (code=exited, status=0/SUCCESS)
Main PID: 27787 (code=exited, status=0/SUCCESS)
...
Aug 24 06:40:40 ip-10-27-0-104 init_nessus[27787]: + /opt/nessus_agent/sbin/nessuscli agent link --key=... --host=... --port=8834 --groups=...
Aug 24 06:40:40 ip-10-27-0-104 init_nessus[27787]: [info] [agent] HostTag::getUnix: setting TAG value to '8596420322084e3ab97d3c39e5c92e00'
Aug 24 06:40:40 ip-10-27-0-104 init_nessus[27787]: [info] [agent] Successfully linked to <myorg.com>:8834
Aug 24 06:40:40 ip-10-27-0-104 init_nessus[27787]: + '[' 0 -ne 0 ']'
Aug 24 06:40:40 ip-10-27-0-104 init_nessus[28506]: + /opt/nessus_agent/sbin/nessus-service -q -D
However, I can't see the process that I expect to see:
$ ps faux | grep nessus
root 28565 0.0 0.0 12940 936 pts/0 S+ 06:54 0:00 \_ grep --color=auto nessus
If I run the last command manually, I can see it:
$ /opt/nessus_agent/sbin/nessus-service -q -D
$ ps faux | grep nessus
root 28959 0.0 0.0 12940 1016 pts/0 S+ 07:00 0:00 \_ grep --color=auto nessus
root 28952 0.0 0.0 6536 116 ? S 07:00 0:00 /opt/nessus_agent/sbin/nessus-service -q -D
root 28953 0.2 0.0 69440 9996 pts/0 Sl 07:00 0:00 \_ nessusd -q
What is it that I'm missing here?
Eventually I figured out that this was because of the extra -D option in the last command, and removing it fixed the issue. Running the process in daemon mode inside a service manager is not the way to go: the process needs to run in the foreground so the service manager can supervise it.
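For reference, a minimal sketch of the fixed ending of the script, assuming everything else stays the same (the exec is my own addition so that systemd tracks nessus-service directly; it is not strictly required):
# run nessus-service in the foreground and let systemd supervise it
exec /opt/nessus_agent/sbin/nessus-service -q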
I am using Barman 2.11 and PostgreSQL 9.5 in my setup. I specified create_slot = auto in the server config for automatic replication slot creation, as mentioned in the docs, but it unfortunately appears to have no effect, and barman check reports the issue as below.
My server config:
[postgres-source-db]
; Configuration options for the server named 'postgres-source-db'
description = "Config for PostgreSQL Database Backup via rsync/SSH with WAL streaming"
ssh_command = ssh -q postgres@postgres-source-db
conninfo = host=postgres-source-db user=barman dbname=dcmdb
backup_method = rsync
parallel_jobs = 1
reuse_backup = link
archiver = on
backup_options = exclusive_backup
streaming_conninfo = host=postgres-source-db user=barman
streaming_archiver = on
slot_name = barman
create_slot = auto
===
Barman check output:
barman@4f5c93878899:~$ barman check postgres-source-db
Server postgres-source-db:
WAL archive: FAILED (please make sure WAL shipping is setup)
PostgreSQL: OK
superuser or standard user with backup privileges: OK
PostgreSQL streaming: OK
wal_level: OK
replication slot: FAILED (replication slot 'barman' doesn't exist. Please execute 'barman receive-wal --create-slot postgres-source-db')
directories: OK
retention policy settings: OK
backup maximum age: FAILED (interval provided: 1 day, latest backup age: No available backups)
compression settings: OK
failed backups: OK (there are 0 failed backups)
minimum redundancy requirements: FAILED (have 0 backups, expected at least 1)
ssh: OK (PostgreSQL server)
not in recovery: OK
systemid coherence: OK (no system Id stored on disk)
pg_receivexlog: OK
pg_receivexlog compatible: OK
receive-wal running: FAILED (See the Barman log file for more details)
archive_mode: OK
archive_command: OK
archiver errors: OK
barman@4f5c93878899:~$
I should note that a manual replication connection check succeeds, as shown below:
barman@4f5c93878899:~$ psql -U barman -h postgres-source-db -c "IDENTIFY_SYSTEM" replication=1
perl: warning: Setting locale failed.
perl: warning: Please check that your locale settings:
LANGUAGE = (unset),
LC_ALL = (unset),
LANG = "en_US.UTF-8"
are supported and installed on your system.
perl: warning: Falling back to the standard locale ("C").
systemid | timeline | xlogpos | dbname
---------------------+----------+-----------+--------
6854705426793291833 | 1 | 0/3000AE0 |
(1 row)
barman@4f5c93878899:~$
Am I missing something?
UPDATE (made partial headway, but still not out of the woods):
One more update and some info to add. I am setting this up in Docker containers and noticed that the cron setup was missing, despite installing from the PostgreSQL apt repository. Once I logged into the container and ran
/usr/bin/barman -q cron
to start the WAL receiver, I saw that the status changed to success. I'm not sure why it did not run automatically; any clue?
It doesn't look like a permission issue, but the syntax of the content in /etc/cron.d/barman seems strange to me:
barman@bef22f0beec3:~$ cat /etc/cron.d/barman
# /etc/cron.d/barman: crontab entries for the barman package
MAILTO=root
* * * * * barman [ -x /usr/bin/barman ] && /usr/bin/barman -q cron
barman@bef22f0beec3:~$
Below are the terminal outputs:
barman@4f5c93878899:~$ crontab -l
no crontab for barman
barman@4f5c93878899:~$ su root
Password:
root@4f5c93878899:/var/lib/barman# crontab -l
no crontab for root
root@4f5c93878899:/var/lib/barman# exit
exit
barman@4f5c93878899:~$ id
uid=102(barman) gid=103(barman) groups=103(barman)
barman@4f5c93878899:~$ pwd
/var/lib/barman
barman@4f5c93878899:~$ cat /etc/cron.d/barman
# /etc/cron.d/barman: crontab entries for the barman package
MAILTO=root
* * * * * barman [ -x /usr/bin/barman ] && /usr/bin/barman -q cron
barman@4f5c93878899:~$ ls -ltr /etc/cron.d/barman
-rw-r--r-- 1 root root 140 Jul 9 11:18 /etc/cron.d/barman
barman@4f5c93878899:~$ /usr/bin/barman -q cron
barman@4f5c93878899:~$ ps -aef
UID PID PPID C STIME TTY TIME CMD
root 1 0 0 05:00 ? 00:00:00 /bin/bash /docker-entrypoint.sh barman
root 24 1 0 05:00 ? 00:00:00 /usr/sbin/sshd -D
root 27 24 0 05:03 ? 00:00:00 sshd: barman [priv]
barman 33 27 0 05:03 ? 00:00:00 sshd: barman@pts/0
barman 34 33 0 05:03 pts/0 00:00:00 -bash
barman 107 1 4 05:18 ? 00:00:00 /usr/bin/python3 /usr/bin/barman -c /etc/barman.conf -q receive-wal postgres-source-db
barman 111 107 1 05:18 ? 00:00:00 /usr/lib/postgresql/12/bin/pg_receivewal --dbname=dbname=replication host=postgres-source-db options=-cdatestyle=iso replication=true user=barman application_name
barman 114 34 0 05:18 pts/0 00:00:00 ps -aef
barman@4f5c93878899:~$
barman@4f5c93878899:~$ barman check postgres-source-db
Server postgres-source-db:
PostgreSQL: OK
superuser or standard user with backup privileges: OK
PostgreSQL streaming: OK
wal_level: OK
replication slot: OK
directories: OK
retention policy settings: OK
backup maximum age: FAILED (interval provided: 1 day, latest backup age: No available backups)
compression settings: OK
failed backups: OK (there are 0 failed backups)
minimum redundancy requirements: FAILED (have 0 backups, expected at least 1)
ssh: OK (PostgreSQL server)
not in recovery: OK
systemid coherence: OK (no system Id stored on disk)
pg_receivexlog: OK
pg_receivexlog compatible: OK
receive-wal running: OK
archive_mode: OK
archive_command: OK
continuous archiving: OK
archiver errors: OK
barman@4f5c93878899:~$
Thanks
Just for others that might be facing the same issue: I resolved it.
The problem appears to be due to two bugs in Barman's packaging (I see it on Debian; I'm not sure about other distributions):
The cron.d entry (/etc/cron.d/barman) is missing a newline at the end, and it turns out cron needs that to execute the entry, at least on Debian.
The cron daemon was not started automatically by default, and I had to start it from my entrypoint script.
With the above two fixed, it works like a charm.
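A minimal sketch of the two fixes in the container entrypoint, assuming a Debian-based image (the entrypoint path and the way cron is started are illustrative, not taken verbatim from my setup):
#!/bin/bash
# /docker-entrypoint.sh (sketch)
# 1. guarantee /etc/cron.d/barman ends with a newline; an extra blank line is harmless
echo >> /etc/cron.d/barman
# 2. start the cron daemon, which the image does not launch by itself
service cron start
# hand over to the original entrypoint command
exec "$@"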
I want to know how to get the environment variables of a running process from its pid on HP-UX,
whether via a ps command, a file, or programming.
On other platforms it is possible to get them:
# /proc/$pid/environ or ps e -ww -p $pid on Linux
# ps ewww pid on AIX
# pargs on Solaris
On HP-UX you can use gdb to dig them out, but there is no gdb on this server (HP-UX) and it's impossible to install it.
Please let me know.
If you can install software onto this host, the latest HP-UX Linker, Libraries and Tools patch should give you the pargs(1) command:
[ hp-ux_ia64 sw ] $ /usr/ccs/bin/pargs -v
HP pstack/pldd/pargs version B.12.67 for HP Itanium(R) Systems.
[ hp-ux_ia64 sw ] $ /usr/ccs/bin/pargs -h
usage: pargs [-h] [-v] {-a pid | -e pid}
Given the pid of a running process, pargs prints process arguments and all
environment variables and its values.
pargs works by attaching to the process to read its memory.
[ hp-ux_ia64 sw ] $ ps -fu ranga
UID PID PPID C STIME TTY TIME COMMAND
ranga 9949 9923 0 Mar 17 pts/3 0:00 /usr/bin/sh /home/ranga/bin/tmux
ranga 16795 10007 0 10:40:06 pts/7 0:00 ssh hp-ux_ia64
ranga 9952 9949 0 Mar 17 pts/3 0:00 tmux
ranga 16538 16376 1 21:35:16 pts/4 0:00 ps -fu ranga
ranga 9918 9916 0 Mar 17 ? 0:04 sshd: ranga@pts/3
ranga 9954 1 2 Mar 17 ? 1:15 tmux
[ hp-ux_ia64 sw ] $ PHSS_44731/C-MIN/usr/ccs/bin/pargs -e 9949
SOCKS_CONF=/home/ranga/etc/socks.conf
MAIL=/var/mail/ranga
PATH=/usr/bin:/usr/ccs/bin:/usr/contrib/bin:/opt/langtools/bin:/usr/local/bin
PWD=/home/ranga
EDITOR=vim
TZ=IST-5:30
ERASE=^H
PS1=[ \h \W ] \$
SHLVL=1
SHELL=/usr/bin/bash
SFTP_PERMIT_CHMOD=1
HOME=/home/ranga
TERMINFO=/home/ranga/lib/terminfo
LOGNAME=ranga
SSH_CONNECTION=1.4.5.1 44584 1.2.2.2 22
SSH_CLIENT=1.1.0.6 44584 22
SHLIB_PATH=/home/ranga/local/lib
SFTP_UMASK=
_=/home/ranga/bin/tmux
USER=ranga
TERM=rxvt-256color
SOCKS5_SERVER=socks-server.ranga.com
LINES=70
Even if you can't install the patch, the pargs executable can be extracted from it and used.
If you can copy files out of this host, you could
use gcore(1) to generate a core file of the process
copy this core file along with the executable and the appropriate version of
libc (32-bit or 64-bit, use pldd(1) to confirm) to an environment
where gdb is available
use gdb to hack into the __envp string table
[ hp-ux-ia64 ~ ] $ ps -f
UID PID PPID C STIME TTY TIME COMMAND
ranga 5779 4411 0 13:12:47 pts/0 0:00 ps -f
ranga 4411 4403 0 12:45:42 pts/0 0:00 -bash
[ hp-ux-ia64 ~ ] $ pldd 4411
4411: /usr/bin/bash
/usr/bin/bash
/usr/lib/hpux32/dld.so
/usr/local/lib/hpux32/libtermcap.so
/usr/local/lib/hpux32/libintl.so
/usr/local/lib/hpux32/libiconv.so
/usr/lib/hpux32/libdl.so.1
/usr/lib/hpux32/libc.so.1
[ hp-ux-ia64 ~ ] $ gcore 4411
[ hp-ux-ia64 ~ ] $ gdb -q /usr/bin/bash core.4411
warning: Load module /usr/bin/bash has been stripped.
Debugging information is not available.
(no debugging symbols found)...Core was generated by `bash'.
(no debugging symbols found)...
warning: Load module /usr/local/lib/hpux32/libtermcap.so has been stripped.
Debugging information is not available.
(no debugging symbols found)...
#0 0x60000000c05660f0:0 in _waitpid_sys+0x30 () from /usr/lib/hpux32/libc.so.1
(gdb) x/s *(char**)__envp
0x200000007ffffeae: "USER=ranga"
(gdb)
:
0x200000007fffff45: "SSH_CLIENT=3.3.3.3 50072 22"
:
0x200000007fffffe4: "SFTP_PERMIT_CHOWN=1"
(gdb)
0x200000007ffffff8: ""
I've been searching around a lot but could not figure out how to start mysqld in "safe mode".
This is what I got so far:
[root@localhost bin]# service mysqld_safe start
mysqld_safe: unrecognized service
I'm running CentOS, this is my mysql version:
[root@localhost ~]# mysql --version
mysql Ver 14.12 Distrib 5.0.95, for redhat-linux-gnu (i686) using readline 5.1
Any help would be appreciated!
Starting mysqld should do the trick:
[root@green-penny ~]# service mysqld start
Starting mysqld: [ OK ]
[root@green-penny ~]# ps axu | grep mysql
root 7540 0.8 0.0 5112 1380 pts/0 S 09:29 0:00 /bin/sh /usr/bin/mysqld_safe --datadir=/var/lib/mysql --socket=/var/lib/mysql/mysql.sock --pid-file=/var/run/mysqld/mysqld.pid --basedir=/usr --user=mysql
mysql 7642 1.5 0.7 135480 15344 pts/0 Sl 09:29 0:00 /usr/libexec/mysqld --basedir=/usr --datadir=/var/lib/mysql --user=mysql --log-error=/var/log/mysqld.log --pid-file=/var/run/mysqld/mysqld.pid --socket=/var/lib/mysql/mysql.sock
root 7660 0.0 0.0 4352 724 pts/0 S+ 09:29 0:00 grep mysql
(Note that mysqld_safe is running.)
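If you do need to invoke it by hand (for example for recovery work), mysqld_safe can also be started directly; this is a sketch, and --skip-grant-tables is only an illustrative option, not something this question requires:
[root@green-penny ~]# mysqld_safe --skip-grant-tables &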
In the document (jbossperformancetuning.pdf), it suggests enabling large page memory for the JVM.
But after I added the following to our command-line/script start-up:
"-XX:+UseLargePages"
it didn't work. So I investigated more, enabled large page memory on the OS first, then added "-XX:+UseLargePages -XX:LargePageSizeInBytes=2m" to the start-up script.
Unfortunately, it didn't work either, so could someone give us some suggestions on how to enable large page memory for the JVM successfully?
Here are some details of our server:
[root@localhost ~]# cat /proc/meminfo
MemTotal: 37033340 kB
MemFree: 318108 kB
Buffers: 179452 kB
Cached: 5934940 kB
SwapCached: 0 kB
...
HugePages_Total: 10251
HugePages_Free: 10251
HugePages_Rsvd: 0
Hugepagesize: 2048 kB
[root@localhost ~]# ps aux | grep java
root 22525 0.2 20.3 28801756 7552420 ? Sl Nov03 31:54 java -Dprogram.name=run.sh -server -Xms1303m -Xmx24g -XX:MaxPermSize=512m -XX:+UseLargePages -XX:LargePageSizeInBytes=2m -Dorg.jboss.resolver.warning=true -Dsun.rmi.dgc.client.gcInterval=3600000 -Dsun.rmi.dgc.server.gcInterval=3600000 -Dsun.lang.ClassLoader.allowArraySyntax=true -verbose:gc -Xloggc:/tmp/gc.log -XX:+PrintGCDetails -XX:+PrintGCDateStamps -Djava.net.preferIPv4Stack=true -Djava.endorsed.dirs=/opt/jboss-as/lib/endorsed -classpath /opt/jboss-as/bin/run.jar org.jboss.Main -c default -b 0.0.0.0
root 31962 0.0 0.0 61200 768 pts/2 S+ 22:46 0:00 grep java
[root@localhost ~]# cat /etc/sysctl.conf
...
# JBoss is running as root, so the group id is 0
vm.hugetlb_shm_group = 0
# The pages number
vm.nr_hugepages = 12288
Finally I fixed this issue: first set the large page memory bigger than the JVM heap size, then just reboot the server, because there is no way to make it take effect without a reboot, unless you upgrade the kernel to a newer one such as the one in RHEL 6.0.
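For completeness, a sketch of the pieces involved; the memlock limits are my assumption (the original post doesn't show them), and the page count must cover the 24 GB heap (-Xmx24g):
# /etc/sysctl.conf: reserve 2 MB huge pages (12288 * 2 MB = 24 GB)
vm.nr_hugepages = 12288
# group allowed to use hugetlb shared memory (0 = root here, since JBoss runs as root)
vm.hugetlb_shm_group = 0
# /etc/security/limits.conf: allow the JVM user to lock that much memory (values in KB)
root soft memlock 25165824
root hard memlock 25165824
# after rebooting, verify the JVM actually took the pages: HugePages_Free should drop
grep HugePages /proc/meminfo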