We have had Sphinx search installed and configured on our site for some time now, and it was running very well. But recently we noticed that while updating the main and delta indexes through SSH on the Sphinx server, the server's load average increases drastically. It goes up to 11 while the indexer script is running. The code that we are running is this:
1) ssh -p 90 root@host "/usr/bin/indexer --rotate IdxDelta_domainname"
2) ssh -p 90 root@host "/usr/bin/indexer --rotate IdxDeltaOutlineSearchIndex_domainname"
3) ssh -p 90 root@host "/usr/bin/indexer --rotate IdxDeltaStatus_grmtech"
4) ssh -p 90 root@host "/usr/bin/indexer --rotate --merge IdxMainSearchIndex_domainname IdxDelta_domainname --merge-klists --sighup-each"
5) ssh -p 90 root@host "/usr/bin/indexer --rotate --merge IdxMainOutlineSearchIndex_grmtech IdxDeltaOutlineSearchIndex_domainname --merge-klists --sighup-each"
6) ssh -p 90 root@host "/usr/bin/indexer --rotate --merge IdxMainStatus_grmtech IdxDeltaStatus_grmtech --merge-klists --sighup-each"
This is run from the original site's domain by a crawler script.
The Sphinx index table has 22 fields and 689,325 rows of data.
The server is a strong one (a 16-core processor and 6 GB of RAM).
While the indexer process runs, all 16 cores show 100% CPU usage (per the top command) and the load average shoots up (from the 4th step onwards).
Any way out? Please help.
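One way to blunt the spike while keeping the same schedule would be to run the remote indexer at a lower CPU and I/O priority. A sketch for step 4, assuming nice and ionice are available on the Sphinx server:

ssh -p 90 root@host "nice -n 19 ionice -c 3 /usr/bin/indexer --rotate --merge IdxMainSearchIndex_domainname IdxDelta_domainname --merge-klists --sighup-each"

The merge steps (4-6) rewrite the main index on disk, which matches the observation that the load shoots up from the 4th step.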
I am trying to install Docker Compose on Ubuntu 18.04.2 LTS.
I tried installing using the official link here and followed the Docker Compose documentation, but when I run the command
sudo curl -L "https://github.com/docker/compose/releases/download/1.24.1/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose
then after some time it gives me this error:
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100   617    0   617    0     0    613      0 --:--:--  0:00:01 --:--:--   613
 24 8280k   24 2056k    0     0    789      0  2:59:06  0:44:27  2:14:39     0
curl: (56) OpenSSL SSL_read: SSL_ERROR_SYSCALL, errno 104
Kindly help me with this; I have tried many times but it is not working.
I had the same problem. I assume that you are using the Docker docs, which are often outdated. You should go to the Docker Compose GitHub releases instead.
Solution
1 - Open a Linux terminal by pressing Ctrl + Alt + T
2 - Install curl:
sudo apt install curl
3 - Turn on root privileges in the terminal for your user (similar to an administrator account on Windows) with the command:
sudo -i
4 - Go to the Docker Compose GitHub page. In the releases you will find this command. Run it in your Linux terminal.
curl -L https://github.com/docker/compose/releases/download/1.25.1-rc1/docker-compose-`uname -s`-`uname -m` -o /usr/local/bin/docker-compose
chmod +x /usr/local/bin/docker-compose
5 - Turn off root privileges in the terminal for your user with the command:
exit
6 - Check if docker-compose is installed with the command:
docker-compose version
Outcome: In your terminal, you should see the docker-compose version number and some other information.
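For example, the output should look something like this (the version matches the release you downloaded; the build hash below is only a placeholder):

docker-compose version 1.25.1-rc1, build 0123456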
I use PostgreSQL on Debian.
The PostgreSQL service cannot start after I edited the config file:
#data_directory = '/var/lib/postgresql/9.4/main' # use data in another directory
data_directory = '/opt/data/postgresql/data'
(Yeah, I just use a custom directory instead of the default data_directory.)
I found this in /var/log/syslog:
Sep 14 10:22:17 thinkserver-ckd postgresql@9.4-main[11324]: Error: could not exec /usr/lib/postgresql/9.4/bin/pg_ctl /usr/lib/postgresql/9.4/bin/pg_ctl start -D /opt/data/postgresql/data -l /var/log/postgresql/postgresql-9.4-main.log -s -o -c config_file="/etc/postgresql/9.4/main/postgresql.conf" :
Sep 14 10:22:17 thinkserver-ckd systemd[1]: postgresql@9.4-main.service: control process exited, code=exited status=1
Sep 14 10:22:17 thinkserver-ckd systemd[1]: Failed to start PostgreSQL Cluster 9.4-main.
Sep 14 10:22:17 thinkserver-ckd systemd[1]: Unit postgresql@9.4-main.service entered failed state.
And nothing in /var/log/postgresql/postgresql-9.4-main.log
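As an aside, on systemd the unit's own journal often has more detail than the PostgreSQL log file; a quick check, with the unit name adjusted to your cluster:

journalctl -u postgresql@9.4-main.service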
Thanks.
I finally got this answer:
What this error means in PostgreSQL?
@langton's answer.
He said that
you should run pg_upgradecluster or similar, or just create a new cluster with pg_createcluster (these commands are for debian systems - you didn't specify your OS)
So I executed the command:
pg_createcluster -d /opt/data/postgresql/data -l /opt/data/postgresql/log 9.4 ckd
And then:
service postgresql restart
It started!
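To double-check, Debian's postgresql-common tools can list the clusters and their data directories; the new cluster should show up as online:

pg_lsclusters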
If downtime is allowed and you already have databases with data in the old cluster location, you only need to physically copy the data to the new location.
This is a more or less common operation when your partition runs out of space.
# Check that the current data directory is the same as
# the one in the postgresql.conf config file
OLD_DATA_DIR=$(sudo -u postgres psql --no-psqlrc --no-align --tuples-only --quiet -c "SHOW data_directory;")
echo "${OLD_DATA_DIR}"
CONFIG_FILE=$(sudo -u postgres psql --no-psqlrc --no-align --tuples-only --quiet -c "SHOW config_file;")
echo "${CONFIG_FILE}"
# Stop PostgreSQL
systemctl stop postgresql
# Change the data directory in the config
# (better to do this with an editor instead of sed)
NEW_DATA_DIR='/opt/data/postgresql/data'
sed -i "s%data_directory = '${OLD_DATA_DIR}'%data_directory = '${NEW_DATA_DIR}'%" "${CONFIG_FILE}"
# Move/copy the data, for example using rsync
rsync -av --dry-run "${OLD_DATA_DIR}/" "${NEW_DATA_DIR}/"
# Take care with the classical rsync pitfall of trailing slashes:
# with the slash, the *contents* of the old directory are copied
# into the new one, which is what we want here
rsync -av "${OLD_DATA_DIR}/" "${NEW_DATA_DIR}/"
# Rename the old dir to avoid misunderstandings, and make sure
# the new one has the ownership and mode PostgreSQL expects, e.g.:
mv "${OLD_DATA_DIR}" "${OLD_DATA_DIR}.old"
chown -R postgres:postgres "${NEW_DATA_DIR}"
chmod 700 "${NEW_DATA_DIR}"
# Start postgres
systemctl start postgresql
# Check that everything went well, and eventually drop the old data.
# Make sure that the logs and everything else are where you want them.
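Once the server is back up, the same SHOW query from the top of the script should confirm the new location is in use:

sudo -u postgres psql --no-psqlrc --no-align --tuples-only --quiet -c "SHOW data_directory;"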
vnstat is updating only one interface every five minutes. I have to use
vnstat -u
to manually update the rest of the interfaces. All interfaces are already enabled, but only one interface is updating every 5 minutes.
Check which user the vnstat daemon is running as using ps aux | grep [v]nstat.
I recently had the same problem: after priming the database with
vnstat -u -i eth0
as root, the vnstat process couldn't write to the /var/lib/vnstat/eth0 file, as it was running as user "vnstat".
If vnstat is running as user "vnstat", ensure that it has permission to write to /var/lib/vnstat/eth0.
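If the ownership is wrong, something along these lines should fix it (assuming the daemon runs as user and group "vnstat" and the database lives under /var/lib/vnstat):

sudo chown -R vnstat:vnstat /var/lib/vnstat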
When you add the interface for eth0 or ppp0 or whatever, make sure you do it as the vnstat user, i.e.
sudo -u vnstat vnstat -i ppp0 -u
If you run this as root first, you will have problems even if you chmod the file in /var/lib/vnstat. This is due to the creation of a backup file called .ppp0, which you might miss if you are not looking for it. There will be an error in syslog saying that the backup file cannot be written.
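The hidden backup file is easy to spot once you know to look for it; note the leading dot in the listing:

ls -la /var/lib/vnstat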
So I was having a similar problem where I was getting the following:
$ vnstat -i eno1
eno1: not enough data available yet
I also tried every other command while pointing to eno1. I would sometimes even get:
Error: Unable to create database backup "/var/lib/vnstat/.eno1"
OR
Segmentation fault (core dumped)
I tried reinstalling, and everything else under the sun.
Following Andrew's answer to a T returned:
Error: Unable to open database "/var/lib/vnstat/eno1" for writing: Permission denied
so instead I did the following (the same command with its options in either order), though I'm not sure which run did the trick.
$ sudo vnstat -i eno1 -u
$ sudo vnstat -u -i eno1
Then I checked to see if the interface was working again:
$ sudo vnstat -i eno1
which returned:
Database updated: Wed Dec 5 10:17:37 2018

   (eno1) since 1969-12-31

          rx: 2 KiB     tx: 1 KiB     total: 3 KiB

   monthly
                     rx      |     tx      |    total    |   avg. rate
     ------------------------+-------------+-------------+---------------
       Dec '69       2 KiB   |       1 KiB |       3 KiB |    0.00 kbit/s
     ------------------------+-------------+-------------+---------------
     estimated        --     |      --     |      --     |

   daily
                     rx      |     tx      |    total    |   avg. rate
     ------------------------+-------------+-------------+---------------
         today       2 KiB   |       1 KiB |       3 KiB |    0.00 kbit/s
     ------------------------+-------------+-------------+---------------
     estimated        --     |      --     |      --     |
Now it's finally able to read and write the eno1 log. I noticed this problem because conky was not showing any stats for today, month, or total. I wasn't expecting anything under month, but after a couple of days I was expecting something under the hours.
I realise the rest will take a while to populate with data, but now I know for sure it is working. Also, my conky app is finally displaying the information.
However, prior to this solution, I had already chmod'ed the file.
Additional info for newbies such as myself:
- make sure to check which interface you are using; I often see solutions for eth0 and other interfaces that do not appear when I run "$ ifconfig". Enter:
$ ifconfig
and you should see the interface names on the left-hand side of the results. Mine are eno1, lo, and wlo1.
Next to the label "Link encap:" it should say whether the interface is wireless, Ethernet, or local loopback.
lo is the local loopback, a.k.a. localhost/127.0.0.1.
What I am not sure of, in my case, is the difference between eno1 and wlo1; they both say "Ethernet". I wonder if it doesn't have something to do with my direct Wi-Fi printer.
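One more tip: on newer distributions ifconfig may not be installed by default, but the same interface names show up with the iproute2 equivalent:

ip link show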
I am trying to run the example from "http://gearman.org/getting_started" on Ubuntu in a VirtualBox environment.
At first I tried an old version, 0.16, using apt-get install gearman-job-server and apt-get install gearman-tools, and everything worked well. The server ran in the background, I was able to create 2 workers, and I verified that I could call them from a client.
Then I downloaded and compiled the latest version, 1.1.6. Now I am trying to do the same thing with the new version, and I am getting errors.
I run the server as admin:
sudo gearmand
The statement
gearadmin --getpid
seems to work - it returns the process ID of the server. Thus, the server is running, and this answer is not relevant.
Now, I am adding a worker:
gearman -w -f wc -- wc -l
It seems to run.
Nevertheless,
gearadmin --workers
results in something that probably represents an empty list:
33 127.0.0.1 - :
.
(In version 0.16, I was able to see 2 lines, the second showing the registered function name.)
Attempting to run the client
gearman -f wc < /etc/passwd
results in
gearman: gearman_client_run_tasks : flush(GEARMAN_COULD_NOT_CONNECT) localhost:0 -> libgearman/connection.cc:671
This might be the very same problem described here - the port is not specified - but I have no idea how to specify it through the command-line tool.
Any idea?
OK, it looks like the answer here was the key to success. Probably the "getting started" section has not been updated for a while. Indeed, one must specify the port explicitly for both gearmand and gearman.
Server:
sudo gearmand -p 5000
Worker:
gearman -p 5000 -w -f wc -- wc -l
Client:
gearman -p 5000 -f wc < /etc/passwd
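With the port pinned down, the earlier admin checks should be pointed at it too; assuming your gearadmin build supports --port, the registered worker should now show up:

gearadmin --port 5000 --workers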
I am working on a PostgreSQL extension in C that segfaults, so I want to look at the core dump file on my OS X Lion box. However, there are no core files in /cores or anywhere else that I can find. It appears that they are enabled in the system but are limited to a size of 0:
> sysctl kern.coredump
kern.coredump: 1
> ulimit -c
0
I tried setting ulimit -c unlimited in the shell session I'm using to start and stop PostgreSQL, and it seems to stick:
> ulimit -c
unlimited
And yet no matter what I do, no core files. I am starting PostgreSQL with pg_ctl -c, where the -c tells PostgreSQL to generate core dumps. But the system has nothing. How can I get Lion to dump core files?
The /cores/ directory is not necessarily there in Lion, and if it's not there, you won't get cores. You should be able to set the ulimit (as you have), run a program like cat(1), quit with a SIGQUIT (control-backslash), and get a core dump:
lion:~ user$ ulimit -c unlimited
lion:~ user$ cat
^\
^\
Quit: 3 (core dumped)
lion:~ user$ ls -l /cores/
total 716584
-r-------- 1 user user 366891008 Jun 21 23:35 core.1263
lion:~ user$
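If /cores is missing entirely, creating it by hand should be enough for dumps to start appearing; the mode below is an assumption (admin-writable, sticky bit), so adjust it to your setup:

sudo mkdir /cores
sudo chmod 1775 /cores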
Technical Note TN2124 (http://developer.apple.com/library/mac/#technotes/tn2124/), as suggested by Yuji in https://stackoverflow.com/a/3783403/225077, is helpful.