New zero-sized Sphinx index - sphinx

I have just added a new index on a Sphinx server; MySQL is running on the same local server and Sphinx queries it. The indexer finds the data, but the resulting index is zero-sized:
using config file '/usr/local/sphinx/etc/sphinx.conf'...
indexing index 'test'...
collected 861000 docs, 0.0 MB
collected 932575 docs, 0.0 MB
total 932575 docs, 0 bytes
total 479.180 sec, 0 bytes/sec, 1946.18 docs/sec
total 0 reads, 0.000 sec, 0.0 kb/call avg, 0.0 msec/call avg
total 4 writes, 0.000 sec, 0.1 kb/call avg, 0.0 msec/call avg
rotating indices: succesfully sent SIGHUP to searchd (pid=5545).
please help

This issue was resolved after restarting the Sphinx service; the .new.sp* files that Sphinx had created were also gone after the restart.

Related

Postgres - pg_statistic vacuum timeout

In my Aurora Postgres server I continuously see a vacuum timeout every minute for the following:
autovacuum: VACUUM pg_catalog.pg_statistic
I tried running it manually and got the following output:
INFO: vacuuming "pg_catalog.pg_statistic"
INFO: "pg_statistic": found 0 removable, 409109 nonremovable row versions in 19981 out of 19990 pages
DETAIL: 408390 dead row versions cannot be removed yet, oldest xmin: 4230263474
There were 0 unused item identifiers.
Skipped 0 pages due to buffer pins, 9 frozen pages.
0 pages are entirely empty.
CPU: user: 0.06 s, system: 0.00 s, elapsed: 0.07 s.
INFO: vacuuming "pg_toast.pg_toast_2619"
INFO: "pg_toast_2619": found 0 removable, 272673 nonremovable row versions in 61558 out of 61558 pages
DETAIL: 219 dead row versions cannot be removed yet, oldest xmin: 4230263474
There were 0 unused item identifiers.
Skipped 0 pages due to buffer pins, 0 frozen pages.
0 pages are entirely empty.
CPU: user: 0.14 s, system: 0.00 s, elapsed: 0.14 s.
VACUUM
Query returned successfully in 5 secs 55 msec.
Can anyone point to the reason why this is happening?
I think your autovacuum may actually be working fine; Timeout:VacuumDelay might just be a wait-event metric. From AWS:
A process is waiting in a cost-based vacuum delay point.
The closest corresponding setting in PostgreSQL is autovacuum_vacuum_cost_delay.
That setting is meant to slow autovacuum down: once the cleanup work has accumulated autovacuum_vacuum_cost_limit worth of cost, autovacuum sleeps for that many milliseconds.
You can query pg_stat_user_tables to verify when autovacuum last ran on your tables:
SELECT
schemaname, relname,
last_autovacuum,
last_autoanalyze
FROM pg_stat_user_tables;
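Given the "cannot be removed yet, oldest xmin" detail in your output, it is also worth checking what is holding that xmin back: typically a long-running open transaction, a stale replication slot, or an orphaned prepared transaction. A query along these lines against the standard pg_stat_activity view (nothing Aurora-specific) should show the oldest open transactions:
SELECT
pid, state, xact_start, query
FROM pg_stat_activity
WHERE xact_start IS NOT NULL
ORDER BY xact_start
LIMIT 5;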

checkpoint_completion_target being ignored

I'm testing checkpoint_completion_target on RDS PostgreSQL and see that a checkpoint takes a total of only 28.5 seconds. However, I have configured:
checkpoint_completion_target = 0.9
checkpoint_timeout = 300
According to this, shouldn't the checkpoint be spread over 300 * 0.9 = 270 seconds?
PostgreSQL version 11.10
Log:
2021-03-19 16:06:47 UTC::#:[25023]:LOG: checkpoint starting: time
2021-03-19 16:07:16 UTC::#:[25023]:LOG: checkpoint complete: wrote 283 buffers (0.2%); 0 WAL file(s) added, 0 removed, 1 recycled; write=28.500 s, sync=0.006 s, total=28.533 s; sync files=56, longest=0.006 s, average=0.000 s; distance=64990 kB, estimate=68721 kB
https://www.postgresql.org/docs/10/runtime-config-wal.html
https://www.postgresql.org/docs/11/wal-configuration.html
The checkpointer implements its throttling by napping in 0.1-second chunks, and there is no provision for taking more than one nap per buffer that needs to be written. So if there is very little work to be done, it will finish early despite the setting of checkpoint_completion_target.
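That would account for the log line above: 283 buffers with at most one 0.1-second nap each works out to roughly 283 * 0.1 ≈ 28.3 seconds, which is essentially the 28.5-second write phase reported.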

joblib Parallel running out of memory

I have something like this
outputs = Parallel(n_jobs=12, verbose=10)(delayed(_process_article)(article, config) for article in data)
Case 1: run on Ubuntu with 80 cores:
CPU(s): 80
Thread(s) per core: 2
Core(s) per socket: 20
Socket(s): 2
There are a total of 90,000 tasks. At around 67k it fails and is terminated.
joblib.externals.loky.process_executor.BrokenProcessPool: A process in the executor was terminated abruptly, the pool is not usable anymore.
When I monitor top at around 67k, I see a sharp fall in free memory:
top - 11:40:25 up 2 days, 18:35, 4 users, load average: 7.09, 7.56, 7.13
Tasks: 32 total, 3 running, 29 sleeping, 0 stopped, 0 zombie
%Cpu(s): 7.6 us, 2.6 sy, 0.0 ni, 89.8 id, 0.0 wa, 0.0 hi, 0.0 si, 0.0 st
KiB Mem : 33554432 total, 40 free, 33520996 used, 33396 buff/cache
KiB Swap: 0 total, 0 free, 0 used. 40 avail Mem
Case 2: Mac with 8 cores
hw.physicalcpu: 4
hw.logicalcpu: 8
But on the Mac it is much, much slower, and surprisingly it does not get killed at 67k.
Additionally, I reduced the parallelism (in case 1) to 2 and 4, and it still fails :(
Why is this happening? Has anyone faced this issue before and found a fix?
Note: when I run 50,000 tasks it works fine and does not give any problems.
Thank you!
Got a machine with memory increased to 128 GB, and that solved the problem!
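If getting more RAM had not been an option, another thing that might have helped (untested here; the helper name and chunk size below are made up for illustration) is running the same Parallel call over smaller slices of the input, so that only one chunk of tasks is in flight at a time. This assumes data is a list and reuses _process_article and config from the question:
from joblib import Parallel, delayed

def process_in_chunks(data, config, chunk_size=5000, n_jobs=12):
    # Dispatch and collect results one chunk at a time instead of queueing
    # all 90,000 tasks in a single Parallel call; if the per-article results
    # are large, each chunk could be written to disk here rather than kept
    # in the outputs list.
    outputs = []
    for start in range(0, len(data), chunk_size):
        chunk = data[start:start + chunk_size]
        outputs.extend(
            Parallel(n_jobs=n_jobs, verbose=10)(
                delayed(_process_article)(article, config) for article in chunk
            )
        )
    return outputs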

Minimum hardware requirements for JIRA Software, Confluence and MySQL?

My company is considering a self-hosted option for a combination of JIRA, Confluence and MySQL running behind an nginx proxy. We are a very small team of 5, and expect extremely mild usage for now. I hardly even expect any concurrent usage at this point.
I am a bit puzzled by the various guidelines posted by Atlassian:
https://confluence.atlassian.com/enterprise/jira-sizing-guide-461504623.html
https://confluence.atlassian.com/adminjiraserver075/jira-applications-installation-requirements-935390824.html
https://confluence.atlassian.com/doc/example-size-and-hardware-specifications-from-customer-survey-76840961.html
https://confluence.atlassian.com/doc/server-hardware-requirements-guide-30736403.html
It seems they don't want to bother providing actual minimum hardware requirements. For example, on the same page they state that the "minimum heap size to allocate to Confluence is 1 GB and 1 GB for Synchrony (which is required for collaborative editing)" and also that the "minimum hardware recommendation" is 6 GB. The leap from 1 GB required plus 1 GB for Synchrony to a 6 GB recommended minimum is bizarre, to say the least.
I think what I want to know is whether I will be able to fit this setup into a 2 GB RAM machine or a 4 GB RAM machine (both dual-CPU).
OK, I have done a test with the following configuration:
VM with 2 cores capped at ~2.2 GHz and 4 GB RAM
Ubuntu 16.04 server
Docker and docker-compose
Containers:
nginx
jwilder/docker-gen
jrcs/letsencrypt-nginx-proxy-companion
cptactionhank/atlassian-jira-software
cptactionhank/atlassian-confluence
mysql
This 4 GB RAM machine is barely capable of running this setup:
$ free -m
total used free shared buff/cache available
Mem: 3951 3553 107 0 291 157
Swap: 974 725 249
CPU usage went up to 200% only during initialisation, when JIRA and Confluence started with empty home dirs. The following top output was taken after:
creating a space and a page in Confluence
and a project with ~10 issues in JIRA
and linking JIRA and Confluence together
$ top -o %MEM | head -15
top - 16:14:33 up 6:12, 2 users, load average: 0.15, 0.04, 0.01
Tasks: 132 total, 1 running, 131 sleeping, 0 stopped, 0 zombie
%Cpu(s): 2.6 us, 0.5 sy, 0.0 ni, 95.8 id, 1.0 wa, 0.0 hi, 0.1 si, 0.0 st
KiB Mem : 4046364 total, 128808 free, 3638444 used, 279112 buff/cache
KiB Swap: 998396 total, 252956 free, 745440 used. 161144 avail Mem
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
6328 bin 20 0 3306232 1.468g 0 S 0.0 38.1 12:03.27 java
6418 bin 20 0 2860000 1.320g 0 S 0.0 34.2 10:56.24 java
7205 bin 20 0 2807088 476592 1724 S 0.0 11.8 1:58.37 java
5752 999 20 0 1815480 99804 4728 S 0.0 2.5 1:11.29 mysqld
1070 root 20 0 621908 28672 8904 S 0.0 0.7 0:30.74 dockerd
1179 root 20 0 623004 7536 2520 S 0.0 0.2 0:16.66 docker-containe
968 root 20 0 291352 6536 1912 S 0.0 0.2 0:00.77 snapd
8310 root 20 0 15388 5064 3056 S 0.0 0.1 0:21.39 docker-gen
Confluence also allocated ~500MB RAM to Synchrony:
$ ps aux --sort -rss | head -4
USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND
bin 6328 3.3 38.3 3306232 1551120 ? Ssl 10:14 12:12 /usr/lib/jvm/java-1.8-openjdk/bin/java -Djava.util.logging.config.file=/opt/atlassian/confluence...
bin 6418 2.9 34.1 2860000 1382868 ? Ssl 10:14 10:57 /usr/lib/jvm/java-1.8-openjdk/bin/java -Djava.util.logging.config.file=/opt/atlassian/jira/...
bin 7205 0.5 11.7 2807088 476588 ? Sl 10:44 1:59 /usr/lib/jvm/java-1.8-openjdk/jre/bin/java -classpath /opt/atlassian/confluence/temp/... synchrony.core sql
During the JIRA and Confluence install stage, MySQL peaked at around 500 MB RAM usage; during normal operation it sits at around 100 MB.
In my attempts, a 2GB machine was only enough to run either JIRA or Confluence without MySQL.
Conclusion:
It looks like a 4 GB RAM, dual-core machine is the absolute minimum required for JIRA + Confluence + MySQL. But keep in mind that such a machine is barely enough for a practically empty project.
I personally was not expecting these applications to be this RAM-hungry while sitting practically empty.

Interpret differences in prstat vs. 'prstat -m' on Solaris

I've been using prstat and prstat -m a lot to investigate performance issues lately, and I think I've basically understood the differences between sampling and microstate accounting in Solaris 10, so I don't expect both to always show exactly the same numbers.
Today I came across an occasion where the two showed such vastly different outputs that I have trouble interpreting them and making sense of the results. The machine is a heavily loaded 8-CPU Solaris 10 box, with several large WebSphere processes and an Oracle database. The system practically came to a halt today for about 15 minutes (load averages of >700). I had difficulty getting any prstat information at all, but was able to capture some output from "prstat 1 1" and "prstat -m 1 1", issued shortly one after another.
The top lines of the outputs:
prstat 1 1:
PID USERNAME SIZE RSS STATE PRI NICE TIME CPU PROCESS/NLWP
8379 was 3208M 2773M cpu5 60 0 5:29:13 19% java/145
7123 was 3159M 2756M run 59 0 5:26:45 7.7% java/109
5855 app1 1132M 26M cpu2 60 0 0:01:01 7.7% java/18
16503 monitor 494M 286M run 59 19 1:01:08 7.1% java/106
7112 oracle 15G 15G run 59 0 0:00:10 4.5% oracle/1
7124 oracle 15G 15G cpu3 60 0 0:00:10 4.5% oracle/1
7087 app1 15G 15G run 58 0 0:00:09 4.0% oracle/1
7155 oracle 96M 6336K cpu1 60 0 0:00:07 3.6% oracle/1
...
Total: 495 processes, 4581 lwps, load averages: 74.79, 35.35, 23.8
prstat -m 1 1 (a few seconds later):
PID USERNAME USR SYS TRP TFL DFL LCK SLP LAT VCX ICX SCL SIG PROCESS/NLWP
7087 app1 0.1 56 0.0 0.2 0.4 0.0 13 30 96 2 33 0 oracle/1
7153 oracle 0.1 53 0.0 3.2 1.1 0.0 1.0 42 82 0 14 0 oracle/1
7124 oracle 0.1 47 0.0 0.2 0.2 0.0 0.0 52 77 2 16 0 oracle/1
7112 oracle 0.1 47 0.0 0.4 0.1 0.0 0.0 52 79 1 16 0 oracle/1
7259 oracle 0.1 45 9.4 0.0 0.3 0.0 0.1 45 71 2 32 0 oracle/1
7155 oracle 0.0 42 11 0.0 0.5 0.0 0.1 46 90 1 9 0 oracle/1
7261 oracle 0.0 37 9.5 0.0 0.3 0.0 0.0 53 61 1 17 0 oracle/1
7284 oracle 0.0 32 5.9 0.0 0.2 0.0 0.1 62 53 1 21 0 oracle/1
...
Total: 497 processes, 4576 lwps, load averages: 88.86, 39.93, 25.51
I have a very hard time interpreting the output. prstat seems to tell me that a fair amount of Java processing is going on, together with some Oracle work, just as I would expect in a normal situation. prstat -m shows a machine totally dominated by Oracle, consuming huge amounts of system time, with the CPUs heavily overloaded (large numbers in LAT).
I'm inclined to believe the output of prstat -m, because that matches much more closely what the system felt like during this time: totally sluggish, almost no user request processing going on in WebSphere, etc. But why does prstat show such heavily differing numbers?
Any explanation of this would be welcome!
CU, Joe
There's a known problem with prstat -m on Solaris in the way CPU usage figures are calculated: the value you see has been averaged over all threads (LWPs) in a process, and hence is far, far too low for heavily multithreaded processes, such as your Java app servers, which can have hundreds of threads each (see your NLWP). Fewer than a dozen of them are probably CPU hogs, hence the CPU usage by java looks "low". You'd need to call it with prstat -Lm to get the per-LWP (thread) breakdown to see that effect. Reference:
http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6780169
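To put rough numbers on it: your top java process shows 145 LWPs (java/145), so if, say, six of those threads were each close to 100% busy, averaging over all 145 threads would report only about 6 * 100 / 145 ≈ 4% USR for the process as a whole, which is why the java entries barely register in the prstat -m listing.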
Without further performance monitoring data it's hard to give non-speculative explanations of what you've seen there. I would assume lock contention within Java. One particular workload that can cause this is heavily multithreaded memory-mapped I/O: the threads all pile up on the process address space lock. But it could of course be a purely Java user-side lock. Running plockstat on one of the java processes, and/or simple dtrace profiling, would be helpful.