Recurring Linux process consuming CPU - server

On my openSUSE server, I keep seeing this process come up.
I've tried kill -9 and it comes back with a new process id within 30 seconds.
htop lists it as "bash", while top lists it as "xs".
The attached screenshot is what I could get from ps.
It persists across multiple reboots.
It doesn't seem like a normal zombie process to me.
Wondering if anyone has any advice?
Thanks
[screenshot: ps output]
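
A quick way to see what keeps respawning a process like this is to look at its parent and at the usual auto-start mechanisms. A rough sketch (the process name "xs" is taken from the top output mentioned above; everything else is a placeholder to adjust):

    # Who owns the process, and who is its parent?
    PID=$(pgrep -o -x xs)                          # oldest PID with that exact name
    ps -o pid,ppid,user,lstart,cmd -p "$PID"
    ps -o pid,user,cmd -p "$(ps -o ppid= -p "$PID")"
    systemctl status "$PID"                        # shows the owning systemd unit/cgroup, if any

    # Common respawn sources worth checking
    crontab -l
    ls /etc/cron.d /etc/cron.hourly /var/spool/cron 2>/dev/null
    systemctl list-timers --all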

Related

dotMemory command line scheduled snapshots

I'm running dotMemory command line against an IoT Windows Forms application which requires many hours of tests on a custom appliance.
My purpose is to get memory snapshots on a time basis, while the application is running on the appliance. For example, if the test is designed to run for 24h, I want to get a 10 seconds memory snapshot each hour.
I found 2 ways of doing it:
Run dotMemory.exe and get a standalone snapshot on a time basis, by using schtasks to schedule each execution;
Run dotMemory using the attach and trigger arguments and get all the snapshots on a single file.
The first scenario is already working for me, but, as is easy to see, the second one is much better for further analysis after collecting the data.
I'm able to start it with a command like:
C:\dotMemory\dotMemory.exe attach $processId --trigger-on-activation --trigger-timer=10s --trigger-max-snapshots=24 --trigger-delay=3600s --save-to-dir=c:\dotMemory\Snapshots
Here comes my problem:
How can I make the command/process stop after it reaches the max-snapshot value without any human intervention?
Reference: https://www.jetbrains.com/help/dotmemory/Working_with_dotMemory_Command-Line_Profiler.html
If you start your app under profiling instead of attaching to an already running process, stopping the profiling session will kill the app under profiling. You can stop the profiling session by passing the ##dotMemory["disconnect"] command to the dotMemory console's stdin (e.g., a script can do that after some time).
See dotmemory help service-messages for details
##dotMemory["disconnect"] Disconnect profiler.
If you started profiling with 'start*' commands, the profiled process will be killed.
If you started profiling with 'attach' command, the profiler will detach from the process.
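For example, a script that keeps dotMemory's stdin open and sends the disconnect message once the run should be over could look roughly like this. This is only a sketch in POSIX shell syntax (e.g. under Git Bash or WSL); the 25-hour wait is an assumed margin over the 24h test, and the same idea can be expressed in PowerShell or cmd instead:

    # Feed dotMemory's stdin from a command group: wait longer than the whole test run,
    # then send the service message that detaches the profiler so the process can exit.
    {
      sleep $(( 25 * 3600 ))                       # assumed: 24h test plus one hour of margin
      printf '%s\n' '##dotMemory["disconnect"]'
    } | "C:/dotMemory/dotMemory.exe" attach "$processId" \
          --trigger-on-activation --trigger-timer=1h \
          --trigger-max-snapshots=24 --save-to-dir="C:/dotMemory/Snapshots"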
P.S.
Some notes about your command line: with this command line, dotMemory will take a snapshot every 10 seconds, but will only start doing so after one hour. There is no such thing as a "10 seconds memory snapshot": a memory snapshot is a momentary snapshot of the object graph in memory. The right command line for your task would be:
C:\dotMemory\dotMemory.exe attach $processId --trigger-on-activation --trigger-timer=1h --trigger-max-snapshots=24 --save-to-dir=c:\dotMemory\Snapshots

Is it possible to restart the Mojolicious Minion worker gracefully?

I'd like to be able to gracefully restart the Minion worker I am developing (i.e., without going back to the command line, killing it and restarting it, which is what I do now).
Is that possible? I'm hoping for something similar to what one can do with a Plack server, i.e., sending a HUP signal restarts the server.
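For reference, the Plack behaviour being alluded to looks roughly like this (a sketch, assuming the server runs as a daemon that writes its PID to a file such as myapp.pid):

    # Graceful restart of a Plack server (e.g. Starman) via its PID file
    kill -HUP "$(cat myapp.pid)"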
See this proposed feature - though nobody has worked on it yet.

Why are there several crond processes on CentOS?

I have a server whose CPU usage is very high, and I see many crond processes on it. I can't understand why this occurs. Does anyone know the reason? Please tell me.
This is what I see when I run "ps aux | grep crond" on this server:
[screenshot: ps aux | grep crond output]
crond forks a process for each job it executes. In your case, it looks like several jobs are being started every five minutes. All of them, though, appear to be waiting for I/O (that's the meaning of the "D" process state in the 8th column of ps output, according to the ps man page), and thus are not contributing to CPU load.
If you want to know what's eating the CPU, start with top.
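For instance, to see the process states and the actual CPU consumers at a glance (a sketch using standard procps tools; adjust the column list as needed):

    # Processes currently in uninterruptible I/O wait ("D" state)
    ps -eo pid,stat,wchan:20,comm | awk '$2 ~ /^D/'

    # Biggest CPU consumers, sorted by %CPU
    ps -eo pid,pcpu,pmem,stat,comm --sort=-pcpu | head -n 15

    # Or interactively: run top and press P to sort by CPU usage
    top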
crond is not the cause of the problem; I suggest you use top to check which process is consuming the most CPU.

mongod main process killed by KILL signal

One of the mongo nodes in the replica set went down today. I couldn't find out what happened, but when I checked the logs on the server, I saw this message: 'mongod main process killed by KILL signal'. I tried googling for more information but failed. Basically, I'd like to know what the KILL signal is, who triggered it, and possible causes/fixes.
Mongo version 3.2.10 on Ubuntu.
The KILL signal means that the app is killed instantly, with no chance for the process to exit cleanly. It can be sent manually (e.g. kill -9) or issued by the system when something goes very wrong.
If this is the only log entry left, the process was killed abruptly. This probably means your system ran out of memory (I've had this problem with other processes before). You could check whether swap is configured on your machine (using swapon -s), but you should probably consider adding more memory to your server, because swap would only keep the process from being killed and is very slow.
Another thing worth looking at is the remaining free disk space and the syslog (/var/log/syslog).
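A few quick checks along those lines (a sketch; the log paths assume a standard Ubuntu setup):

    # Did the kernel's OOM killer terminate mongod?
    dmesg -T | grep -iE 'out of memory|killed process'
    grep -iE 'oom|killed process' /var/log/syslog

    # Memory, swap and disk space at a glance
    free -h
    swapon -s          # or: swapon --show
    df -h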

Jobs in a queue are dropped unexpectedly in Gearman

I'm dealing with a very strange problem now.
Since I started queuing over 1,000 jobs at once, Gearman hasn't been working properly...
The problem is that when I submit the jobs in background mode, I can see that they are correctly queued on the monitoring page (Gearman monitor),
but the queue is drained right afterwards (within a few seconds) without the jobs being delivered to the worker.
In the end, the jobs are never executed by the worker; they just disappear from the queue (job server).
So I tried rebooting the server entirely and reinstalling Gearman as well as the PHP library. (I'm using one CentOS and one Ubuntu server with the PHP Gearman library, versions 0.34 and 1.0.2.)
But no luck yet... the job server keeps misbehaving as I explained above.
What should I do for now?
Can I check the workers' state, or see and monitor the whole process from queuing the jobs to delivering them to the worker?
When I tried gearmand with an option like 'gearmand -vvvv', it never printed anything on the screen while I registered a worker with the server and ran a job with client code (PHP).
Any comments will be appreciated.
For your information, I'm not considering a persistent queue using MySQL or SQLite for now, because it sometimes causes performance issues with slow execution.
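
On the monitoring question: gearmand has a simple text admin protocol, and the gearadmin tool that ships with it exposes the same information, so the queue and the connected workers can be watched from the shell. A sketch, assuming gearmand is listening on the default 127.0.0.1:4730:

    # Registered functions with queued / running / available-worker counts
    gearadmin --status
    # Connected workers and the functions they can handle
    gearadmin --workers

    # The same data via the raw admin protocol
    echo "status"  | nc 127.0.0.1 4730
    echo "workers" | nc 127.0.0.1 4730

    # Watch the queue drain (or not) while jobs are submitted
    watch -n 1 gearadmin --status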