Why are there several crond processes on CentOS?

I have a server whose CPU usage is very high, and I see many crond processes running on it. I don't understand why this happens. Does anyone know the reason?
This is what I see when I run "ps aux | grep crond" on the server:
(screenshot of ps output)

crond forks a process for each job it executes. In your case, it looks like several jobs are being started every five minutes. All of them, though, appear to be waiting for I/O (that's the meaning of the "D" process state in the 8th column of ps output, according to the ps man page), and thus are not contributing to CPU load.
If you want to know what's eating the CPU, start with top.
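The state letter mentioned above can also be read straight from /proc, which is handy when ps output is ambiguous. A minimal sketch, assuming only standard Linux procfs and procps:

```shell
# The third field of /proc/<pid>/stat is the process state:
# R=running, S=sleeping, D=uninterruptible I/O wait, Z=zombie.
awk '{print $3}' /proc/self/stat    # the awk process itself is running, so: R

# Sort all processes by CPU usage to find the real culprit:
ps -eo pid,stat,pcpu,comm --sort=-pcpu | head -n 5
```

Processes stuck in D state show up near the bottom of that list, which matches the observation that they are not the source of the CPU load.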

crond is not the cause of the problem; use top to find which process is actually consuming the CPU.

Related

CentOS 7 is starting >70000 processes after startup

We have two identical (hardware-wise) workstations running CentOS 7. On one of the machines (the "bad" one), we observe lags, and it is generally less responsive.
After rebooting both machines, they showed no difference in CPU/memory load, but the latest PID (checked with ps -A) differs dramatically: the "good" machine is at around 6,000, whereas the "bad" machine is above 70,000. Checking the "bad" machine directly after startup with ps -A and top, "dracut" and possibly related processes (gzip, xy) show up a couple of hundred times.
My suspicion is that some configuration on the "bad" machine is wrong, leading to the startup of these subprocesses and making the system less responsive overall.
My questions now:
How can I precisely log the startup on the machines?
How can I eventually solve the issue?
And if this is dracut related, how can I check the dracut configurations and compare them between both machines?
Thank you very much.
christian
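One way to attack the first question is to snapshot what is spawning in bulk right after boot. A minimal sketch, assuming procps is installed (on CentOS 7, journalctl -b and systemd-analyze blame are also worth a look for boot-time logging):

```shell
# Count processes grouped by command name; a few hundred dracut/gzip
# entries on the "bad" machine should stand out immediately.
ps -eo comm= | sort | uniq -c | sort -rn | head -n 10
```

Running this on both machines shortly after startup and diffing the results narrows down which component is spawning the extra processes.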

Recurring linux process consuming cpu

On my opensuse server, I keep seeing this process coming up.
I've tried kill -9 and it comes back with a new process id within 30 seconds.
htop lists it as "bash", while top lists it as "xs".
The attached screenshot is what I could get from ps.
It stays after multiple reboots.
It doesn't seem like a normal zombie process to me.
Wondering if anyone has any advice?
Thanks
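Since kill -9 cannot be ignored or handled, something must be restarting the process, and its parent PID usually identifies the supervisor (cron, a systemd unit, a wrapper script). A sketch, demonstrated on the current shell since the "xs" process only exists on the asker's machine:

```shell
# Look up the parent of a PID; for the respawning process you would
# use the PID from `pgrep -x xs` instead of $$ (this shell).
ppid=$(ps -o ppid= -p $$ | tr -d ' ')
ps -o pid=,comm= -p "$ppid"

# Also worth checking for a restart source:
#   grep -r xs /etc/cron*        (cron jobs)
#   systemctl list-units         (systemd units, if applicable)
```

If the parent turns out to be PID 1, the process is likely being respawned by the init system rather than by another script.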

Rundeck - any command execution fails when running on 5.8k nodes

I'm running a rundeck server to delegate a simple script to 5.8k other linux servers.
The very simple script is below:
#!/bin/bash
A=$(hostname)
echo $A
When I run the same job with a smaller number of targets (4,089 nodes), the commands work fine.
I tried looking at my service.log page and it is not logging anything new.
Any ideas on how to run the job on all 5.8k nodes? And where should I look for errors?
Rundeck itself does not impose a limit on the number of nodes; the practical limit depends on how many concurrent executions you run and on the RAM, CPU, and disk space available.
Maybe you need to increase the Java heap size:
https://rundeck.org/docs/administration/maintenance/tuning-rundeck.html#java-heap-size
And how to adapt this to your SSH plugin:
https://rundeck.org/docs/administration/maintenance/tuning-rundeck.html#built-in-ssh-plugins
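For reference, the heap bump described in the first link usually ends up as a JVM flag in the rundeckd environment file. This is a sketch only, assuming an RPM install; the exact file path and variable name vary by install method, so verify against the linked tuning page:

```shell
# /etc/sysconfig/rundeckd (RPM) or /etc/default/rundeckd (deb):
# raise the maximum heap so dispatching to thousands of nodes
# fits in memory (values here are illustrative, not recommendations).
RDECK_JVM_OPTS="-Xms2g -Xmx4g"
```

After changing the heap settings, restart the rundeckd service for them to take effect.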

mongod main process killed by KILL signal

One of the mongo nodes in the replica set went down today. I couldn't find out what happened, but when I checked the logs on the server, I saw this message: 'mongod main process killed by KILL signal'. I tried googling for more information but failed. Basically, I'd like to know what the KILL signal is, who triggered it, and possible causes/fixes.
Mongo version 3.2.10 on Ubuntu.
The KILL signal (SIGKILL) terminates the process instantly, with no chance to exit cleanly. When it appears in a service's logs like this, it is typically issued by the kernel when something goes very wrong.
If this is the only log line left, the process was killed abruptly. Most likely your system ran out of memory and the kernel's OOM killer chose mongod (I've had this problem with other processes before). You could check whether swap is configured (using swapon -s), but you should probably consider adding more memory to the server: swap only keeps things from breaking outright, and it is very slow.
Other things worth checking are the free disk space and the syslog (/var/log/syslog).
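The OOM-killer theory is easy to confirm, because the kernel logs every kill it performs. A quick sketch; the syslog path assumes Ubuntu, as in the question:

```shell
# Kernel OOM-killer entries, if any:
dmesg 2>/dev/null | grep -iE 'out of memory|killed process' | tail
grep -i 'oom' /var/log/syslog 2>/dev/null | tail

# Swap and disk space, as suggested above:
swapon -s 2>/dev/null
df -h /
```

If the OOM killer is the culprit, you will see a line naming mongod along with the memory accounting that led to the kill.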

bind9 (named) does not start in multi-threaded mode

From the bind9 man page, I understand that the named process starts one worker thread per CPU if it is able to determine the number of CPUs; if it is unable to, a single worker thread is started.
My question is: how does it calculate the number of CPUs? I presume that by CPU it means cores. The Linux machine I work on is customized, runs kernel 2.6.34, and does not have the lscpu or nproc utilities. named starts a single thread even if I give the -n 4 option. Is there any other way to force named to start multiple threads?
Thanks in advance.
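On a stripped-down 2.6.x system without lscpu or nproc, the CPU count the system exposes can still be checked by hand, which helps rule out named seeing only one CPU. A sketch using only procfs and POSIX getconf:

```shell
# Each online CPU appears as a "processor" line in /proc/cpuinfo:
grep -c '^processor' /proc/cpuinfo

# POSIX way, usually available even on minimal systems:
getconf _NPROCESSORS_ONLN
```

If both report 1, named is behaving as documented and the question becomes why the kernel only exposes one CPU (e.g. SMP support or CPU hotplug configuration), rather than how to force named to start more threads.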