CentOS 7 is starting >70000 processes after startup

We have two identical (hardware-wise) workstations running CentOS 7. On one of the machines (the "bad" one), we observe lags, and it is generally less responsive:
After rebooting both machines, neither showed a difference in CPU/memory load, but the latest PID (checked with ps -A) is extremely different. The "good" machine has a PID around 6,000, whereas the "bad" machine is above 70,000. I checked the "bad" machine directly after startup with ps -A and top, and "dracut" and possibly related processes (gzip, xz) pop up a couple of hundred times.
What I think is that on the "bad" machine some configuration is wrong, leading to the startup of these subprocesses, which results in an overall less responsive system.
My questions now:
How can I precisely log the startup on the machines?
How can I eventually solve the issue?
And if this is dracut related, how can I check the dracut configurations and compare them between both machines?
Thank you very much.
christian
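
A starting point for the first and third questions, sketched as shell commands (a minimal sketch; the paths for the other machine's config are placeholders):

# inspect the current boot with systemd's own tooling
systemd-analyze blame            # per-unit startup times, slowest first
systemd-analyze critical-chain   # the chain of units that gated boot
journalctl -b -o short-monotonic > boot.log   # full boot log with timestamps

# compare dracut configuration between the two machines
diff /etc/dracut.conf /path/from/other/machine/dracut.conf
diff -r /etc/dracut.conf.d /path/from/other/machine/dracut.conf.d

# list what actually went into the running initramfs
lsinitrd | less

If dracut really is being re-run at boot, the journalctl output should show which unit spawned it.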

Related

Why does the 1.69.2 vscode server have such high CPU usage (>50%)?

I use the latest VS Code version, 1.69.2, and connect remotely to my cloud VM. After one or two days, I found that the CPU usage is very high. The detail of the process is:
my-user 18954 17082 0 12:10 ? 00:00:04 /home/my-user/work/.vscode-server/bin/3b889b090b5ad5793f524b5d1d39fda662b96a2a/node /home/my-user/work/.vscode-server/bin/3b889b090b5ad5793f524b5d1d39fda662b96a2a/out/bootstrap-fork --type=extensionHost --transformURIs --useHostProxy=false
There are 8 node processes in total, and each one's CPU usage is greater than 50%.
The question is:
What process is this?
Why is the CPU usage so high?
When I close all the connected windows and then reconnect to my remote VM, these processes are still there. Why are these processes not closed automatically?
Is this a bug in VS Code 1.69.2?
VS Code uses Node; I suppose it's one of those processes used for autocomplete that scans your files, but it never ends. I have to kill them manually; one node process uses 100% of the CPU.
I solved the issue by removing the .vscode-server folder created in my home folder. You have to do this from a remote shell (not from the VS Code terminal), for example as sketched below.
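
From a plain SSH session on the VM (a sketch; assumes the server folder is in its default location):

# stop any leftover vscode-server processes, then remove the server install
pkill -f .vscode-server
rm -rf ~/.vscode-server

VS Code should re-download the server automatically the next time you connect.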
I solved the issue by removing the Settings Sync plugin.

WSL (Windows Subsystem for Linux): how to set the CPU / core for a process?

I asked this question several days ago and finally got a definitive answer when I had some time earlier today to install Ubuntu on a VM. Basically, it would seem that WSL does not behave correctly when attempting to set the CPU of a program. On WSL, my code produced the same output whether I ran it with or without sudo: all threads printed seemingly at random. However, on my Ubuntu VM, running the same code without sudo had the same effect, while running it with sudo achieved the code's original goal and starved the other 2 threads, allowing only the process with the highest affinity (in this case 40) to print.
I was hoping someone here knew the WSL very intimately and could help me achieve the intended behavior on it.
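
For reference, the intended behavior can be expressed with standard Linux shell tools (a sketch; 1234 is a placeholder PID, and whether WSL honors these calls is exactly what is in question here):

# pin a command to a single core (CPU affinity)
taskset -c 0 ./my_program
# give an already-running process SCHED_FIFO real-time priority 40 (needs sudo)
sudo chrt -f -p 40 1234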

CentOS - my CentOS 5 server (hosting Asterisk) always has a process with large CPU usage

I have a CentOS 5 server which hosts Asterisk 13.
The server worked fine last week, but now the top command always shows me a process with a large amount of CPU usage. When I kill the process, a few seconds later another command with large CPU usage starts. Many times the process's command is ".syslog", but there are other commands like "qjennjifes", "vnvebynufu", and other unknown commands like that.
1) Check that you have the recommended firewall and fail2ban settings.
2) Check that you are not under DoS/DDoS with "sip show channels".
3) Check that your system has not been hacked and that no compromised software is running on your host (see the commands sketched below).
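
For points 2 and 3, a minimal sketch (assumes the Asterisk CLI is reachable with asterisk -rx; 1234 is a placeholder PID):

# look for unexpected SIP traffic
asterisk -rx "sip show channels"
# verify installed files against the RPM database; binaries with changed checksums are suspicious
rpm -Va | grep '^..5' | head
# see which binary a suspicious process is actually executing
ls -l /proc/1234/exe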

Why are there several crond processes on CentOS?

I have a server whose CPU usage is very high, and I find there are many crond processes on this server. I cannot understand why this occurs. Does anyone know the reason? Please tell me.
When I run "ps aux | grep crond" on this server, I see the following:
[screenshot of the ps output, not reproduced here]
crond forks a process for each job it executes. In your case, it looks like several jobs are being started every five minutes. All of them, though, appear to be waiting for I/O (that's the meaning of the "D" process state in the 8th column of ps output, according to the ps man page), and thus are not contributing to CPU load.
If you want to know what's eating the CPU, start with top.
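
If you prefer a one-shot snapshot over top's live view, something like this works with standard procps (inside top itself, pressing P sorts by CPU):

# top CPU consumers, highest first
ps -eo pid,stat,pcpu,pmem,comm --sort=-pcpu | head -15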
crond is not the cause of the problem; I suggest you use top to check which process is actually costing you the CPU.

Can I configure icecream (icecc) to do zero local jobs

I'm trying to build a project on a rather underpowered system (an Intel Compute Stick with 1 GB of RAM). Some of the compilation steps run out of memory. I've configured icecc so that it can send some jobs to a more powerful machine, but it seems that icecc will always run at least one job on the local machine.
I've tried setting ICECC_MAX_JOBS="0" in /etc/icecc/icecc.conf (and restarting iceccd), but the comments in this file say:
# Note: a value of "0" is actually interpreted as "1", however it
# also sets ICECC_ALLOW_REMOTE="no".
I also tried disabling the icecc daemon on the compute stick by running /etc/init.d/icecc stop. However, it seems that icecc is still putting one job on the local machine (perhaps if the daemon is off it puts all jobs on the local machine?).
The project is makefile-based, and it appears that I'm stuck on a bottleneck step where calling make with -j > 1 still only issues one job, and this single compilation exhausts the system memory.
The only workaround I can think of is to compile on a different system and ship the binaries back over, but I expect to enter a tweak/build/evaluate cycle on this platform, so I'd like to be able to work from the compute stick directly.
Both systems are running Ubuntu 14.04, if that helps.
I believe it is not supported, since icecc falls back to compiling on the host machine itself if there are network issues. The best solution would be to compile on the remote machine and copy the resulting binary back.
Have you tried setting ICECC_TEST_REMOTEBUILD in the client's terminal (where you run make)?
export ICECC_TEST_REMOTEBUILD=1
In my tests this always forces all sources to be compiled remotely.
Just remember that linking is always done on the local machine.
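
Putting that together, the invocation on the compute stick might look like this (a sketch; /usr/lib/icecc/bin is an assumption based on Ubuntu's packaging and may differ on your system):

# force remote compilation and make sure the icecc compiler wrappers are found first
export ICECC_TEST_REMOTEBUILD=1
export PATH=/usr/lib/icecc/bin:$PATH
make -j8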