Lustre hangs while mounting OSS

I have installed the parallel file system Lustre from RPMs, following this slide deck.
I set up two nodes, A and B.
I installed the MDS and MDT on node A; mounting them was successful.
But after formatting the OSS/OST on node B with mkfs.lustre, I mounted it and the mount hung indefinitely.
It prints this error every 120 seconds:
INFO: task mount.lustre:1541 blocked for more than 120 seconds.
Not tainted 2.6.32-504.8.1.el6_lustre.x86_64 #1
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
Why does this happen? Or can you point me to a better tutorial or share your experience? The Lustre version is 2.7.0.
Thanks a lot.

This is an informational message. As the message says, you can echo 0 to "hung_task_timeout_secs" to stop it from being printed, but I would not recommend that.
Instead, try lowering the threshold at which the cache is flushed (e.g. from 40% down to 10% or less) by setting "vm.dirty_ratio=5" and "vm.dirty_background_ratio=5" in /etc/sysctl.conf. Apply the change with the sysctl -p command; there is no need to reboot the system.
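A minimal sketch of that change, assuming the stock /etc/sysctl.conf location (run as root):
# append the lowered dirty-page thresholds to /etc/sysctl.conf
cat >> /etc/sysctl.conf <<'EOF'
vm.dirty_ratio = 5
vm.dirty_background_ratio = 5
EOF
# apply the settings immediately, without a reboot
sysctl -p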

Related

Why does the 1.69.2 VS Code server have such high CPU usage (>50%)?

I use the latest VS Code, version 1.69.2, and connect remotely to my cloud VM. After one or two days I found that the CPU usage was very high. The details of the process are:
my-user 18954 17082 0 12:10 ? 00:00:04 /home/my-user/work/.vscode-server/bin/3b889b090b5ad5793f524b5d1d39fda662b96a2a/node /home/my-user/work/.vscode-server/bin/3b889b090b5ad5793f524b5d1d39fda662b96a2a/out/bootstrap-fork --type=extensionHost --transformURIs --useHostProxy=false
There are 8 node processes in total, and each one's CPU usage is greater than 50%.
My questions are:
What are these processes?
Why is the CPU usage so high?
When I close all the connected windows and then reconnect to my remote VM, these processes are still there. Why are they not closed automatically?
Is this a bug in VS Code 1.69.2?
VS Code uses Node; I suppose these are the processes used for autocomplete that scan your files, but they never finish. I have to kill them manually; one node process uses 100% of the CPU.
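For example, roughly like this from a shell on the remote VM (the PID below is just the one shown in the question; use the PIDs from your own output):
# list the vscode-server node processes and their CPU usage
ps aux | grep '[.]vscode-server'
# kill them by PID, taken from the output above
kill 18954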
I solved the issue by removing the .vscode-server folder created in my home folder. You have to do this from a remote shell (not from vscode terminal).
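A minimal sketch of that cleanup, assuming the server files live in the default ~/.vscode-server location:
# from an SSH session on the remote VM, not from the VS Code integrated terminal
rm -rf ~/.vscode-server
# the folder is recreated automatically the next time VS Code connects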
I solved the issue by removing the Settings Sync plugin.

Artemis: AMQ222210: Storage usage is beyond max-disk-usage. System will start blocking producers

I'm sending a message from Application A to Artemis but I'm getting this error from Application A:
AMQ212054: Destination address=my-service is blocked. If the system is configured to block make sure you consume messages on this configuration.
Looking at the logs of Artemis starting up, this is what I see, which I believe is the cause:
AMQ222210: Storage usage is beyond max-disk-usage. System will start blocking producers
I've looked at the documentation here and found nothing that could help. I've also logged into the running container and changed 'max-disk-usage' to 100 as per my Google research, and so far nothing has helped.
I'm running artemis using the following command:
docker run -it --rm -e ARTEMIS_USERNAME=artemis -e ARTEMIS_PASSWORD=artemis -p 8161:8161 -p 61616:61616 vromero/activemq-artemis
Any help is appreciated. Thank you.
You are receiving this message because your computer's disk space is over 90% full and Artemis blocks producers once this happens. To solve your problem you can either:
Free up disk space on your computer so that usage drops below 90%.
Increase how full your disk can be before Artemis blocks producers. To do this you need to modify the broker configuration file, which is located at:
path-to-broker\artemis\etc\broker.xml
In this file there is a tag labeled max-disk-usage, which is set to 90 by default. Simply increase this to 100 (or whatever value you feel comfortable with); see the example below.
Note that the reason Artemis configures your brokers to start blocking producers once your computer's disk space usage reaches 90% or above is to prevent potentially using up all of your disk space in the case of a message backlog.
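A rough sketch of that edit, assuming the default <max-disk-usage>90</max-disk-usage> entry is present verbatim; adjust the path to wherever your broker.xml actually lives, and restart the broker afterwards:
# bump the limit from 90% to 100% in place
sed -i 's|<max-disk-usage>90</max-disk-usage>|<max-disk-usage>100</max-disk-usage>|' \
    path-to-broker/artemis/etc/broker.xml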
I downloaded a different version and the issue hasn't occurred since.

CentOS - my CentOS 5 server (hosting Asterisk) always has a process with large CPU usage

I have a CentOS 5 server which hosts Asterisk 13.
The server worked fine last week, but now the top command always shows a process with a large amount of CPU usage. When I kill the process, a few seconds later another command with large CPU usage starts. Often the process's command is ".syslog", but there are other commands like "qjennjifes", "vnvebynufu", and other unknown commands like that.
1) Check that you have the recommended firewall and fail2ban settings.
2) Check that you are not under a DoS/DDoS attack using "sip show channels" (see the commands below).
3) Check that your system has not been hacked and that there is no compromised software on your host.
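A rough sketch of those checks from the shell (exact output will vary with your setup):
# list active SIP channels; an unusually high count can indicate a SIP flood / DoS
asterisk -rx "sip show channels"
# confirm fail2ban is running and see which jails are active
fail2ban-client status
# review the currently loaded firewall rules
iptables -L -n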

Why are there several crond processes on CentOS?

I have a server whose CPU usage is very high, and I found that there are many crond processes running on it. I cannot understand why this occurs. Does anyone know the reason? Please tell me.
This is what I see when I run "ps aux | grep crond" on this server (screenshot omitted).
crond forks a process for each job it executes. In your case, it looks like several jobs are being started every five minutes. All of them, though, appear to be waiting for I/O (that's the meaning of the "D" process state in the 8th column of ps output, according to the ps man page), and thus are not contributing to CPU load.
If you want to know what's eating the CPU, start with top.
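For example, something along these lines (the awk column index assumes the standard "ps aux" output format, where STAT is the 8th column):
# show only the processes currently in uninterruptible sleep ("D" state)
ps aux | awk 'NR == 1 || $8 ~ /^D/'
# interactively watch which processes are actually consuming CPU
top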
crond is not the cause of the problem; I suggest you use top to find the processes with high CPU usage.

mongod main process killed by KILL signal

One of the mongo nodes in the replica set went down today. I couldn't find out what happened, but when I checked the logs on the server, I saw this message: 'mongod main process killed by KILL signal'. I tried googling for more information but failed. Basically I would like to know what the KILL signal is, who triggered it, and possible causes/fixes.
Mongo version 3.2.10 on Ubuntu.
The KILL signal means that the app will be killed instantly and there is no chance left for the process to exit cleanly. It is issued by the system when something goes very wrong.
If this is the only log left, it was killed abruptly. This probably means that your system ran out of memory (I've had this problem with other processes before). You could check whether swap is configured on your machine (using swapon -s), but perhaps you should consider adding more memory to your server, because swap would only keep it from crashing, and it is very slow.
Other things worth looking at are the free disk space left and the syslog (/var/log/syslog).
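A quick sketch of those checks (the log path assumes an Ubuntu-style layout, matching the server in the question):
# is any swap configured?
swapon -s
# did the kernel OOM killer terminate mongod?
dmesg | grep -i 'killed process'
grep -i 'out of memory' /var/log/syslog
# how much free disk space is left?
df -h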