WSL (Windows Subsystem for Linux): how to set CPU / core scheduling?

I asked this question several days ago and finally got a definitive answer when I had some time earlier today to install Ubuntu on a VM. Basically, it seems that WSL does not behave correctly when attempting to set the CPU scheduling of a program. On WSL, when I ran my code both with and without sudo, the output was the same: all threads printed seemingly at random. However, on my Ubuntu VM, running the same code without sudo had the same effect, while running it with sudo achieved the original goal of the code and starved the other two threads, allowing only the process with the highest affinity (in this case 40) to print.
I was hoping someone here knew the WSL very intimately and could help me achieve the intended behavior on it.
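The asker's program isn't shown, but here is a rough Python sketch of the same kind of experiment (the thread names, the use of os.sched_setaffinity / os.sched_setscheduler, and the SCHED_FIFO priorities are my assumptions, not details from the question). It pins three threads to one core and raises the real-time priority of one of them; with sudo on a real Linux kernel the high-priority thread should dominate the output, while on WSL the threads may keep interleaving:

```python
# Minimal sketch (assumes Linux-only os.sched_* APIs and SCHED_FIFO priorities).
import os
import threading

def worker(name, rt_priority):
    os.sched_setaffinity(0, {0})        # pin the calling thread (pid 0) to core 0
    try:
        # SCHED_FIFO requires root privileges, hence the sudo difference
        os.sched_setscheduler(0, os.SCHED_FIFO, os.sched_param(rt_priority))
    except PermissionError:
        print(f"{name}: no permission for SCHED_FIFO, staying on the default policy")
    for _ in range(1000):
        print(name, flush=True)

threads = [
    threading.Thread(target=worker, args=("prio-40", 40)),
    threading.Thread(target=worker, args=("prio-1a", 1)),
    threading.Thread(target=worker, args=("prio-1b", 1)),
]
for t in threads:
    t.start()
for t in threads:
    t.join()
```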

Related

Why does the 1.69.2 VS Code server have such high CPU usage (>50%)?

I use the latest VS Code version, 1.69.2, and connect remotely to my cloud VM. After one or two days, I found that the CPU usage is very high. The detail of the process is:
my-user 18954 17082 0 12:10 ? 00:00:04 /home/my-user/work/.vscode-server/bin/3b889b090b5ad5793f524b5d1d39fda662b96a2a/node /home/my-user/work/.vscode-server/bin/3b889b090b5ad5793f524b5d1d39fda662b96a2a/out/bootstrap-fork --type=extensionHost --transformURIs --useHostProxy=false
There are 8 node processes in total, and each one's CPU usage is greater than 50%.
The question is:
What process is this?
Why is the CPU usage so high?
When I close all the connected windows and then reconnect to my remote VM, these processes are still here. Why are these processes not closing automatically?
Is this a bug in VS Code 1.69.2?
VS Code uses Node; I suppose it's one of those processes used for autocomplete that scans your files, but it never ends. I have to kill them manually; one node process uses 100% of the CPU.
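If you want to see which vscode-server node processes are actually burning CPU before killing them, a quick check along these lines can help (my own illustration, not part of VS Code; it assumes the third-party psutil package is installed):

```python
# List node processes spawned from ~/.vscode-server and their CPU usage.
import psutil

for proc in psutil.process_iter(["pid", "name", "cmdline"]):
    cmdline = " ".join(proc.info["cmdline"] or [])
    if proc.info["name"] == "node" and ".vscode-server" in cmdline:
        cpu = proc.cpu_percent(interval=0.5)   # sample CPU usage over 0.5 s
        print(f"PID {proc.info['pid']:>7}  {cpu:5.1f}%  {cmdline[:80]}")
        # proc.kill()  # uncomment to terminate a runaway process
```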
I solved the issue by removing the .vscode-server folder created in my home folder. You have to do this from a remote shell (not from the vscode terminal).
I solved the issue by removing the Settings Sync plugin.

MongoDB Performance in Docker

I did an experiment by running a Python app that writes 2000 records into MongoDB.
The details of my experimental setup are as follows:
Test 1: Local PC - Python app running on the local PC with MongoDB on the local PC (baseline)
Test 2: Docker - Python app in a Linux container with MongoDB in a Linux container, with a persistent volume
Test 3: Docker - Python app in a Linux container with MongoDB in a Linux container, without a persistent volume
I've plotted the results in a chart: on average, writing the data on the local PC takes about 30 seconds, whereas on Docker it takes 80-plus seconds. Hence it seems that writing on Docker is almost 3 times slower than writing on the local PC itself.
If I want to improve the write speed or performance of MongoDB in a Docker container, what is the recommended practice? Or should I run MongoDB on an external volume, without Docker?
Thank you!
[graph of the write times]
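The original app isn't shown, but a minimal version of such a benchmark, assuming individual inserts via pymongo and placeholder database/collection names, might look like this:

```python
# Rough sketch of the write test described above; connection string,
# names, and document shape are assumptions. Requires the pymongo package.
import time
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")   # adjust for the container setup
collection = client["perf_test"]["records"]

start = time.perf_counter()
for i in range(2000):                               # 2000 individual writes, as in the test
    collection.insert_one({"seq": i, "payload": "x" * 100})
elapsed = time.perf_counter() - start
print(f"wrote 2000 records in {elapsed:.1f} s")
```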
Your system is not consistent in many ways: dynamic storage and CPU performance, other processes, dynamic system settings, etc. There are a LOT of underlying factors in storage alone.
60-second tests are not long enough for anything.
Simple operations are not a good enough basis for baseline comparisons.
There is ZERO performance impact on storage and CPU in the case of containers; there is an impact on networking, but I assume that is not applicable here.
Databases and database management systems must be optimized in special ways; there is no "install and run" approach. We sysadmins/DB admins usually need days to get one running smoothly. Also, performance changes over time.
After a couple of weeks of testing and troubleshooting, I finally got the answer, and I shall share my findings with the rest of the DevOps community or anyone facing the same issue as me.
Correct this statement if needed: Docker containers started off on Linux. Microsoft joined the container bandwagon late, and in order for (Linux) containers to work on Windows, the DevOps team needs to install WSL2. That costs extra overhead, which shows up in the processing speed.
So to improve performance with containers, the setup should be on a Linux OS instead of Windows. (And yes, the write time drops drastically.)

On VSCode-remote, why is the Python Jedi language server extension taking up excessive RAM even when the application is closed?

My machine has 64GB of RAM. I use vscode-remote to access it over SSH. I notice that RAM usage shoots up and stays high even after I have closed the session.
In the image below are the results of the htop command. There are run-jedi-language-server.py scripts running with high RAM usage.
The vscode-remote server is running on a single repository. This screenshot was taken after I had closed the vscode session.
This prevents me from starting up other applications. What explains this, and how can I get rid of this problem?
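To confirm which leftover language-server processes are holding the memory (beyond eyeballing htop), a small check like the following can help; this is my own sketch, not part of VS Code or Jedi, and it assumes the third-party psutil package is installed:

```python
# Report resident memory of leftover run-jedi-language-server.py processes.
import psutil

total = 0.0
for proc in psutil.process_iter(["pid", "cmdline", "memory_info"]):
    cmdline = " ".join(proc.info["cmdline"] or [])
    if "run-jedi-language-server.py" in cmdline:
        rss_gb = proc.info["memory_info"].rss / 1024**3
        total += rss_gb
        print(f"PID {proc.info['pid']:>7}  {rss_gb:5.2f} GiB resident")
        # proc.terminate()   # uncomment to reclaim the memory
print(f"total: {total:.2f} GiB held by leftover Jedi servers")
```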

CentOS 7 is starting >70000 processes after startup

We have two identical (hardware-wise) workstations running CentOS 7. On one of the machines (the "bad" one), we observe lags and it's generally less responsive:
After rebooting, both machines showed no difference in CPU/memory load, but the latest PID (checked with ps -A) is extremely different. The "good" machine has a PID around 6,000, whereas the "bad" machine is above 70,000. I checked the "bad" machine directly after startup with ps -A and top, and "dracut" and possibly related processes (gzip, xy) pop up a couple of hundred times.
What I think is that on the "bad" machine some configuration is wrong, leading to the startup of these subprocesses, which results in an overall less responsive system.
My questions now:
How can I precisely log the startup on the machines?
How can I eventually solve the issue?
And if this is dracut-related, how can I check the dracut configurations and compare them between the two machines?
Thank you very much.
christian
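One quick way to quantify which commands dominate right after boot, before digging into dracut itself, is to summarize the ps output. This is just a convenience sketch wrapping ps -A, nothing CentOS-specific:

```python
# Count currently running processes by command name and show the top 10.
import subprocess
from collections import Counter

out = subprocess.run(["ps", "-A", "-o", "comm="],
                     capture_output=True, text=True, check=True).stdout
counts = Counter(line.strip() for line in out.splitlines() if line.strip())
for name, count in counts.most_common(10):
    print(f"{count:6d}  {name}")
```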

Can I configure icecream (icecc) to do zero local jobs?

I'm trying to build a project on a rather underpowered system (an Intel Compute Stick with 1GB of RAM). Some of the compilation steps run out of memory. I've configured icecc so that it can send some jobs to a more powerful machine, but it seems that icecc will always do at least one job on the local machine.
I've tried setting ICECC_MAX_JOBS="0" in /etc/icecc/icecc.conf (and restarting iceccd), but the comments in this file say:
# Note: a value of "0" is actually interpreted as "1", however it
# also sets ICECC_ALLOW_REMOTE="no".
I also tried disabling the icecc daemon on the compute stick by running /etc/init.d/icecc stop. However, it seems that icecc is still putting one job on the local machine (perhaps if the daemon is off it's putting all jobs on the local machine?).
The project is Makefile-based, and it appears that I'm stuck on a bottleneck step where calling make with -j > 1 still only issues one job, and this compilation is exhausting the system memory.
The only workaround I can think of is to compile on a different system and then ship the binaries back over, but I expect to enter a tweak/build/evaluate cycle on this platform, so I'd like to be able to work from the compute stick directly.
Both systems are running Ubuntu 14.04, if that helps.
I believe it is not supported, since if there are network issues, icecc resorts to compiling on the host machine itself. The best solution would be to compile on the remote machine and copy back the resulting binary.
Have you tried setting ICECC_TEST_REMOTEBUILD in the client's terminal (where you run make)?
export ICECC_TEST_REMOTEBUILD=1
In my tests this always forces all sources to be compiled remotely.
Just remember that linking is always done on the local machine.