Thanks for stopping by!
So I just bought a Pi desktop kit for my Raspberry Pi 3B v1.2, which features an add-on module with an mSATA disk slot, a real-time clock and power control. I installed the latest Raspbian Stretch (kernel version 4.9.59-v7+) on the mSATA SSD and am now booting Raspbian from it with no SD card in the onboard card reader.
A kworker process is now constantly hogging between 8.0% and 13.5% CPU, which seems quite unnecessary, and it has annoying consequences, e.g. lagging video in Kodi. This never happened before I added the module.
I then tried perf (inspired by this thread) by running sudo perf record -D 1000 -g -a sleep 20 and then sudo perf report to figure out which kernel tasks might be responsible.
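Spelled out, with what each flag does:

sudo perf record -D 1000 -g -a sleep 20   # -a samples all CPUs, -g records call graphs, -D 1000 delays sampling by 1000 ms; sleep 20 sets the 20 s duration
sudo perf report                          # interactive breakdown of where the CPU time went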
But I can't figure out how to go on from there to reduce the workload. Could it be caused by the real-time clock on the add-on board, since __timer_delay, arch_timer_read_counter_long and arch_counter_get_cntpct seem to have high CPU usage? Other entries with high load are finish_task_switch and _raw_spin_unlock_irqrestore, but I can't guess what those are about.
Am I right that this is unnecessary CPU load, and if so, how can I reduce it?
Many thanks in advance!
I had the same issue and found the root cause: I didn't have an SD card inserted in my Raspberry Pi. When the SD card is missing, the kernel keeps polling the SD card slot, which causes the high CPU usage.
Download sdtweak.dtbo and replace the existing one under /boot/overlays/ with it, then add dtoverlay=sdtweak,poll_once to /boot/config.txt and reboot the machine. It worked for me.
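In other words, after swapping the overlay file, the relevant addition to /boot/config.txt is just this line:

# poll the SD card slot once at boot instead of continuously
dtoverlay=sdtweak,poll_once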
See also: https://github.com/raspberrypi/linux/issues/2567
You can install iotop to look at I/O load.
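For example, on Raspbian:

sudo apt-get install iotop
sudo iotop -o    # -o shows only processes that are actually doing I/O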
For high load, try these settings in /etc/sysctl.conf:
vm.vfs_cache_pressure=500     # reclaim dentry/inode caches far more aggressively than the default (100)
vm.swappiness=10              # prefer dropping page cache over swapping process memory
vm.dirty_background_ratio=1   # start background writeback when dirty pages reach 1% of RAM
vm.dirty_ratio=50             # block writers only once dirty pages reach 50% of RAM
then reload them with sysctl --system
I did an experiment with a Python app that writes 2,000 records into MongoDB.
The details of my experimental setup are as follows:
Test 1: Local PC - Python app and MongoDB both running on the local PC (baseline)
Test 2: Docker - Python app in a Linux container, MongoDB in a Linux container with a persistent volume
Test 3: Docker - Python app in a Linux container, MongoDB in a Linux container without a persistent volume
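For concreteness, the two Docker variants looked roughly like this (container and volume names are illustrative):

docker run -d --name mongo-persist -v mongodata:/data/db mongo    # Test 2: named volume for the data directory
docker run -d --name mongo-ephemeral mongo                        # Test 3: data stays in the container's writable layer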
I've charted the results: on average, writing the data takes about 30 seconds on the local PC, whereas on Docker it takes 80-plus seconds. So writing in Docker appears to be almost three times slower than writing on the local PC itself.
If I want to improve MongoDB's write performance in a Docker container, what is the recommended practice? Or should I run MongoDB directly on the host, outside Docker?
Thank you!
[Chart: average write times, local PC vs. Docker]
Your system is not consistent in many ways: storage and CPU performance vary dynamically, other processes run, system settings change, etc. There are a LOT of underlying variables in storage alone.
60-second tests are not enough for anything.
Simple operations are not good enough for baseline comparisons (see the fio example below).
There is ZERO performance impact from containers on storage and CPU; there is an impact on networking, but I assume that is not applicable here.
Databases and database management systems must be tuned specifically; there is no "install and run" approach. We sysadmins/DB admins usually need days to get one running smoothly. Also, performance changes over time.
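For a fairer storage baseline than a short insert loop, run a longer synthetic benchmark, for example with fio (all parameters here are illustrative, adjust to your workload):

fio --name=writetest --filename=/data/fio.test --rw=randwrite --bs=4k --size=1g --runtime=300 --time_based    # 5-minute random-write test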
After a couple of weeks of testing and troubleshooting, I finally got the answer, and I'll share my findings with the rest of the DevOps community and anyone facing the same issue.
Correct this statement if needed: Docker containers started off on Linux; Microsoft joined the container bandwagon late, and for Linux containers to work on Windows you need to install WSL2. That costs extra overhead, which shows up in the processing speed.
So to improve performance with containers, the setup should be on a Linux OS instead of Windows (and yes, the run time drops drastically).
We have a data warehouse server running on Debian Linux, using PostgreSQL, Jenkins and Python.
For a few days now, Jenkins and Postgres have been consuming a lot of memory. I have tried everything I could find on Google, but the issue is still there.
Can anyone give me a lead on how to reduce this memory consumption? It would be very helpful.
Below is the output from free -m:
               total        used        free      shared  buff/cache   available
Mem:           63805        9152         429       16780       54223       37166
Swap:              0           0           0
[Images: the postgresql.conf file, the system configuration, and the htop results]
Please don't post text as images. It is hard to read and process.
I don't see your problem.
Your machine has 64 GB of RAM; 16 GB are used for PostgreSQL shared memory, as you configured; 9 GB are private memory used by processes; and 37 GB are effectively free (the available column).
Linux uses available memory for the file system cache, which boosts PostgreSQL performance. The low value for free just means that the cache is in use.
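If you want to convince yourself that buff/cache is reclaimable, you can drop the caches once, purely as a demonstration (it hurts performance and is not a fix):

sync && echo 3 | sudo tee /proc/sys/vm/drop_caches    # flush dirty pages, then drop page cache, dentries and inodes
free -m                                               # buff/cache shrinks and free grows accordingly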
For Jenkins, run it with these Java options (note that PermSize/MaxPermSize are ignored on Java 8 and later, which use Metaspace instead):
JAVA_OPTS="-Xms200m -Xmx300m -XX:PermSize=68m -XX:MaxPermSize=100m"
For Postgres, start it with this option:
-c shared_buffers=256MB
These values are the ones I use on a small home lab with 8 GB of memory; you may want to increase them to match your hardware.
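If you prefer configuration files over command-line flags, the same limits can live in the service configs; on a Debian-packaged setup that would be something like this (paths assume the stock jenkins and postgresql packages):

# /etc/default/jenkins -- cap the Jenkins JVM heap
JAVA_ARGS="-Xms200m -Xmx300m"

# postgresql.conf -- same effect as -c shared_buffers=256MB
shared_buffers = 256MB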
I have a 300 GB external drive connected to a Raspberry Pi. I changed /boot/cmdline.txt so that the SD card is used only to boot the system; the root partition / is now on /dev/sda2 (the external drive). How can I increase the size of the root partition? I want to merge all the unallocated space into /dev/sda2.
I tried using GParted, but the partition must be unmounted before it can be resized:
sudo umount /dev/sda2
umount: /: device is busy.
[Image: GParted partition layout]
I used fdisk on Ubuntu to achieve something similar. Raspbian is very similar to Ubuntu, so this should work the same way.
You have to actually remove the original partition and the extended partition before creating a new, larger one. This doesn't lose any data, because deleting and recreating entries only edits the partition table, not the data blocks, as long as the new partition starts at the same sector. Still, I would take a backup. :)
I wrote a wiki on it here: http://headstation.com/archives/resize-ubuntu-filesystem/
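In short, the procedure is (a sketch, assuming /dev/sda2 is the last partition on the disk and the recreated partition keeps the same start sector):

sudo fdisk -l /dev/sda      # note the start sector of /dev/sda2
sudo fdisk /dev/sda         # delete partition 2, recreate it with the SAME start sector and a larger end, then write with 'w'
sudo partprobe /dev/sda     # make the kernel re-read the partition table
sudo resize2fs /dev/sda2    # grow the ext4 filesystem; this works online, even on a mounted root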
Given that a mongod process will consume quite a bit of the available RAM, is there a way to 'protect' a certain amount of RAM for the OS's own use on CentOS?
Or is this even necessary from the OS's perspective? I assume that, like most operating systems, CentOS will take what it needs regardless.
I understand that if you are seeing this in practice it's time to scale out/up; this is a purely theoretical question at this point, as I am learning CentOS.
MongoDB doesn't manage memory itself; it delegates that responsibility to the OS. This is explained beautifully in the link below.
MongoDB Memory Management
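That said, if you do want a hard OS-side cap on mongod, one option (my suggestion, not from the linked page) is a systemd drop-in, assuming mongod runs as mongod.service:

# /etc/systemd/system/mongod.service.d/memory.conf
[Service]
MemoryLimit=12G    # cgroup-v1 systems such as CentOS 7; use MemoryMax= on cgroup v2

Then run systemctl daemon-reload and restart mongod.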
I was looking for an answer but didn't find one.
I'm trying to create a new VM to develop a web application. What would be the optimal processor settings?
I have an i7 (6th gen) with hyper-threading.
Host OS: Windows 10. Guest OS: CentOS.
Off topic: should the RAM I give to the VM be 50% of my physical memory? Would that be OK? (I have 16 GB of RAM.)
Thanks!
This is referred to as 'right-sizing' a VM, and it depends on the application workload that will run inside it. Ideally, you want to give the VM the minimum amount of resources the app requires to run correctly. "Correctly" is subjective, based on your expectations.
Inside your VM (CentOS) you can run top to see how much memory and CPU are being used. You can also install htop, which you may find friendlier than top.
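On CentOS, htop comes from the EPEL repository:

sudo yum install epel-release
sudo yum install htop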
RAM
If you see a low percentage of RAM being used, you can probably reduce what you're giving the VM. If you see any swap being used (paging to disk), you may want to increase the RAM. Start with 2 GB and see how the app behaves.
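To check for swapping inside the guest:

free -m     # non-zero values in the Swap row mean the guest has paged
vmstat 5    # sustained non-zero si/so columns mean active swapping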
CPU
You may want to start with no more than 2 vCPUs, check top to see how utilized the application is under load, and then reassess whether you need more or fewer vCPUs.
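Inside the guest, top's per-CPU view helps you judge whether the vCPUs you assigned are actually busy:

top    # press '1' to expand the summary into one line per vCPU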
The way a hosted hypervisor (VMware Workstation) handles guest CPU usage is through a CPU scheduler. When you give a VM x vCPUs, the VM has to wait until that many cores are free on the host CPU to do 'work'. The more vCPUs you give it, the harder (slower) it is to schedule. It's more complicated than this, but I'm trying to keep it high level. See this CPU scheduling deep dive.