When running
bitbake core-image-xxxx
the build automatically selects 8 threads (since my CPU has 8 cores) to build the image.
My system has 72 GB of RAM. Can I force bitbake to run with more threads?
Or is there any way to tell bitbake to use more RAM?
To increase thread usage:
Add the following to your local.conf inside the build/conf directory, replacing x and y with your desired values:
PARALLEL_MAKE = "-j x"
BB_NUMBER_THREADS = "y"
PARALLEL_MAKE defines how many parallel jobs make should use (via make -j) during do_compile.
BB_NUMBER_THREADS defines the maximum number of tasks bitbake runs in parallel.
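For example, on the 8-core, 72 GB machine from the question you might start with (the exact numbers are illustrative, not a recommendation):
PARALLEL_MAKE = "-j 8"
BB_NUMBER_THREADS = "8"
Note that the two multiply: 8 bitbake tasks each running make -j 8 can spawn up to 64 compiler processes at peak, which is where your 72 GB of RAM helps.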
I do not know of a way to make bitbake use more memory directly, but if you want to speed up the build you can do it with a ramdisk:
https://www.linuxbabe.com/command-line/create-ramdisk-linux
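A minimal sketch of the ramdisk approach, assuming a tmpfs mount; the /mnt/ramdisk path and the 32G size are choices you would adapt to your machine:
sudo mkdir -p /mnt/ramdisk
sudo mount -t tmpfs -o size=32G tmpfs /mnt/ramdisk
Then point the build output there in local.conf:
TMPDIR = "/mnt/ramdisk/tmp"
TMPDIR is the standard variable for the build output directory; putting it on tmpfs trades RAM for I/O speed, and the contents are lost on reboot.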
https://www.yoctoproject.org/docs/latest/ref-manual/ref-manual.html#var-PARALLEL_MAKE
https://www.yoctoproject.org/docs/latest/ref-manual/ref-manual.html#var-BB_NUMBER_THREADS
Related
When building some packages, I found OOM messages in dmesg.
The build process was killed and terminated.
Is there any way to set memory usage limits?
The easiest way for me is to specify the number of concurrent tasks when building:
$ BB_NUMBER_THREADS=2 bitbake <target>
where 2 is the number of concurrent build tasks.
You can also set this in your local.conf. Here's another answer on the topic.
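For a persistent setting, the equivalent line in local.conf would be:
BB_NUMBER_THREADS = "2"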
You can limit the number of parallel make jobs.
Set PARALLEL_MAKE in your local.conf:
PARALLEL_MAKE ?= "-j 1"
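If only one or two recipes blow the memory budget, you can also restrict just those. A sketch using the per-recipe override syntax (webkitgtk here is only an example of a memory-hungry recipe):
PARALLEL_MAKE_pn-webkitgtk = "-j 1"
Newer Yocto releases spell this override as PARALLEL_MAKE:pn-webkitgtk instead.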
I'm using Microsoft HPC Pack 2012 to run video processing jobs on a Windows cluster. A run is organized as a single job with hundreds of independent tasks. If a single task is scheduled on a node, it uses all cores, but at nowhere near 100% utilization. One way to increase CPU utilization is to run more than one task at a time per node. I believe that in my use case, running each task on every core would achieve the best CPU utilization. However, after lots of trying I have not been able to achieve it. Is it possible?
I have been able to run multiple tasks on the same node on separate cores. I achieved this by setting the job UnitType to Node, setting the job and task types to IsExclusive = False, and setting the MaximumNumberOfCores on a job to something less than the number of cores on the machine. For simplicity, I would like to run one task per core, but typically this would exhaust the memory budget. So, I have set EstimatedProcessMemory to the typical memory usage.
This works, but every set of parameters I have tried leaves resources on the table. For instance, say I have a machine with 12 cores, 15 GB of free RAM, and each task consumes 2 GB. Then I can run 7 tasks on this machine. If I set the task MaximumNumberOfCores to 1, I use only 7 of my 12 cores. If I set it to 2 with EstimatedProcessMemory at 2048, HPC interprets this as the memory PER CORE, so I run only 3 tasks on 2 cores and 3 tasks on 1 core, using 9 of my 12 cores. And so on.
Is it possible to simply run as many tasks as will fit in memory, each running on all of the cores? Or to dynamically assign the number of cores per task in a way that doesn't have the shortcomings mentioned above?
From the bind9 man page, I understand that the named process starts one worker thread per CPU if it is able to determine the number of CPUs; if it cannot, a single worker thread is started.
My question is: how does it calculate the number of CPUs? I presume that by CPU it means cores. The Linux machine I work on is customized, runs kernel 2.6.34, and does not have the lscpu or nproc utilities. named starts a single thread even if I give the -n 4 option. Is there any other way to force named to start multiple threads?
Thanks in advance.
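As a first check, you can count the CPUs the kernel actually exposes even without lscpu or nproc; named's autodetection can only see what the kernel reports. Assuming /proc is mounted:
grep -c ^processor /proc/cpuinfo
getconf _NPROCESSORS_ONLN
If these report 1, the kernel is presenting only one CPU to userspace (e.g. SMP disabled in the kernel config), which would explain why named falls back to a single worker thread.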
In the FreeBSD kernel, how can I first stop all the cores and then run my code (it can be a kernel module) on all the cores? Also, when finished, the cores should restore their contexts and continue executing.
Linux has APIs for this, and I believe FreeBSD also has a set of APIs to do it.
edit:
Most likely I did not make clear what I want to do. First, the machine is x86_64 SMP.
I set a timer; when it expires, I want to stop all the threads (including kernel threads) on all cores, save their contexts, run my code on one core to do some kernel work, and then restore the contexts and let them continue running. This should happen periodically. The other kernel threads and processes should not be affected (their relative priorities unchanged).
I assume that your "code" (the kernel module) actually takes advantage of SMP inherently already.
So, one approach you can do is:
Set the affinity of all your processes/threads to your desired CPUs (sched_setaffinity).
Set each of your threads to use Real-Time (RT) scheduling.
If it is a kernel module, you can do this manually in your module (I believe), by changing the scheduling policy for your task_struct to SCHED_RR (or SCHED_FIFO) after pinning each process to a core.
In userspace, you can use the FreeBSD rtprio command (http://www.freebsd.org/cgi/man.cgi?query=rtprio&sektion=1):
rtprio, idprio -- execute, examine or modify a utility's or process's
realtime or idletime scheduling priority
The effect will be that your code runs ahead of any other non-essential process in the system until it finishes.
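As a concrete sketch: on FreeBSD the userspace analogue of sched_setaffinity is cpuset(1), which combines naturally with rtprio (the CPU list and binary name here are placeholders):
# pin to CPU 0 and run at the highest realtime priority
cpuset -l 0 rtprio 0 ./my_workload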
I've set up BOINC on my Ubuntu 12.04 server, but it runs only one task at a time.
How do I set the number of tasks that run at a time?
Thanks
As stated by zero323, each task runs on one CPU. The default number of available CPUs is 1, so you will have to change this in the BOINC preferences.
Go to BOINC manager > Tools > Computing preferences > Processor usage
Find "On multiprocessor systems, use at most" and replace the value with your desired percentage of processors (if you want them all, use 100%; on a quad-core machine you will then have 4 tasks running at a time).
I also recommend taking a look at the "Use at most ... % CPU time" option, since each CPU will run at that percentage of usage. Keep an eye on your CPU core temperatures with the sensors command.
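On a headless Ubuntu server without the BOINC Manager GUI, the same preference can be set by hand. A sketch assuming the Debian/Ubuntu package layout, where the client reads /etc/boinc-client/global_prefs_override.xml:
<global_preferences>
  <max_ncpus_pct>100.0</max_ncpus_pct>
</global_preferences>
Then tell the running client to reread it:
boinccmd --read_global_prefs_override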