BOINC: Run several tasks at the same time, ubuntu/debian - boinc

I've set up BOINC on my Ubuntu 12.04 server, but it runs only one task at a time.
How do I raise the limit on the number of tasks that run at the same time?
Thanks

As stated by zero323, each task runs on one CPU. The default number of available CPUs is 1, so you will have to change this in the BOINC preferences.
Go to BOINC Manager > Tools > Computing preferences > Processor usage.
Find On multiprocessor systems, use at most and set the value to the percentage of processors you want to use (if you want them all, use 100%; on a quad-core machine you will then have 4 tasks running at a time).
I also recommend taking a look at the Use at most option, since each CPU will be loaded to that percentage. Keep an eye on your CPUs' core temperatures with the sensors command.
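Since this is a server install, you may not have the BOINC Manager GUI at hand. As a sketch (assuming the stock Ubuntu/Debian boinc-client package, which keeps its working directory in /var/lib/boinc-client), the same preference can be written to global_prefs_override.xml and reloaded with boinccmd:
# Let BOINC use all processors (adjust the percentage to taste).
sudo tee /var/lib/boinc-client/global_prefs_override.xml > /dev/null <<'EOF'
<global_preferences>
  <max_ncpus_pct>100.0</max_ncpus_pct>
</global_preferences>
EOF
# Tell the running client to re-read the override file.
boinccmd --read_global_prefs_override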

Related

How do you run as many tasks as will fit in memory, each running on all cores, in Windows HPC?

I'm using Microsoft HPC Pack 2012 to run video processing jobs on a Windows cluster. A run is organized as a single job with hundreds of independent tasks. If a single task is scheduled on a node, it uses all cores, but not at nearly 100%. One way to increase CPU utilization is to run more than one task at a time per node. I believe in my use case, running each task on every core would achieve the best CPU utilization. However, after lots of trying I have not been able to achieve it. Is it possible?
I have been able to run multiple tasks on the same node on separate cores. I achieved this by setting the job UnitType to Node, setting the job and task types to IsExclusive = False, and setting the MaximumNumberOfCores on a job to something less than the number of cores on the machine. For simplicity, I would like to run one task per core, but typically this would exhaust the memory budget, so I have set EstimatedProcessMemory to the typical memory usage.
This works, but every set of parameters I have tried leaves resources on the table. For instance, let's say I have a machine with 12 cores, 15GB of free RAM, and each task consumes 2GB. Then I can run 7 tasks on this machine. If I set task MaximumNumberOfCores to 1, I only use 7 of my 12 cores. If I set it to 2, suppose I set EstimatedProcessMemory to 2048. HPC interprets this as the memory PER CORE, so I only run 3 tasks on 2 cores and 3 tasks on 1 core, so 9 of my 12 cores. And so on.
Is it possible to simply run as many tasks as will fit in memory, each running on all of the cores? Or to dynamically assign the number of cores per task in a way that doesn't have the shortcomings mentioned above?

Is there a way to set CPU affinity for PostgreSQL processes through its configuration files?

I am using PostgreSQL 9.5 with a Java 8 Application on Windows OS (System: i5 2nd Generation). I noticed that when my application is in execution, there are several PostgreSQL processes / sub-processes that are created/removed dynamically.
These PostgreSQL processes use almost all of the CPU (>95%), which causes problems for the other applications installed on my system.
I recently came across CPU affinity. For the time being, I am running a PowerShell script (outside of my Java application) that periodically checks and sets the desired CPU affinity for all running PostgreSQL processes.
I am looking for a way that does not require an external script, and/or needs only a one-time configuration.
Is there a configuration supported by PostgreSQL 9.5 through which we can set max CPU cores to be used by PostgreSQL processes?
I looked for the solution, but could not find any.
There is no way to set this in the PostgreSQL configuration.
But you can start your PostgreSQL server from cmd.exe with:
start /affinity 3 C:\path\to\postgres.exe -D C:\path\to\data\directory
That allows PostgreSQL to run only on the first two cores.
The /affinity argument is a bitmask, written in hexadecimal, in which each bit stands for one core: the first core counts as 1, the second as 2, the third as 4, the fourth as 8, and so on. You pass the sum of the cores you want PostgreSQL to run on. For example, to run it only on the third and fourth cores you would use 4 + 8 = 12, which is C in hexadecimal, i.e. /affinity C.
This should work, since the Microsoft documentation says:
Process affinity is inherited by any child process or newly instantiated local process.
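For example (just a sketch: the install path, data directory and core mask below are placeholders for your own setup), you could launch the server through pg_ctl under start /affinity, and every backend process PostgreSQL spawns will inherit the mask thanks to that inheritance rule:
rem Restrict PostgreSQL to the third and fourth cores (binary 1100 = hex C).
rem Adjust both paths to your installation and data directory.
start "" /affinity C "C:\Program Files\PostgreSQL\9.5\bin\pg_ctl.exe" start -D "C:\PostgreSQL\data"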

What are the Server Requirements for BluePrism tool with 5 Bots?

The company that I work for is planning to purchase the BluePrism tool, and we need the server requirements (RAM size, HDD size, etc.) to run 5 bots.
Thanks in Advance for your help.
The automation platform required depends on the nature of your automations.
You can run everything on a single node, and 8 GB of RAM and 4 cores will be enough, provided you are not automating applications that require many resources and you do not need to run anything concurrently.
On the other hand, for production purposes:
1 Blue Prism Application Server 2012 R2 (this is used to orchestrate your platform): I regularly allocate at least 8 GB of RAM and 4 cores; the HDD space used by Blue Prism itself is insignificant, so 10 GB free after the OS installation should be enough.
1 SQL Server (where the structure of your automations and the execution logs are stored): at least 8 GB of RAM; the HDD size depends on your transactions, logging and the other configuration choices you make when you automate.
1 Interactive Client (where you will monitor your platform): 4 GB of RAM, 50 GB HDD.
1 Runtime Resource (where your automations will run): here you have to take into consideration all the resources that your automation will use, in particular HDD space and processor needs. A runtime resource can only run 1 automation at a time, so if you need to run 5 different automations (what I think you call robots) concurrently, you need 5 runtime resources, each with the applications its automation requires installed on it.

KVM CPU share / priority / overselling

I have a question about KVM that I could not find any satisfying answer to on the net.
Let's say I want to create 3 virtual machines on a host with 2 CPUs. I am assigning 1 CPU to 1 virtual machine. The other 2 virtual machines should share 1 CPU. If possible, I want to give one VM 30% and the other one 70% of that CPU.
I know this does not make much sense, but I am curious and want to test it :-)
I know that hypervisors like OnApp can do that. But how do they do it?
KVM represents each virtual CPU as a thread in the host Linux system, actually as a thread in the QEMU process. So scheduling of guest VCPUs is controlled by the Linux scheduler.
On Linux, you can use taskset to force specific threads onto specific CPUs. So that will let you assign one VCPU to one physical CPU and two VCPUs to another. See, for example, https://groups.google.com/forum/#!topic/linuxkernelnewbies/qs5IiIA4xnw.
As far as controlling what percent of the CPU each VM gets, Linux has several scheduling policies available, but I'm not familiar with them. Any information you can find on how to control scheduling of Linux processes will apply to KVM.
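As a rough sketch of both ideas (assuming the three guests are managed by libvirt; the names vm1, vm2 and vm3 are placeholders), you could pin the VCPUs with virsh vcpupin and weight the two guests that share a CPU with the scheduler's relative cpu_shares. Note that shares only take effect while both guests are actually competing for the CPU:
# Pin vm1's only VCPU to physical CPU 0, and the VCPUs of vm2 and vm3 to CPU 1.
virsh vcpupin vm1 0 0
virsh vcpupin vm2 0 1
virsh vcpupin vm3 0 1
# Give vm2 and vm3 roughly a 70/30 split of CPU 1 under contention.
virsh schedinfo vm2 --set cpu_shares=700
virsh schedinfo vm3 --set cpu_shares=300
# The low-level equivalent of vcpupin is taskset on the QEMU VCPU thread, e.g.:
# taskset -pc 1 <vcpu-thread-id>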
The answers to this question may help: https://serverfault.com/questions/313333/kvm-and-virtual-to-physical-cpu-mapping. (Also that forum may be a better place for this question, since this one is intended for programming questions.)
If you search for "KVM virtual CPU scheduling" and "Linux CPU scheduling" (without the quotes), you should find plenty of additional information.

Setting up the optimal number of processors/cores per processor virtual machine (VMware)

I was looking for an answer but didn't find one.
I'm trying to create a new VM to develop a web application. What would be the optimal processor settings?
I have an i7 (6th gen) with hyper-threading.
Host OS: Windows 10. Guest OS: CentOS.
Off topic: should the RAM I give to the VM be 50% of my memory? Would that be OK? (I have 16 GB of RAM.)
Thanks!
This is referred to as 'right-sizing' a VM, and it depends on the application workload that will run inside it. Ideally, you want to give the VM the minimum amount of resources the app requires to run correctly. "Correctly" is subjective and based on your expectations.
Inside your VM (CentOS) you can run top to see how much memory and cpu % is being used. You can also install htop which you may find friendlier than top.
RAM
If you see a low % of RAM being used, you can probably reduce what you're giving the VM. If you are seeing any swap memory used (paging to disk), you may want to increase the RAM. Start with 2GB and see how the app behaves.
CPU
You may want to start with no more than 2 vCPUs, check top to see how utilized the application is under load, and then make an assessment for more/fewer vCPUs.
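For instance (just a sketch of the kind of checks mentioned above, run inside the CentOS guest), the following commands show whether the VM is short on memory or CPU:
# Memory: totals and whether any swap is being used (swap usage suggests too little RAM).
free -h
# Ongoing swapping activity: the si/so columns should stay near zero.
vmstat 5 3
# CPU: how many vCPUs the guest sees, and a one-shot snapshot of utilization.
nproc
top -b -n 1 | head -n 20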
The way a hosted hypervisor (VMware Workstation) handles guest CPU usage is through a CPU scheduler. When you give a VM x vCPUs, the VM will need to wait until that many cores are free on the CPU to do 'work'. The more vCPUs you give it, the harder (slower) it will be to schedule. It's more complicated than this, but I'm trying to keep it high level. CPU scheduling deep dive.