KVM CPU share / priority / overselling - virtualization

I have a question about KVM that I could not find any satisfying answer to on the net.
Let's say I want to create 3 virtual machines on a host with 2 CPUs. I am assigning 1 CPU to 1 virtual machine. The other 2 virtual machines should share the second CPU. If it is possible, I want to give one VM 30% and the other one 70% of that CPU.
I know this does not make much sense, but I am curious and want to test it :-)
I know that hypervisors like OnApp can do that. But how do they do it?

KVM represents each virtual CPU as a thread in the host Linux system, actually as a thread in the QEMU process. So scheduling of guest VCPUs is controlled by the Linux scheduler.
On Linux, you can use taskset to force specific threads onto specific CPUs. So that will let you assign one VCPU to one physical CPU and two VCPUs to another. See, for example, https://groups.google.com/forum/#!topic/linuxkernelnewbies/qs5IiIA4xnw.
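A minimal sketch of that, assuming a plain QEMU/KVM setup (the thread IDs below are placeholders; on a libvirt-managed host, virsh vcpupin does the same job):

    # Find the VM's QEMU process and its per-vCPU threads (LWP column):
    ps -eLf | grep qemu
    # Pin the vCPU threads (TIDs are placeholders taken from the ps output):
    taskset -pc 0 12345    # VM1's vCPU -> physical CPU 0
    taskset -pc 1 12346    # VM2's vCPU -> physical CPU 1
    taskset -pc 1 12347    # VM3's vCPU -> physical CPU 1 (shared)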
As for controlling what percentage of the CPU each VM gets, Linux has several scheduling policies and control mechanisms available, though I'm not deeply familiar with them. Any information you can find on controlling the scheduling of Linux processes will apply to KVM's vCPU threads.
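For the 30/70 split specifically, the usual knob is the scheduler's relative "shares" weight. A sketch, assuming a libvirt-managed host (the VM names are placeholders, and the ratio only takes effect while both VMs are contending for the CPU):

    virsh schedinfo vm2 --set cpu_shares=300   # ~30% of the shared CPU under contention
    virsh schedinfo vm3 --set cpu_shares=700   # ~70% of the shared CPU under contention
    # Without libvirt, the same values can be written to the VMs' cpu.shares
    # files in the cgroup (v1) hierarchy, e.g. under /sys/fs/cgroup/cpu/.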
The answers to this question may help: https://serverfault.com/questions/313333/kvm-and-virtual-to-physical-cpu-mapping. (Also that forum may be a better place for this question, since this one is intended for programming questions.)
If you search for "KVM virtual CPU scheduling" and "Linux CPU scheduling" (without the quotes), you should find plenty of additional information.

Related

Pin Kubernetes pods/deployments/replica sets/daemon sets to run on specific CPUs only

I need to restrict an app/deployment to run on specific CPUs only (say 0-3, or just 1 or 2, etc.). I found out about the CPU Manager and tried to implement it with the static policy, but I am not able to achieve what I intend to.
I tried the following so far:
Enabled the CPU Manager static policy on the kubelet and verified that it is enabled
Reserved CPUs with the --reserved-cpus=0-3 option on the kubelet
Ran a sample nginx deployment with limits equal to requests and an integer CPU value, i.e. a Guaranteed QoS class is ensured, and validated the CPU affinity with taskset -c -p $(pidof nginx) (rough manifest below)
So this restricts my nginx app to all CPUs other than the reserved ones (0-3): if my machine has 32 CPUs, the app can run on any of CPUs 4-31, and so can any other apps/deployments. As I understand it, the reserved CPUs 0-3 are set aside for system daemons, OS daemons, etc.
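For reference, the deployment I tested with looks roughly like this (the names and values are just my test setup):

    # Requests == limits with an integer CPU count => Guaranteed QoS class.
    cat <<'EOF' | kubectl apply -f -
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: nginx-pinned
    spec:
      replicas: 1
      selector:
        matchLabels: { app: nginx-pinned }
      template:
        metadata:
          labels: { app: nginx-pinned }
        spec:
          containers:
          - name: nginx
            image: nginx
            resources:
              requests: { cpu: "2", memory: "256Mi" }
              limits:   { cpu: "2", memory: "256Mi" }
    EOF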
My questions:
Using the Kubernetes CPU Manager features, is it possible to pin an app/pod (in this case, my nginx app) to a specific CPU only (say 2, or 3, or 4-5)? If yes, how?
If point 1 is possible, can we perform the pinning at the container level too? Say Pod A has two containers, Container B and Container D; is it possible to pin CPUs 0-3 to Container B and CPU 4 to Container D?
If none of this is possible using the Kubernetes CPU Manager, what alternatives are available at this point in time, if any?
As I understand your question, you want to dedicate a specific set of CPUs to each app/pod. From what I've searched, I was only able to find some documentation that might help, plus a GitHub topic that I think describes a workaround for your problem.
As a disclaimer: based on what I've read, searched, and understood, there is no direct solution for this issue, only workarounds. I am still searching further.
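To give a flavor of the workaround category, here is an untested sketch of pinning a container's processes from the host after it starts (the crictl template and the .info.pid field are assumptions and depend on your container runtime):

    # Hypothetical host-side pinning of an already-running container:
    CID=$(crictl ps --name nginx -q | head -1)
    PID=$(crictl inspect -o go-template --template '{{.info.pid}}' "$CID")
    taskset -a -pc 2,3 "$PID"   # -a pins all of the container's threads to CPUs 2-3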

Setting the optimal number of processors/cores per processor for a virtual machine (VMware)

I was looking for an answer but didn't find one.
I'm trying to create a new VM to develop a web application. What would be the optimal processor settings?
I have an i7 (6th gen) with hyperthreading.
Host OS: Windows 10. Guest OS: CentOS.
Off topic: should the RAM I give the VM be 50% of my memory? Would that be OK? (I have 16 GB of RAM.)
Thanks!
This is referred to as 'right-sizing' a VM, and it depends on the application workload that will run inside it. Ideally, you want to provide the VM with the minimum amount of resources the app requires to run correctly. "Correctly" is subjective, based upon your expectations.
Inside your VM (CentOS) you can run top to see how much memory and cpu % is being used. You can also install htop which you may find friendlier than top.
RAM
If you see a low % of RAM being used, you can probably reduce what you're giving the VM. If you are seeing any swap memory used (paging to disk), you may want to increase the RAM. Start with 2GB and see how the app behaves.
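A couple of standard commands for those checks (stock tools on CentOS):

    free -m       # RAM and swap totals/usage in MB
    vmstat 5 5    # non-zero si/so columns mean the guest is swapping to disk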
CPU
You may want to start with no more than 2 vCPUs, check top to see how utilized the application is under load, and then make an assessment for more/fewer vCPUs.
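Inside the guest, something like this shows per-vCPU utilization (mpstat is in the sysstat package):

    mpstat -P ALL 5 3   # per-CPU utilization, 3 samples at 5-second intervals
    top                 # press '1' to toggle the per-CPU view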
The way a hosted hypervisor (VMware Workstation) handles guest CPU usage is through a CPU scheduler. When you give a VM x vCPUs, the VM needs to wait until that many cores are free on the physical CPU to do 'work'. The more vCPUs you give it, the harder (slower) it is to schedule. It's more complicated than this, but I'm trying to keep it high level. CPU scheduling deep dive.

bind9 (named) does not start in multi-threaded mode

From the bind9 man page, I understand that the named process starts one worker thread per CPU if it is able to determine the number of CPUs. If it is unable to, a single worker thread is started.
My question is: how does it calculate the number of CPUs? I presume that by CPU it means cores. The Linux machine I work on is customized, runs kernel 2.6.34, and does not have the lscpu or nproc utilities. named starts a single thread even if I give the -n 4 option. Is there any other way to force named to start multiple threads?
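For reference, even without lscpu or nproc, the CPU count the kernel reports can be checked directly:

    grep -c '^processor' /proc/cpuinfo   # logical CPUs as the kernel sees them
    getconf _NPROCESSORS_ONLN            # what sysconf(_SC_NPROCESSORS_ONLN) returns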
Thanks in advance.

Does a guest OS process occupy a vCPU at any given time?

Recently I've been studying hardware-assisted virtualization.
I read about the three states of the host CPU: the familiar user space and kernel space, plus a new guest state. As I can see from the ps command, there is a process for the VM I started, with a 'sub'-thread for each CPU owned by the virtual machine. I also noticed that when the VM runs some I/O-related program, more threads are created on the host, which I guess is QEMU responding with hardware emulation.
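Roughly how I listed them (the process name match is just whatever my distro's QEMU binary is called):

    QEMU_PID=$(pgrep -f qemu | head -1)   # PID of the VM's QEMU process
    ps -T -p "$QEMU_PID"                  # -T lists its threads (SPID column):
                                          # one per vCPU, plus QEMU I/O threads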
So here comes my question: at any given time in guest state (not the other two), does a vCPU thread represent exactly one running guest OS process, 'occupying' it exclusively, just as with a physical CPU, where at any given moment in user space exactly one user process is running on it?
This may sound a little stupid; I just want to figure it out for further research.
To make this question simple:
Is the vCPU thread that runs on the host machine associated with some guest OS process at any given time?
To simplify it further:
Is it right to say that guest OS processes actually run on the host CPU directly and are scheduled as ordinary host processes, the difference between the two kinds of processes being what we call virtualization?
Maybe I will need another thread for questions about guest OS process switching, but before that, I hope you can help me with this one.
Sincerely,
MeNok
I posted this question on LQ and got the answer.
http://www.linuxquestions.org/questions/linux-virtualization-90/a-guestos-process-occupies-vcpu-at-any-given-time-4175419271/
A vCPU is not a thread in the host. KVM allows the guest to run directly on a physical CPU, in a less privileged guest mode. A timer interrupt causes the CPU to come back from guest mode to host mode and return to KVM. Since KVM itself is scheduled in kernel mode, the guest ends up being scheduled by the host as well.

How does running one virtual machine on a host system reduce system performance?

I am running one virtual machine on a host system, with 50% memory and 50% CPU allocated to it.
Will this reduce the system performance by half?
Give me your comments and suggestions.
Alex is correct. The reason it could take less than half of your system performance is that most virtualization systems will not dedicate precisely that amount of CPU and memory to your virtual machine if the software running inside the VM is not demanding that much. If the VM is running a demanding workload, though, this will not be the case.
The reason it could take more than half of your system performance is that any virtualization system has its own overhead, just to provide the virtualization to the VM. Some memory is consumed in tracking the memory and resources used by the virtual machine, and some CPU is consumed in handling the needs of the VM (interrupts from network traffic, etc.).