How many virtual processors can be assigned to virtual machines in Hyper-V Manager?

Can you explain in simple terms how many virtual processors I can assign to virtual machines in Hyper-V Manager?
Example:
I have a computer with 4 cores and 8 logical processors, and I have created 4 virtual machines.
Can I assign only 2 virtual processors per virtual machine?
That would be 2 + 2 + 2 + 2 = 8.
Or can I assign 8 virtual processors per virtual machine?
That would be 8 + 8 + 8 + 8 = 32.
Thank you in advance.

Related

I have 3 virtual servers with a lot of storage - can I use the storage in a reliable way somehow?

Here's the deal:
I have 3 virtual servers (K8S worker nodes) with 160 GB of virtual SSD each. Together they form a private network, and I would like to utilize the space for persistent volume claims.
Preferably in a way similar to RAID 1 or RAID 5.
Installing rook.io is not possible because the nodes are not bare metal.
Is there some way to have "RAID over NFS" or something in K8S? I'm running RKE.

Sharing hardware resources across multiple blades

In my architecture I have two blades in a cluster, each with, say, a 32-core CPU.
Both blades host several VMs managed by vCenter.
I want to install a VM for a piece of software that requires at least 40 CPU cores to run without problems.
Are the resources of the two blades pooled, so that I can create the VM without worrying about the hardware details of the individual blades?
You won't be able to create a VM with more CPUs than the underlying hardware has logical processors.
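For reference, here is a minimal sketch (assuming a Linux box, purely for illustration; on ESXi you would read the same figure from vCenter rather than from sysconf()) of how a host reports the logical processor count that caps a single VM:

    /* Illustration only: print the number of logical processors this host
     * exposes. A single VM cannot be given more vCPUs than this figure,
     * regardless of how many blades are in the cluster. */
    #include <stdio.h>
    #include <unistd.h>

    int main(void) {
        long logical = sysconf(_SC_NPROCESSORS_ONLN);  /* logical CPUs online */
        printf("logical processors on this host: %ld\n", logical);
        return 0;
    }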

KVM CPU share / priority / overselling

I have a question about KVM that I could not find a satisfying answer to anywhere on the net.
Let's say I want to create 3 virtual machines on a host with 2 CPUs. I assign 1 CPU to the first virtual machine; the other 2 virtual machines should share the remaining CPU. If possible, I want to give one of those VMs 30% and the other 70% of that CPU.
I know this does not make much sense, but I am curious and want to test it :-)
I know that hypervisors like OnApp can do that. But how do they do it?
KVM represents each virtual CPU as a thread in the host Linux system, actually as a thread in the QEMU process. So scheduling of guest VCPUs is controlled by the Linux scheduler.
On Linux, you can use taskset to force specific threads onto specific CPUs. So that will let you assign one VCPU to one physical CPU and two VCPUs to another. See, for example, https://groups.google.com/forum/#!topic/linuxkernelnewbies/qs5IiIA4xnw.
As far as controlling what percent of the CPU each VM gets, Linux has several scheduling policies available, but I'm not familiar with them. Any information you can find on how to control scheduling of Linux processes will apply to KVM.
The answers to this question may help: https://serverfault.com/questions/313333/kvm-and-virtual-to-physical-cpu-mapping. (Also that forum may be a better place for this question, since this one is intended for programming questions.)
If you search for "KVM virtual CPU scheduling" and "Linux CPU scheduling" (without the quotes), you should find plenty of additional information.
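If you want to see what taskset does under the hood, here is a minimal C sketch (assuming a Linux host; the thread ID and CPU number are made up for the demo) using sched_setaffinity() to pin a thread to one CPU. For a KVM guest you would apply this to the QEMU vCPU thread IDs, or let libvirt handle it with virsh vcpupin; the 30%/70% split is a different mechanism (scheduler weights such as cgroup CPU shares), not affinity.

    /* Minimal sketch: restrict a thread to a single CPU, which is what
     * taskset does via sched_setaffinity(). Pass a QEMU vCPU thread ID
     * instead of 0 to pin a guest vCPU; 0 means "the calling thread". */
    #define _GNU_SOURCE
    #include <sched.h>
    #include <stdio.h>
    #include <sys/types.h>

    static int pin_to_cpu(pid_t tid, int cpu) {
        cpu_set_t set;
        CPU_ZERO(&set);
        CPU_SET(cpu, &set);          /* allow only this one CPU */
        return sched_setaffinity(tid, sizeof(set), &set);
    }

    int main(void) {
        if (pin_to_cpu(0, 1) != 0) { /* pin ourselves to CPU 1 as a demo */
            perror("sched_setaffinity");
            return 1;
        }
        printf("pinned to CPU 1\n");
        return 0;
    }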

Allocating more than 4 GB of RAM to a VM causes the VM not to start

I'm currently using a combination of qemu-kvm, libvirtd, and virt-manager to host some virtual RHEL 6 machines. When I attempt to bump the RAM allocation above 4 GB, the machines fail to start. What could be causing this? Any information is helpful.
The host has a 10-core, 3 GHz Xeon processor with 64 GB of RAM.

Do VMMs use Virtual Memory on the hosts?

I am trying to understand how virtualization was performed in the past using shadow page tables. The articles I've read all talk about the translation from guest virtual memory to host physical memory. I understand how shadow page tables eliminate the need for a separate guest virtual to guest physical translation. My question is, what happened to the host virtual to host physical step (HVA -> HPA)?
Do the virtual machine managers in the cited articles not use virtual memory in the host at all? Are they assumed to have direct access to the physical memory of the host system? Is that even possible? I thought TLB-cached translation is implemented in hardware by the MMU, and every instruction's addresses are translated from virtual to physical by the MMU itself. But then again, I am not sure how kernel code works with the TLB. Do kernel instructions not go through the TLB?
I am not sure if I understood your point accurately, but I will try my best to answer your question.
There is no need for an HVA -> HPA step because what the guest ultimately wants is an HPA, not an HVA. In other words, HVAs are irrelevant when the guest accesses its own guest memory region.
So the translation flow you expected, without considering shadow page tables, might be:
GVA -> GPA -> HVA -> HPA
But most hypervisors run in kernel mode and know how both the host's and the guest's memory is allocated, so they can map GPA to HPA directly and eliminate the need for HVA:
GVA -> GPA -> HPA
This guest memory translation flow has nothing to do with the hypervisor's userspace, whose own flow is HVA -> HPA.
I am not sure if the above answers your question.
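As a toy illustration of that composition (made-up page numbers, nothing like real MMU code), a shadow page table is just the guest's GVA -> GPA map composed with the hypervisor's GPA -> HPA map, which is why no HVA step appears in the guest's translation path:

    /* Toy model: small lookup tables stand in for page tables. The
     * hypervisor knows GPA -> HPA directly, so it can build a shadow
     * table mapping GVA straight to HPA. All numbers are invented. */
    #include <stdio.h>

    #define PAGES 4

    /* guest page table: guest-virtual page -> guest-physical page */
    static const int gva_to_gpa[PAGES] = { 2, 0, 3, 1 };
    /* hypervisor's map: guest-physical page -> host-physical page */
    static const int gpa_to_hpa[PAGES] = { 7, 5, 6, 4 };

    int main(void) {
        int shadow[PAGES];  /* shadow table: guest-virtual -> host-physical */

        /* compose the two maps once, so the hardware can translate
         * GVA -> HPA in a single step */
        for (int gva = 0; gva < PAGES; gva++)
            shadow[gva] = gpa_to_hpa[gva_to_gpa[gva]];

        for (int gva = 0; gva < PAGES; gva++)
            printf("GVA page %d -> GPA page %d -> HPA page %d\n",
                   gva, gva_to_gpa[gva], shadow[gva]);
        return 0;
    }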
The answer can be yes or no. If yes, the hypervisor maps guest RAM into virtual memory on the host, so the host may swap it in and out of host RAM. If no, the hypervisor maps guest RAM into locked physical memory on the host.
VirtualBox is in the no group. VirtualBox runs a device driver in the host kernel, and uses this driver to allocate locked memory for guest RAM. Each page of guest RAM stays resident at a fixed host physical address, so the host can never swap out the page. Because of this, guest RAM must be smaller than host RAM. VirtualBox's manual says to spare at least 256 MB to 512 MB for the host.
The MMU can only map virtual addresses to physical addresses. In VirtualBox, the guest has an emulated MMU to map guest virtual addresses to guest physical addresses. VirtualBox has its own map of guest physical addresses to host physical addresses, and uses the host MMU to map guest virtual addresses to host physical addresses. Because of locked memory, the host physical addresses never become invalid.
Mac-on-Linux is in the yes group. I once used it to run a guest Mac OS 9 inside a host PowerPC Linux. I gave 256 MB of RAM to Mac OS 9, but my real Linux machine had only 64 MB of RAM. This worked because MOL maps guest RAM into host virtual memory, with an ordinary mmap() call in a user process. MOL then uses a Linux kernel module to control the host MMU.
But the host MMU can only map to host physical addresses, not virtual ones. The guest has an emulated MMU that maps guest virtual to guest physical. MOL adds a base address to translate guest physical to host virtual. MOL's kernel module uses the host map to translate host virtual to host physical, then uses the host MMU to map guest virtual to host physical.
If Linux swaps out a page of guest RAM, then the host physical address becomes invalid, and the guest system might overwrite someone else's memory and crash the host. There must be some way to notify MOL that Linux has swapped out the page. MOL solved this problem by patching an internal Linux kernel function named flush_map_page or flush_map_pages.
KVM is also in the yes group. Linux added a kernel feature called memory management notifiers to support KVM. When QEMU uses KVM for virtualization, it allocates guest RAM in host virtual memory. The MMU notifier tells KVM when the host is swapping out a page.
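To put the two groups side by side, here is a minimal C sketch (Linux only; the 16 MB size is made up, and VirtualBox really does its locking through a kernel driver rather than mlock()): guest RAM allocated with a plain mmap() is ordinary host virtual memory that the host may swap (the "yes" group), while mlock()ing it pins the pages so their host physical addresses stay valid (the "no" group).

    /* Sketch of both approaches for a made-up 16 MB guest RAM region. */
    #include <stdio.h>
    #include <sys/mman.h>

    #define GUEST_RAM_SIZE (16UL * 1024 * 1024)

    int main(void) {
        /* "yes" group: guest RAM lives in swappable host virtual memory */
        void *guest_ram = mmap(NULL, GUEST_RAM_SIZE, PROT_READ | PROT_WRITE,
                               MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        if (guest_ram == MAP_FAILED) {
            perror("mmap");
            return 1;
        }

        /* "no" group: lock the pages so the host can never swap them out
         * and their host physical addresses stay fixed */
        if (mlock(guest_ram, GUEST_RAM_SIZE) != 0)
            perror("mlock (guest RAM stays swappable)");
        else
            printf("guest RAM locked resident in host memory\n");

        munmap(guest_ram, GUEST_RAM_SIZE);
        return 0;
    }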