KVM and libvirt: wrong CPU type in virtual host

We use KVM and libvirt on a 6-core (12 threads with HT) machine for virtualization.
Problem: wrong CPU type in virtual host.
KVM, libvirt, and kernel versions in use:
libvirt version: 0.9.8
QEMU emulator version 1.0 (qemu-kvm-1.0), Copyright (c) 2003-2008 Fabrice Bellard
Ubuntu 12.04.1 LTS
kernel: 3.2.0-32-generic x86_64
/usr/share/libvirt/cpu_map.xml does not list any CPU model newer than Westmere.
Do I need this kind of CPU model virtualisation at all? We need maximum CPU performance in the virtual host, so I would be glad to have some cores of the server's i7-3930K CPU @ 3.20GHz available as-is in my virtual machines.
Maybe we do too much virtualization...?
My virtual host's XML looks like the following. Where can I set the equivalent of QEMU's -cpu host flag?
<domain type='kvm'>
  <name>myVirtualServer</name>
  <uuid>2344481d-f455-455e-9558</uuid>
  <description>Test-Server</description>
  <memory>4194304</memory>
  <currentMemory>4194304</currentMemory>
  <vcpu>2</vcpu>
  <cpu match='exact'>
    <model>Westmere</model>
    <vendor>Intel</vendor>
  </cpu>
  <os>
    <type arch='x86_64' machine='pc-1.0'>hvm</type>
    <boot dev='hd'/>
  </os>
  <features>
    <acpi/>
    <apic/>
    <pae/>
  </features>
$ lscpu of the physical server (6 cores, 12 threads with HT)
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
CPU(s): 12
On-line CPU(s) list: 0-11
Thread(s) per core: 2
Core(s) per socket: 6
Socket(s): 1
NUMA node(s): 1
Vendor ID: GenuineIntel
CPU family: 6
Model: 45
Stepping: 7
CPU MHz: 1200.000
BogoMIPS: 6400.05
Virtualization: VT-x
L1d cache: 32K
L1i cache: 32K
L2 cache: 256K
L3 cache: 12288K
NUMA node0 CPU(s): 0-11
$ lscpu of the virtual server (wrong CPU type, wrong L2 cache, wrong MHz)
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
CPU(s): 2
On-line CPU(s) list: 0,1
Thread(s) per core: 1
Core(s) per socket: 1
Socket(s): 2
NUMA node(s): 1
Vendor ID: GenuineIntel
CPU family: 6
Model: 15
Stepping: 11
CPU MHz: 3200.012
BogoMIPS: 6400.02
Virtualisation: VT-x
Hypervisor vendor: KVM
Virtualisation type: full
L1d cache: 32K
L1i cache: 32K
L2 cache: 4096K
NUMA node0 CPU(s): 0,1

In the guest's XML:
<cpu mode='custom' match='exact'>
  <model fallback='allow'>core2duo</model>
  <feature policy='require' name='vmx'/>
</cpu>
as an example. Apply it with virsh edit, then restart the guest.
EDIT. Ignore this. I've just re-read your question and you're already doing that.
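On the actual question (where to set the equivalent of QEMU's -cpu host): libvirt releases newer than the 0.9.8 used here (0.9.11+, if I remember correctly) accept a CPU mode instead of a fixed model. A minimal sketch, assuming such a libvirt:

<!-- replaces the whole <cpu match='exact'>...</cpu> block in the domain XML;
     host-passthrough exposes the host CPU (here the i7-3930K) to the guest
     unchanged, i.e. the libvirt spelling of QEMU's -cpu host -->
<cpu mode='host-passthrough'/>

<cpu mode='host-model'/> is the more conservative variant: libvirt picks the closest model from cpu_map.xml and adds the remaining feature flags, which keeps migration realistic. host-passthrough gives maximum performance but ties the guest to this exact host CPU.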

Related

Installing MongoDB on Ubuntu 22

I am trying to install MongoDB (5, and then 4.4) on Ubuntu 22, but apparently my CPU does not support some instructions. Every version I installed resulted in a core dump and an illegal-instruction error. I tried to install it from the tar.gz as well, but that did not work either.
this is my cpu info:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 39 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 2
On-line CPU(s) list: 0,1
Vendor ID: GenuineIntel
Model name: Intel(R) Pentium(R) CPU G3260 @ 3.30GHz
CPU family: 6
Model: 60
Thread(s) per core: 1
Core(s) per socket: 2
Socket(s): 1
Stepping: 3
CPU max MHz: 3300.0000
CPU min MHz: 800.0000
BogoMIPS: 6585.15
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx est tm2 ssse3 sdbg cx16 xtpr pdcm pcid sse4_1 sse4_2 movbe popcnt tsc_deadline_timer xsave rdrand lahf_lm abm cpuid_fault epb invpcid_single pti ssbd ibrs ibpb stibp tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust erms invpcid xsaveopt dtherm arat pln pts md_clear flush_l1d
Virtualization features:
Virtualization: VT-x
Caches (sum of all):
L1d: 64 KiB (2 instances)
L1i: 64 KiB (2 instances)
L2: 512 KiB (2 instances)
L3: 3 MiB (1 instance)
NUMA:
NUMA node(s): 1
NUMA node0 CPU(s): 0,1
Vulnerabilities:
Itlb multihit: KVM: Mitigation: VMX disabled
L1tf: Mitigation; PTE Inversion; VMX conditional cache flushes, SMT disabled
Mds: Mitigation; Clear CPU buffers; SMT disabled
Meltdown: Mitigation; PTI
Mmio stale data: Not affected
Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Spectre v2: Mitigation; Retpolines, IBPB conditional, IBRS_FW, STIBP disabled, RSB filling
Srbds: Mitigation; Microcode
Tsx async abort: Not affected
My question is: besides changing my CPU, is there any way to install MongoDB on Ubuntu?
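Worth noting: the flags list above contains no avx entry, and MongoDB's prebuilt 5.0+ binaries require AVX, which matches the illegal-instruction crashes. A quick check (sketch):

$ grep -q avx /proc/cpuinfo && echo "AVX present" || echo "no AVX"

MongoDB 4.4 prebuilt packages should not need AVX, so if 4.4 also crashes the cause may be different (for example, a package built for the wrong distro release); without AVX the usual options are an older MongoDB series or building from source.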

Bitnami Helm chart fails to launch pods

My VM system has the configuration below, but when I install bitnami/dokuwiki (or any other chart) from the Bitnami charts and run the deployment, the pods end up Pending or in CrashLoopBackOff. Can someone help in this regard?
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
Address sizes: 40 bits physical, 48 bits virtual
CPU(s): 4
On-line CPU(s) list: 0-3
Thread(s) per core: 1
Core(s) per socket: 1
Socket(s): 4
NUMA node(s): 1
Vendor ID: GenuineIntel
CPU family: 6
Model: 85
Model name: Intel(R) Xeon(R) Gold 6230R CPU @ 2.10GHz
Stepping: 7
CPU MHz: 2095.077
BogoMIPS: 4190.15
Hypervisor vendor: VMware
Virtualization type: full
L1d cache: 128 KiB
L1i cache: 128 KiB
L2 cache: 4 MiB
L3 cache: 143 MiB
NUMA node0 CPU(s): 0-3
Issue: I tried applying a PV, but the pods are still not running. I want to get these pods running.
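Pending and CrashLoopBackOff usually name their cause in the pod events or logs. A first diagnostic pass (a sketch; my-dokuwiki-0 is a placeholder pod name, substitute whatever kubectl get pods reports):

$ kubectl get pods                          # find the failing pod's name and state
$ kubectl describe pod my-dokuwiki-0        # Pending: the Events section shows unbound PVCs, taints, or resource shortages
$ kubectl logs my-dokuwiki-0 --previous     # CrashLoopBackOff: logs from the previous (crashed) container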

High CPU usage of PostgreSQL

I have a complex, PostgreSQL-backed Ruby on Rails application running on an Ubuntu virtual machine. The Postgres processes show very high %CPU values when running "top". Periodically the %CPU goes up to 94 or 95.
lscpu
gives the following output:
Architecture: i686
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
CPU(s): 4
On-line CPU(s) list: 0-3
Thread(s) per core: 1
Core(s) per socket: 1
Socket(s): 4
Vendor ID: GenuineIntel
CPU family: 6
Model: 85
Stepping: 4
CPU MHz: 2100.000
BogoMIPS: 4200.00
L1d cache: 32K
L1i cache: 32K
L2 cache: 1024K
L3 cache: 33792K
top -n1
top -c
I want to know the reason for the high CPU utilization by Postgres.
Any help is appreciated.
Thanks in advance!
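A first step that often answers this (a sketch, assuming PostgreSQL 9.2+ and shell access as the postgres user): list the longest-running active queries; their pid values normally match the busy PIDs in top.

$ sudo -u postgres psql -c "SELECT pid, now() - query_start AS runtime, query FROM pg_stat_activity WHERE state = 'active' ORDER BY runtime DESC;"

If one query dominates, running EXPLAIN ANALYZE on it is the natural next step.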

WebSocket connection error in BigBlueButton

I just deployed BigBlueButton using Amazon Lightsail. Everything is working fine except audio and video calls. When I try to connect an audio or video call, I get a WebSocket error.
I've already disabled the firewall entirely.
I deployed BigBlueButton using bbb-install.sh:
https://github.com/bigbluebutton/bbb-install
Here is system specifications
ip-xx.xx.xx.xx
description: Computer
product: HVM domU
vendor: Xen
version: 4.2.amazon
serial: ec222835-0f97-80b8-2cbf-361289de5846
width: 64 bits
capabilities: smbios-2.7 dmi-2.7 vsyscall32
configuration: boot=normal uuid=352822EC-970F-B880-2CBF-361289DE5846
*-core
description: Motherboard
physical id: 0
*-firmware
description: BIOS
vendor: Xen
physical id: 0
version: 4.2.amazon
date: 08/24/2006
size: 96KiB
capabilities: pci edd
*-cpu:0
description: CPU
product: Intel(R) Xeon(R) CPU E5-2686 v4 @ 2.30GHz
vendor: Intel Corp.
physical id: 401
bus info: cpu@0
slot: CPU 1
size: 2300MHz
capacity: 2300MHz
width: 64 bits
capabilities: fpu fpu_exception wp vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx pdpe1gb rdtscp x86-64 constant_tsc rep_good nopl xtopology pni pclmulqdq ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm abm invpcid_single kaiser fsgsbase bmi1 avx2 smep bmi2 erms invpcid xsaveopt
*-cpu:1
description: CPU
vendor: Intel
physical id: 402
bus info: cpu@1
slot: CPU 2
size: 2300MHz
capacity: 2300MHz
*-cpu:2
description: CPU
vendor: Intel
physical id: 403
bus info: cpu@2
slot: CPU 3
size: 2300MHz
capacity: 2300MHz
*-cpu:3
description: CPU
vendor: Intel
physical id: 404
bus info: cpu@3
slot: CPU 4
size: 2300MHz
capacity: 2300MHz
*-cpu:4
description: CPU
vendor: Intel
physical id: 405
bus info: cpu@4
slot: CPU 5
size: 2300MHz
capacity: 2300MHz
*-cpu:5
description: CPU
vendor: Intel
physical id: 406
bus info: cpu@5
slot: CPU 6
size: 2300MHz
capacity: 2300MHz
*-cpu:6
description: CPU
vendor: Intel
physical id: 407
bus info: cpu@6
slot: CPU 7
size: 2300MHz
capacity: 2300MHz
*-cpu:7
description: CPU
vendor: Intel
physical id: 408
bus info: cpu@7
slot: CPU 8
size: 2300MHz
capacity: 2300MHz
*-memory
description: System Memory
physical id: 1000
size: 32GiB
*-bank:0
description: DIMM RAM
physical id: 0
slot: DIMM 0
size: 16GiB
width: 64 bits
*-bank:1
description: DIMM RAM
physical id: 1
slot: DIMM 1
size: 16GiB
width: 64 bits
*-pci
description: Host bridge
product: 440FX - 82441FX PMC [Natoma]
vendor: Intel Corporation
physical id: 100
bus info: pci@0000:00:00.0
version: 02
width: 32 bits
clock: 33MHz
*-isa
description: ISA bridge
product: 82371SB PIIX3 ISA [Natoma/Triton II]
vendor: Intel Corporation
physical id: 1
bus info: pci@0000:00:01.0
version: 00
width: 32 bits
clock: 33MHz
capabilities: isa bus_master
configuration: latency=0
*-ide
description: IDE interface
product: 82371SB PIIX3 IDE [Natoma/Triton II]
vendor: Intel Corporation
physical id: 1.1
bus info: pci@0000:00:01.1
version: 00
width: 32 bits
clock: 33MHz
capabilities: ide isa_compatibility_mode-only_controller__supports_bus_mastering bus_master
configuration: driver=ata_piix latency=64
resources: irq:0 ioport:1f0(size=8) ioport:3f6 ioport:170(size=8) ioport:376 ioport:c100(size=16)
*-bridge UNCLAIMED
description: Bridge
product: 82371AB/EB/MB PIIX4 ACPI
vendor: Intel Corporation
physical id: 1.3
bus info: pci@0000:00:01.3
version: 01
width: 32 bits
clock: 33MHz
capabilities: bridge bus_master
configuration: latency=0
*-display UNCLAIMED
description: VGA compatible controller
product: GD 5446
vendor: Cirrus Logic
physical id: 2
bus info: pci@0000:00:02.0
version: 00
width: 32 bits
clock: 33MHz
capabilities: vga_controller bus_master
configuration: latency=0
resources: memory:f0000000-f1ffffff memory:f3000000-f3000fff
*-generic
description: Unassigned class
product: Xen Platform Device
vendor: XenSource, Inc.
physical id: 3
bus info: pci@0000:00:03.0
version: 01
width: 32 bits
clock: 33MHz
capabilities: bus_master
configuration: driver=xen-platform-pci latency=0
resources: irq:28 ioport:c000(size=256) memory:f2000000-f2ffffff
*-network
description: Ethernet interface
physical id: 1
logical name: eth0
serial: 02:0c:36:8d:8e:60
capabilities: ethernet physical
configuration: broadcast=yes driver=vif ip=172.26.9.115 link=yes multicast=yes
After searching here and there I found this, but I'm not sure whether it works properly or not:
https://github.com/bigbluebutton/bigbluebutton/issues/2628#issuecomment-635107717
What is the difference between a single colon and a double colon in this file:
WebRtcEndpoint.conf.ini
I'm also a little bit confused about some of the terminology used here, like internal IP and external IP.
And:
stunServerAddress=64.233.177.127 stunServerPort=19302 (just copied from GitHub).
What kind of IP should I use here, the internal IP or the external IP?
First, check the settings in /opt/freeswitch/etc/freeswitch/sip_profiles/external.xml.
If your bindings look like this:
ws-binding: :5066
wss-binding: :7443
change them to this:
ws-binding: your_external_ip_address:5066
wss-binding: your_external_ip_address:7443
Your external IP address is the same as the external_sip_ip address.
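In the profile file those bindings are <param> entries; a sketch of the edited lines (203.0.113.10 is a placeholder for your external IP):

<!-- /opt/freeswitch/etc/freeswitch/sip_profiles/external.xml -->
<param name="ws-binding" value="203.0.113.10:5066"/>
<param name="wss-binding" value="203.0.113.10:7443"/>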
Then don't forget to restart BBB (bbb-conf --restart).
I think this problem arises from a firewall issue. Check your firewall and make sure that UDP traffic can pass through it.
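A sketch of the firewall side, assuming ufw; BigBlueButton conventionally uses UDP 16384-32768 for FreeSWITCH/WebRTC media (adjust if your setup differs):

$ sudo ufw allow 16384:32768/udp
$ sudo bbb-conf --restart    # restart BBB afterwards, as above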

Hugepagesize is not increasing to 1G in VM

I am using a CentOS VM on an ESXi server. I want to increase the huge page size to 1 GB.
I followed the link:
http://dpdk-guide.gitlab.io/dpdk-guide/setup/hugepages.html
I ran the small script from that guide to check whether a 1 GB page size is supported:
[root@localhost ~]# if grep pdpe1gb /proc/cpuinfo >/dev/null 2>&1; then echo "1GB supported."; fi
1GB supported.
[root@localhost ~]#
I added default_hugepagesz=1GB hugepagesz=1G hugepages=4 to /etc/default/grub.
grub2-mkconfig -o /boot/grub2/grub.cfg
Rebooted the VM.
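(For reference, a sketch of what that edit looks like; the "..." stands for whatever options are already on the line, and note that the kernel documentation spells the size 1G rather than 1GB:)

# /etc/default/grub (sketch; "..." = the options already present)
GRUB_CMDLINE_LINUX="... default_hugepagesz=1G hugepagesz=1G hugepages=4"
# then regenerate the config and reboot:
grub2-mkconfig -o /boot/grub2/grub.cfg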
But I still see 2048 kB (2 MB) for the huge page size.
[root@localhost ~]# cat /proc/meminfo | grep -i huge
AnonHugePages: 8192 kB
HugePages_Total: 1024
HugePages_Free: 1024
HugePages_Rsvd: 0
HugePages_Surp: 0
Hugepagesize: 2048 kB
[root@localhost ~]#
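A quick way to see which huge page sizes the kernel actually initialized (a sketch; the hugepages-1048576kB directory only exists once 1 GB pages are set up):

$ ls /sys/kernel/mm/hugepages/
# e.g. hugepages-2048kB only; with working 1 GB pages, hugepages-1048576kB appears as well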
The following are details of VM:
[root@localhost ~]# uname -a
Linux localhost.localdomain 3.10.0-514.10.2.el7.x86_64 #1 SMP Fri Mar 3 00:04:05 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux
[root@localhost ~]#
[root@localhost ~]# cat /proc/cpuinfo | grep -i flags
flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts mmx fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp lm constant_tsc arch_perfmon pebs bts nopl xtopology tsc_reliable nonstop_tsc aperfmperf pni pclmulqdq vmx ssse3 cx16 pcid sse4_1 sse4_2 x2apic popcnt aes xsave avx hypervisor lahf_lm ida arat epb pln pts dtherm tpr_shadow vnmi ept vpid
flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts mmx fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp lm constant_tsc arch_perfmon pebs bts nopl xtopology tsc_reliable nonstop_tsc aperfmperf pni pclmulqdq vmx ssse3 cx16 pcid sse4_1 sse4_2 x2apic popcnt aes xsave avx hypervisor lahf_lm ida arat epb pln pts dtherm tpr_shadow vnmi ept vpid
[root@localhost ~]#
8 GB of memory and 2 CPUs are allocated to the VM.
The CPU flag for 1 GB huge page support, plus guest OS support and enablement, is not enough to get 1 GB huge pages working in a virtualized environment.
The idea of huge pages, both at the PMD level (2 MB, or 4 MB before PAE and x86_64) and at the PUD level (1 GB), is to map an aligned virtual region of huge size onto a huge region of physical memory (which, as I understand it, must be aligned too). With the additional virtualization level of the hypervisor there are now three memory levels: the virtual memory of the app in the guest OS, the memory the guest OS considers physical (the memory managed by the virtualization solution: ESXi, Xen, KVM, ...), and the real physical memory. It is reasonable to assume that the huge-page idea needs the same huge-region size at all three levels to be useful (fewer TLB misses, fewer page-table structures to describe a lot of memory; grep for "Need bigger than 4KB pages" in Dick Sites's "Datacenter Computers: modern challenges in CPU design", Google, Feb 2015).
So, to use huge pages of some level inside the guest OS, you should already have same-sized huge pages in physical memory (in your host OS) and in your virtualization solution. You can't effectively use huge pages inside the guest when they are not available to your host OS and virtualization software. (Something like qemu or bochs may emulate them, but that will range from slow to very slow.) And when you want both 2 MB and 1 GB huge pages: your CPU, host OS, virtualization system, and guest OS must all support them (and the host system must have enough aligned, contiguous physical memory to allocate a 1 GB page; you probably can't split such a page across several sockets in NUMA).
I don't know about ESXi, but here are some links for Red Hat and some(?) Linux virtualization solutions (with libvirtd). In the "Virtualization Tuning and Optimization Guide" manual, huge pages are documented for the host OS in section 8.3.3.3, "Enabling 1 GB huge pages for guests at boot or runtime":
Procedure 8.2. Allocating 1 GB huge pages at boot time
To allocate different sizes of huge pages at boot, use the following command, specifying the number of huge pages. This example allocates 4 1 GB huge pages and 1024 2 MB huge pages: 'default_hugepagesz=1G hugepagesz=1G hugepages=4 hugepagesz=2M hugepages=1024' Change this command line to specify a different number of huge pages to be allocated at boot.
Note The next two steps must also be completed the first time you allocate 1 GB huge pages at boot time.
Mount the 2 MB and 1 GB huge pages on the host:
# mkdir /dev/hugepages1G
# mount -t hugetlbfs -o pagesize=1G none /dev/hugepages1G
# mkdir /dev/hugepages2M
# mount -t hugetlbfs -o pagesize=2M none /dev/hugepages2M
Restart libvirtd to enable the use of 1 GB huge pages on guests:
# systemctl restart libvirtd
1 GB huge pages are now available for guests.
For Ubuntu and KVM: "KVM - Using Hugepages"
By increasing the page size, you reduce the page table size and the pressure on the TLB cache. ... vm.nr_hugepages = 256 ... Reboot the system (note: this means a physical reboot of the host machine and host OS) ... Set up libvirt to use huge pages: KVM_HUGEPAGES=1 ... Setting up a guest to use huge pages (see the sketch below).
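The guest-side half ("setting up a guest to use huge pages") lives in the domain XML's memoryBacking element. A sketch, assuming a libvirt new enough for per-size page elements (1.2.5+, if I recall correctly):

<!-- inside the <domain> element: back this guest's RAM with 1 GiB huge pages -->
<memoryBacking>
  <hugepages>
    <page size='1' unit='G'/>
  </hugepages>
</memoryBacking>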
For Fedora and KVM (old manual about 2MB pages): https://fedoraproject.org/wiki/Features/KVM_Huge_Page_Backed_Memory
ESXi 5 had support for 2 MB pages, which had to be enabled manually: "How to Modify Large Memory Page Settings on ESXi".
For "VMware’s ESX server" of unknown version, from March 2015 paper: BQ Pham, "Using TLB Speculation to Overcome Page Splintering in Virtual Machines", Rutgers University Technical Report DCS-TR-713, March 2015:
Lack of hypervisor support for large pages: Finally, hypervisor vendors can take a few production cycles before fully adopting large pages. For example, VMware's ESX server currently has no support for 1GB large pages in the hypervisor, even though guests on x86-64 systems can use them.
Newer paper, no direct conclusion about 1GB pages: https://rucore.libraries.rutgers.edu/rutgers-lib/49279/PDF/1/
We find that large pages are conflicted with lightweight memory management across a range of hypervisors (e.g., ESX, KVM) across architectures (e.g., ARM, x86-64) and container-based technologies.
Old PDF from VMware: "Large Page Performance. ESX Server 3.5 and ESX Server 3i v3.5", https://www.vmware.com/content/dam/digitalmarketing/vmware/en/pdf/techpaper/large_pg_performance.pdf (only 2MB huge pages are listed as supported):
VMware ESX Server 3.5 and VMware ESX Server 3i v3.5 introduce 2MB large page support to the virtualized environment. In earlier versions of ESX Server, guest operating system large pages were emulated using small pages. This meant that, even if the guest operating system was using large pages, it did not get the performance benefit of reducing TLB misses. The enhanced large page support in ESX Server 3.5 and ESX Server 3i v3.5 enables 32-bit virtual machines in PAE mode and 64-bit virtual machines to make use of large pages.
Passing the host CPU through to the VM worked for me; it gives the VM the pdpe1gb CPU flag.
I use QEMU + libvirt, with 1G hugepagesz enabled on the host.
Maybe this is helpful: set the CPU feature in the XML describing the VM as follows:
<cpu mode='custom' match='exact' check='partial'>
  <model fallback='allow'>Broadwell</model>
  <feature policy='force' name='pdpe1gb'/>
</cpu>