Installing MongoDB on Ubuntu 22

I am trying to install MongoDB (5.0, and then 4.4) on Ubuntu 22, but apparently my CPU does not support some of the instructions these builds need. Every version I installed crashed with an illegal-instruction / core-dump error. I also tried installing from the tar.gz archive, but that did not work either.
this is my cpu info:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 39 bits physical, 48 bits virtual
Byte Order: Little Endian
CPU(s): 2
On-line CPU(s) list: 0,1
Vendor ID: GenuineIntel
Model name: Intel(R) Pentium(R) CPU G3260 @ 3.30GHz
CPU family: 6
Model: 60
Thread(s) per core: 1
Core(s) per socket: 2
Socket(s): 1
Stepping: 3
CPU max MHz: 3300.0000
CPU min MHz: 800.0000
BogoMIPS: 6585.15
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx est tm2 ssse3 sdbg cx16 xtpr pdcm pcid sse4_1 sse4_2 movbe popcnt tsc_deadline_timer xsave rdrand lahf_lm abm cpuid_fault epb invpcid_single pti ssbd ibrs ibpb stibp tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust erms invpcid xsaveopt dtherm arat pln pts md_clear flush_l1d
Virtualization features:
Virtualization: VT-x
Caches (sum of all):
L1d: 64 KiB (2 instances)
L1i: 64 KiB (2 instances)
L2: 512 KiB (2 instances)
L3: 3 MiB (1 instance)
NUMA:
NUMA node(s): 1
NUMA node0 CPU(s): 0,1
Vulnerabilities:
Itlb multihit: KVM: Mitigation: VMX disabled
L1tf: Mitigation; PTE Inversion; VMX conditional cache flushes, SMT disabled
Mds: Mitigation; Clear CPU buffers; SMT disabled
Meltdown: Mitigation; PTI
Mmio stale data: Not affected
Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Spectre v2: Mitigation; Retpolines, IBPB conditional, IBRS_FW, STIBP disabled, RSB filling
Srbds: Mitigation; Microcode
Tsx async abort: Not affected
My question is: besides changing my CPU, is there any way to install MongoDB on Ubuntu?
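For reference: MongoDB's prebuilt binaries require AVX starting with 5.0, and the flag list above contains no avx entry, which matches the illegal-instruction crash. A minimal check, as a diagnostic sketch using only standard tools:

grep -o 'avx[^ ]*' /proc/cpuinfo | sort -u   # no output means the CPU lacks AVX

If that prints nothing, the practical options are the 4.4 series (the last one without the AVX requirement; a 4.4 crash therefore points at a different problem, e.g. library dependencies on Ubuntu 22.04) or compiling MongoDB from source targeting this CPU.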

Related

Bitnami Helm chart fails to launch pods

My VM has the configuration below, but when I install bitnami/dokuwiki (or any other chart) from the Bitnami charts and run the deployment, the pods stay Pending or go into CrashLoopBackOff. Can someone help with this?
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
Address sizes: 40 bits physical, 48 bits virtual
CPU(s): 4
On-line CPU(s) list: 0-3
Thread(s) per core: 1
Core(s) per socket: 1
Socket(s): 4
NUMA node(s): 1
Vendor ID: GenuineIntel
CPU family: 6
Model: 85
Model name: Intel(R) Xeon(R) Gold 6230R CPU @ 2.10GHz
Stepping: 7
CPU MHz: 2095.077
BogoMIPS: 4190.15
Hypervisor vendor: VMware
Virtualization type: full
L1d cache: 128 KiB
L1i cache: 128 KiB
L2 cache: 4 MiB
L3 cache: 143 MiB
NUMA node0 CPU(s): 0-3
Issue:
I tried applying a PV, but the pods are still not running. I want to get these pods running.
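When pods sit in Pending or CrashLoopBackOff, the scheduler events and previous container logs usually name the cause. A minimal triage sketch (the pod name is a placeholder, not from the question):

kubectl get pods                        # find the failing pod's name
kubectl describe pod <pod-name>         # 'Events:' shows unbound PVCs, resource shortages, etc.
kubectl logs <pod-name> --previous      # output of the last crashed container, if it started

Pending usually points at scheduling or storage (for example an unbound PersistentVolumeClaim), while CrashLoopBackOff means the container starts and dies; in the latter case the logs would show, for example, an "Illegal instruction" message if the image needs CPU features the host lacks.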

Import and run TensorFlow 2 on a Linux machine that does not support AVX instructions

I am on Red Hat Enterprise Linux Server release 7.7 and have installed TensorFlow 2.1.0 on this machine.
Whenever I try to import TensorFlow as follows:
import tensorflow as tf
it gives the following error:
Illegal instruction (core dumped)
I have done some research and figured that it happens because my machine does not support AVX.
I found a link that solves a similar issue on a Windows machine. I was wondering if there is any way to solve it on a Linux machine?
I used more /proc/cpuinfo | grep flags to get the flags supported by my CPU. The following are the flags supported on my machine:
fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx lm constant_tsc arch_perfmon pebs bts rep_good nopl aperfmperf eagerfpu pni dtes64 monitor ds_cpl vmx est tm2 ssse3 cx16 xtpr pdcm dca sse4_1 lahf_lm tpr_shadow vnmi flexpriority dtherm
I know that the problem would be gone if I used TensorFlow 1.5, but at this point I cannot downgrade to 1.5.
Is there any way to import and run tensorflow 2.1.0 on a machine that does not support AVX instructions?
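The prebuilt tensorflow wheels have been compiled with AVX enabled since 1.6, so on a non-AVX CPU the usual way out is to build the wheel on (or for) that machine, since the default optimization flags target the host CPU. A rough sketch of the source-build route (versions and paths are illustrative; the authoritative steps are in TensorFlow's build-from-source guide):

git clone https://github.com/tensorflow/tensorflow.git
cd tensorflow
git checkout v2.1.0
./configure                      # answer the prompts; the defaults target the host CPU
bazel build --config=opt //tensorflow/tools/pip_package:build_pip_package
./bazel-bin/tensorflow/tools/pip_package/build_pip_package /tmp/tensorflow_pkg
pip install /tmp/tensorflow_pkg/tensorflow-*.whl

Alternatively, several third parties publish unofficial non-AVX wheels; those save a long build, but you have to trust the builder.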

Number of cores for a Python program vs. number of CPUs

I have a CPU with 32 processors, and each reports 16 cores. Here is the truncated output of cat /proc/cpuinfo for the 32nd processor.
processor : 31
vendor_id : GenuineIntel
cpu family : 6
model : 79
model name : Intel(R) Xeon(R) CPU E5-2686 v4 @ 2.30GHz
stepping : 1
microcode : 0xb000037
cpu MHz : 2700.787
cache size : 46080 KB
physical id : 0
siblings : 32
core id : 15
cpu cores : 16
apicid : 31
initial apicid : 31
fpu : yes
fpu_exception : yes
cpuid level : 13
wp : yes
flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx pdpe1gb rdtscp lm constant_tsc rep_good nopl xtopology nonstop_tsc aperfmperf pni pclmulqdq monitor est ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch invpcid_single kaiser fsgsbase bmi1 hle avx2 smep bmi2 erms invpcid rtm rdseed adx xsaveopt ida
bugs : cpu_meltdown spectre_v1 spectre_v2 spec_store_bypass l1tf
bogomips : 4600.08
clflush size : 64
cache_alignment : 64
address sizes : 46 bits physical, 48 bits virtual
power management:
What does this mean for the OS? Can it run 32*16 = 512 processes completely in parallel?
However, when I run the following Python code, I still get 32 as the output.
import multiprocessing
print("Number of cpu : ", multiprocessing.cpu_count())
So can Python only run 32 processes completely in parallel?
Your processor has 16 cores that can each run two hardware threads, giving 32 threads in parallel (this is what the siblings and cpu cores fields above show). This means that Python can parallelize (multi-thread) across those 32 threads.
From the Intel Website:
A Thread, or thread of execution, is a software term for the basic ordered sequence of instructions that can be passed through or processed by a single CPU core.
Plainly: yes, you can "only" run 32 threads in parallel.
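The 32 that multiprocessing.cpu_count() returns is the number of logical CPUs (hardware threads), not cores multiplied by processors. A quick cross-check against what the kernel reports, as a sketch using standard tools:

lscpu | grep -E '^(CPU\(s\)|Thread\(s\) per core|Core\(s\) per socket|Socket\(s\))'
nproc    # logical CPUs = sockets x cores-per-socket x threads-per-core; matches cpu_count()

On the machine above this should print something like CPU(s): 32, Thread(s) per core: 2, Core(s) per socket: 16, Socket(s): 1.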

Hugepagesize is not increasing to 1G in VM

I am using CentOS VM in ESXi Server. I want to increase the Huge page size to 1G.
I followed the link:
http://dpdk-guide.gitlab.io/dpdk-guide/setup/hugepages.html
I executed the small script to check if the size of 1 GB is supported:
[root@localhost ~]# if grep pdpe1gb /proc/cpuinfo >/dev/null 2>&1; then echo "1GB supported."; fi
1GB supported.
[root@localhost ~]#
I added default_hugepagesz=1GB hugepagesz=1G hugepages=4 to /etc/default/grub.
grub2-mkconfig -o /boot/grub2/grub.cfg
Rebooted the VM.
But I still see 2048 kB (2 MB) as the hugepage size.
[root@localhost ~]# cat /proc/meminfo | grep -i huge
AnonHugePages: 8192 kB
HugePages_Total: 1024
HugePages_Free: 1024
HugePages_Rsvd: 0
HugePages_Surp: 0
Hugepagesize: 2048 kB
[root@localhost ~]#
The following are details of VM:
[root@localhost ~]# uname -a
Linux localhost.localdomain 3.10.0-514.10.2.el7.x86_64 #1 SMP Fri Mar 3 00:04:05 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux
[root@localhost ~]#
[root@localhost ~]# cat /proc/cpuinfo | grep -i flags
flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts mmx fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp lm constant_tsc arch_perfmon pebs bts nopl xtopology tsc_reliable nonstop_tsc aperfmperf pni pclmulqdq vmx ssse3 cx16 pcid sse4_1 sse4_2 x2apic popcnt aes xsave avx hypervisor lahf_lm ida arat epb pln pts dtherm tpr_shadow vnmi ept vpid
flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts mmx fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp lm constant_tsc arch_perfmon pebs bts nopl xtopology tsc_reliable nonstop_tsc aperfmperf pni pclmulqdq vmx ssse3 cx16 pcid sse4_1 sse4_2 x2apic popcnt aes xsave avx hypervisor lahf_lm ida arat epb pln pts dtherm tpr_shadow vnmi ept vpid
[root@localhost ~]#
8GB of Memory and 2 CPUs are allocated to VM.
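For anyone debugging the same thing, a quick way to cross-check what the kernel actually set up (a diagnostic sketch using standard interfaces):

ls /sys/kernel/mm/hugepages/    # a hugepages-1048576kB directory appears only if 1 GB pages were set up
cat /proc/cmdline               # confirm default_hugepagesz/hugepagesz/hugepages reached the kernel
grep -i huge /proc/meminfo

If /proc/cmdline does not show the options, the grub change never took effect; if it does, but only hugepages-2048kB exists, the kernel (or the virtual CPU it sees) could not provide 1 GB pages.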
A CPU flag advertising 1 GB hugepage support, plus guest OS support and configuration, are not enough to get 1 GB hugepages working in a virtualized environment.
The idea of huge pages, both at the PMD level (2 MB, or 4 MB before PAE and x86_64) and at the PUD level (1 GB), is to map an aligned virtual region of huge size onto a huge region of physical memory (which, as I understand it, must be aligned too). With the additional virtualization level of a hypervisor there are now three (or four) memory levels: the app's virtual memory in the guest OS, the memory the guest OS considers physical (which is actually managed by the virtualization solution: ESXi, Xen, KVM, ...), and the real physical memory. It is reasonable to assume that, to be useful, huge pages should have the same size at all three levels: they generate fewer TLB misses and need fewer page-table structures to describe a lot of memory (grep for "Need bigger than 4KB pages" in Dick Sites's "Datacenter Computers: modern challenges in CPU design", Google, Feb 2015).
So, to use huge pages of some level inside the guest OS, you should already have huge pages of the same size in physical memory (in your host OS) and in your virtualization solution. You can't effectively use huge pages inside the guest when they are not available to your host OS and virtualization software. (Some solutions like QEMU or Bochs may emulate them, but that would range from slow to very slow.) And when you want both 2 MB and 1 GB huge pages, your CPU, host OS, virtualization system, and guest OS must all support them, and the host system must have enough aligned, contiguous physical memory to allocate a 1 GB page (you probably can't split such a page across several sockets in NUMA).
I don't know about ESXi, but here are some links for Red Hat and some(?) Linux virtualization solutions (with libvirtd). In the "Virtualization Tuning and Optimization Guide" manual, hugepages are documented for the host OS: 8.3.3.3. Enabling 1 GB huge pages for guests at boot or runtime:
Procedure 8.2. Allocating 1 GB huge pages at boot time
To allocate different sizes of huge pages at boot, use the following command, specifying the number of huge pages. This example allocates four 1 GB huge pages and 1024 2 MB huge pages: 'default_hugepagesz=1G hugepagesz=1G hugepages=4 hugepagesz=2M hugepages=1024'. Change this command line to specify a different number of huge pages to be allocated at boot.
Note: the next two steps must also be completed the first time you allocate 1 GB huge pages at boot time.
Mount the 2 MB and 1 GB huge pages on the host:
# mkdir /dev/hugepages1G
# mount -t hugetlbfs -o pagesize=1G none /dev/hugepages1G
# mkdir /dev/hugepages2M
# mount -t hugetlbfs -o pagesize=2M none /dev/hugepages2M
Restart libvirtd to enable the use of 1 GB huge pages on guests:
# service libvirtd restart
1 GB huge pages are now available for guests.
For Ubuntu and KVM: "KVM - Using Hugepages":
By increasing the page size, you reduce the page table and reduce the pressure on the TLB cache. ... vm.nr_hugepages = 256 ... Reboot the system (note: this is a physical reboot of the host machine and host OS) ... Set up Libvirt to use Huge Pages: KVM_HUGEPAGES=1 ... Setting up a guest to use Huge Pages
For Fedora and KVM (old manual about 2MB pages): https://fedoraproject.org/wiki/Features/KVM_Huge_Page_Backed_Memory
ESXi 5 had support for 2 MB pages, which had to be enabled manually: How to Modify Large Memory Page Settings on ESXi.
For "VMware’s ESX server" of unknown version, from March 2015 paper: BQ Pham, "Using TLB Speculation to Overcome Page Splintering in Virtual Machines", Rutgers University Technical Report DCS-TR-713, March 2015:
Lack of hypervisor support for large pages: Finally, hypervisor vendors can take a few production cycles before fully adopting large pages. For example, VMware’s ESX server currently has no support for 1GB large pages in the hypervisor,
even though guests on x86-64 systems can use them.
Newer paper, no direct conclusion about 1GB pages: https://rucore.libraries.rutgers.edu/rutgers-lib/49279/PDF/1/
We find that large pages are conflicted with lightweight memory management across a range of hypervisors (e.g., ESX, KVM) across architectures (e.g., ARM, x86-64) and container-based technologies.
An old PDF from VMware, "Large Page Performance: ESX Server 3.5 and ESX Server 3i v3.5" (https://www.vmware.com/content/dam/digitalmarketing/vmware/en/pdf/techpaper/large_pg_performance.pdf), lists only 2 MB huge pages as supported:
VMware ESX Server 3.5 and VMware ESX Server 3i v3.5 introduce 2MB large page support to the virtualized
environment. In earlier versions of ESX Server, guest operating system large pages were emulated using small
pages. This meant that, even if the guest operating system was using large pages, it did not get the
performance benefit of reducing TLB misses. The enhanced large page support in ESX Server 3.5 and ESX
Server 3i v3.5 enables 32-bit virtual machines in PAE mode and 64-bit virtual machines to make use of large
pages.
Passing the host CPU through to the VM worked for me; it gives the VM the pdpe1gb CPU flag. I use QEMU + libvirt and enable 1G hugepagesz on the host. Maybe it is helpful: set the CPU feature in the XML describing the VM as follows:
<cpu mode='custom' match='exact' check='partial'>
<model fallback='allow'>Broadwell</model>
<feature policy='force' name='pdpe1gb'/>
</cpu>
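After restarting the guest, it is worth confirming that the flag and the page size actually arrived inside it (a verification sketch, mirroring the host-side checks above; run inside the guest):

grep -o pdpe1gb /proc/cpuinfo | sort -u   # empty output means no 1 GB page support in the guest
ls /sys/kernel/mm/hugepages/              # hugepages-1048576kB should now be available to configure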

KVM and libvirt: wrong CPU type in virtual host

We use KVM and libvirt on a 6-core (12 HT threads) machine for virtualization.
Problem: wrong CPU type in the virtual host.
Used KVM, libvirt, and kernel versions:
libvirt version: 0.9.8
QEMU emulator version 1.0 (qemu-kvm-1.0), Copyright (c) 2003-2008 Fabrice Bellard
Ubuntu 12.04.1 LTS
kernel: 3.2.0-32-generic x86_64
/usr/share/libvirt/cpu_map.xml does not support CPU types more recent than Westmere.
Do I need this kind of virtualization of the CPU at all? For several reasons we need maximum CPU performance in the virtual host. I would be glad to have some cores of the server's i7-3930K CPU @ 3.20GHz available in my virtual machines.
Maybe we do too much virtualization...?
My virtual host's XML looks like the following: where can I set the -cpu host flag?
<domain type='kvm'>
<name>myVirtualServer</name>
<uuid>2344481d-f455-455e-9558</uuid>
<description>Test-Server</description>
<memory>4194304</memory>
<currentMemory>4194304</currentMemory>
<vcpu>2</vcpu>
<cpu match='exact'>
<model>Westmere</model>
<vendor>Intel</vendor>
</cpu>
<os>
<type arch='x86_64' machine='pc-1.0'>hvm</type>
<boot dev='hd'/>
</os>
<features>
<acpi/>
<apic/>
<pae/>
</features>
$ lscpu of physical Server with 6 (12) cores with HT
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
CPU(s): 12
On-line CPU(s) list: 0-11
Thread(s) per core: 2
Core(s) per socket: 6
Socket(s): 1
NUMA node(s): 1
Vendor ID: GenuineIntel
CPU family: 6
Model: 45
Stepping: 7
CPU MHz: 1200.000
BogoMIPS: 6400.05
Virtualization: VT-x
L1d cache: 32K
L1i cache: 32K
L2 cache: 256K
L3 cache: 12288K
NUMA node0 CPU(s): 0-11
$ lscpu of virtual Server (wrong CPU type, wrong L2-Cache, wrong MHz)
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
CPU(s): 2
On-line CPU(s) list: 0,1
Thread(s) per core: 1
Core(s) per socket: 1
Socket(s): 2
NUMA node(s): 1
Vendor ID: GenuineIntel
CPU family: 6
Model: 15
Stepping: 11
CPU MHz: 3200.012
BogoMIPS: 6400.02
Virtualisation: VT-x
Hypervisor vendor: KVM
Virtualisation type: full
L1d cache: 32K
L1i cache: 32K
L2 cache: 4096K
NUMA node0 CPU(s): 0,1
In the guest's XML, set:
<cpu mode='custom' match='exact'>
<model fallback='allow'>core2duo</model>
<feature policy='require' name='vmx'/>
</cpu>
as an example. Use virsh edit, then restart the guest.
EDIT. Ignore this. I've just re-read your question and you're already doing that.
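Since the goal is maximum CPU performance rather than live-migration compatibility, libvirt versions newer than the 0.9.8 listed above also offer a passthrough mode that exposes the host CPU model directly to the guest. A sketch of that route (the domain name matches the XML above):

virsh edit myVirtualServer
# then replace the whole <cpu> element with:
#   <cpu mode='host-passthrough'/>
# and power-cycle the guest (a reboot inside the guest is not enough):
virsh shutdown myVirtualServer
virsh start myVirtualServer

With host-passthrough, lscpu in the guest reports the physical CPU model; the trade-off is that the guest can no longer be migrated to hosts with a different CPU.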