Bitnami Helm chart fails to launch pods - Kubernetes

My VM has the configuration below, but when I install any chart from the Bitnami repository (for example bitnami/dokuwiki) and run the deployment, the pods stay in Pending or go into CrashLoopBackOff. Can someone help me with this?
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
Address sizes: 40 bits physical, 48 bits virtual
CPU(s): 4
On-line CPU(s) list: 0-3
Thread(s) per core: 1
Core(s) per socket: 1
Socket(s): 4
NUMA node(s): 1
Vendor ID: GenuineIntel
CPU family: 6
Model: 85
Model name: Intel(R) Xeon(R) Gold 6230R CPU @ 2.10GHz
Stepping: 7
CPU MHz: 2095.077
BogoMIPS: 4190.15
Hypervisor vendor: VMware
Virtualization type: full
L1d cache: 128 KiB
L1i cache: 128 KiB
L2 cache: 4 MiB
L3 cache: 143 MiB
NUMA node0 CPU(s): 0-3
Issue:
I tried applying a PV (PersistentVolume), but the pods are still not running. I want to get these pods running.
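To see why the pods are stuck, the usual first step is to ask Kubernetes itself. A minimal diagnostic sketch, assuming the chart was installed into the default namespace (the pod names are placeholders):

kubectl get pods                         # list pods and their states
kubectl describe pod <pod-name>          # the Events section explains Pending (unbound PVC, insufficient CPU/memory, ...)
kubectl get pvc                          # check whether the chart's PersistentVolumeClaims are Bound
kubectl logs <pod-name> --previous       # logs of the previous container run when it is in CrashLoopBackOff

On a plain VM cluster, Pending pods are most often caused by a PVC that no PV or StorageClass satisfies, while CrashLoopBackOff is diagnosed from the container logs.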

Related

High CPU usage of PostgreSQL

I have a complex PostgreSQL-backed Ruby on Rails application running on an Ubuntu virtual machine. I see that the Postgres processes have very high %CPU values when running "top"; periodically the %CPU goes up to 94 and 95.
lscpu gives the following output:
Architecture: i686
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
CPU(s): 4
On-line CPU(s) list: 0-3
Thread(s) per core: 1
Core(s) per socket: 1
Socket(s): 4
Vendor ID: GenuineIntel
CPU family: 6
Model: 85
Stepping: 4
CPU MHz: 2100.000
BogoMIPS: 4200.00
L1d cache: 32K
L1i cache: 32K
L2 cache: 1024K
L3 cache: 33792K
I checked the usage with top -n1 and top -c.
I want to know the reason for the high CPU utilization by Postgres.
Any help is appreciated. Thanks in advance!
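A standard first diagnostic for high Postgres CPU (a generic suggestion, not taken from this thread) is to ask Postgres which queries are busy via the pg_stat_activity view:

psql -c "SELECT pid, now() - query_start AS runtime, state, query FROM pg_stat_activity WHERE state <> 'idle' ORDER BY runtime DESC;"

Long-running or frequently repeated queries at the top of that list are the usual culprits; running EXPLAIN ANALYZE on them shows whether an index is missing.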

Spring Batch Partition Threads Not Executing Together

We have a Spring Batch job that reads XML files and inserts the data into a database. Each slave partition takes 10,000 XML files and writes them to the DB.
Below is the thread pool configuration:
@Bean
public ThreadPoolTaskExecutor taskExecutor() {
    ThreadPoolTaskExecutor taskExecutor = new ThreadPoolTaskExecutor();
    taskExecutor.setMaxPoolSize(80);
    taskExecutor.setCorePoolSize(50);
    // tasks queue here once all core threads are busy; extra threads up to
    // maxPoolSize are only created when the queue is full
    taskExecutor.setQueueCapacity(30);
    taskExecutor.setWaitForTasksToCompleteOnShutdown(true);
    taskExecutor.afterPropertiesSet();
    return taskExecutor;
}
We partition into 30 partitions with a commit interval of 100 per thread. The BATCH_STEP_EXECUTION table shows all 30 threads as started. A few complete within seconds with the same number of records, while the others wait and take 3 to 7 minutes. The application is configured with 6 GB of memory on 64-bit Linux with 96 CPUs.
Server Configuration
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
CPU(s): 96
On-line CPU(s) list: 0-95
Thread(s) per core: 2
Core(s) per socket: 24
Socket(s): 2
NUMA node(s): 2
Vendor ID: GenuineIntel
CPU family: 6
Model: 85
Model name: Intel(R) Xeon(R) Platinum 8160M CPU @ 2.10GHz
Stepping: 4
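One way to narrow this down (a generic diagnostic sketch, not from the original thread; it assumes a PostgreSQL metadata store and psql purely for illustration, substitute your own SQL client) is to compare per-partition timings in the Spring Batch metadata tables:

psql -c "SELECT STEP_NAME, STATUS, START_TIME, END_TIME, WRITE_COUNT FROM BATCH_STEP_EXECUTION ORDER BY START_TIME;"

If all 30 partitions share nearly identical START_TIME values but END_TIME is widely spread, the threads did start together and the slowdown is downstream, typically lock or connection-pool contention while 30 threads insert into the same table, rather than the task executor configuration.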

Ceph luminous rbd map hangs forever

Running a 1-node Ceph cluster, and using the ceph-client from another node. QEMU works fine with RBD mounting. When I try to map an RBD block device on the ceph-client, I get an indefinite hang with no output. How do I diagnose what's wrong?
The system is Ubuntu 16.04 server, with Ceph Luminous.
sudo ceph tell osd.* version
{
"version": "ceph version 12.2.2 (cf0baeeeeba3b47f9427c6c97e2144b094b7e5ba) luminous (stable)"
}
ceph -s
  cluster:
    id:     4bfcc109-e432-4ac0-ba9d-bf81243aea
    health: HEALTH_OK
  services:
    mon: 1 daemons, quorum gcmaster
    mgr: gcmaster(active)
    osd: 1 osds: 1 up, 1 in
  data:
    pools:   1 pools, 128 pgs
    objects: 1512 objects, 5879 MB
    usage:   7356 MB used, 216 GB / 223 GB avail
    pgs:     128 active+clean
rbd info gcbase
rbd image 'gcbase':
    size 512 MB in 128 objects
    order 22 (4096 kB objects)
    block_name_prefix: rbd_data.376974b0dc51
    format: 2
    features: layering, exclusive-lock, object-map, fast-diff, deep-flatten
    flags:
    create_timestamp: Fri Dec 29 17:58:02 2017
This hangs forever:
rbd map gcbase --pool rbd
As does this:
rbd map typo_gcbase --pool rbd
dmesg shows:
Dec 29 13:27:32 cephclient1 kernel: [85798.195468] libceph: mon0 192.168.1.55:6789 feature set mismatch, my 106b84a842a42 < server's 40106b84a842a42, missing 400000000000000
Dec 29 13:27:32 cephclient1 kernel: [85798.222070] libceph: mon0 192.168.1.55:6789 missing required protocol features
The dmesg output tells what's going on: The cluster requires a feature bit that is not supported by the libceph kernel module.
The feature bit in question is either CEPH_FEATURE_CRUSH_TUNABLES5, CEPH_FEATURE_NEW_OSDOPREPLY_ENCODING or CEPH_FEATURE_FS_FILE_LAYOUT_V2 (they are overlapping because they were introduced at the same time) which only became available on kernel 4.5, whereas Ubuntu 16.04 uses a 4.4 kernel.
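To confirm the client kernel version (a trivial check, added for completeness):

uname -r

Anything below 4.5 is missing the required feature bits in the libceph kernel module.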
A similar question (although related to CephFS) came up on the mailing list with a possible solution:
Yes, you should be able to set your CRUSH tunables profile to hammer
with "ceph osd crush tunables hammer".
This will disable some features, but should make the older kernel compatible with the cluster.
Alternatively you could upgrade to a mainline kernel or to a newer OS release.
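Putting the workaround together (the tunables command is quoted from the mailing list above; the rbd map re-test is the obvious follow-up):

ceph osd crush tunables hammer    # lower the tunables profile so pre-4.5 kernels can talk to the cluster
rbd map gcbase --pool rbd         # should now return a device such as /dev/rbd0 instead of hanging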

Strange G-WAN response speed differences

I have just deployed the G-WAN web server to test my code. Strangely, my server sometimes responds very fast (20 ms) and sometimes takes several seconds (6-7 s) or even times out...
I tried simplifying my code to just return a string to clients, but the problem still occurs...
Besides, I logged the time consumed by my code; it is never over 1 second, so what causes the problem?!
I guessed it was network delay, but when I tested the network speed of the same server it was very fast. Any ideas? (Could the problem be caused by including a third-party library like MySQL?)
Here is my G-WAN log:
*------------------------------------------------
*G-WAN 4.3.14 64-bit (Mar 14 2013 07:33:12)
* ------------------------------------------------
* Local Time: Mon, 29 Jul 2013 10:09:05 GMT+8
* RAM: (918.46 MiB free + 0 shared + 222.81 MiB buffers) / 1.10 GiB total
* Physical Pages: 918.46 MiB / 1.10 GiB
* DISK: 3.27 GiB free / 6.46 GiB total
* Filesystem Type Size Used Avail Use% Mounted on
* /dev/mapper/vg_centos6-root
* ext4 6.5G 3.2G 3.0G 52% /
* tmpfs tmpfs 1004M 8.2M 995M 1% /dev/shm
* /dev/xvda1 ext4 485M 129M 331M 28% /boot
* 105 processes, including pid:10874 '/opt/gwan/gwan'
* Page-size:4,096 Child-max:65,535 Stream-max:16
* CPU: 1x Intel(R) Xeon(R) CPU E5506 @ 2.13GHz
* 0 id: 0 0
* Cores: possible:0-14 present:0 online:0
* L1d cache: 32K line:64 0
* L1i cache: 32K line:64 0
* L2 cache: 256K line:64 0
* L3 cache: 4096K line:64 0
* NUMA node #1 0
* CPU(s):1, Core(s)/CPU:0, Thread(s)/Core:2
* Bogomips: 4,256.14
* Hypervisor: XenVMMXenVMM
* using 1 workers 0[1]0
* among 2 threads 0[]1
* 64-bit little-endian (least significant byte first)
* CentOS release 6.3 (Final) (3.5.5-1.) 64-bit
* user: root (uid:0), group: root (uid:0)
* system fd_max: 65,535
* program fd_max: 65,535
* updated fd_max: 500,000
* Available network interfaces (3):
* 127.0.0.1
* 192.168.0.1
* xxx.xxx.xxx.xxx
* memory footprint: 1.39 MiB.
* Host /opt/gwan/0.0.0.0_8080/#0.0.0.0
* loaded index.c 3.46 MiB MD5:afb6c263-791c706a-598cc77b-e0873517
* memory footprint: 3.40 MiB.
If I use -g mode and increase the number of workers up to the number of CPUs of the server, the problem seems to be resolved.
Then it seems to be a CPU detection issue. Please dump the relevant part of your gwan.log file header (CPU detection) so we can have a look.
When G-WAN has to re-compile a servlet using external libraries that must be searched and linked, this may take time (especially if there is only one worker and other requests are pending).
UPDATE: following your gwan.log file dump, here is what's important:
CPU: 1x Intel(R) Xeon(R) CPU E5506 @ 2.13GHz
0 id: 0 0
Cores: possible:0-14 present:0 online:0
CPU(s):1, Core(s)/CPU:0, Thread(s)/Core:2
Hypervisor: XenVMMXenVMM
using 1 workers 0[1]0
among 2 threads 0[]1
The Intel E5506 is a 4-Core CPU... but the Xen Hypervisor is reporting 1 CPU and 0 Cores (and hyperthreading enabled, which makes no sense without any CPU Core).
Why Xen finds it a priority to corrupt the genuine and correct information about the CPU with complete nonsense is beyond the purpose of this discussion.
All I can say is that this is the cause of the issue experienced by 'moriya' (hence the 'fix' with ./gwan -g -w 4 to bypass the wrong information reported by the corrupted Linux kernel /proc and the CPUID instruction).
I can only suggest to avoid using brain-damaged hypervisors which prevent multicore software (like G-WAN) from running correctly by sabotaging the two standard ways to detect CPU topologies: the Linux kernel /proc structure and the CPUID instruction.
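For reference, the workaround mentioned above as a single command (flags exactly as given in the thread; -w sets the worker count, -g is the mode the poster reported using):

./gwan -g -w 4    # force 4 workers instead of trusting the hypervisor's broken CPU topology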

KVM and libvirt: wrong CPU type in virtual host

We use KVM and libvirt on a 6-core (12 HT threads) machine for virtualization.
Problem: wrong CPU type in virtual host.
KVM, libvirt, and kernel versions used:
libvirt version: 0.9.8
QEMU emulator version 1.0 (qemu-kvm-1.0), Copyright (c) 2003-2008 Fabrice Bellard
Ubuntu 12.04.1 LTS
kernel: 3.2.0-32-generic x86_64
/usr/share/libvirt/cpu_map.xml does not support CPU types more recent than Westmere.
Do I need this kind of virtualization of the CPU at all? For several reasons we need maximum CPU performance in the virtual host. I would be glad to have some cores of the server's i7-3930K CPU @ 3.20GHz available in my virtual machines.
Maybe we do too much virtualization...?
My virtual host's XML looks like the following. Where can I set the equivalent of the -cpu host flag?
<domain type='kvm'>
  <name>myVirtualServer</name>
  <uuid>2344481d-f455-455e-9558</uuid>
  <description>Test-Server</description>
  <memory>4194304</memory>
  <currentMemory>4194304</currentMemory>
  <vcpu>2</vcpu>
  <cpu match='exact'>
    <model>Westmere</model>
    <vendor>Intel</vendor>
  </cpu>
  <os>
    <type arch='x86_64' machine='pc-1.0'>hvm</type>
    <boot dev='hd'/>
  </os>
  <features>
    <acpi/>
    <apic/>
    <pae/>
  </features>
$ lscpu of the physical server with 6 cores (12 with HT)
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
CPU(s): 12
On-line CPU(s) list: 0-11
Thread(s) per core: 2
Core(s) per socket: 6
Socket(s): 1
NUMA node(s): 1
Vendor ID: GenuineIntel
CPU family: 6
Model: 45
Stepping: 7
CPU MHz: 1200.000
BogoMIPS: 6400.05
Virtualization: VT-x
L1d cache: 32K
L1i cache: 32K
L2 cache: 256K
L3 cache: 12288K
NUMA node0 CPU(s): 0-11
$ lscpu of the virtual server (wrong CPU type, wrong L2 cache, wrong MHz)
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
CPU(s): 2
On-line CPU(s) list: 0,1
Thread(s) per core: 1
Core(s) per socket: 1
Socket(s): 2
NUMA node(s): 1
Vendor ID: GenuineIntel
CPU family: 6
Model: 15
Stepping: 11
CPU MHz: 3200.012
BogoMIPS: 6400.02
Virtualisation: VT-x
Hypervisor vendor: KVM
Virtualisation type: full
L1d cache: 32K
L1i cache: 32K
L2 cache: 4096K
NUMA node0 CPU(s): 0,1
In the guest's XML, set for example:
<cpu mode='custom' match='exact'>
  <model fallback='allow'>core2duo</model>
  <feature policy='require' name='vmx'/>
</cpu>
Apply it with virsh edit, then restart the guest.
EDIT. Ignore this. I've just re-read your question and you're already doing that.
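For the maximum-performance goal in the question, the usual approach is host-passthrough mode (not shown in this thread, and it needs libvirt 0.9.11 or newer, so the 0.9.8 install above would have to be upgraded). Run virsh edit myVirtualServer and replace the whole <cpu> block with:

<cpu mode='host-passthrough'/>

The guest then sees the real i7-3930K feature set unchanged, at the cost of making live migration between non-identical hosts unsafe.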