I've deployed a small Greenplum cluster on Kubernetes.
Everything seems to be up and running:
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
greenplum-operator-588d8fcfd8-nmgjp 1/1 Running 0 40m
svclb-greenplum-krdtd 1/1 Running 0 39m
svclb-greenplum-k28bv 1/1 Running 0 39m
svclb-greenplum-25n7b 1/1 Running 0 39m
segment-a-0 1/1 Running 0 39m
master-0 1/1 Running 0 39m
Nevertheless, something seems to be wrong since the cluster state is Pending:
$ kubectl describe greenplumclusters.greenplum.pivotal.io my-greenplum
Name: my-greenplum
Namespace: default
Labels: <none>
Annotations: <none>
API Version: greenplum.pivotal.io/v1
Kind: GreenplumCluster
Metadata:
Creation Timestamp: 2020-09-23T08:31:04Z
Finalizers:
stopcluster.greenplumcluster.pivotal.io
Generation: 2
Managed Fields:
API Version: greenplum.pivotal.io/v1
Fields Type: FieldsV1
fieldsV1:
f:metadata:
f:annotations:
.:
f:kubectl.kubernetes.io/last-applied-configuration:
f:spec:
.:
f:masterAndStandby:
.:
f:antiAffinity:
f:cpu:
f:hostBasedAuthentication:
f:memory:
f:standby:
f:storage:
f:storageClassName:
f:workerSelector:
f:segments:
.:
f:antiAffinity:
f:cpu:
f:memory:
f:mirrors:
f:primarySegmentCount:
f:storage:
f:storageClassName:
f:workerSelector:
Manager: kubectl-client-side-apply
Operation: Update
Time: 2020-09-23T08:31:04Z
API Version: greenplum.pivotal.io/v1
Fields Type: FieldsV1
fieldsV1:
f:metadata:
f:finalizers:
f:status:
.:
f:instanceImage:
f:operatorVersion:
f:phase:
Manager: greenplum-operator
Operation: Update
Time: 2020-09-23T08:31:11Z
Resource Version: 590
Self Link: /apis/greenplum.pivotal.io/v1/namespaces/default/greenplumclusters/my-greenplum
UID: 72ed72a8-4dd9-48fb-8a48-de2229d88a24
Spec:
Master And Standby:
Anti Affinity: no
Cpu: 0.5
Host Based Authentication: # host all gpadmin 0.0.0.0/0 trust
Memory: 800Mi
Standby: no
Storage: 1G
Storage Class Name: local-path
Worker Selector:
Segments:
Anti Affinity: no
Cpu: 0.5
Memory: 800Mi
Mirrors: no
Primary Segment Count: 1
Storage: 2G
Storage Class Name: local-path
Worker Selector:
Status:
Instance Image: registry.localhost:5000/greenplum-for-kubernetes:v2.2.0
Operator Version: registry.localhost:5000/greenplum-operator:v2.2.0
Phase: Pending
Events: <none>
As you can see:
Phase: Pending
I took a look at the operator logs:
{"level":"DEBUG","ts":"2020-09-23T09:12:18.494Z","logger":"PodExec","msg":"master-0 is not active master","namespace":"default","error":"command terminated with exit code 2"}
{"level":"DEBUG","ts":"2020-09-23T09:12:18.497Z","logger":"PodExec","msg":"master-1 is not active master","namespace":"default","error":"pods \"master-1\" not found"}
{"level":"DEBUG","ts":"2020-09-23T09:12:18.497Z","logger":"controllers.GreenplumCluster","msg":"current active master","greenplumcluster":"default/my-greenplum","activeMaster":""}
I can't quite figure out what they mean...
It seems it's looking for two masters: master-0 and master-1. As you can see below, I'm only deploying a single master with one segment.
The Greenplum cluster manifest is:
apiVersion: "greenplum.pivotal.io/v1"
kind: "GreenplumCluster"
metadata:
  name: my-greenplum
spec:
  masterAndStandby:
    hostBasedAuthentication: |
      # host all gpadmin 0.0.0.0/0 trust
    memory: "800Mi"
    cpu: "0.5"
    storageClassName: local-path
    storage: 1G
    workerSelector: {}
  segments:
    primarySegmentCount: 1
    memory: "800Mi"
    cpu: "0.5"
    storageClassName: local-path
    storage: 2G
    workerSelector: {}
The master is logging this:
20200923:11:29:11:001380 gpstart:master-0:gpadmin-[INFO]:-Starting Master instance master-0 directory /greenplum/data-1
20200923:11:29:12:001380 gpstart:master-0:gpadmin-[INFO]:-Command pg_ctl reports Master master-0 instance active
20200923:11:29:12:001380 gpstart:master-0:gpadmin-[INFO]:-Connecting to dbname='template1' connect_timeout=15
20200923:11:29:27:001380 gpstart:master-0:gpadmin-[WARNING]:-Timeout expired connecting to template1, attempt 1/4
20200923:11:29:42:001380 gpstart:master-0:gpadmin-[WARNING]:-Timeout expired connecting to template1, attempt 2/4
20200923:11:29:57:001380 gpstart:master-0:gpadmin-[WARNING]:-Timeout expired connecting to template1, attempt 3/4
20200923:11:30:12:001380 gpstart:master-0:gpadmin-[WARNING]:-Timeout expired connecting to template1, attempt 4/4
20200923:11:30:12:001380 gpstart:master-0:gpadmin-[WARNING]:-Failed to connect to template1
20200923:11:30:12:001380 gpstart:master-0:gpadmin-[INFO]:-No standby master configured. skipping...
20200923:11:30:12:001380 gpstart:master-0:gpadmin-[INFO]:-Check status of database with gpstate utility
20200923:11:30:12:000095 gpinitsystem:master-0:gpadmin-[INFO]:-Completed restart of Greenplum instance in production mode
In short:
Timeout expired connecting to template1
Complete master-0 logs:
*******************************
Initializing Greenplum for Kubernetes Cluster
*******************************
*******************************
Generating gpinitsystem_config
*******************************
{"level":"INFO","ts":"2020-09-23T11:28:58.394Z","logger":"startGreenplumContainer","msg":"initializing Greenplum Cluster"}
Sub Domain for the cluster is: agent.greenplum-1.svc.cluster.local
*******************************
Running gpinitsystem
*******************************
20200923:11:28:58:000095 gpinitsystem:master-0:gpadmin-[INFO]:-Checking configuration parameters, please wait...
20200923:11:28:58:000095 gpinitsystem:master-0:gpadmin-[INFO]:-Locale has not been set in , will set to default value
20200923:11:28:58:000095 gpinitsystem:master-0:gpadmin-[INFO]:-Locale set to en_US.utf8
20200923:11:28:58:000095 gpinitsystem:master-0:gpadmin-[WARN]:-ARRAY_NAME variable not set, will provide default value
20200923:11:28:58:000095 gpinitsystem:master-0:gpadmin-[WARN]:-Master hostname master-0.agent.greenplum-1.svc.cluster.local does not match hostname output
20200923:11:28:58:000095 gpinitsystem:master-0:gpadmin-[INFO]:-Checking to see if master-0.agent.greenplum-1.svc.cluster.local can be resolved on this host
Warning: Permanently added the RSA host key for IP address '10.42.2.5' to the list of known hosts.
20200923:11:28:58:000095 gpinitsystem:master-0:gpadmin-[INFO]:-Can resolve master-0.agent.greenplum-1.svc.cluster.local to this host
20200923:11:28:58:000095 gpinitsystem:master-0:gpadmin-[INFO]:-No DATABASE_NAME set, will exit following template1 updates
20200923:11:28:58:000095 gpinitsystem:master-0:gpadmin-[WARN]:-CHECK_POINT_SEGMENTS variable not set, will set to default value
20200923:11:28:58:000095 gpinitsystem:master-0:gpadmin-[WARN]:-ENCODING variable not set, will set to default UTF-8
20200923:11:28:58:000095 gpinitsystem:master-0:gpadmin-[INFO]:-MASTER_MAX_CONNECT not set, will set to default value 250
20200923:11:28:58:000095 gpinitsystem:master-0:gpadmin-[INFO]:-Detected a single host GPDB array build, reducing value of BATCH_DEFAULT from 60 to 4
20200923:11:28:58:000095 gpinitsystem:master-0:gpadmin-[INFO]:-Checking configuration parameters, Completed
20200923:11:28:58:000095 gpinitsystem:master-0:gpadmin-[INFO]:-Checking Master host
20200923:11:28:58:000095 gpinitsystem:master-0:gpadmin-[INFO]:-Checking new segment hosts, please wait...
Warning: Permanently added the RSA host key for IP address '10.42.1.5' to the list of known hosts.
{"level":"DEBUG","ts":"2020-09-23T11:28:59.038Z","logger":"DNS resolver","msg":"resolved DNS entry","host":"segment-a-0"}
{"level":"INFO","ts":"2020-09-23T11:28:59.038Z","logger":"keyscanner","msg":"starting keyscan","host":"segment-a-0"}
20200923:11:28:59:000095 gpinitsystem:master-0:gpadmin-[INFO]:-Checking new segment hosts, Completed
{"level":"INFO","ts":"2020-09-23T11:28:59.064Z","logger":"keyscanner","msg":"keyscan successful","host":"segment-a-0"}
20200923:11:28:59:000095 gpinitsystem:master-0:gpadmin-[INFO]:-Building the Master instance database, please wait...
20200923:11:29:02:000095 gpinitsystem:master-0:gpadmin-[INFO]:-Found more than 1 instance of shared_preload_libraries in /greenplum/data-1/postgresql.conf, will append
20200923:11:29:02:000095 gpinitsystem:master-0:gpadmin-[INFO]:-Starting the Master in admin mode
20200923:11:29:03:000095 gpinitsystem:master-0:gpadmin-[INFO]:-Commencing parallel build of primary segment instances
20200923:11:29:03:000095 gpinitsystem:master-0:gpadmin-[INFO]:-Spawning parallel processes batch [1], please wait...
.
20200923:11:29:03:000095 gpinitsystem:master-0:gpadmin-[INFO]:-Waiting for parallel processes batch [1], please wait...
......
20200923:11:29:09:000095 gpinitsystem:master-0:gpadmin-[INFO]:------------------------------------------------
20200923:11:29:09:000095 gpinitsystem:master-0:gpadmin-[INFO]:-Parallel process exit status
20200923:11:29:09:000095 gpinitsystem:master-0:gpadmin-[INFO]:------------------------------------------------
20200923:11:29:09:000095 gpinitsystem:master-0:gpadmin-[INFO]:-Total processes marked as completed = 1
20200923:11:29:09:000095 gpinitsystem:master-0:gpadmin-[INFO]:-Total processes marked as killed = 0
20200923:11:29:09:000095 gpinitsystem:master-0:gpadmin-[INFO]:-Total processes marked as failed = 0
20200923:11:29:09:000095 gpinitsystem:master-0:gpadmin-[INFO]:------------------------------------------------
20200923:11:29:09:000095 gpinitsystem:master-0:gpadmin-[INFO]:-Deleting distributed backout files
20200923:11:29:09:000095 gpinitsystem:master-0:gpadmin-[INFO]:-Removing back out file
20200923:11:29:09:000095 gpinitsystem:master-0:gpadmin-[INFO]:-No errors generated from parallel processes
20200923:11:29:09:000095 gpinitsystem:master-0:gpadmin-[INFO]:-Restarting the Greenplum instance in production mode
20200923:11:29:09:001357 gpstop:master-0:gpadmin-[INFO]:-Starting gpstop with args: -a -l /home/gpadmin/gpAdminLogs -m -d /greenplum/data-1
20200923:11:29:09:001357 gpstop:master-0:gpadmin-[INFO]:-Gathering information and validating the environment...
20200923:11:29:09:001357 gpstop:master-0:gpadmin-[INFO]:-Obtaining Greenplum Master catalog information
20200923:11:29:09:001357 gpstop:master-0:gpadmin-[INFO]:-Obtaining Segment details from master...
20200923:11:29:09:001357 gpstop:master-0:gpadmin-[INFO]:-Greenplum Version: 'postgres (Greenplum Database) 6.10.1 build commit:efba04ce26ebb29b535a255a5e95d1f5ebfde94e'
20200923:11:29:09:001357 gpstop:master-0:gpadmin-[INFO]:-Commencing Master instance shutdown with mode='smart'
20200923:11:29:09:001357 gpstop:master-0:gpadmin-[INFO]:-Master segment instance directory=/greenplum/data-1
20200923:11:29:09:001357 gpstop:master-0:gpadmin-[INFO]:-Stopping master segment and waiting for user connections to finish ...
server shutting down
20200923:11:29:10:001357 gpstop:master-0:gpadmin-[INFO]:-Attempting forceful termination of any leftover master process
20200923:11:29:10:001357 gpstop:master-0:gpadmin-[INFO]:-Terminating processes for segment /greenplum/data-1
20200923:11:29:10:001380 gpstart:master-0:gpadmin-[INFO]:-Starting gpstart with args: -a -l /home/gpadmin/gpAdminLogs -d /greenplum/data-1
20200923:11:29:10:001380 gpstart:master-0:gpadmin-[INFO]:-Gathering information and validating the environment...
20200923:11:29:10:001380 gpstart:master-0:gpadmin-[INFO]:-Greenplum Binary Version: 'postgres (Greenplum Database) 6.10.1 build commit:efba04ce26ebb29b535a255a5e95d1f5ebfde94e'
20200923:11:29:10:001380 gpstart:master-0:gpadmin-[INFO]:-Greenplum Catalog Version: '301908232'
20200923:11:29:10:001380 gpstart:master-0:gpadmin-[INFO]:-Starting Master instance in admin mode
20200923:11:29:11:001380 gpstart:master-0:gpadmin-[INFO]:-Obtaining Greenplum Master catalog information
20200923:11:29:11:001380 gpstart:master-0:gpadmin-[INFO]:-Obtaining Segment details from master...
20200923:11:29:11:001380 gpstart:master-0:gpadmin-[INFO]:-Setting new master era
20200923:11:29:11:001380 gpstart:master-0:gpadmin-[INFO]:-Master Started...
20200923:11:29:11:001380 gpstart:master-0:gpadmin-[INFO]:-Shutting down master
20200923:11:29:11:001380 gpstart:master-0:gpadmin-[INFO]:-Commencing parallel segment instance startup, please wait...
20200923:11:29:11:001380 gpstart:master-0:gpadmin-[INFO]:-Process results...
20200923:11:29:11:001380 gpstart:master-0:gpadmin-[INFO]:-----------------------------------------------------
20200923:11:29:11:001380 gpstart:master-0:gpadmin-[INFO]:- Successful segment starts = 1
20200923:11:29:11:001380 gpstart:master-0:gpadmin-[INFO]:- Failed segment starts = 0
20200923:11:29:11:001380 gpstart:master-0:gpadmin-[INFO]:- Skipped segment starts (segments are marked down in configuration) = 0
20200923:11:29:11:001380 gpstart:master-0:gpadmin-[INFO]:-----------------------------------------------------
20200923:11:29:11:001380 gpstart:master-0:gpadmin-[INFO]:-Successfully started 1 of 1 segment instances
20200923:11:29:11:001380 gpstart:master-0:gpadmin-[INFO]:-----------------------------------------------------
20200923:11:29:11:001380 gpstart:master-0:gpadmin-[INFO]:-Starting Master instance master-0 directory /greenplum/data-1
20200923:11:29:12:001380 gpstart:master-0:gpadmin-[INFO]:-Command pg_ctl reports Master master-0 instance active
20200923:11:29:12:001380 gpstart:master-0:gpadmin-[INFO]:-Connecting to dbname='template1' connect_timeout=15
20200923:11:29:27:001380 gpstart:master-0:gpadmin-[WARNING]:-Timeout expired connecting to template1, attempt 1/4
20200923:11:29:42:001380 gpstart:master-0:gpadmin-[WARNING]:-Timeout expired connecting to template1, attempt 2/4
20200923:11:29:57:001380 gpstart:master-0:gpadmin-[WARNING]:-Timeout expired connecting to template1, attempt 3/4
20200923:11:30:12:001380 gpstart:master-0:gpadmin-[WARNING]:-Timeout expired connecting to template1, attempt 4/4
20200923:11:30:12:001380 gpstart:master-0:gpadmin-[WARNING]:-Failed to connect to template1
20200923:11:30:12:001380 gpstart:master-0:gpadmin-[INFO]:-No standby master configured. skipping...
20200923:11:30:12:001380 gpstart:master-0:gpadmin-[INFO]:-Check status of database with gpstate utility
20200923:11:30:12:000095 gpinitsystem:master-0:gpadmin-[INFO]:-Completed restart of Greenplum instance in production mode
Any ideas?
I deployed Greenplum on Kubernetes recently.
My problem was permissions on the cgroup directory. When I looked into the files under /greenplum/data1/pg_log/ in the Pod, I found errors like "can't access directory '/sys/fs/cgroup/memory/gpdb/'", because the Pod used hostPath.
My advice is to check the errors printed in the files under /greenplum/data1/pg_log/.
The Pod's log doesn't tell the whole story.
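For example, something along these lines should surface those errors (a rough sketch; the pod name and the data directory, shown as /greenplum/data-1 in the master logs above, may differ in your deployment):
$ kubectl exec master-0 -- ls /greenplum/data-1/pg_log/
$ kubectl exec master-0 -- bash -c 'grep -i cgroup /greenplum/data-1/pg_log/*'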
BTW, I ended up using v0.8.0. I chose v2.3.0 first, but the master was killed shortly after it became ready, maybe by Docker. The log looked like 'received fast shutdown request.
ic-proxy-server: received signal 15'
I have problems connecting a volume via iSCSI from Kubernetes. When I try with iscsiadm from the worker node, it works. This is what I get from kubectl describe pod:
Normal Scheduled <unknown> default-scheduler Successfully assigned default/iscsipd to k8s-worker-2
Normal SuccessfulAttachVolume 4m2s attachdetach-controller AttachVolume.Attach succeeded for volume "iscsipd-rw"
Warning FailedMount 119s kubelet, k8s-worker-2 Unable to attach or mount volumes: unmounted volumes=[iscsipd-rw], unattached volumes=[iscsipd-rw default-token-d5glz]: timed out waiting for the condition
Warning FailedMount 105s (x9 over 3m54s) kubelet, k8s-worker-2 MountVolume.WaitForAttach failed for volume "iscsipd-rw" : failed to get any path for iscsi disk, last err seen:iscsi: failed to attach disk: Error: iscsiadm: No records found(exit status 21)
I'm just using the iscsi.yaml file from kubernetes.io:
---
apiVersion: v1
kind: Pod
metadata:
  name: iscsipd
spec:
  containers:
  - name: iscsipd-rw
    image: kubernetes/pause
    volumeMounts:
    - mountPath: "/mnt/iscsipd"
      name: iscsipd-rw
  volumes:
  - name: iscsipd-rw
    iscsi:
      targetPortal: 192.168.34.32:3260
      iqn: iqn.2020-07.int.example:sql
      lun: 0
      fsType: ext4
      readOnly: true
Open-iscsi is installed on all worker nodes (there are just two of them):
● iscsid.service - iSCSI initiator daemon (iscsid)
Loaded: loaded (/lib/systemd/system/iscsid.service; enabled; vendor preset: e
Active: active (running) since Fri 2020-07-03 10:24:26 UTC; 4 days ago
Docs: man:iscsid(8)
Process: 20507 ExecStart=/sbin/iscsid (code=exited, status=0/SUCCESS)
Process: 20497 ExecStartPre=/lib/open-iscsi/startup-checks.sh (code=exited, st
Main PID: 20514 (iscsid)
Tasks: 2 (limit: 4660)
CGroup: /system.slice/iscsid.service
├─20509 /sbin/iscsid
└─20514 /sbin/iscsid
The iSCSI target is created on the IBM Storwize V7000, without CHAP.
I tried to connect with iscsiadm from a worker node and it works:
sudo iscsiadm -m discovery -t sendtargets -p 192.168.34.32
192.168.34.32:3260,1 iqn.1986-03.com.ibm:2145.hq-v7000.hq-v7000-rz1-c1
192.168.34.34:3260,1 iqn.1986-03.com.ibm:2145.hq-v7000.hq-v7000-rz1-c1
sudo iscsiadm -m node --login
Logging in to [iface: default, target: iqn.1986-03.com.ibm:2145.hq-v7000.hq-v7000-rz1-c1, portal: 192.168.34.32,3260] (multiple)
Logging in to [iface: default, target: iqn.1986-03.com.ibm:2145.hq-v7000.hq-v7000-rz1-c1, portal: 192.168.34.34,3260] (multiple)
Login to [iface: default, target: iqn.1986-03.com.ibm:2145.hq-v7000.hq-v7000-rz1-c1, portal: 192.168.34.32,3260] successful.
Login to [iface: default, target: iqn.1986-03.com.ibm:2145.hq-v7000.hq-v7000-rz1-c1, portal: 192.168.34.34,3260] successful.
Disk /dev/sdb: 100 GiB, 107374182400 bytes, 209715200 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 32768 bytes / 32768 bytes
Disklabel type: dos
Disk identifier: 0x5b3d0a3a
Device Boot Start End Sectors Size Id Type
/dev/sdb1 2048 209715199 209713152 100G 83 Linux
Is anyone facing the same problem?
Remember not to use a hostname for the target; use the IP. For some reason, if the target is a hostname, it barfs with an error about requesting a duplicate session. If the target is an IP, it works fine. I now have multiple iSCSI targets mounted in various pods, and I am absolutely ecstatic.
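In other words, keep targetPortal as an IP in the volume definition, for example (an illustrative snippet mirroring the pod spec above):
volumes:
- name: iscsipd-rw
  iscsi:
    targetPortal: 192.168.34.32:3260   # an IP address, not a DNS name
    iqn: iqn.2020-07.int.example:sql
    lun: 0
    fsType: ext4
    readOnly: true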
You may also have an authentication issue with your iSCSI target.
If you don't use CHAP authentication, you still have to explicitly disable authentication. For example, if you use targetcli, you can run the commands below to disable it.
$ sudo targetcli
/> /iscsi/iqn.2003-01.org.xxxx/tpg1 set attribute authentication=0 # will disable auth
/> /iscsi/iqn.2003-01.org.xxxx/tpg1 set attribute generate_node_acls=1 # will force to use tpg1 auth mode by default
If this doesn't help, please share your iSCSI target configuration or the guide you followed.
It is also important to check whether all of your nodes have the open-iscsi package installed.
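For example, on each worker node (a quick sketch; the commands assume a Debian/Ubuntu node, adjust the package manager for your distribution):
sudo dpkg -l | grep open-iscsi
sudo systemctl status iscsid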
Take a look: kubernetes-iSCSI, volume-failed-iscsi-disk, iscsi-into-container-fails.
Using VirtualBox and 4 x Centos7 OS installs.
Following a basic single-cluster Kubernetes install:
https://kubernetes.io/docs/setup/independent/install-kubeadm/
https://kubernetes.io/docs/setup/independent/create-cluster-kubeadm/
[root@k8s-master cockroach]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
k8s-master Ready master 41m v1.13.2
k8s-slave1 Ready <none> 39m v1.13.2
k8s-slave2 Ready <none> 39m v1.13.2
k8s-slave3 Ready <none> 39m v1.13.2
I have created 3 x NFS PVs on the master for my slaves to pick up as part of the cockroachdb-statefulset.yaml, as described here:
https://www.cockroachlabs.com/blog/running-cockroachdb-on-kubernetes/
However, my CockroachDB pods continually fail to communicate with each other.
[root@k8s-slave1 kubernetes]# kubectl get pods
NAME READY STATUS RESTARTS AGE
cockroachdb-0 0/1 CrashLoopBackOff 6 8m47s
cockroachdb-1 0/1 CrashLoopBackOff 6 8m47s
cockroachdb-2 0/1 CrashLoopBackOff 6 8m47s
[root@k8s-slave1 kubernetes]# kubectl get pvc
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
datadir-cockroachdb-0 Bound cockroachdbpv0 10Gi RWO 17m
datadir-cockroachdb-1 Bound cockroachdbpv2 10Gi RWO 17m
datadir-cockroachdb-2 Bound cockroachdbpv1 10Gi RWO 17m
...the cockroach pod logs do not really tell me why...
[root@k8s-slave1 kubernetes]# kubectl logs cockroachdb-0
++ hostname -f
+ exec /cockroach/cockroach start --logtostderr --insecure --advertise-host cockroachdb-0.cockroachdb.default.svc.cluster.local --http-host 0.0.0.0 --join cockroachdb-0.cockroachdb,cockroachdb-1.cockroachdb,cockroachdb-2.cockroachdb --cache 25% --max-sql-memory 25%
W190113 17:00:46.589470 1 cli/start.go:1055 RUNNING IN INSECURE MODE!
- Your cluster is open for any client that can access <all your IP addresses>.
- Any user, even root, can log in without providing a password.
- Any user, connecting as root, can read or write any data in your cluster.
- There is no network encryption nor authentication, and thus no confidentiality.
Check out how to secure your cluster: https://www.cockroachlabs.com/docs/v2.1/secure-a-cluster.html
I190113 17:00:46.595544 1 server/status/recorder.go:609 available memory from cgroups (8.0 EiB) exceeds system memory 3.7 GiB, using system memory
I190113 17:00:46.600386 1 cli/start.go:1069 CockroachDB CCL v2.1.3 (x86_64-unknown-linux-gnu, built 2018/12/17 19:15:31, go1.10.3)
I190113 17:00:46.759727 1 server/status/recorder.go:609 available memory from cgroups (8.0 EiB) exceeds system memory 3.7 GiB, using system memory
I190113 17:00:46.759809 1 server/config.go:386 system total memory: 3.7 GiB
I190113 17:00:46.759872 1 server/config.go:388 server configuration:
max offset 500000000
cache size 947 MiB
SQL memory pool size 947 MiB
scan interval 10m0s
scan min idle time 10ms
scan max idle time 1s
event log enabled true
I190113 17:00:46.759896 1 cli/start.go:913 using local environment variables: COCKROACH_CHANNEL=kubernetes-insecure
I190113 17:00:46.759909 1 cli/start.go:920 process identity: uid 0 euid 0 gid 0 egid 0
I190113 17:00:46.759919 1 cli/start.go:545 starting cockroach node
I190113 17:00:46.762262 22 storage/engine/rocksdb.go:574 opening rocksdb instance at "/cockroach/cockroach-data/cockroach-temp632709623"
I190113 17:00:46.803749 22 server/server.go:851 [n?] monitoring forward clock jumps based on server.clock.forward_jump_check_enabled
I190113 17:00:46.804168 22 storage/engine/rocksdb.go:574 opening rocksdb instance at "/cockroach/cockroach-data"
I190113 17:00:46.828487 22 server/config.go:494 [n?] 1 storage engine initialized
I190113 17:00:46.828526 22 server/config.go:497 [n?] RocksDB cache size: 947 MiB
I190113 17:00:46.828536 22 server/config.go:497 [n?] store 0: RocksDB, max size 0 B, max open file limit 60536
W190113 17:00:46.838175 22 gossip/gossip.go:1499 [n?] no incoming or outgoing connections
I190113 17:00:46.838260 22 cli/start.go:505 initial startup completed, will now wait for `cockroach init`
or a join to a running cluster to start accepting clients.
Check the log file(s) for progress.
I190113 17:00:46.841243 22 server/server.go:1402 [n?] no stores bootstrapped and --join flag specified, awaiting init command.
W190113 17:01:16.841095 89 cli/start.go:535 The server appears to be unable to contact the other nodes in the cluster. Please try:
- starting the other nodes, if you haven't already;
- double-checking that the '--join' and '--listen'/'--advertise' flags are set up correctly;
- running the 'cockroach init' command if you are trying to initialize a new cluster.
If problems persist, please see https://www.cockroachlabs.com/docs/v2.1/cluster-setup-troubleshooting.html.
I190113 17:01:31.357765 1 cli/start.go:756 received signal 'terminated'
I190113 17:01:31.359529 1 cli/start.go:821 initiating graceful shutdown of server
initiating graceful shutdown of server
I190113 17:01:31.361064 1 cli/start.go:872 too early to drain; used hard shutdown instead
too early to drain; used hard shutdown instead
...any ideas how to debug this further?
I have gone through the *.yaml file at https://github.com/cockroachdb/cockroach/blob/master/cloud/kubernetes/cockroachdb-statefulset.yaml
I noticed that towards the bottom there is no storageClassName mentioned, which means that during the volume claim process the pods are going to look for the standard storage class.
I am not sure if you used the annotation below while provisioning the 3 NFS volumes:
storageclass.kubernetes.io/is-default-class=true
You should be able to check this using:
kubectl get storageclass
If the output does not show a standard storage class, then I would suggest either adjusting the persistent volume definitions by adding the annotation, or adding an empty string as the storageClassName towards the end of the cockroachdb-statefulset.yaml file.
More details can be viewed using:
kubectl describe statefulset cockroachdb
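For illustration, the empty-string approach would look roughly like this in the StatefulSet's volume claim section (a sketch using the standard StatefulSet fields, not the exact layout of cockroachdb-statefulset.yaml):
volumeClaimTemplates:
- metadata:
    name: datadir
  spec:
    accessModes: ["ReadWriteOnce"]
    storageClassName: ""      # empty string: bind to the pre-created NFS PVs instead of the default class
    resources:
      requests:
        storage: 10Gi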
OK, it came down to the fact that I had NAT as my VirtualBox external-facing network adapter. I changed it to Bridged and it all started working perfectly. If anyone can tell me why, that would be awesome :)
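For what it's worth, the adapter change can also be scripted from the host with VBoxManage (a rough sketch; the VM name and the host interface name are placeholders for whatever your setup uses, and the VM must be powered off first):
$ VBoxManage modifyvm "k8s-slave1" --nic1 bridged --bridgeadapter1 enp0s3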
In my case, I used the Helm chart, like below:
$ helm install stable/cockroachdb \
-n cockroachdb \
--namespace cockroach \
--set Storage=10Gi \
--set NetworkPolicy.Enabled=true \
--set Secure.Enabled=true
Then wait for CockroachDB to finish adding its CSRs:
$ watch kubectl get csr
Several CSRs are pending:
$ kubectl get csr
NAME AGE REQUESTOR CONDITION
cockroachdb.client.root 130m system:serviceaccount:cockroachdb:cockroachdb-cockroachdb Pending
cockroachdb.node.cockroachdb-cockroachdb-0 130m system:serviceaccount:cockroachdb:cockroachdb-cockroachdb Pending
cockroachdb.node.cockroachdb-cockroachdb-1 129m system:serviceaccount:cockroachdb:cockroachdb-cockroachdb Pending
cockroachdb.node.cockroachdb-cockroachdb-2 130m system:serviceaccount:cockroachdb:cockroachdb-cockroachdb Pending
To approve them, run the following command:
$ kubectl get csr -o json | \
jq -r '.items[] | select(.metadata.name | contains("cockroachdb.")) | .metadata.name' | \
xargs -n 1 kubectl certificate approve
Is it possible to specify a CPU ID list to the Kubernetes CPU manager? The goal is to make sure pods get CPUs from a single socket (0). I brought all the CPUs on the peer socket offline as mentioned here, for example:
$ echo 0 > /sys/devices/system/cpu/cpu5/online
After doing this, the Kubernetes master indeed sees the remaining online CPUs
kubectl describe node foo
Capacity:
cpu: 56 <<< socket 0 CPU count
ephemeral-storage: 958774760Ki
hugepages-1Gi: 120Gi
memory: 197524872Ki
pods: 110
Allocatable:
cpu: 54 <<< 2 system reserved CPUs
ephemeral-storage: 958774760Ki
hugepages-1Gi: 120Gi
memory: 71490952Ki
pods: 110
System Info:
Machine ID: 1155420082478559980231ba5bc0f6f2
System UUID: 4C4C4544-0044-4210-8031-C8C04F584B32
Boot ID: 7fa18227-748f-496c-968c-9fc82e21ecd5
Kernel Version: 4.4.13
OS Image: Ubuntu 16.04.4 LTS
Operating System: linux
Architecture: amd64
Container Runtime Version: docker://17.3.3
Kubelet Version: v1.11.1
Kube-Proxy Version: v1.11.1
However, cpumanager still seems to think there are 112 CPUs (socket0 + socket1).
cat /var/lib/kubelet/cpu_manager_state
{"policyName":"static","defaultCpuSet":"0-111"}
As a result, the kubelet system pods are throwing the following error:
kube-system kube-proxy-nk7gc 0/1 rpc error: code = Unknown desc = failed to update container "eb455f81a61b877eccda0d35eea7834e30f59615346140180f08077f64896760": Error response from daemon: Requested CPUs are not available - requested 0-111, available: 0,2,4,6,8,10,12,14,16,18,20,22,24,26,28,30,32,34,36,38,40,42,44,46,48,50,52,54,56,58,60,62,64,66,68,70,72,74,76,78,80,82,84,86,88,90,92,94,96,98,100,102,104,106,108,110 762 36d <IP address> foo <none>
I was able to get this working. Posting this as an answer so that someone in need might benefit.
It appears the CPU set is read from the /var/lib/kubelet/cpu_manager_state file and is not updated across kubelet restarts, so this file needs to be removed before restarting the kubelet.
The following worked for me:
# On a running worker node, bring desired CPUs offline. (run as root)
$ cpu_list=`lscpu | grep "NUMA node1 CPU(s)" | awk '{print $4}'`
$ chcpu -d $cpu_list
$ rm -f /var/lib/kubelet/cpu_manager_state
$ systemctl restart kubelet.service
# Check the CPU set seen by the CPU manager
$ cat /var/lib/kubelet/cpu_manager_state
# Try creating pods and check the syslog:
Dec 3 14:36:05 k8-2-w1 kubelet[8070]: I1203 14:36:05.122466 8070 state_mem.go:84] [cpumanager] updated default cpuset: "0,10,12,14,16,18,20,22,24,26,28,30,32,34,36,38,40,42,44,46,48,50,52,54,56,66,68,70,72,74,76,78,80,82,84,86,88,90,92,94,96,98,100,102,104,106,108,110"
Dec 3 14:36:05 k8-2-w1 kubelet[8070]: I1203 14:36:05.122643 8070 policy_static.go:198] [cpumanager] allocateCPUs: returning "2,4,6,8,58,60,62,64"
Dec 3 14:36:05 k8-2-w1 kubelet[8070]: I1203 14:36:05.122660 8070 state_mem.go:76] [cpumanager] updated desired cpuset (container id: 356939cdf32d0f719e83b0029a018a2ca2c349fc0bdc1004da5d842e357c503a, cpuset: "2,4,6,8,58,60,62,64")
I have reported a bug here as I think the CPU set should be updated after kubelet restarts.
I am trying Alluxio 1.7.1 with Docker 1.13.1 and Kubernetes 1.9.6 and 1.10.1.
I created the Alluxio Docker image as per the instructions at https://www.alluxio.org/docs/1.7/en/Running-Alluxio-On-Docker.html
Then I followed the https://www.alluxio.org/docs/1.7/en/Running-Alluxio-On-Kubernetes.html guide to run Alluxio on Kubernetes. I was able to bring up the Alluxio master pod properly, but when I try to bring up the Alluxio worker I get an "Address in use" error. I have not modified anything in the YAMLs I downloaded from the Alluxio git repository. The only changes I made were the Alluxio Docker image name and the API version in the Kubernetes YAMLs so that they match my setup.
I checked the ports being used in my Kubernetes cluster, and on the nodes as well. None of the ports Alluxio wants are being used by any other process, yet I still get the "address in use" error. I can't figure out what to do to debug further or what I should change to make this work. I don't have any other application running on my Kubernetes cluster. I tried both a single-node and a multi-node Kubernetes cluster setup, and Kubernetes versions 1.9 and 1.10.
There is definitely some issue on the Alluxio worker side which I am unable to debug.
This is the log that I get from worker pod:
[root@vm-sushil-scrum1-08062018-alluxio-1 kubernetes]# kubectl logs po/alluxio-worker-knqt4
Formatting Alluxio Worker # vm-sushil-scrum1-08062018-alluxio-1
2018-06-08 10:09:55,723 INFO Configuration - Configuration file /opt/alluxio/conf/alluxio-site.properties loaded.
2018-06-08 10:09:55,845 INFO Format - Formatting worker data folder: /alluxioworker/
2018-06-08 10:09:55,845 INFO Format - Formatting Data path for tier 0:/dev/shm/alluxioworker
2018-06-08 10:09:55,856 INFO Format - Formatting complete
2018-06-08 10:09:56,357 INFO Configuration - Configuration file /opt/alluxio/conf/alluxio-site.properties loaded.
2018-06-08 10:09:56,549 INFO TieredIdentityFactory - Initialized tiered identity TieredIdentity(node=10.194.11.7, rack=null)
2018-06-08 10:09:56,866 INFO BlockWorkerFactory - Creating alluxio.worker.block.BlockWorker
2018-06-08 10:09:56,866 INFO FileSystemWorkerFactory - Creating alluxio.worker.file.FileSystemWorker
2018-06-08 10:09:56,942 WARN StorageTier - Failed to verify memory capacity
2018-06-08 10:09:57,082 INFO log - Logging initialized #1160ms
2018-06-08 10:09:57,509 INFO AlluxioWorkerProcess - Domain socket data server is enabled at /opt/domain.
Exception in thread "main" java.lang.RuntimeException: java.lang.RuntimeException: io.netty.channel.unix.Errors$NativeIoException: bind(..) failed: Address in use
at alluxio.worker.AlluxioWorkerProcess.<init>(AlluxioWorkerProcess.java:164)
at alluxio.worker.WorkerProcess$Factory.create(WorkerProcess.java:45)
at alluxio.worker.WorkerProcess$Factory.create(WorkerProcess.java:37)
at alluxio.worker.AlluxioWorker.main(AlluxioWorker.java:56)
Caused by: java.lang.RuntimeException: io.netty.channel.unix.Errors$NativeIoException: bind(..) failed: Address in use
at alluxio.util.CommonUtils.createNewClassInstance(CommonUtils.java:224)
at alluxio.worker.DataServer$Factory.create(DataServer.java:45)
at alluxio.worker.AlluxioWorkerProcess.<init>(AlluxioWorkerProcess.java:159)
... 3 more
Caused by: io.netty.channel.unix.Errors$NativeIoException: bind(..) failed: Address in use
at io.netty.channel.unix.Errors.newIOException(Errors.java:117)
at io.netty.channel.unix.Socket.bind(Socket.java:259)
at io.netty.channel.epoll.EpollServerDomainSocketChannel.doBind(EpollServerDomainSocketChannel.java:75)
at io.netty.channel.AbstractChannel$AbstractUnsafe.bind(AbstractChannel.java:504)
at io.netty.channel.DefaultChannelPipeline$HeadContext.bind(DefaultChannelPipeline.java:1226)
at io.netty.channel.AbstractChannelHandlerContext.invokeBind(AbstractChannelHandlerContext.java:495)
at io.netty.channel.AbstractChannelHandlerContext.bind(AbstractChannelHandlerContext.java:480)
at io.netty.channel.DefaultChannelPipeline.bind(DefaultChannelPipeline.java:973)
at io.netty.channel.AbstractChannel.bind(AbstractChannel.java:213)
at io.netty.bootstrap.AbstractBootstrap$2.run(AbstractBootstrap.java:354)
at io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:399)
at io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:305)
at io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:131)
at java.lang.Thread.run(Thread.java:748)
-----------------------
[root@vm-sushil-scrum1-08062018-alluxio-1 kubernetes]# kubectl get all
NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE
ds/alluxio-worker 1 1 0 1 0 <none> 42m
ds/alluxio-worker 1 1 0 1 0 <none> 42m
NAME DESIRED CURRENT AGE
statefulsets/alluxio-master 1 1 44m
NAME READY STATUS RESTARTS AGE
po/alluxio-master-0 1/1 Running 0 44m
po/alluxio-worker-knqt4 0/1 Error 12 42m
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
svc/alluxio-master ClusterIP None <none> 19998/TCP,19999/TCP 44m
svc/kubernetes ClusterIP 10.254.0.1 <none> 443/TCP 1h
---------------------
[root@vm-sushil-scrum1-08062018-alluxio-1 kubernetes]# kubectl describe po/alluxio-worker-knqt4
Name: alluxio-worker-knqt4
Namespace: default
Node: vm-sushil-scrum1-08062018-alluxio-1/10.194.11.7
Start Time: Fri, 08 Jun 2018 10:09:05 +0000
Labels: app=alluxio
controller-revision-hash=3081903053
name=alluxio-worker
pod-template-generation=1
Annotations: <none>
Status: Running
IP: 10.194.11.7
Controlled By: DaemonSet/alluxio-worker
Containers:
alluxio-worker:
Container ID: docker://40a1eff2cd4dff79d9189d7cb0c4826a6b6e4871fbac65221e7cdd341240e358
Image: alluxio:1.7.1
Image ID: docker://sha256:b080715bd53efc783ee5f54e7f1c451556f93e7608e60e05b4615d32702801af
Ports: 29998/TCP, 29999/TCP, 29996/TCP
Command:
/entrypoint.sh
Args:
worker
State: Waiting
Reason: CrashLoopBackOff
Last State: Terminated
Reason: Error
Exit Code: 1
Started: Fri, 08 Jun 2018 11:01:37 +0000
Finished: Fri, 08 Jun 2018 11:02:02 +0000
Ready: False
Restart Count: 14
Limits:
cpu: 1
memory: 2G
Requests:
cpu: 500m
memory: 2G
Environment Variables from:
alluxio-config ConfigMap Optional: false
Environment:
ALLUXIO_WORKER_HOSTNAME: (v1:status.hostIP)
Mounts:
/dev/shm from alluxio-ramdisk (rw)
/opt/domain from alluxio-domain (rw)
/var/run/secrets/kubernetes.io/serviceaccount from default-token-7xlz7 (ro)
Conditions:
Type Status
Initialized True
Ready False
PodScheduled True
Volumes:
alluxio-ramdisk:
Type: EmptyDir (a temporary directory that shares a pod's lifetime)
Medium: Memory
alluxio-domain:
Type: HostPath (bare host directory volume)
Path: /tmp/domain
HostPathType: Directory
default-token-7xlz7:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-7xlz7
Optional: false
QoS Class: Burstable
Node-Selectors: <none>
Tolerations: node.kubernetes.io/disk-pressure:NoSchedule
node.kubernetes.io/memory-pressure:NoSchedule
node.kubernetes.io/not-ready:NoExecute
node.kubernetes.io/unreachable:NoExecute
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal SuccessfulMountVolume 56m kubelet, vm-sushil-scrum1-08062018-alluxio-1 MountVolume.SetUp succeeded for volume "alluxio-domain"
Normal SuccessfulMountVolume 56m kubelet, vm-sushil-scrum1-08062018-alluxio-1 MountVolume.SetUp succeeded for volume "alluxio-ramdisk"
Normal SuccessfulMountVolume 56m kubelet, vm-sushil-scrum1-08062018-alluxio-1 MountVolume.SetUp succeeded for volume "default-token-7xlz7"
Normal Pulled 53m (x5 over 56m) kubelet, vm-sushil-scrum1-08062018-alluxio-1 Container image "alluxio:1.7.1" already present on machine
Normal Created 53m (x5 over 56m) kubelet, vm-sushil-scrum1-08062018-alluxio-1 Created container
Normal Started 53m (x5 over 56m) kubelet, vm-sushil-scrum1-08062018-alluxio-1 Started container
Warning BackOff 1m (x222 over 55m) kubelet, vm-sushil-scrum1-08062018-alluxio-1 Back-off restarting failed container
[root@vm-sushil-scrum1-08062018-alluxio-1 kubernetes]# lsof -n -i :19999 | grep LISTEN
java 8949 root 29u IPv4 12518521 0t0 TCP *:dnp-sec (LISTEN)
[root@vm-sushil-scrum1-08062018-alluxio-1 kubernetes]# lsof -n -i :19998 | grep LISTEN
java 8949 root 19u IPv4 12520458 0t0 TCP *:iec-104-sec (LISTEN)
[root@vm-sushil-scrum1-08062018-alluxio-1 kubernetes]# lsof -n -i :29998 | grep LISTEN
[root@vm-sushil-scrum1-08062018-alluxio-1 kubernetes]# lsof -n -i :29999 | grep LISTEN
[root@vm-sushil-scrum1-08062018-alluxio-1 kubernetes]# lsof -n -i :29996 | grep LISTEN
The alluxio-worker container keeps restarting and failing again and again with the same error.
Please guide me on what I can do to solve this.
Thanks
The problem was the short-circuit Unix domain socket path. I was using whatever was present by default in the Alluxio git repository. In the default integration/kubernetes/conf/alluxio.properties.template, the address for ALLUXIO_WORKER_DATA_SERVER_DOMAIN_SOCKET_ADDRESS was not complete. This is properly explained in https://www.alluxio.org/docs/1.7/en/Running-Alluxio-On-Docker.html for enabling short-circuit reads in Alluxio worker containers using Unix domain sockets.
Just because of that incomplete Unix domain socket path, the Alluxio worker was not able to come up in Kubernetes when short-circuit reads were enabled.
Once I corrected the path in integration/kubernetes/conf/alluxio.properties to ALLUXIO_WORKER_DATA_SERVER_DOMAIN_SOCKET_ADDRESS=/opt/domain/d,
things started working properly. Some tests are still failing, but at least the Alluxio setup comes up properly; now I will debug why those tests are failing.
I have submitted this fix to the Alluxio git repository for them to merge it into the master branch:
https://github.com/Alluxio/alluxio/pull/7376
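As a quick sanity check after the fix, you can confirm the socket directory is mounted inside the worker container and present on the node (a sketch; the pod name and the /opt/domain and /tmp/domain paths are taken from the describe output above):
$ kubectl exec alluxio-worker-knqt4 -- ls -l /opt/domain
$ ls -ld /tmp/domain   # on the node, via the hostPath mount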
On the node where your worker is running, it seems that you have a port already in use.
Try to find which process is using it:
sudo lsof -n -i :80 | grep LISTEN
I read the Alluxio configuration files: try ports 19998, 19999, 29996, 29998, and 29999, substituting each of them for 80 in the above command.
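For example, to sweep all of them in one pass (a simple sketch):
for p in 19998 19999 29996 29998 29999; do echo "port $p:"; sudo lsof -n -i :$p | grep LISTEN; done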
I am running a Kubernetes cluster which is configured with a master and 3 nodes.
#kubectl get nodes
NAME STATUS AGE
minion-1 Ready 46d
minion-2 Ready 46d
minion-3 Ready 46d
I have launched a couple of pods in the cluster and found that they are stuck in the Pending state.
# kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE
httpd 0/1 Pending 0 10m <none>
nginx 0/1 Pending 0 11m <none>
The YAML file of one of the pods, "httpd":
# cat http.yaml
apiVersion: v1
kind: Pod
metadata:
  name: httpd
  labels:
    env: test
spec:
  containers:
  - name: httpd
    image: httpd
While debugging the reason for the failure, I found that some of the configured nodes are not actually reachable. Only one node is reachable from the master:
# ping minion-1
PING minion-1 (172.31.24.204) 56(84) bytes of data.
64 bytes from minion-1 (172.31.24.204): icmp_seq=1 ttl=64 time=0.575 ms
Whereas other nodes are not reachable:
# ping minion-2
PING minion-2 (172.31.29.95) 56(84) bytes of data.
From master (172.31.16.204) icmp_seq=1 Destination Host Unreachable
# ping minion-3
PING minion-3 (172.31.17.252) 56(84) bytes of data.
From master (172.31.16.204) icmp_seq=1 Destination Host Unreachable
The questions that I have here are:
1) Why does Kubernetes show the nodes as Ready even if they are not reachable from the master?
2) Why is pod creation failing? Is it because of the unavailability of nodes, or a configuration issue in the YAML file?
# kubectl describe pod httpd
Name: httpd
Namespace: default
Node: /
Labels: env=test
Status: Pending
IP:
Controllers: <none>
Containers:
httpd:
Image: httpd
Port:
Volume Mounts: <none>
Environment Variables: <none>
No volumes.
QoS Class: BestEffort
Tolerations: <none>
No events.
Following are the Kubernetes and etcd versions.
[root@raghavendar1 ~]# kubectl --version
Kubernetes v1.5.2
[root@raghavendar1 ~]# et
etcd etcdctl ether-wake ethtool
[root@raghavendar1 ~]# etcd --version
etcd Version: 3.2.5
Git SHA: d0d1a87
Go Version: go1.8.3
Go OS/Arch: linux/amd64
Kubernetes does not use the ICMP protocol to check node-to-master connectivity.
Nodes become Ready when the node -> api-server communication works, and this is done via the HTTPS protocol.
You can read more about node - master connectivity in the Kubernetes documentation: https://kubernetes.io/docs/concepts/architecture/master-node-communication/
Why isn't the pod scheduled?
The answer to this question is probably in the master logs; check kube-apiserver.log and kube-scheduler.log. The reason is cluster misconfiguration.
To start with, run everything in a single network to get a grip on things, and double-check the routing.
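A few commands that usually help narrow this down (a sketch; the node name matches the output above, and log locations vary by distribution):
# kubectl get nodes -o wide
# kubectl describe node minion-2            # check Conditions and the last heartbeat times
# kubectl get events --all-namespaces       # scheduler and kubelet events
# journalctl -u kubelet --no-pager -n 100   # run on the affected node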