Why does starting daskdev/dask into a Pod fail?

Why does kubectl run dask --image daskdev/dask fail?
# starting the container with docker to make sure it basically works
➜ ~ docker run --rm -it --entrypoint bash daskdev/dask:latest
(base) root@5b34ce038eb3:/# python
Python 3.8.0 (default, Nov 6 2019, 21:49:08)
[GCC 7.3.0] :: Anaconda, Inc. on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import dask
>>>
>>> exit()
(base) root@5b34ce038eb3:/# exit
exit
# now trying to fire up the container on a minikube cluster
➜ ~ kubectl run dask --image daskdev/dask
pod/dask created
# let's see what's going on with the Pod
➜ ~ kubectl get pods -w
NAME READY STATUS RESTARTS AGE
dask 0/1 CrashLoopBackOff 1 13s
dask 0/1 Completed 2 24s
dask 0/1 CrashLoopBackOff 2 38s
# not sure why the logs look like something is missing
➜ ~ kubectl logs dask --tail=100
+ '[' '' ']'
+ '[' -e /opt/app/environment.yml ']'
+ echo 'no environment.yml'
+ '[' '' ']'
+ '[' '' ']'
+ exec
no environment.yml

So basically, if you check the result of kubectl describe pod dask, you will see that the last state was Terminated with Exit Code 0. That literally means your container was launched successfully, did its job and finished successfully as well. What else would you expect to happen with the pod?
In addition, when you create a pod with kubectl run dask --image daskdev/dask, it is created with restartPolicy: Always by default!
Always means that the container will be restarted even if it exited with a zero exit code (i.e. successfully).
State: Waiting
Reason: CrashLoopBackOff
Last State: Terminated
Reason: Completed
Exit Code: 0
Started: Fri, 02 Apr 2021 15:06:00 +0000
Finished: Fri, 02 Apr 2021 15:06:00 +0000
Ready: False
Restart Count: 3
Environment: <none>
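To confirm what kubectl run gave you, or to recreate the pod as a one-shot container that is left in the Completed state instead of being restarted, something like this works:
# check which restart policy the pod was created with
kubectl get pod dask -o jsonpath='{.spec.restartPolicy}{"\n"}'
# recreate it as a one-shot pod that will not be restarted after it exits
kubectl delete pod dask
kubectl run dask --image daskdev/dask --restart=Never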
There is no /opt/app/environment.yml in your container. If I'm not mistaken, you should first configure it with prepare.sh. Please check more here - DASK section.
# docker run --rm -it --entrypoint bash daskdev/dask:latest
(base) root@431d69bb9a80:/# ls -la /opt/app/
total 12
drwxr-xr-x 2 root root 4096 Mar 27 15:43 .
drwxr-xr-x 1 root root 4096 Mar 27 15:43 ..
# not sure why the logs look like something is missing
➜ ~ kubectl logs dask --tail=100
...
+ exec
no environment.yml
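If the goal is to give the entrypoint an environment file to pick up, one option is to mount one at /opt/app/environment.yml, the path the startup script checks. A minimal sketch, assuming a hypothetical environment.yml with a single extra conda package (whether prepare.sh then installs it the way you expect is up to the image):
# hypothetical environment file for illustration
cat > environment.yml <<'EOF'
name: base
dependencies:
  - bokeh
EOF
kubectl create configmap dask-environment --from-file=environment.yml

# mount it into the (otherwise empty) /opt/app directory
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: dask
spec:
  restartPolicy: Never
  containers:
  - name: dask
    image: daskdev/dask:latest
    volumeMounts:
    - name: environment
      mountPath: /opt/app
  volumes:
  - name: environment
    configMap:
      name: dask-environment
EOF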
There is already a prepared DASK Helm chart. Use it; it works fine:
helm repo add dask https://helm.dask.org/
helm repo update
helm install raffael-dask-release dask/dask
NAME: raffael-dask-release
LAST DEPLOYED: Fri Apr 2 15:43:38 2021
NAMESPACE: default
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
Thank you for installing DASK, released at name: raffael-dask-release.
This release includes a Dask scheduler, 3 Dask workers, and 1 Jupyter servers.
The Jupyter notebook server and Dask scheduler expose external services to
which you can connect to manage notebooks, or connect directly to the Dask
cluster. You can get these addresses by running the following:
export DASK_SCHEDULER="127.0.0.1"
export DASK_SCHEDULER_UI_IP="127.0.0.1"
export DASK_SCHEDULER_PORT=8080
export DASK_SCHEDULER_UI_PORT=8081
kubectl port-forward --namespace default svc/raffael-dask-release-scheduler $DASK_SCHEDULER_PORT:8786 &
kubectl port-forward --namespace default svc/raffael-dask-release-scheduler $DASK_SCHEDULER_UI_PORT:80 &
export JUPYTER_NOTEBOOK_IP="127.0.0.1"
export JUPYTER_NOTEBOOK_PORT=8082
kubectl port-forward --namespace default svc/raffael-dask-release-jupyter $JUPYTER_NOTEBOOK_PORT:80 &
echo tcp://$DASK_SCHEDULER:$DASK_SCHEDULER_PORT -- Dask Client connection
echo http://$DASK_SCHEDULER_UI_IP:$DASK_SCHEDULER_UI_PORT -- Dask dashboard
echo http://$JUPYTER_NOTEBOOK_IP:$JUPYTER_NOTEBOOK_PORT -- Jupyter notebook
NOTE: It may take a few minutes for the LoadBalancer IP to be available. Until then, the commands above will not work for the LoadBalancer service type.
You can watch the status by running 'kubectl get svc --namespace default -w raffael-dask-release-scheduler'
NOTE: It may take a few minutes for the URLs above to be available if any EXTRA_PIP_PACKAGES or EXTRA_CONDA_PACKAGES were specified,
because they are installed before their respective services start.
NOTE: The default password to login to the notebook server is `dask`. To change this password, refer to the Jupyter password section in values.yaml, or in the README.md.
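Once the scheduler port-forward from the NOTES above is running, a quick smoke test (assuming the dask distributed client is installed locally) is to point a client at the forwarded port:
python -c 'from dask.distributed import Client; print(Client("tcp://127.0.0.1:8080"))'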
If you still want to create the pod manually, use the manifest below. The main idea is to set restartPolicy: Never.
apiVersion: v1
kind: Pod
metadata:
  name: dask-tesssssst
  labels:
    foo: bar
spec:
  restartPolicy: Never
  containers:
  - image: daskdev/dask:latest
    imagePullPolicy: Always
    name: dask-tesssssst
Please check the official DASK KubeCluster documentation for more examples; I took the last one exactly from there.

Related

powershell pod failing in kubernetes cluster

I need to run PowerShell as a container in Kubernetes.
I am using the following deployment file sample.yaml:
apiVersion: v1
kind: Pod
metadata:
  name: powershell
spec:
  containers:
  - name: powershell
    image: mcr.microsoft.com/powershell:latest
When I run kubectl apply -f sample.yaml
I get the following error on kubectl get pods
powershell 0/1 CrashLoopBackOff 3 (50s ago) 92s
I did check the logs with kubectl logs powershell:
PowerShell 7.2.6
Copyright (c) Microsoft Corporation.
https://aka.ms/powershell
Type 'help' to get help.
PS /> ←[?1h
But when I run the same image as a Docker container with the following command, it works:
docker run --rm -it mcr.microsoft.com/powershell:latest
If you want to keep the container running, you should write the YAML like this:
apiVersion: v1
kind: Pod
metadata:
  name: powershell
spec:
  containers:
  - name: powershell
    image: mcr.microsoft.com/powershell:latest
    command: ["pwsh"]
    args: ["-Command", "Start-Sleep", "3600"]
[root@master1 ~]# kubectl get pod powershell
NAME READY STATUS RESTARTS AGE
powershell 1/1 Running 0 3m32s
[root@master1 ~]# kubectl exec -it powershell -- pwsh
PowerShell 7.2.6
Copyright (c) Microsoft Corporation.
https://aka.ms/powershell
Type 'help' to get help.
PS /> date
Thu Oct 13 12:50:24 PM UTC 2022
PS />
docker run runs the same image with an interactive shell allocated via the -it flag; that's why the container keeps running until you exit it.
To achieve something similar in Kubernetes, you can use kubectl run:
kubectl run -i --rm --tty power --image=mcr.microsoft.com/powershell:latest
interactive-bash-pod-within-a-kubernetes
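If you prefer a declarative equivalent of docker's -it over the Start-Sleep workaround, the container spec also has stdin and tty fields that keep the shell alive; a sketch:
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: powershell
spec:
  containers:
  - name: powershell
    image: mcr.microsoft.com/powershell:latest
    stdin: true
    tty: true
EOF
# attach to the interactive pwsh session
kubectl attach -it powershell -c powershell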

Redis sentinel HA on Kubernetes

I am trying to have 1 Redis master with 2 Redis replicas and a 3-member Sentinel quorum on Kubernetes. I am very new to Kubernetes.
My initial plan was to have the master running in a pod tied to one Kubernetes SVC and the 2 replicas running in their own pods tied to another Kubernetes SVC. Finally, the 3 Sentinel pods would be tied to their own SVC. The replicas would be tied to the master SVC (because without an SVC, the IP will change). The Sentinels would also be configured and tied to the master and replica SVCs. But I'm not sure if this is feasible, because when the master pod crashes, how will one of the replica pods move to the master SVC and become the master? Is that possible?
The second approach I had was to wrap redis pods in a replication controller and the same for sentinel as well. However, I'm not sure how to make one of the pods master and the others replicas with a replication controller.
Would any of the two approaches work? If not, is there a better design that I can adopt? Any leads would be appreciated.
You can deploy Redis Sentinel using the Helm package manager and the Redis Helm Chart.
If you don't have Helm3 installed yet, you can use this documentation to install it.
I will provide a few explanations to illustrate how it works.
First we need to get the values.yaml file from the Redis Helm Chart to customize our installation:
$ wget https://raw.githubusercontent.com/bitnami/charts/master/bitnami/redis/values.yaml
We can configure a lot of parameters in the values.yaml file, but for demonstration purposes I only enabled Sentinel and set the redis password:
NOTE: For a list of parameters that can be configured during installation, see the Redis Helm Chart Parameters documentation.
# values.yaml
global:
  redis:
    password: redispassword
...
replica:
  replicaCount: 3
...
sentinel:
  enabled: true
...
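If the bitnami repository is not configured locally yet, add it before installing:
$ helm repo add bitnami https://charts.bitnami.com/bitnami
$ helm repo update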
Then we can deploy Redis using the configuration from the values.yaml file:
NOTE: It will deploy a three-Pod cluster (one master and two slaves) managed by a StatefulSet, with a sentinel container running inside each Pod.
$ helm install redis-sentinel bitnami/redis --values values.yaml
Be sure to carefully read the NOTES section of the chart installation output. It contains a lot of useful information (e.g. how to connect to your database from outside the cluster).
After installation, check the redis StatefulSet, Pods and Services (the headless Service can be used for internal access):
$ kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP
redis-sentinel-node-0 2/2 Running 0 2m13s 10.4.2.21
redis-sentinel-node-1 2/2 Running 0 86s 10.4.0.10
redis-sentinel-node-2 2/2 Running 0 47s 10.4.1.10
$ kubectl get sts
NAME READY AGE
redis-sentinel-node 3/3 2m41s
$ kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
redis-sentinel ClusterIP 10.8.15.252 <none> 6379/TCP,26379/TCP 2m
redis-sentinel-headless ClusterIP None <none> 6379/TCP,26379/TCP 2m
As you can see, each redis-sentinel-node Pod contains the redis and sentinel containers:
$ kubectl get pods redis-sentinel-node-0 -o jsonpath={.spec.containers[*].name}
redis sentinel
We can check the sentinel container logs to find out which redis-sentinel-node is the master:
$ kubectl logs -f redis-sentinel-node-0 sentinel
...
1:X 09 Jun 2021 09:52:01.017 # Configuration loaded
1:X 09 Jun 2021 09:52:01.019 * monotonic clock: POSIX clock_gettime
1:X 09 Jun 2021 09:52:01.019 * Running mode=sentinel, port=26379.
1:X 09 Jun 2021 09:52:01.026 # Sentinel ID is 1bad9439401e44e749e2bf5868ad9ec7787e914e
1:X 09 Jun 2021 09:52:01.026 # +monitor master mymaster 10.4.2.21 6379 quorum 2
...
1:X 09 Jun 2021 09:53:21.429 * +slave slave 10.4.0.10:6379 10.4.0.10 6379 # mymaster 10.4.2.21 6379
1:X 09 Jun 2021 09:53:21.435 * +slave slave 10.4.1.10:6379 10.4.1.10 6379 # mymaster 10.4.2.21 6379
...
As you can see from the logs above, the redis-sentinel-node-0 Pod is the master and the redis-sentinel-node-1 & redis-sentinel-node-2 Pods are slaves.
For testing, let's delete the master and check if sentinel will switch the master role to one of the slaves:
$ kubectl delete pod redis-sentinel-node-0
pod "redis-sentinel-node-0" deleted
$ kubectl logs -f redis-sentinel-node-1 sentinel
...
1:X 09 Jun 2021 09:55:20.902 # Executing user requested FAILOVER of 'mymaster'
...
1:X 09 Jun 2021 09:55:22.666 # +switch-master mymaster 10.4.2.21 6379 10.4.1.10 6379
...
1:X 09 Jun 2021 09:55:50.626 * +slave slave 10.4.0.10:6379 10.4.0.10 6379 # mymaster 10.4.1.10 6379
1:X 09 Jun 2021 09:55:50.632 * +slave slave 10.4.2.22:6379 10.4.2.22 6379 # mymaster 10.4.1.10 6379
A new master (redis-sentinel-node-2 10.4.1.10) has been selected, so everything works as expected.
Additionally, we can display more information by connecting to one of the Redis nodes:
$ kubectl run --namespace default redis-client --restart='Never' --env REDIS_PASSWORD=redispassword --image docker.io/bitnami/redis:6.2.1-debian-10-r47 --command -- sleep infinity
pod/redis-client created
$ kubectl exec --tty -i redis-client --namespace default -- bash
I have no name!@redis-client:/$ redis-cli -h redis-sentinel-node-1.redis-sentinel-headless -p 6379 -a $REDIS_PASSWORD
Warning: Using a password with '-a' or '-u' option on the command line interface may not be safe.
redis-sentinel-node-1.redis-sentinel-headless:6379> info replication
# Replication
role:slave
master_host:10.4.1.10
master_port:6379
master_link_status:up
...
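Applications should not hard-code the current master IP; they can ask Sentinel for it through the redis-sentinel Service on port 26379 (the master set is named mymaster, as seen in the logs above). For example, from the same redis-client pod (keep or drop -a depending on whether Sentinel requires auth in your chart version):
$ kubectl exec --tty -i redis-client --namespace default -- \
    sh -c 'redis-cli -h redis-sentinel -p 26379 -a "$REDIS_PASSWORD" sentinel get-master-addr-by-name mymaster'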

kubectl get nodes shows NotReady

I have installed a two-node Kubernetes 1.12.1 cluster on cloud VMs, both behind an internet proxy. Each VM has a floating IP associated for SSH access; kube-01 is the master and kube-02 is a worker node. I executed export:
no_proxy=127.0.0.1,localhost,10.157.255.185,192.168.0.153,kube-02,192.168.0.25,kube-01
before running kubeadm init, but I am getting the following status for kubectl get nodes:
NAME STATUS ROLES AGE VERSION
kube-01 NotReady master 89m v1.12.1
kube-02 NotReady <none> 29s v1.12.2
Am I missing any configuration? Do I need to add 192.168.0.153 and 192.168.0.25 to the respective VMs' /etc/hosts?
Looks like the pod network is not installed on your cluster yet. You can install Weave, for example, with the command below:
kubectl apply -f "https://cloud.weave.works/k8s/net?k8s-version=$(kubectl version | base64 | tr -d '\n')"
After a few seconds, a Weave Net pod should be running on each Node and any further pods you create will be automatically attached to the Weave network.
You can install a pod network of your choice; here is a list.
After this, check:
$ kubectl describe nodes
check all is fine like below
Conditions:
Type Status
---- ------
OutOfDisk False
MemoryPressure False
DiskPressure False
Ready True
Capacity:
cpu: 2
memory: 2052588Ki
pods: 110
Allocatable:
cpu: 2
memory: 1950188Ki
pods: 110
Next, SSH to the node which is not ready and observe the kubelet logs. The most likely errors relate to certificates and authentication.
You can also use journalctl on systemd to check kubelet errors.
$ journalctl -u kubelet
Try with this:
Your CoreDNS is in the Pending state. Check the networking plugin you have used and make sure the proper addons are added; see the Kubernetes troubleshooting guide:
https://kubernetes.io/docs/setup/independent/troubleshooting-kubeadm/#coredns-or-kube-dns-is-stuck-in-the-pending-state
https://kubernetes.io/docs/concepts/cluster-administration/addons/
Install the appropriate addons from those pages, and then check:
kubectl get pods -n kube-system
On the off chance it might be the same for someone else: in my case, I was using the wrong AMI to create the node group.
Run
journalctl -u kubelet
Then check the node logs; if you get the error below, disable swap using swapoff -a:
"Failed to run kubelet" err="failed to run Kubelet: running with swap on is not supported, please disable swap! or set --fa
Main process exited, code=exited, status=1/FAILURE
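To make that permanent on a typical systemd-based distro, disable swap, comment it out of /etc/fstab so it stays off after a reboot, and restart the kubelet:
swapoff -a
sed -i '/ swap / s/^/#/' /etc/fstab
systemctl restart kubelet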

error: You must be logged in to the server - the server has asked for the client to provide credentials - "kubectl logs" command gives error

We have set up Kubernetes 1.10.1 on CoreOS with three nodes.
The setup was successful:
NAME STATUS ROLES AGE VERSION
node1.example.com Ready master 19h v1.10.1+coreos.0
node2.example.com Ready node 19h v1.10.1+coreos.0
node3.example.com Ready node 19h v1.10.1+coreos.0
NAMESPACE NAME READY STATUS RESTARTS AGE
default pod-nginx2-689b9cdffb-qrpjn 1/1 Running 0 16h
kube-system calico-kube-controllers-568dfff588-zxqjj 1/1 Running 0 18h
kube-system calico-node-2wwcg 2/2 Running 0 18h
kube-system calico-node-78nzn 2/2 Running 0 18h
kube-system calico-node-gbvkn 2/2 Running 0 18h
kube-system calico-policy-controller-6d568cc5f7-fx6bv 1/1 Running 0 18h
kube-system kube-apiserver-x66dh 1/1 Running 4 18h
kube-system kube-controller-manager-787f887b67-q6gts 1/1 Running 0 18h
kube-system kube-dns-79ccb5d8df-b9skr 3/3 Running 0 18h
kube-system kube-proxy-gb2wj 1/1 Running 0 18h
kube-system kube-proxy-qtxgv 1/1 Running 0 18h
kube-system kube-proxy-v7wnf 1/1 Running 0 18h
kube-system kube-scheduler-68d5b648c-54925 1/1 Running 0 18h
kube-system pod-checkpointer-vpvg5 1/1 Running 0 18h
But when I try to see the logs of any pod, kubectl gives the following error:
kubectl logs -f pod-nginx2-689b9cdffb-qrpjn
error: You must be logged in to the server (the server has asked for the client to provide credentials (pods/log pod-nginx2-689b9cdffb-qrpjn))
Also, trying to get inside the pods (using kubectl exec) gives the following error:
kubectl exec -ti pod-nginx2-689b9cdffb-qrpjn bash
error: unable to upgrade connection: Unauthorized
Kubelet Service File:
Description=Kubelet via Hyperkube ACI
[Service]
EnvironmentFile=/etc/kubernetes/kubelet.env
Environment="RKT_RUN_ARGS=--uuid-file-save=/var/run/kubelet-pod.uuid \
--volume=resolv,kind=host,source=/etc/resolv.conf \
--mount volume=resolv,target=/etc/resolv.conf \
--volume var-lib-cni,kind=host,source=/var/lib/cni \
--mount volume=var-lib-cni,target=/var/lib/cni \
--volume var-log,kind=host,source=/var/log \
--mount volume=var-log,target=/var/log"
ExecStartPre=/bin/mkdir -p /etc/kubernetes/manifests
ExecStartPre=/bin/mkdir -p /etc/kubernetes/cni/net.d
ExecStartPre=/bin/mkdir -p /etc/kubernetes/checkpoint-secrets
ExecStartPre=/bin/mkdir -p /etc/kubernetes/inactive-manifests
ExecStartPre=/bin/mkdir -p /var/lib/cni
ExecStartPre=/usr/bin/bash -c "grep 'certificate-authority-data' /etc/kubernetes/kubeconfig | awk '{print $2}' | base64 -d > /etc/kubernetes/ca.crt"
ExecStartPre=-/usr/bin/rkt rm --uuid-file=/var/run/kubelet-pod.uuid
ExecStart=/usr/lib/coreos/kubelet-wrapper \
--kubeconfig=/etc/kubernetes/kubeconfig \
--config=/etc/kubernetes/config \
--cni-conf-dir=/etc/kubernetes/cni/net.d \
--network-plugin=cni \
--allow-privileged \
--lock-file=/var/run/lock/kubelet.lock \
--exit-on-lock-contention \
--hostname-override=node1.example.com \
--node-labels=node-role.kubernetes.io/master \
--register-with-taints=node-role.kubernetes.io/master=:NoSchedule
ExecStop=-/usr/bin/rkt stop --uuid-file=/var/run/kubelet-pod.uuid
Restart=always
RestartSec=10
[Install]
WantedBy=multi-user.target
KubeletConfiguration File
kind: KubeletConfiguration
apiVersion: kubelet.config.k8s.io/v1beta1
staticPodPath: "/etc/kubernetes/manifests"
clusterDomain: "cluster.local"
clusterDNS: [ "10.3.0.10" ]
nodeStatusUpdateFrequency: "5s"
clientCAFile: "/etc/kubernetes/ca.crt"
We have also specified the --kubelet-client-certificate and --kubelet-client-key flags in the kube-apiserver.yaml file:
- --kubelet-client-certificate=/etc/kubernetes/secrets/apiserver.crt
- --kubelet-client-key=/etc/kubernetes/secrets/apiserver.key
So what are we missing here?
Thanks in advance :)
In my case the problem was that the context had somehow been changed. I checked it with
kubectl config current-context
and then changed it back to the correct one with
kubectl config use-context docker-desktop
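If you are not sure what the correct context is called, list the ones in your kubeconfig first:
kubectl config get-contexts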
This is a quite common and general error related to authentication problems against the API server.
I believe many people search for this title, so I'll provide a few directions with examples for different types of cases.
1) (General) Common to all types of deployments - check whether your credentials have expired.
2) (Pods and service accounts) The authentication issue is related to one of the pods, which is using a service account that has problems such as an invalid token.
3) (IaC or deployment tools) You are running an IaC tool like Terraform and failed to pass the certificate correctly, like in this case.
4) (Cloud or other SaaS providers) A few cases which I encountered with AWS EKS:
4.A) In case you're not the cluster creator - you might not have permissions to access the cluster.
When an EKS cluster is created, the user (or role) that creates the cluster is automatically granted system:masters permissions in the cluster's RBAC configuration.
Other users or roles that need the ability to interact with your cluster need to be added explicitly - read more here.
4.B) If you're working with multiple clusters/environments/accounts via the CLI, the current profile may need to be re-authenticated, or there may be a mismatch between the cluster that needs to be accessed and the values of shell variables like AWS_DEFAULT_PROFILE or AWS_DEFAULT_REGION.
4.C) New credentials (AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY) were created and exported to a terminal that might still contain old values from a previous session (AWS_SESSION_TOKEN), which need to be replaced or unset. See the commands right after this list.
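For the EKS cases in item 4, two commands that usually help pin down the mismatch (cluster name, region and profile are placeholders) are checking which identity the CLI will authenticate as and regenerating the kubeconfig entry for the cluster:
aws sts get-caller-identity
aws eks update-kubeconfig --name <cluster-name> --region <region> --profile <profile>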
Looks like you misconfigured the kubelet:
You are missing the --client-ca-file flag in your kubelet service file.
That's why you can get some general information from the master, but can't get access to the nodes.
This flag is responsible for the certificate; without it, you cannot get access to the nodes.
In my case, I noticed this issue on a running cluster that had not been touched for a long time. This answer is aimed more at people searching on Google, as this page ranks at the top for the error in the question.
The issue was expired certificates.
You can check this on Kubernetes master server:
# find /etc/kubernetes/pki/ -type f -name "*.crt" -print | egrep -v 'ca.crt$' | xargs -L 1 -t -i bash -c 'openssl x509 -noout -text -in {}|grep After'
bash -c openssl x509 -noout -text -in /etc/kubernetes/pki/apiserver.crt|grep After
Not After : Jan 19 14:54:15 2022 GMT
bash -c openssl x509 -noout -text -in /etc/kubernetes/pki/apiserver-kubelet-client.crt|grep After
Not After : Nov 13 01:46:12 2021 GMT
bash -c openssl x509 -noout -text -in /etc/kubernetes/pki/front-proxy-client.crt|grep After
Not After : Nov 13 01:46:12 2021 GMT
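If the certificates are indeed expired and the cluster was set up with kubeadm, they can usually be renewed in place (on older kubeadm releases the command is kubeadm alpha certs renew all); afterwards restart the control-plane components and refresh your kubeconfig from the regenerated admin.conf:
# kubeadm certs renew all
# cp /etc/kubernetes/admin.conf ~/.kube/config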
For me the issue was related to a misconfiguration in the ~/.kube/config file. After restoring the configuration using kubectl config view --raw > ~/.kube/config, it was resolved.
In general, many different .kube/config file errors will trigger this error message. In my case it was that I simply specified the wrong cluster name in my config file (and spent MANY hours trying to debug it).
When I specified the wrong cluster name, I received 2 requests for MFA token codes, followed by the error message: You must be logged in to the server (the server has asked for the client to provide credentials).
Example:
# kubectl create -f scripts/aws-auth-cm.yaml
Assume Role MFA token code: 123456
Assume Role MFA token code: 123456
could not get token: AccessDenied: MultiFactorAuthentication failed with invalid MFA one time pass code.
In my case, I experienced multiple errors while trying to run different kubectl commands, like unauthorized, server has asked client to provide credentials, etc. After spending a few hours, I deduced that the sync to my cluster in the cloud had somehow gotten messed up. So I ran the following commands to refresh the configuration, and it started to work again:
Unset users:
kubectl config unset users.<full-user-name-as-found-in: kubectl config view>
Remove cluster:
kubectl config delete-cluster <full-cluster-name-as-found-in: kubectl config view>
Remove context:
kubectl config delete-context <full-context-name-as-found-in: kubectl config view>
Default context:
kubectl config use-context contexts
Get fresh cluster config from cloud:
ibmcloud cs cluster config --cluster <cluster-name>
Note: I am using ibmcloud for my cluster, so the last command could be different in your case.

Kubernetes 1.6.2 flannel configuration in centos 7

Using the kubeadm command I have configured a 3-node Kubernetes cluster. Unlike earlier versions, the 1.6.2 kubeadm command configures all the Kubernetes processes automatically. For flannel I used this yml file: kube-flannel.yml. My understanding of Kubernetes is that it will create the container and run the process inside the container, but I see the flannel process running on the node itself, yet the /opt/bin/flanneld binary is not on my node. How is Kubernetes running flannel?
How does Kubernetes handle this? Is there a document that explains these concepts?
The flannel pod is running on the master node itself.
[root@master01 ~]# kubectl get pods -o wide --namespace=kube-system -l app=flannel
NAME READY STATUS RESTARTS AGE IP NODE
kube-flannel-ds-3694s 2/2 Running 37 3d 192.168.15.101 master01
kube-flannel-ds-mbh9b 2/2 Running 10 3d 192.168.15.102 node-01
kube-flannel-ds-vlm20 2/2 Running 12 3d 192.168.15.103 node-02
I see flanneld process
[root@master01 ~]# ps -fed |grep flan
root 5447 5415 0 May10 ? 00:00:08 /opt/bin/flanneld --ip-masq --kube-subnet-mgr
root 5604 5582 0 May10 ? 00:00:00 /bin/sh -c set -e -x; cp -f /etc/kube-flannel/cni-conf.json /etc/cni/net.d/10-flannel.conf; while true; do sleep 3600; done
but the flanneld binary is not on the master node:
[root@master01 ~]# ls -ld /opt/bin/flanneld
ls: cannot access /opt/bin/flanneld: No such file or directory
Thanks
SR
After some more reading I found the answer: flanneld runs inside a container.
Here are the runtime details:
https://github.com/opencontainers/runc
We can extract the flannel docker image like below:
docker save -o flannel-v0.7.1-amd64.tar quay.io/coreos/flannel:v0.7.1-amd64
tar tvf flannel-v0.7.1-amd64.tar
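To confirm that the binary lives inside the flannel container rather than on the host, you can also list it with kubectl exec (the container name kube-flannel is an assumption taken from kube-flannel.yml; adjust it to whatever kubectl describe pod shows):
kubectl exec -n kube-system kube-flannel-ds-3694s -c kube-flannel -- ls -l /opt/bin/flanneld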