Cannot delete pod in status ContainerCreating - kubernetes

I have a failed pod that was not created properly. These are the steps I used:
kubernetes@kubernetes1:~$ cd /opt/registry
kubernetes@kubernetes1:/opt/registry$ kubectl create -f private-registry1.yaml
persistentvolume/pv1 created
kubernetes@kubernetes1:/opt/registry$ kubectl get pods --all-namespaces
NAMESPACE NAME READY STATUS RESTARTS AGE
default private-repository-k8s-6ddbcd9c45-s6dfq 0/1 ContainerCreating 0 2d1h
kube-system calico-kube-controllers-58dbc876ff-dgs77 1/1 Running 4 (125m ago) 2d13h
kube-system calico-node-czmzc 1/1 Running 4 (125m ago) 2d13h
kube-system calico-node-q4lxz 1/1 Running 4 (125m ago) 2d13h
kube-system coredns-565d847f94-k94z2 1/1 Running 4 (125m ago) 2d13h
kube-system coredns-565d847f94-nt27m 1/1 Running 4 (125m ago) 2d13h
kube-system etcd-kubernetes1 1/1 Running 5 (125m ago) 2d13h
kube-system kube-apiserver-kubernetes1 1/1 Running 5 (125m ago) 2d13h
kube-system kube-controller-manager-kubernetes1 1/1 Running 5 (125m ago) 2d13h
kube-system kube-proxy-97djs 1/1 Running 5 (125m ago) 2d13h
kube-system kube-proxy-d8bzs 1/1 Running 4 (125m ago) 2d13h
kube-system kube-scheduler-kubernetes1 1/1 Running 5 (125m ago) 2d13h
As you can see, the pod is stuck in status ContainerCreating. I tried to delete it:
kubernetes@kubernetes1:/opt/registry$ kubectl get deployments --all-namespaces
NAMESPACE NAME READY UP-TO-DATE AVAILABLE AGE
default private-repository-k8s 0/1 1 0 2d2h
kube-system calico-kube-controllers 1/1 1 1 2d14h
kube-system coredns 2/2 2 2 2d14h
Delete command:
kubernetes@kubernetes1:/opt/registry$ kubectl delete -n default deployment private-repository-k8s
deployment.apps "private-repository-k8s" deleted
kubernetes@kubernetes1:/opt/registry$ kubectl get deployments --all-namespaces
NAMESPACE NAME READY UP-TO-DATE AVAILABLE AGE
kube-system calico-kube-controllers 1/1 1 1 2d14h
kube-system coredns 2/2 2 2 2d14h
kubernetes@kubernetes1:/opt/registry$ kubectl create -f private-registry1.yaml
Error from server (AlreadyExists): error when creating "private-registry1.yaml": persistentvolumes "pv1" already exists
private-registry1.yaml configuration:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv1
spec:
  capacity:
    storage: 256Mi # specify your own size
  volumeMode: Filesystem
  persistentVolumeReclaimPolicy: Retain
  local:
    path: /opt/registry # can be any path
  nodeAffinity:
    required:
      nodeSelectorTerms:
      - matchExpressions: # specify the node label which maps to your control-plane node.
        - key: kubernetes1
          operator: In
          values:
          - controlplane-1
  accessModes:
    - ReadWriteOnce # only 1 node will read/write on the path.
  # - ReadWriteMany # multiple nodes will read/write on the path
Do you know how I can delete pv1?

You can delete the PV using the following two commands:
kubectl delete pv <pv_name> --grace-period=0 --force
Then remove the finalizers with:
kubectl patch pv <pv_name> -p '{"metadata": {"finalizers": null}}'
Since you created it from a file, you can also use the following command to delete the PV:
kubectl delete -f private-registry1.yaml
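If the PV hangs in a Terminating state, that is usually the kubernetes.io/pv-protection finalizer holding it back; you can inspect the finalizers before patching (a quick check, assuming the PV name pv1 from your manifest):
kubectl get pv pv1 -o jsonpath='{.metadata.finalizers}'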

Related

MK_ADDON_ENABLE : run callbacks: running callbacks: waiting for app.kubernetes.io/name=ingress-nginx pods: timed out waiting for the condition

I'm trying to enable the ingress addon by typing:
minikube addons enable ingress
I get this error (translated from French):
`X Exiting due to MK_ADDON_ENABLE : run callbacks: running callbacks: [waiting for app.kubernetes.io/name=ingress-nginx pods: timed out waiting for the condition]`
I checked the pods with kubectl, and it shows:
`NAMESPACE NAME READY STATUS RESTARTS AGE
ingress-nginx ingress-nginx-admission-create-6xqc7 0/1 Completed 0 105m
ingress-nginx ingress-nginx-admission-patch-5qxwp 0/1 Completed 1 105m
ingress-nginx ingress-nginx-controller-5959f988fd-wngnn 0/1 ImageInspectError 0 105m
kube-system coredns-565d847f94-kdcf6 1/1 Running 1 (23m ago) 107m
kube-system etcd-minikube 1/1 Running 1 (23m ago) 107m
kube-system kube-apiserver-minikube 1/1 Running 1 (23m ago) 107m
kube-system kube-controller-manager-minikube 1/1 Running 1 (23m ago) 107m
kube-system kube-proxy-zzrwv 1/1 Running 1 (23m ago) 107m
kube-system kube-scheduler-minikube 1/1 Running 1 (23m ago) 107m
kube-system storage-provisioner 1/1 Running 3 (20m ago) 107m`
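For reference, the usual way to dig into an ImageInspectError is to describe the failing pod and check the image on the minikube node (a diagnostic sketch; the pod name comes from the listing above, and the second command assumes the docker runtime):
kubectl -n ingress-nginx describe pod ingress-nginx-controller-5959f988fd-wngnn
minikube ssh -- docker images | grep ingress-nginx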

etcdserver: request timed out

I backed up my etcd, and after restoring it I can't create/update/delete anything in my cluster!
I followed the docs exactly.
Here are my steps:
Backing up etcd
Save the snapshot
sudo ETCDCTL_API=3 etcdctl snapshot save /tmp/etcd-backup-new.db \
--cacert /etc/kubernetes/pki/etcd/ca.crt \
--cert /etc/kubernetes/pki/etcd/server.crt \
--key /etc/kubernetes/pki/etcd/server.key
Check the status
$ sudo ETCDCTL_API=3 etcdctl snapshot status /tmp/etcd-backup-new.db --write-out=table
+----------+----------+------------+------------+
| HASH | REVISION | TOTAL KEYS | TOTAL SIZE |
+----------+----------+------------+------------+
| d8d0da24 | 7220348 | 874 | 1.9 MB |
+----------+----------+------------+------------+
Restoring etcd
Create a restore point from the backup
sudo ETCDCTL_API=3 etcdctl snapshot restore /tmp/etcd-backup-new.db \
--data-dir /var/lib/etcd-backup
Tell etcd to use the new location
sudo vim /etc/kubernetes/manifests/etcd.yaml
- hostPath:
    path: /var/lib/etcd-backup # Changed this ONLY!
    type: DirectoryOrCreate
  name: etcd-data
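For reference, once kubelet has recreated the static Pod you can confirm it actually mounts the new data dir (a quick check; the pod name etcd-master matches the listing below):
kubectl -n kube-system get pod etcd-master \
  -o jsonpath='{.spec.volumes[?(@.name=="etcd-data")].hostPath.path}'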
As far as I know, kubelet restarts static Pods automatically. So, after a while everything seems fine!
$ k get all -A
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system pod/coredns-6d4b75cb6d-6cmtm 1/1 Running 1 (7d23h ago) 72d
kube-system pod/coredns-6d4b75cb6d-wchss 1/1 Running 1 (7d23h ago) 72d
kube-system pod/etcd-master 1/1 Running 2 (7d23h ago) 72d
kube-system pod/kube-apiserver-master 1/1 Running 1 (7d23h ago) 39d
kube-system pod/kube-controller-manager-master 1/1 Running 4 (7d23h ago) 72d
kube-system pod/kube-proxy-mqzbd 1/1 Running 1 (7d23h ago) 72d
kube-system pod/kube-scheduler-master 1/1 Running 4 (7d23h ago) 72d
kube-system pod/weave-net-4xtwz 2/2 Running 3 (7d23h ago) 49d
NAMESPACE NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
default service/kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 72d
kube-system service/kube-dns ClusterIP 10.96.0.10 <none> 53/UDP,53/TCP,9153/TCP 72d
NAMESPACE NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE
kube-system daemonset.apps/kube-proxy 1 1 1 1 1 kubernetes.io/os=linux 72d
kube-system daemonset.apps/weave-net 1 1 1 1 1 <none> 49d
NAMESPACE NAME READY UP-TO-DATE AVAILABLE AGE
kube-system deployment.apps/coredns 2/2 2 2 72d
NAMESPACE NAME DESIRED CURRENT READY AGE
kube-system replicaset.apps/coredns-6d4b75cb6d 2 2 2 72d
The Problem
So it seems everything is fine, but it's not! E.g.:
$ k run test --image nginx
Error from server: etcdserver: request timed out
or
$ k rollout restart daemonset.apps/kube-proxy -n kube-system
error: failed to patch: etcdserver: request timed out
What is my mistake?
P.S.: Kubernetes version: v1.24.1
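For reference, a direct health check against the restored etcd member can help narrow this down (same certificate paths as in the snapshot commands above):
sudo ETCDCTL_API=3 etcdctl endpoint health \
  --cacert /etc/kubernetes/pki/etcd/ca.crt \
  --cert /etc/kubernetes/pki/etcd/server.crt \
  --key /etc/kubernetes/pki/etcd/server.key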

Failed to open topo server on vitess with etcd

I'm running a simple example with Helm. Take a look at the values.yaml file below:
cat << EOF | helm install helm/vitess -n vitess -f -
topology:
  cells:
    - name: 'zone1'
      keyspaces:
        - name: 'vitess'
          shards:
            - name: '0'
              tablets:
                - type: 'replica'
                  vttablet:
                    replicas: 1
      mysqlProtocol:
        enabled: true
        authType: secret
        username: vitess
        passwordSecret: vitess-db-password
      etcd:
        replicas: 3
      vtctld:
        replicas: 1
      vtgate:
        replicas: 3
vttablet:
  dataVolumeClaimSpec:
    storageClassName: nfs-slow
EOF
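As an aside, running the same values through a dry run first can catch indentation or schema mistakes before anything is deployed (a sketch mirroring the Helm 2 style flags used above):
cat values.yaml | helm install helm/vitess -n vitess -f - --dry-run --debug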
Take a look at the output of the currently running pods below:
$ kubectl get pods --all-namespaces
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system coredns-fb8b8dccf-8f5kt 1/1 Running 0 32m
kube-system coredns-fb8b8dccf-qbd6c 1/1 Running 0 32m
kube-system etcd-master1 1/1 Running 0 32m
kube-system kube-apiserver-master1 1/1 Running 0 31m
kube-system kube-controller-manager-master1 1/1 Running 0 32m
kube-system kube-flannel-ds-amd64-bkg9z 1/1 Running 0 32m
kube-system kube-flannel-ds-amd64-q8vh4 1/1 Running 0 32m
kube-system kube-flannel-ds-amd64-vqmnz 1/1 Running 0 32m
kube-system kube-proxy-bd8mf 1/1 Running 0 32m
kube-system kube-proxy-nlc2b 1/1 Running 0 32m
kube-system kube-proxy-x7cd5 1/1 Running 0 32m
kube-system kube-scheduler-master1 1/1 Running 0 32m
kube-system tiller-deploy-8458f6c667-cx2mv 1/1 Running 0 27m
vitess etcd-global-6pwvnv29th 0/1 Init:0/1 0 16m
vitess etcd-operator-84db9bc774-j4wml 1/1 Running 0 30m
vitess etcd-zone1-zwgvd7spzc 0/1 Init:0/1 0 16m
vitess vtctld-86cd78b6f5-zgfqg 0/1 CrashLoopBackOff 7 16m
vitess vtgate-zone1-58744956c4-x8ms2 0/1 CrashLoopBackOff 7 16m
vitess zone1-vitess-0-init-shard-master-mbbph 1/1 Running 0 16m
vitess zone1-vitess-0-replica-0 0/6 Init:CrashLoopBackOff 7 16m
Looking at the logs, I see this error:
$ kubectl logs -n vitess vtctld-86cd78b6f5-zgfqg
++ cat
+ eval exec /vt/bin/vtctld '-cell="zone1"' '-web_dir="/vt/web/vtctld"' '-web_dir2="/vt/web/vtctld2/app"' -workflow_manager_init -workflow_manager_use_election -logtostderr=true -stderrthreshold=0 -port=15000 -grpc_port=15999 '-service_map="grpc-vtctl"' '-topo_implementation="etcd2"' '-topo_global_server_address="etcd-global-client.vitess:2379"' -topo_global_root=/vitess/global
++ exec /vt/bin/vtctld -cell=zone1 -web_dir=/vt/web/vtctld -web_dir2=/vt/web/vtctld2/app -workflow_manager_init -workflow_manager_use_election -logtostderr=true -stderrthreshold=0 -port=15000 -grpc_port=15999 -service_map=grpc-vtctl -topo_implementation=etcd2 -topo_global_server_address=etcd-global-client.vitess:2379 -topo_global_root=/vitess/global
ERROR: logging before flag.Parse: E0422 02:35:34.020928 1 syslogger.go:122] can't connect to syslog
F0422 02:35:39.025400 1 server.go:221] Failed to open topo server (etcd2,etcd-global-client.vitess:2379,/vitess/global): grpc: timed out when dialing
I'm running on Vagrant with 1 master and 2 nodes. I suspect it is an issue with eth1.
The storage is configured to use NFS.
$ kubectl logs etcd-operator-84db9bc774-j4wml
time="2019-04-22T17:26:51Z" level=info msg="skip reconciliation: running ([]), pending ([etcd-zone1-zwgvd7spzc])" cluster-name=etcd-zone1 cluster-namespace=vitess pkg=cluster
time="2019-04-22T17:26:51Z" level=info msg="skip reconciliation: running ([]), pending ([etcd-zone1-zwgvd7spzc])" cluster-name=etcd-global cluster-namespace=vitess pkg=cluster
It appears that etcd is not fully initializing. Note that neither the pod for the global lockserver (etcd-global-6pwvnv29th) nor the local one for cell zone1 (pod etcd-zone1-zwgvd7spzc) are ready.
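For reference, describing the stuck etcd pods and checking recent events usually shows why the init step never completes (a diagnostic sketch; the pod names come from the listing above):
kubectl -n vitess describe pod etcd-global-6pwvnv29th etcd-zone1-zwgvd7spzc
kubectl -n vitess get events --sort-by=.metadata.creationTimestamp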

Deploy GitLab with Helm. Nginx-ingress pods can't start

Call install:
helm install --name gitlab1 -f values.yaml gitlab/gitlab-omnibus
I see the Pods can't start.
And I see this error: no service with name nginx-ingress/default-http-backend found: services "default-http-backend" is forbidden: User "system:serviceaccount:nginx-ingress:default" cannot get services in the namespace "nginx-ingress"
I suspect ABAC/RBAC... but what should I do about it?
Logs from the nginx pod:
# kubectl logs nginx-ndxhn --namespace nginx-ingress
[dumb-init] Unable to detach from controlling tty (errno=25 Inappropriate ioctl for device).
[dumb-init] Child spawned with PID 7.
[dumb-init] Unable to attach to controlling tty (errno=25 Inappropriate ioctl for device).
[dumb-init] setsid complete.
I0530 21:30:23.232676 7 launch.go:105] &{NGINX 0.9.0-beta.11 git-a3131c5 https://github.com/kubernetes/ingress}
I0530 21:30:23.232749 7 launch.go:108] Watching for ingress class: nginx
I0530 21:30:23.233708 7 launch.go:262] Creating API server client for https://10.233.0.1:443
I0530 21:30:23.234080 7 nginx.go:182] starting NGINX process...
F0530 21:30:23.251587 7 launch.go:122] no service with name nginx-ingress/default-http-backend found: services "default-http-backend" is forbidden: User "system:serviceaccount:nginx-ingress:default" cannot get services in the namespace "nginx-ingress"
[dumb-init] Received signal 17.
[dumb-init] A child with PID 7 exited with exit status 255.
[dumb-init] Forwarded signal 15 to children.
[dumb-init] Child exited with status 255. Goodbye.
# kubectl get svc -w --namespace nginx-ingress nginx
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
nginx LoadBalancer 10.233.25.0 <pending> 80:32048/TCP,443:31430/TCP,22:31636/TCP 9m
# kubectl describe svc --namespace nginx-ingress nginx
Name: nginx
Namespace: nginx-ingress
Labels: <none>
Annotations: service.beta.kubernetes.io/external-traffic=OnlyLocal
Selector: app=nginx
Type: LoadBalancer
IP: 10.233.25.0
IP: 1.1.1.1
Port: http 80/TCP
TargetPort: 80/TCP
NodePort: http 32048/TCP
Endpoints:
Port: https 443/TCP
TargetPort: 443/TCP
NodePort: https 31430/TCP
Endpoints:
Port: git 22/TCP
TargetPort: 22/TCP
NodePort: git 31636/TCP
Endpoints:
Session Affinity: None
External Traffic Policy: Cluster
Events: <none>
# kubectl get pods --all-namespaces
NAMESPACE NAME READY STATUS RESTARTS AGE
default gitlab1-gitlab-75576c4589-lnf56 0/1 Running 2 11m
default gitlab1-gitlab-postgresql-f66555d65-nqvqx 1/1 Running 0 11m
default gitlab1-gitlab-redis-58cf598657-ksptm 1/1 Running 0 11m
default gitlab1-gitlab-runner-55d458ccb7-g442z 0/1 CrashLoopBackOff 6 11m
default glusterfs-9cfcr 1/1 Running 0 1d
default glusterfs-k422g 1/1 Running 0 1d
default glusterfs-tjtvq 1/1 Running 0 1d
default heketi-75dcfb7d44-thxpm 1/1 Running 0 1d
default nginx-nginx-ingress-controller-775b5b9c6d-hhvlr 1/1 Running 0 2h
default nginx-nginx-ingress-default-backend-7bb66746b9-mzgcb 1/1 Running 0 2h
default nginx-pod1 1/1 Running 0 1d
kube-lego kube-lego-58c9f5788d-pdfb5 1/1 Running 0 11m
kube-system calico-node-hq2v7 1/1 Running 3 2d
kube-system calico-node-z4nts 1/1 Running 3 2d
kube-system calico-node-z9r9v 1/1 Running 4 2d
kube-system kube-apiserver-k8s-m1.me 1/1 Running 4 2d
kube-system kube-apiserver-k8s-m2.me 1/1 Running 5 1d
kube-system kube-apiserver-k8s-m3.me 1/1 Running 3 2d
kube-system kube-controller-manager-k8s-m1.me 1/1 Running 4 2d
kube-system kube-controller-manager-k8s-m2.me 1/1 Running 4 1d
kube-system kube-controller-manager-k8s-m3.me 1/1 Running 3 2d
kube-system kube-dns-7bd4d5fbb6-r2rnf 3/3 Running 9 2d
kube-system kube-dns-7bd4d5fbb6-zffvn 3/3 Running 9 2d
kube-system kube-proxy-k8s-m1.me 1/1 Running 3 2d
kube-system kube-proxy-k8s-m2.me 1/1 Running 3 1d
kube-system kube-proxy-k8s-m3.me 1/1 Running 3 2d
kube-system kube-scheduler-k8s-m1.me 1/1 Running 4 2d
kube-system kube-scheduler-k8s-m2.me 1/1 Running 4 1d
kube-system kube-scheduler-k8s-m3.me 1/1 Running 4 2d
kube-system kubedns-autoscaler-679b8b455-pp7jd 1/1 Running 3 2d
kube-system kubernetes-dashboard-55fdfd74b4-6z8qp 1/1 Running 0 1d
kube-system tiller-deploy-75b7d95f5c-8cmxh 1/1 Running 0 1d
nginx-ingress default-http-backend-6679b97b47-w6cx7 1/1 Running 0 11m
nginx-ingress nginx-ndxhn 0/1 CrashLoopBackOff 6 11m
nginx-ingress nginx-nk2jg 0/1 CrashLoopBackOff 6 11m
nginx-ingress nginx-rz7xj 0/1 CrashLoopBackOff 6 11m
Logs from the runner:
# kubectl logs gitlab1-gitlab-runner-55d458ccb7-g442z
+ cp /scripts/config.toml /etc/gitlab-runner/
+ /entrypoint register --non-interactive --executor kubernetes
Running in system-mode.
ERROR: Registering runner... failed runner=tQtCbx5U status=couldn't execute POST against http://gitlab1-gitlab.default:8005/api/v4/runners: Post http://gitlab1-gitlab.default:8005/api/v4/runners: dial tcp 10.233.7.205:8005: i/o timeout
PANIC: Failed to register this runner. Perhaps you are having network problems
The PVCs are fine:
# kubectl get pvc
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
gitlab1-gitlab-config-storage Bound pvc-c957bd23-644f-11e8-8f10-4ccc6a60fcbe 1Gi RWO gluster-heketi 13m
gitlab1-gitlab-postgresql-storage Bound pvc-c964e7d0-644f-11e8-8f10-4ccc6a60fcbe 30Gi RWO gluster-heketi 13m
gitlab1-gitlab-redis-storage Bound pvc-c96f9146-644f-11e8-8f10-4ccc6a60fcbe 5Gi RWO gluster-heketi 13m
gitlab1-gitlab-registry-storage Bound pvc-c959d377-644f-11e8-8f10-4ccc6a60fcbe 30Gi RWO gluster-heketi 13m
gitlab1-gitlab-storage Bound pvc-c9611ab1-644f-11e8-8f10-4ccc6a60fcbe 30Gi RWO gluster-heketi 13m
gluster1 Bound pvc-922b5dc0-6372-11e8-8f10-4ccc6a60fcbe 5Gi RWO gluster-heketi 1d
# kubectl version
Client Version: version.Info{Major:"1", Minor:"10", GitVersion:"v1.10.2", GitCommit:"81753b10df112992bf51bbc2c2f85208aad78335", GitTreeState:"clean", BuildDate:"2018-04-27T09:10:24Z", GoVersion:"go1.9.3", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"10", GitVersion:"v1.10.2", GitCommit:"81753b10df112992bf51bbc2c2f85208aad78335", GitTreeState:"clean", BuildDate:"2018-04-27T09:10:24Z", GoVersion:"go1.9.3", Compiler:"gc", Platform:"linux/amd64"}
I suspect ABAC/RBAC... but what should I do about it?
You are correct, and the error message explains exactly what is wrong. There are two paths forward: you can fix the Role and RoleBinding for the default ServiceAccount in the nginx-ingress namespace, or you can switch the Deployment to use a ServiceAccount other than default in order to assign that Deployment the specific permissions required. I recommend the latter, but the former may be less typing.
The rough version of the Role and RoleBinding lives in the nginx-ingress repo, but it may need to be adapted to your needs, including updating the apiVersion away from v1beta1.
After that change has taken place, you'll need to delete the nginx-ingress Pods in order for them to pick up their new Role and conduct whatever initialization tasks nginx does during startup.
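A minimal sketch of what that could look like if you keep the default ServiceAccount (the object names and the exact resource/verb lists here are illustrative, not the upstream manifest):
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: nginx-ingress-role
  namespace: nginx-ingress
rules:
- apiGroups: [""]
  resources: ["services", "endpoints", "secrets", "configmaps", "pods"]
  verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: nginx-ingress-rolebinding
  namespace: nginx-ingress
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: nginx-ingress-role
subjects:
- kind: ServiceAccount
  name: default
  namespace: nginx-ingress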
Separately, you will definitely want to fix this:
Post http://gitlab1-gitlab.default:8005/api/v4/runners: dial tcp 10.233.7.205:8005: i/o timeout
I can't offer more concrete advice without knowing more about your CNI setup and the state of the actual GitLab Pod, but an I/O timeout is certainly a very odd error to get for in-cluster communication.
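A quick in-cluster connectivity test against the GitLab Service can help narrow that down (a sketch; the address comes from the runner error above, and busybox is an assumed utility image):
kubectl run tmp-shell --rm -it --image=busybox --restart=Never -- \
  wget -qO- -T 5 http://gitlab1-gitlab.default:8005/api/v4/runners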

What is POD and SERVICE in kubectl commands?

I am probably missing some of the basics. The kubectl logs command usage is the following:
"kubectl logs [-f] [-p] POD [-c CONTAINER] [options]"
The list of my pods is the following:
ubuntu@master:~$ kubectl get pods --all-namespaces
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system etcd-master 1/1 Running 0 24m
kube-system kube-apiserver-master 1/1 Running 0 24m
kube-system kube-controller-manager-master 1/1 Running 0 24m
kube-system kube-discovery-982812725-3kt85 1/1 Running 0 24m
kube-system kube-dns-2247936740-kimly 3/3 Running 0 24m
kube-system kube-proxy-amd64-gwv99 1/1 Running 0 20m
kube-system kube-proxy-amd64-r08h9 1/1 Running 0 24m
kube-system kube-proxy-amd64-szl6w 1/1 Running 0 14m
kube-system kube-scheduler-master 1/1 Running 0 24m
kube-system kubernetes-dashboard-1655269645-x3uyt 1/1 Running 0 24m
kube-system weave-net-4g1g8 1/2 CrashLoopBackOff 7 14m
kube-system weave-net-8zdm3 1/2 CrashLoopBackOff 8 20m
kube-system weave-net-qm3q5 2/2 Running 0 24m
I assume POD for the logs command is anything from the second column, "NAME", above. So I tried the following commands.
ubuntu@master:~$ kubectl logs etcd-master
Error from server: pods "etcd-master" not found
ubuntu@master:~$ kubectl logs weave-net-4g1g8
Error from server: pods "weave-net-4g1g8" not found
ubuntu@master:~$ kubectl logs weave-net
Error from server: pods "weave-net" not found
ubuntu@master:~$ kubectl logs weave
Error from server: pods "weave" not found
So, what is the POD in the logs command?
I have the same question about services as well. How do I identify a SERVICE to supply to a command, for example the 'describe' command?
ubuntu@master:~$ kubectl get services --all-namespaces
NAMESPACE NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE
default kubernetes 100.64.0.1 <none> 443/TCP 40m
kube-system kube-dns 100.64.0.10 <none> 53/UDP,53/TCP 39m
kube-system kubernetes-dashboard 100.70.83.136 <nodes> 80/TCP 39m
ubuntu@master:~$ kubectl describe service kubernetes-dashboard
Error from server: services "kubernetes-dashboard" not found
ubuntu@master:~$ kubectl describe services kubernetes-dashboard
Error from server: services "kubernetes-dashboard" not found
Also, is it normal that weave-net-8zdm3 is in the CrashLoopBackOff state? It seems I have one for each connected worker. If it is not normal, how can I fix it? I have found a similar question here: kube-dns and weave-net not starting, but it does not give any practical answer.
Thanks for your help!
It seems you are running your pods in a namespace other than default.
ubuntu@master:~$ kubectl get pods --all-namespaces returns your pods, but kubectl logs etcd-master returns "not found" because kubectl logs only looks in the default namespace (it does not accept --all-namespaces). Specify the namespace explicitly, e.g. kubectl logs etcd-master --namespace=kube-system.
The same applies to your services.
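For example, using the pod and service names from your listings above:
kubectl logs etcd-master --namespace=kube-system
kubectl describe service kubernetes-dashboard --namespace=kube-system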