I have a StatefulSet with two replicas.
Its headless Service name is "gov-svc".
Its relevant fields are:
.metadata.name: sts
.metadata.namespace: default
.spec.serviceName: gov-svc
.spec.template.spec.subdomain: gov-svc
.spec.replicas: 2
Before running the StatefulSet:
kubectl get pods --all-namespaces
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system coredns-99b9bb8bd-qdnsb 1/1 Running 0 4h
kube-system etcd-minikube 1/1 Running 0 4h
kube-system kube-addon-manager-minikube 1/1 Running 0 4h
kube-system kube-apiserver-minikube 1/1 Running 0 4h
kube-system kube-controller-manager-minikube 1/1 Running 1 4h
kube-system kube-proxy-b9np6 1/1 Running 0 4h
kube-system kube-scheduler-minikube 1/1 Running 0 4h
kube-system kubernetes-dashboard-7db4dc666b-bsk8k 1/1 Running 0 4h
kube-system storage-provisioner
After both pods of this StatefulSet are running, pinging from pod sts-0 gives:
$ ping sts-0.gov-svc.default.svc.cluster.local
PING sts-0.gov-svc.default.svc.cluster.local (172.17.0.11): 56 data bytes
64 bytes from 172.17.0.11: seq=0 ttl=64 time=0.051 ms
64 bytes from 172.17.0.11: seq=1 ttl=64 time=0.444 ms
^C
--- sts-0.gov-svc.default.svc.cluster.local ping statistics ---
2 packets transmitted, 2 packets received, 0% packet loss
round-trip min/avg/max = 0.051/0.247/0.444 ms
But when I try to ping sts-1 from sts-0, it says:
$ ping sts-1.gov-svc.default.svc.cluster.local
ping: bad address 'sts-1.gov-svc.default.svc.cluster.local'
I need to ping other pods successfully by hostname. How can I do it?
You need to create a headless Service so that the replicas of a StatefulSet can reach each other by hostname. Something like:
apiVersion: v1
kind: Service
metadata:
  name: gov-svc-headless
  labels:
    your_label: your_value
spec:
  selector:
    your_label: your_value
  ports:
  - port: your_port
    name: transport
    protocol: TCP
  clusterIP: None  # <-- this is what makes the Service headless
Note:
With selectors: for headless Services that define selectors, the endpoints controller creates Endpoints records in the API and modifies the DNS configuration to return A records (addresses) that point directly to the Pods backing the Service.
Note:
Without selectors: for headless Services that do not define selectors, the endpoints controller does not create Endpoints records. However, the DNS system looks for and configures either:
CNAME records for ExternalName-type Services, or
A records for any Endpoints that share a name with the Service, for all other types.
More info: https://kubernetes.io/docs/concepts/services-networking/service/#headless-services
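Once the headless Service exists and the StatefulSet's .spec.serviceName points at it, you can verify the per-pod DNS records, for example (a sketch; it assumes nslookup is available inside the container image):
# Resolve the sibling pod's record from inside sts-0.
kubectl exec -it sts-0 -- nslookup sts-1.gov-svc-headless.default.svc.cluster.local
# Resolving the headless Service itself should return one A record per pod.
kubectl exec -it sts-0 -- nslookup gov-svc-headless.default.svc.cluster.local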
Related
I have Prometheus installed on GCP, and I'm able to do a port-forward and access the Prometheus UI.
Prometheus Pods and Endpoints on GCP:
Karans-MacBook-Pro:prometheus-yamls karanalang$ kc get pods -n monitoring -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
grafana-5ccfb68647-8fjrz 0/1 Terminated 0 28h <none> gke-strimzi-prometheus-default-pool-38ca804d-nfvm <none> <none>
grafana-5ccfb68647-h7vbr 1/1 Running 0 5h24m 10.76.0.9 gke-strimzi-prometheus-default-pool-38ca804d-zzl9 <none> <none>
prometheus-operator-85d84bb848-hw6d5 1/1 Running 0 5h24m 10.76.0.4 gke-strimzi-prometheus-default-pool-38ca804d-zzl9 <none> <none>
prometheus-operator-85d84bb848-znjs6 0/1 Terminated 0 28h <none> gke-strimzi-prometheus-default-pool-38ca804d-nfvm <none> <none>
prometheus-prometheus-0 2/2 Running 0 5h24m 10.76.0.10 gke-strimzi-prometheus-default-pool-38ca804d-zzl9 <none> <none>
prometheus-prometheus-1 2/2 Running 0 5h24m 10.76.0.7 gke-strimzi-prometheus-default-pool-38ca804d-zzl9 <none> <none>
prometheus-prometheus-2 2/2 Running 0 5h24m 10.76.0.11 gke-strimzi-prometheus-default-pool-38ca804d-zzl9 <none> <none>
Karans-MacBook-Pro:prometheus-yamls karanalang$ kc get endpoints -n monitoring
NAME ENDPOINTS AGE
grafana 10.76.0.9:3000 28h
grafana-lb 10.76.0.9:3000 54m
prometheus-lb 10.76.0.10:9090,10.76.0.11:9090,10.76.0.7:9090 155m
prometheus-nodeport 10.76.0.10:9090,10.76.0.11:9090,10.76.0.7:9090 149m
prometheus-operated 10.76.0.10:9090,10.76.0.11:9090,10.76.0.7:9090 28h
prometheus-operator 10.76.0.4:8080 29h
I've created a NodePort Service (port 30900) and also created a firewall rule allowing ingress to port 30900.
Karans-MacBook-Pro:prometheus-yamls karanalang$ kc get svc -n monitoring | grep prometheus-nodeport
prometheus-nodeport NodePort 10.80.7.195 <none> 9090:30900/TCP 146m
However, when I try to access http://<node_ip>:30900, the URL is not reachable.
Telnet to the host/port is not working either:
Karans-MacBook-Pro:prometheus-yamls karanalang$ telnet 10.76.0.11 30900
Trying 10.76.0.11...
Karans-MacBook-Pro:prometheus-yamls karanalang$ ping 10.76.0.7
PING 10.76.0.7 (10.76.0.7): 56 data bytes
Request timeout for icmp_seq 0
Here is the YAML used to create the NodePort Service (in the monitoring namespace):
apiVersion: v1
kind: Service
metadata:
  name: prometheus-nodeport
spec:
  type: NodePort
  ports:
  - name: web
    nodePort: 30900
    port: 9090
    protocol: TCP
    targetPort: 9090
  selector:
    prometheus: prometheus
Any ideas on what the issue is?
How do I debug/resolve this?
Karans-MacBook-Pro:prometheus-yamls karanalang$ telnet 10.76.0.11 30900
Trying 10.76.0.11...
Karans-MacBook-Pro:prometheus-yamls karanalang$ ping 10.76.0.7
PING 10.76.0.7 (10.76.0.7): 56 data bytes
The IPs you used above appear to be in the Pod CIDR range, judging from the Endpoints output in the question. Those are not the worker node IPs. You first need to check that you can reach one of the worker nodes from the network you are on now (home? VPN? internet?), and that the worker node has the correct port (30900) open.
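For example, a rough way to find a worker node's external IP and test the NodePort from your workstation (a sketch, assuming the GKE nodes have external IPs and your firewall rule allows tcp:30900 to them):
# The EXTERNAL-IP column is what you should combine with the NodePort.
kubectl get nodes -o wide
# Grab the first node's external IP and hit the NodePort from outside the cluster.
NODE_IP=$(kubectl get nodes -o jsonpath='{.items[0].status.addresses[?(@.type=="ExternalIP")].address}')
curl -v "http://${NODE_IP}:30900"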
I cannot access my application in the k8s cluster.
With NodePort everything works. If I use an Ingress controller, I can see that the Ingress is created successfully and I am able to ping its IP. If I try to telnet, it says connection refused, and I cannot access the application. What am I missing? I do not see any exceptions in the ingress pod's logs.
kubectl get ing -n test
NAME CLASS HOSTS ADDRESS PORTS AGE
web-ingress <none> * 192.168.0.102 80 44m
ping 192.168.0.102
PING 192.168.0.102 (192.168.0.102) 56(84) bytes of data.
64 bytes from 192.168.0.102: icmp_seq=1 ttl=64 time=0.795 ms
64 bytes from 192.168.0.102: icmp_seq=2 ttl=64 time=0.860 ms
64 bytes from 192.168.0.102: icmp_seq=3 ttl=64 time=0.631 ms
^C
--- 192.168.0.102 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2038ms
rtt min/avg/max/mdev = 0.631/0.762/0.860/0.096 ms
telnet 192.168.0.102 80
Trying 192.168.0.102...
telnet: Unable to connect to remote host: Connection refused
kubectl get all -n ingress-nginx
shows this
NAME READY STATUS RESTARTS AGE
pod/ingress-nginx-admission-create-htvkh 0/1 Completed 0 99m
pod/ingress-nginx-admission-patch-cf8gj 0/1 Completed 0 99m
pod/ingress-nginx-controller-7fd7d8df56-kll4v 1/1 Running 0 99m
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/ingress-nginx-controller NodePort 10.102.220.87 <none> 80:31692/TCP,443:32736/TCP 99m
service/ingress-nginx-controller-admission ClusterIP 10.106.159.230 <none> 443/TCP 99m
NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/ingress-nginx-controller 1/1 1 1 99m
NAME DESIRED CURRENT READY AGE
replicaset.apps/ingress-nginx-controller-7fd7d8df56 1 1 1 99m
NAME COMPLETIONS DURATION AGE
job.batch/ingress-nginx-admission-create 1/1 7s 99m
job.batch/ingress-nginx-admission-patch 1/1 8s 99m
Answer
The IP from kubectl get ing -n test is not an externally accessible address that you should be using.
Your NGINX Ingress Controller Deployment has a Service deployed alongside it. You can use the external IP of this Service (if it has one) to hit your Ingress Controller.
Because your Service is of NodePort type (and does not show an external IP), you must address the Ingress Controller Pods through your cluster's Node IPs. You would need to track which Node the Pod is on, then find the Node's IP. Here is an example of doing this:
NODE=$(kubectl get pods -n ingress-nginx -o wide | grep "ingress-nginx-controller" | awk '{print $7}')   # NODE column
NODE_IP=$(kubectl get nodes "$NODE" -o wide | grep Ready | awk '{print $7}')   # EXTERNAL-IP column (use $6 for INTERNAL-IP if there is no external IP)
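Then, as a rough check (using the NODE_IP from above, and assuming your machine can reach the node network and nothing blocks the NodePorts), hit the controller's HTTP NodePort from your kubectl get all output (31692):
curl -v "http://${NODE_IP}:31692/"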
More Info
If your cluster is managed (i.e. GKE/Azure/AWS), you can use a LoadBalancer Service to provide an external IP to hit the Ingress Controller.
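For instance, on a managed cluster you could switch the existing controller Service to a LoadBalancer (a sketch; the cloud provider then allocates the external IP for you):
# Patch the Service type and wait for an EXTERNAL-IP to appear.
kubectl patch svc ingress-nginx-controller -n ingress-nginx -p '{"spec":{"type":"LoadBalancer"}}'
kubectl get svc ingress-nginx-controller -n ingress-nginx -w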
I am working through a tutorial on creating a Kubernetes Service to point to the Ambassador deployment.
Tutorial: https://www.bogotobogo.com/DevOps/Docker/Docker-Envoy-Ambassador-API-Gateway-for-Kubernetes.php
On running the command
curl $(minikube service --url ambassador)/httpbin/ip
I'm getting the errors:
curl: (7) Failed to connect to 192.168.99.100 port 30790: Connection refused
curl: (3) <url> malformed
I can actually remove the error of
curl: (3) <url> malformed
by running
minikube service --url ambassador
http://192.168.99.100:30790
and then
curl http://192.168.99.100:30790/httpbin/ip
I've already tried this answer: "curl: (7) Failed to connect to 192.168.99.100 port 31591: Connection refused". The step mentioned in that answer is already in the blog, and it didn't work.
This is the code from the blog, for ambassador-svc.yaml
---
apiVersion: v1
kind: Service
metadata:
  labels:
    service: ambassador
  name: ambassador
  annotations:
    getambassador.io/config: |
      ---
      apiVersion: ambassador/v0
      kind: Mapping
      name: httpbin_mapping
      prefix: /httpbin/
      service: httpbin.org:80
      host_rewrite: httpbin.org
spec:
  type: LoadBalancer
  ports:
  - name: ambassador
    port: 80
    targetPort: 80
  selector:
    service: ambassador
Could this be a problem related to the VM?
Also, I tried to work through this tutorial first but unfortunately got the same error.
Let me know if anything else is needed from my side.
Edit:
1. As asked in the comments, here is the output of
kubectl get pods --namespace=kube-system
NAME READY STATUS RESTARTS AGE
coredns-fb8b8dccf-qkxwm 1/1 Running 0 5h16m
coredns-fb8b8dccf-rrn4f 1/1 Running 0 5h16m
etcd-minikube 1/1 Running 0 5h15m
kube-addon-manager-minikube 1/1 Running 4 5h15m
kube-apiserver-minikube 1/1 Running 0 5h15m
kube-controller-manager-minikube 1/1 Running 0 3h17m
kube-proxy-wfbxs 1/1 Running 0 5h16m
kube-scheduler-minikube 1/1 Running 0 5h15m
storage-provisioner 1/1 Running 0 5h16m
After running
kubectl apply -f https://docs.projectcalico.org/v3.7/manifests/calico.yaml
NAME READY STATUS RESTARTS AGE
calico-kube-controllers-78f8f67c4d-zqtl2 1/1 Running 0 65s
calico-node-27lcq 1/1 Running 0 65s
coredns-fb8b8dccf-qkxwm 1/1 Running 2 22h
coredns-fb8b8dccf-rrn4f 1/1 Running 2 22h
etcd-minikube 1/1 Running 1 22h
kube-addon-manager-minikube 1/1 Running 5 22h
kube-apiserver-minikube 1/1 Running 1 22h
kube-controller-manager-minikube 1/1 Running 0 8m27s
kube-proxy-wfbxs 1/1 Running 1 22h
kube-scheduler-minikube 1/1 Running 1 22h
storage-provisioner 1/1 Running 2 22h
kubectl get pods --namespace=kube-system should include a network add-on (CNI) pod, and yours does not have one. So you have not set up the pod network that DNS needs.
Try the Calico network add-on by running:
kubectl apply -f https://docs.projectcalico.org/v3.7/manifests/calico.yaml
Then check kubectl get pods --namespace=kube-system again.
You should get output like this:
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system calico-kube-controllers-6ff88bf6d4-tgtzb 1/1 Running 0 2m45s
kube-system calico-node-24h85 1/1 Running 0 2m43s
kube-system coredns-846jhw23g9-9af73 1/1 Running 0 4m5s
kube-system coredns-846jhw23g9-hmswk 1/1 Running 0 4m5s
kube-system etcd-jbaker-1 1/1 Running 0 6m22s
kube-system kube-apiserver-jbaker-1 1/1 Running 0 6m12s
kube-system kube-controller-manager-jbaker-1 1/1 Running 0 6m16s
kube-system kube-proxy-8fzp2 1/1 Running 0 5m16s
kube-system kube-scheduler-jbaker-1 1/1 Running 0 5m41s
Here is a checklist to troubleshoot the problem:
Have you created at least one Listener (CRD)?
If you don't create at least one Listener, the edge-stack pod won't respond on any port. So create a Listener object as described in the official docs: https://www.getambassador.io/docs/edge-stack/latest/tutorials/getting-started/
Note that the default ports defined in the Ambassador Deployment, and in turn in the edge-stack pod, are 8080 and 8443, so avoid changing them unless you know what you are doing. You can check whether any Listener is defined with kubectl get Listener -n ambassador, or just kubectl get Listener if you used the default namespace by mistake.
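For reference, a minimal Listener roughly along the lines of the one in the getting-started guide (a sketch; adjust the namespace and port to your install):
apiVersion: getambassador.io/v3alpha1
kind: Listener
metadata:
  name: edge-stack-listener-8080
  namespace: ambassador
spec:
  port: 8080                # must match the port the edge-stack pods listen on
  protocol: HTTP
  securityModel: XFP
  hostBinding:
    namespace:
      from: ALL             # accept Hosts/Mappings from all namespaces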
Have you defined at least one Service of type LoadBalancer that references the Ambassador Service?
Normally this is not needed, because once you have deployed Ambassador you should have a Service named edge-stack in the ambassador Namespace which is configured as a LoadBalancer, so it exposes ports in the 30000-32767 range and redirects the traffic to 8080 and 8443. If you manually created a Service of type LoadBalancer, make sure to use the right values in the port and targetPort fields; the target ports are those used by the edge-stack Deployment, which by default are 8080 and 8443.
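If you do create one manually, it could look roughly like this (a sketch; the selector label below is an assumption, so check it against the labels on your edge-stack pods with kubectl get pods -n ambassador --show-labels before using it):
apiVersion: v1
kind: Service
metadata:
  name: edge-stack-lb       # hypothetical name
  namespace: ambassador
spec:
  type: LoadBalancer
  selector:
    service: edge-stack     # assumption: verify against the actual pod labels
  ports:
  - name: http
    port: 80
    targetPort: 8080        # default edge-stack HTTP port
  - name: https
    port: 443
    targetPort: 8443        # default edge-stack HTTPS port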
Check that the edge-stack pod responds to requests on its ports.
The easiest way is to use the web-based UI of Kubernetes or of your cloud provider and exec a shell in the edge-stack pod. With the default Kubernetes dashboard you can just open the pod and click the vertical three dots -> Exec.
If you are running Ambassador locally you can use eval $(minikube docker-env), find the related container with docker ps | grep edge-stack and, once you have its id, exec a shell with docker exec -it <id> bash.
Finally run curl -Lki http://127.0.0.1:8080. You should get something like this:
HTTP/1.1 404 Not Found
date: Tue, XX Xxx XXXX XX:XX:XX XXX
server: envoy
content-length: 0
If curl gets a response instead of "connection refused", the service is running successfully in the pod and the problem must be in the Service, Deployment or Listener configuration. You can try to use the cluster IP of the edge-stack Service instead of 127.0.0.1, but you need to run the curl command from a pod other than edge-stack. You can use your own pod or exec into the other Ambassador pods, which are edge-stack-agent and edge-stack-redis.
I run the install:
helm install --name gitlab1 -f values.yaml gitlab/gitlab-omnibus
I see the Pods can't start, and I see the error:
no service with name nginx-ingress/default-http-backend found: services "default-http-backend" is forbidden: User "system:serviceaccount:nginx-ingress:default" cannot get services in the namespace "nginx-ingress"
I'm thinking ABAC/RBAC... but what should I do about it?
Logs from nginx pod:
# kubectl logs nginx-ndxhn --namespace nginx-ingress
[dumb-init] Unable to detach from controlling tty (errno=25 Inappropriate ioctl for device).
[dumb-init] Child spawned with PID 7.
[dumb-init] Unable to attach to controlling tty (errno=25 Inappropriate ioctl for device).
[dumb-init] setsid complete.
I0530 21:30:23.232676 7 launch.go:105] &{NGINX 0.9.0-beta.11 git-a3131c5 https://github.com/kubernetes/ingress}
I0530 21:30:23.232749 7 launch.go:108] Watching for ingress class: nginx
I0530 21:30:23.233708 7 launch.go:262] Creating API server client for https://10.233.0.1:443
I0530 21:30:23.234080 7 nginx.go:182] starting NGINX process...
F0530 21:30:23.251587 7 launch.go:122] no service with name nginx-ingress/default-http-backend found: services "default-http-backend" is forbidden: User "system:serviceaccount:nginx-ingress:default" cannot get services in the namespace "nginx-ingress"
[dumb-init] Received signal 17.
[dumb-init] A child with PID 7 exited with exit status 255.
[dumb-init] Forwarded signal 15 to children.
[dumb-init] Child exited with status 255. Goodbye.
# kubectl get svc -w --namespace nginx-ingress nginx
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
nginx LoadBalancer 10.233.25.0 <pending> 80:32048/TCP,443:31430/TCP,22:31636/TCP 9m
# kubectl describe svc --namespace nginx-ingress nginx
Name: nginx
Namespace: nginx-ingress
Labels: <none>
Annotations: service.beta.kubernetes.io/external-traffic=OnlyLocal
Selector: app=nginx
Type: LoadBalancer
IP: 10.233.25.0
IP: 1.1.1.1
Port: http 80/TCP
TargetPort: 80/TCP
NodePort: http 32048/TCP
Endpoints:
Port: https 443/TCP
TargetPort: 443/TCP
NodePort: https 31430/TCP
Endpoints:
Port: git 22/TCP
TargetPort: 22/TCP
NodePort: git 31636/TCP
Endpoints:
Session Affinity: None
External Traffic Policy: Cluster
Events: <none>
# kubectl get pods --all-namespaces
NAMESPACE NAME READY STATUS RESTARTS AGE
default gitlab1-gitlab-75576c4589-lnf56 0/1 Running 2 11m
default gitlab1-gitlab-postgresql-f66555d65-nqvqx 1/1 Running 0 11m
default gitlab1-gitlab-redis-58cf598657-ksptm 1/1 Running 0 11m
default gitlab1-gitlab-runner-55d458ccb7-g442z 0/1 CrashLoopBackOff 6 11m
default glusterfs-9cfcr 1/1 Running 0 1d
default glusterfs-k422g 1/1 Running 0 1d
default glusterfs-tjtvq 1/1 Running 0 1d
default heketi-75dcfb7d44-thxpm 1/1 Running 0 1d
default nginx-nginx-ingress-controller-775b5b9c6d-hhvlr 1/1 Running 0 2h
default nginx-nginx-ingress-default-backend-7bb66746b9-mzgcb 1/1 Running 0 2h
default nginx-pod1 1/1 Running 0 1d
kube-lego kube-lego-58c9f5788d-pdfb5 1/1 Running 0 11m
kube-system calico-node-hq2v7 1/1 Running 3 2d
kube-system calico-node-z4nts 1/1 Running 3 2d
kube-system calico-node-z9r9v 1/1 Running 4 2d
kube-system kube-apiserver-k8s-m1.me 1/1 Running 4 2d
kube-system kube-apiserver-k8s-m2.me 1/1 Running 5 1d
kube-system kube-apiserver-k8s-m3.me 1/1 Running 3 2d
kube-system kube-controller-manager-k8s-m1.me 1/1 Running 4 2d
kube-system kube-controller-manager-k8s-m2.me 1/1 Running 4 1d
kube-system kube-controller-manager-k8s-m3.me 1/1 Running 3 2d
kube-system kube-dns-7bd4d5fbb6-r2rnf 3/3 Running 9 2d
kube-system kube-dns-7bd4d5fbb6-zffvn 3/3 Running 9 2d
kube-system kube-proxy-k8s-m1.me 1/1 Running 3 2d
kube-system kube-proxy-k8s-m2.me 1/1 Running 3 1d
kube-system kube-proxy-k8s-m3.me 1/1 Running 3 2d
kube-system kube-scheduler-k8s-m1.me 1/1 Running 4 2d
kube-system kube-scheduler-k8s-m2.me 1/1 Running 4 1d
kube-system kube-scheduler-k8s-m3.me 1/1 Running 4 2d
kube-system kubedns-autoscaler-679b8b455-pp7jd 1/1 Running 3 2d
kube-system kubernetes-dashboard-55fdfd74b4-6z8qp 1/1 Running 0 1d
kube-system tiller-deploy-75b7d95f5c-8cmxh 1/1 Running 0 1d
nginx-ingress default-http-backend-6679b97b47-w6cx7 1/1 Running 0 11m
nginx-ingress nginx-ndxhn 0/1 CrashLoopBackOff 6 11m
nginx-ingress nginx-nk2jg 0/1 CrashLoopBackOff 6 11m
nginx-ingress nginx-rz7xj 0/1 CrashLoopBackOff 6 11m
Logs on runner:
# kubectl logs gitlab1-gitlab-runner-55d458ccb7-g442z
+ cp /scripts/config.toml /etc/gitlab-runner/
+ /entrypoint register --non-interactive --executor kubernetes
Running in system-mode.
ERROR: Registering runner... failed runner=tQtCbx5U status=couldn't execute POST against http://gitlab1-gitlab.default:8005/api/v4/runners: Post http://gitlab1-gitlab.default:8005/api/v4/runners: dial tcp 10.233.7.205:8005: i/o timeout
PANIC: Failed to register this runner. Perhaps you are having network problems
The PVCs are fine:
# kubectl get pvc
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
gitlab1-gitlab-config-storage Bound pvc-c957bd23-644f-11e8-8f10-4ccc6a60fcbe 1Gi RWO gluster-heketi 13m
gitlab1-gitlab-postgresql-storage Bound pvc-c964e7d0-644f-11e8-8f10-4ccc6a60fcbe 30Gi RWO gluster-heketi 13m
gitlab1-gitlab-redis-storage Bound pvc-c96f9146-644f-11e8-8f10-4ccc6a60fcbe 5Gi RWO gluster-heketi 13m
gitlab1-gitlab-registry-storage Bound pvc-c959d377-644f-11e8-8f10-4ccc6a60fcbe 30Gi RWO gluster-heketi 13m
gitlab1-gitlab-storage Bound pvc-c9611ab1-644f-11e8-8f10-4ccc6a60fcbe 30Gi RWO gluster-heketi 13m
gluster1 Bound pvc-922b5dc0-6372-11e8-8f10-4ccc6a60fcbe 5Gi RWO gluster-heketi 1d
# kubectl version
Client Version: version.Info{Major:"1", Minor:"10", GitVersion:"v1.10.2", GitCommit:"81753b10df112992bf51bbc2c2f85208aad78335", GitTreeState:"clean", BuildDate:"2018-04-27T09:10:24Z", GoVersion:"go1.9.3", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"10", GitVersion:"v1.10.2", GitCommit:"81753b10df112992bf51bbc2c2f85208aad78335", GitTreeState:"clean", BuildDate:"2018-04-27T09:10:24Z", GoVersion:"go1.9.3", Compiler:"gc", Platform:"linux/amd64"}
I'm thinking ABAC/RBAC... but what should I do about it?
You are correct, and the error message explains exactly what is wrong. There are two paths forward: you can fix the Role and RoleBinding for the default ServiceAccount in the nginx-ingress namespace, or you can switch the Deployment to use a ServiceAccount other than default in order to assign that Deployment the specific permissions required. I recommend the latter, but the former may be less typing.
A rough version of the Role and RoleBinding lives in the nginx-ingress repo, but it may need to be adapted to your needs, including updating the apiVersion away from v1beta1.
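As a rough, namespace-scoped sketch of the permission the error message complains about (the real controller needs more rules, so prefer the full RBAC manifests from that repo):
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: nginx-ingress-role          # hypothetical name
  namespace: nginx-ingress
rules:
- apiGroups: [""]
  resources: ["services", "endpoints", "configmaps", "secrets", "pods"]
  verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: nginx-ingress-rolebinding   # hypothetical name
  namespace: nginx-ingress
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: nginx-ingress-role
subjects:
- kind: ServiceAccount
  name: default                     # or a dedicated ServiceAccount, as recommended above
  namespace: nginx-ingress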
After that change has taken place, you'll need to delete the nginx-ingress Pods in order for them to pick up their new Role and conduct whatever initialization tasks nginx does during startup.
Separately, you will for sure want to fix this business:
Post http://gitlab1-gitlab.default:8005/api/v4/runners: dial tcp 10.233.7.205:8005: i/o timeout
I can't offer more concrete actions without knowing more about your CNI setup and the state of affairs of the actual GitLab Pod, but an I/O timeout is certainly a very strange error to get for in-cluster communication.
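As a rough starting point (a sketch; adjust the Service name and port to whatever kubectl get svc gitlab1-gitlab shows), you can test DNS and connectivity from a throwaway pod:
# Does the GitLab Service name resolve from inside the cluster?
kubectl run -it --rm debug --image=busybox:1.28 --restart=Never -- nslookup gitlab1-gitlab.default
# Can we open a connection to port 8005? Any HTTP response at all proves connectivity.
kubectl run -it --rm debug --image=busybox:1.28 --restart=Never -- wget -O- -T 5 http://gitlab1-gitlab.default:8005/api/v4/runners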
I have a Kubernetes cluster with 5 nodes. When I add a simple nginx pod, it is scheduled to one of the nodes but it does not start up. It does not even pull the image.
This is the nginx.yaml file:
apiVersion: v1
kind: Pod
metadata:
  name: nginx
spec:
  containers:
  - name: nginx
    image: nginx
    ports:
    - containerPort: 80
When I describe the pod there is one event: Successfully assigned busybox to up02. When I log in to up02 and check whether any images were pulled, I see the image wasn't pulled, so I pulled it manually (I thought maybe it needed a kick start ;) ).
The pod always stays in the ContainerCreating state. It's not only this pod; the problem occurs with any pod I try to add.
There are some pods running on the cluster that are necessary for Kubernetes to operate:
up#up01:~$ kubectl get pods --all-namespaces
NAMESPACE NAME READY STATUS RESTARTS AGE
default busybox 0/1 ContainerCreating 0 11m
default nginx 0/1 ContainerCreating 0 22m
kube-system dummy-2088944543-n1cd5 1/1 Running 0 5d
kube-system etcd-up01 1/1 Running 0 5d
kube-system kube-apiserver-up01 1/1 Running 0 5d
kube-system kube-controller-manager-up01 1/1 Running 0 5d
kube-system kube-discovery-1769846148-xfpls 1/1 Running 0 5d
kube-system kube-dns-2924299975-5rzz8 4/4 Running 0 5d
kube-system kube-proxy-17bpl 1/1 Running 2 3d
kube-system kube-proxy-3pk63 1/1 Running 0 3d
kube-system kube-proxy-h3wrj 1/1 Running 0 5d
kube-system kube-proxy-wzqv4 1/1 Running 0 3d
kube-system kube-proxy-z3xxx 1/1 Running 0 3d
kube-system kube-scheduler-up01 1/1 Running 0 5d
kube-system kubernetes-dashboard-3203831700-3xfbd 1/1 Running 0 5d
kube-system weave-net-6c0nr 2/2 Running 0 3d
kube-system weave-net-dchhf 2/2 Running 0 5d
kube-system weave-net-hshvg 2/2 Running 4 3d
kube-system weave-net-n684c 2/2 Running 1 3d
kube-system weave-net-r5319 2/2 Running 0 3d
You can do
kubectl describe pods <pod>
to get more info on what's happening.
Can you recreate the nginx pod in the kube-system namespace?
kubectl create --namespace kube-system -f nginx.yaml
This should fix your problem.
Second, do you have a proxy in your environment? Take a look at that as well.
Make sure that your namespace and service account information is correct. If you've configured your Services or Deployments to use a namespace or service account, that namespace needs to exist.
If you configured them to use a non-default service account, then that has to exist as well, and the service account should be created after the namespace.
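For example (hypothetical names, just to show the ordering):
# Create the namespace first, then the service account inside it.
kubectl create namespace my-app
kubectl create serviceaccount my-sa --namespace my-app
# Then reference both in the pod spec:
#   metadata.namespace: my-app
#   spec.serviceAccountName: my-sa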
You shouldn't necessarily be using the kube-system namespace; namespaces exist so that there can be more than one of them, to separate and control workloads inside a cluster.
You should also potentially set permissions for your namespace; read this:
https://kubernetes.io/docs/reference/access-authn-authz/rbac/#service-account-permissions