I started a Kubernetes cluster using kubeadm on two servers rented from DigitalOcean, with Flannel as the CNI. After launching the cluster, I followed this tutorial to create a deployment and a service.
$ kubectl describe svc example-service
Name: example-service
Namespace: default
Labels: <none>
Annotations: <none>
Selector: run=load-balancer-example
Type: NodePort
IP: 10.99.217.181
Port: <unset> 8080/TCP
TargetPort: 8080/TCP
NodePort: <unset> 31570/TCP
Endpoints: 10.244.1.2:8080,10.244.1.3:8080
Session Affinity: None
External Traffic Policy: Cluster
Events: <none>
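For reference, a Service manifest that would produce the describe output above looks roughly like this (a sketch reconstructed from the fields shown, not necessarily the exact manifest from the tutorial):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: example-service
spec:
  type: NodePort
  selector:
    run: load-balancer-example
  ports:
  - port: 8080        # "Port" in the describe output (cluster-IP port)
    targetPort: 8080  # "TargetPort": the port the pods listen on
    # NodePort 31570 was auto-assigned; it could also be pinned here
```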
Trying to access the pods from the master node (server1):
$ curl 10.244.1.2:8080
curl: (7) Failed to connect to 10.244.1.2 port 8080: Connection timed out
$ curl 10.244.1.3:8080
curl: (7) Failed to connect to 10.244.1.3 port 8080: Connection timed out
$ curl 10.99.217.181:8080
curl: (7) Failed to connect to 10.99.217.181 port 8080: Connection timed out
$ curl [server1-ip]:31570
curl: (7) Failed to connect to [server1-ip] port 31570: Connection timed out
$ curl [server2-ip]:31570
curl: (7) Failed to connect to [server2-ip] port 31570: Connection timed out
Trying to access the pods from the worker node (server2):
$ curl 10.244.1.2:8080
Hello Kubernetes!
$ curl 10.244.1.3:8080
Hello Kubernetes!
$ curl 10.99.217.181:8080
Hello Kubernetes!
$ curl [server1-ip]:31570
curl: (7) Failed to connect to [server1-ip] port 31570: Connection timed out
$ curl [server2-ip]:31570
Hello Kubernetes!
I've deployed the Wazuh API on my Kubernetes cluster, and I cannot reach port 55000 from outside the pod; it says "curl: (7) Failed to connect to wazuh-master port 55000: Connection refused"
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
wazuh-manager-worker-0 1/1 Running 0 4m47s 172.16.0.231 10.0.0.10 <none> <none>
wazuh-master-0 1/1 Running 0 4m47s 172.16.0.230 10.0.0.10 <none> <none>
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
wazuh-cluster ClusterIP None <none> 1516/TCP 53s
wazuh-master ClusterIP 10.247.82.29 <none> 1515/TCP,55000/TCP 53s
wazuh-workers LoadBalancer 10.247.70.29 <pending> 1514:32371/TCP 53s
I have a couple of curl test scenarios:
1. curl the name of the pod "wazuh-master-0" - Could not resolve host: wazuh-master-0
curl -u user:pass -k -X GET "https://wazuh-master-0:55000/security/user/authenticate?raw=true"
curl: (6) Could not resolve host: wazuh-master-0
2. curl the name of the service of the pod "wazuh-master" - Failed to connect to wazuh-master port 55000: Connection refused
curl -u user:pass -k -X GET "https://wazuh-master:55000/security/user/authenticate?raw=true"
curl: (7) Failed to connect to wazuh-master port 55000: Connection refused
3. curl the IP of the pod "wazuh-master-0 172.16.0.230" - successful
curl -u user:pass -k -X GET "https://172.16.0.230:55000/security/user/authenticate?raw=true"
eyJ0eXAiOiJKV1QiLCJhbGciOiJIUasfsdfgfhgsdoIEFQSSBSRVhgfhfghfghgfasasdasNUIiwibmJmIjoxNjYzMTQ3ODcxLCJleHAiOjE2NjMxNDg3NzEsInN1YiI6IndhenV
wazuh-master-svc.yaml
apiVersion: v1
kind: Service
metadata:
  name: wazuh-master
  labels:
    app: wazuh-manager
spec:
  type: ClusterIP
  selector:
    app: wazuh-manager
    node-type: LoadBalancer
  ports:
    - name: registration
      port: 1515
      targetPort: 1515
    - name: api
      port: 55000
      targetPort: 55000
What am I doing wrong?
I've fixed it: I removed the line below from the spec's selector:
node-type: LoadBalancer
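With that line removed, the service's selector matches the pods' labels again, so the service gets endpoints and the connection-refused error goes away. The corrected wazuh-master-svc.yaml would look like this (a sketch based on the manifest above):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: wazuh-master
  labels:
    app: wazuh-manager
spec:
  type: ClusterIP
  selector:
    app: wazuh-manager  # matches the pods; the stray node-type key is gone
  ports:
    - name: registration
      port: 1515
      targetPort: 1515
    - name: api
      port: 55000
      targetPort: 55000
```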
I am trying to set up pods in Kubernetes, but they seem highly unstable: the responses from the pods appear completely random.
This is the output of an Apache pod running in my k3s cluster (1 master, 3 workers).
sebastian@kobol:~$ curl -k http://192.168.30.13:30081
<html><body><h1>It works!</h1></body></html>
sebastian@kobol:~$ curl -k http://192.168.30.13:30081
curl: (7) Failed to connect to 192.168.30.13 port 30081: No route to host
sebastian@kobol:~$ curl -k http://192.168.30.13:30081
curl: (7) Failed to connect to 192.168.30.13 port 30081: No route to host
sebastian@kobol:~$ curl -k http://192.168.30.13:30081
curl: (7) Failed to connect to 192.168.30.13 port 30081: No route to host
sebastian@kobol:~$ curl -k http://192.168.30.13:30081
curl: (7) Failed to connect to 192.168.30.13 port 30081: No route to host
sebastian@kobol:~$ curl -k http://192.168.30.13:30081
curl: (7) Failed to connect to 192.168.30.13 port 30081: No route to host
sebastian@kobol:~$ curl -k http://192.168.30.13:30081
curl: (7) Failed to connect to 192.168.30.13 port 30081: No route to host
sebastian@kobol:~$ curl -k http://192.168.30.13:30081
curl: (7) Failed to connect to 192.168.30.13 port 30081: No route to host
sebastian@kobol:~$ curl -k http://192.168.30.13:30081
curl: (7) Failed to connect to 192.168.30.13 port 30081: No route to host
sebastian@kobol:~$ curl -k http://192.168.30.13:30081
curl: (7) Failed to connect to 192.168.30.13 port 30081: No route to host
sebastian@kobol:~$ curl -k http://192.168.30.13:30081
curl: (7) Failed to connect to 192.168.30.13 port 30081: No route to host
sebastian@kobol:~$ curl -k http://192.168.30.13:30081
curl: (7) Failed to connect to 192.168.30.13 port 30081: No route to host
sebastian@kobol:~$ curl -k http://192.168.30.13:30081
<html><body><h1>It works!</h1></body></html>
sebastian@kobol:~$ curl -k http://192.168.30.13:30081
<html><body><h1>It works!</h1></body></html>
sebastian@kobol:~$ curl -k http://192.168.30.13:30081
<html><body><h1>It works!</h1></body></html>
sebastian@kobol:~$ curl -k http://192.168.30.13:30081
<html><body><h1>It works!</h1></body></html>
sebastian@kobol:~$ curl -k http://192.168.30.13:30081
curl: (7) Failed to connect to 192.168.30.13 port 30081: No route to host
sebastian@kobol:~$ curl -k http://192.168.30.13:30081
curl: (7) Failed to connect to 192.168.30.13 port 30081: No route to host
sebastian@kobol:~$ curl -k http://192.168.30.13:30081
<html><body><h1>It works!</h1></body></html>
sebastian@kobol:~$ curl -k http://192.168.30.13:30081
curl: (7) Failed to connect to 192.168.30.13 port 30081: No route to host
sebastian@kobol:~$ curl -k http://192.168.30.13:30081
<html><body><h1>It works!</h1></body></html>
sebastian@kobol:~$ curl -k http://192.168.30.13:30081
curl: (7) Failed to connect to 192.168.30.13 port 30081: No route to host
sebastian@kobol:~$ curl -k http://192.168.30.13:30081
<html><body><h1>It works!</h1></body></html>
sebastian@kobol:~$ curl -k http://192.168.30.13:30081
<html><body><h1>It works!</h1></body></html>
sebastian@kobol:~$ curl -k http://192.168.30.13:30081
curl: (7) Failed to connect to 192.168.30.13 port 30081: No route to host
The responses come from one of my worker nodes, but the other two worker nodes behave exactly the same. I would expect an Apache instance to be running on each worker node (the deployment config specifies replicas: 3).
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: apache-deployment
  namespace: webservers
  labels:
    app: apache
spec:
  replicas: 3
  selector:
    matchLabels:
      app: apache
  template:
    metadata:
      labels:
        app: apache
    spec:
      containers:
      - name: apache
        image: httpd:2.4
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: apache-svc
  namespace: webservers
spec:
  type: NodePort
  ports:
  - port: 80
    targetPort: 80
    protocol: TCP
    name: http
    nodePort: 30081
  selector:
    app: apache
On top of that, I am trying to set up an Ingress for this Apache service using the NGINX ingress controller, but all I ever get is "404 page not found".
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: webservers-ingress
  namespace: webservers
  annotations:
    kubernetes.io/ingress.class: nginx  # use the shared ingress-nginx
    nginx.ingress.kubernetes.io/rewrite-target: /
    #nginx.ingress.kubernetes.io/service-upstream: "true"
spec:
  rules:
  - host: apache.v-kube-cluster.local
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: apache-svc
            port:
              number: 80
Can anyone give me a hint on how I can stabilize my environment?
UPDATE #1: Some information about my cluster and pods after deploying the applications. The apache and nginx pods and services are the interesting ones.
sudo kubectl --kubeconfig /etc/rancher/k3s/k3s.yaml get nodes
NAME STATUS ROLES AGE VERSION
v-k3s-worker-1 Ready <none> 13m v1.21.4+k3s1
v-k3s-worker-3 Ready <none> 8m21s v1.21.4+k3s1
v-k3s-worker-2 Ready <none> 10m v1.21.4+k3s1
v-k3s-master Ready control-plane,master 16m v1.21.4+k3s1
kubectl --kubeconfig /etc/rancher/k3s/k3s.yaml get pod --all-namespaces -o=custom-columns=NAME:.metadata.name,STATUS:.status.phase,NODE:.spec.nodeName
NAME STATUS NODE
helm-install-traefik-crd-jf848 Succeeded v-k3s-master
helm-install-traefik-9mhgg Succeeded v-k3s-master
traefik-97b44b794-trz76 Running v-k3s-master
local-path-provisioner-5ff76fc89d-h4hlt Running v-k3s-master
metrics-server-86cbb8457f-c95bt Running v-k3s-master
coredns-7448499f4d-pjg4c Running v-k3s-master
svclb-traefik-jk2vk Running v-k3s-master
svclb-traefik-bnhv9 Running v-k3s-worker-1
svclb-traefik-v8wm6 Running v-k3s-worker-2
svclb-traefik-qrg4r Running v-k3s-worker-3
ingress-nginx-admission-create-hv6r9 Succeeded v-k3s-worker-3
ingress-nginx-admission-patch-dltfr Succeeded v-k3s-worker-2
dashboard-metrics-scraper-856586f554-rsqnx Running v-k3s-worker-3
ingress-nginx-controller-8cf5559f8-5dqb8 Running v-k3s-worker-1
kubernetes-dashboard-67484c44f6-ztgpc Running v-k3s-worker-2
nginx-deployment-757fbf9d6-kzjtg Running v-k3s-worker-1
apache-deployment-5df5f65c6f-zwfqc Running v-k3s-worker-2
apache-deployment-5df5f65c6f-qbbld Running v-k3s-worker-3
nginx-deployment-757fbf9d6-k5rq7 Running v-k3s-worker-3
nginx-deployment-757fbf9d6-h5kln Running v-k3s-master
apache-deployment-5df5f65c6f-p6ckr Running v-k3s-worker-2
sudo kubectl --kubeconfig /etc/rancher/k3s/k3s.yaml get services --all-namespaces
NAMESPACE NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
default kubernetes ClusterIP 10.43.0.1 <none> 443/TCP 25m
kube-system kube-dns ClusterIP 10.43.0.10 <none> 53/UDP,53/TCP,9153/TCP 25m
kube-system metrics-server ClusterIP 10.43.121.217 <none> 443/TCP 25m
kube-system traefik LoadBalancer 10.43.124.27 192.168.30.10 80:30776/TCP,443:32653/TCP 24m
ingress-nginx ingress-nginx-controller-admission ClusterIP 10.43.69.180 <none> 443/TCP 5m31s
ingress-nginx ingress-nginx-controller NodePort 10.43.162.128 <none> 80:30953/TCP,443:32458/TCP 5m31s
kubernetes-dashboard kubernetes-dashboard ClusterIP 10.43.89.177 <none> 443/TCP 5m5s
kubernetes-dashboard dashboard-metrics-scraper ClusterIP 10.43.136.108 <none> 8000/TCP 5m5s
webservers apache-svc NodePort 10.43.46.218 <none> 80:30081/TCP 4m18s
webservers nginx-svc NodePort 10.43.100.206 <none> 80:30082/TCP 4m8s
sudo kubectl --kubeconfig /etc/rancher/k3s/k3s.yaml get ingress --all-namespaces
NAMESPACE NAME CLASS HOSTS ADDRESS PORTS AGE
webservers webservers-ingress <none> apache.v-kube-cluster.local,nginx.v-kube-cluster.local 10.0.2.15 80 4m31s
UPDATE #2: More information about my cluster, taken directly from my nodes. I removed all lines from the outputs that are not related to my apache pods (too much information affects readability).
Information from master node
# Information from Ingress
user@v-k3s-master:~$ curl localhost:80
404 page not found
user@v-k3s-master:~$ curl localhost:443
404 page not found
# exec
user@v-k3s-master:~$ kubectl exec --stdin --tty apache-deployment-5df5f65c6f-qqt4t -- /bin/bash
Error from server (NotFound): pods "apache-deployment-5df5f65c6f-qqt4t" not found
user@v-k3s-master:~$ kubectl exec --stdin --tty apache-deployment-5df5f65c6f-hbbp6 -- /bin/bash
Error from server (NotFound): pods "apache-deployment-5df5f65c6f-hbbp6" not found
user@v-k3s-master:~$ kubectl exec --stdin --tty apache-deployment-5df5f65c6f-x96zm -- /bin/bash
Error from server (NotFound): pods "apache-deployment-5df5f65c6f-x96zm" not found
# pods info
user@v-k3s-master:~$ kubectl --kubeconfig /etc/rancher/k3s/k3s.yaml get pods --all-namespaces -o wide
NAMESPACE NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
ingress-nginx ingress-nginx-admission-create-pz2m9 0/1 Completed 0 19m 10.42.2.3 v-k3s-worker-2 <none> <none>
ingress-nginx ingress-nginx-admission-patch-xmhnp 0/1 Completed 0 19m 10.42.3.3 v-k3s-worker-3 <none> <none>
ingress-nginx ingress-nginx-controller-8cf5559f8-sqmqz 1/1 Running 0 19m 10.42.1.3 v-k3s-worker-1 <none> <none>
webservers apache-deployment-5df5f65c6f-qqt4t 1/1 Running 0 18m 10.42.2.7 v-k3s-worker-2 <none> <none>
webservers apache-deployment-5df5f65c6f-hbbp6 1/1 Running 0 18m 10.42.2.6 v-k3s-worker-2 <none> <none>
webservers apache-deployment-5df5f65c6f-x96zm 1/1 Running 0 18m 10.42.1.5 v-k3s-worker-1 <none> <none>
# services info
user@v-k3s-master:~$ kubectl --kubeconfig /etc/rancher/k3s/k3s.yaml get svc apache-svc
Error from server (NotFound): services "apache-svc" not found
# services info
user@v-k3s-master:~$ kubectl --kubeconfig /etc/rancher/k3s/k3s.yaml get services --all-namespaces -o wide
NAMESPACE NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR
ingress-nginx ingress-nginx-controller-admission ClusterIP 10.43.185.36 <none> 443/TCP 30m app.kubernetes.io/component=controller,app.kubernetes.io/instance=ingress-nginx,app.kubernetes.io/name=ingress-nginx
ingress-nginx ingress-nginx-controller NodePort 10.43.124.153 <none> 80:31333/TCP,443:30349/TCP 30m app.kubernetes.io/component=controller,app.kubernetes.io/instance=ingress-nginx,app.kubernetes.io/name=ingress-nginx
webservers apache-svc NodePort 10.43.216.240 <none> 80:30081/TCP 29m app=apache
webservers nginx-svc NodePort 10.43.215.23 <none> 80:30082/TCP 29m app=nginx
Information from worker-1
user@v-k3s-worker-1:~$ kubectl exec --stdin --tty apache-deployment-5df5f65c6f-qqt4t -- /bin/bash
The connection to the server localhost:8080 was refused - did you specify the right host or port?
user@v-k3s-worker-1:~$ kubectl exec --stdin --tty apache-deployment-5df5f65c6f-hbbp6 -- /bin/bash
The connection to the server localhost:8080 was refused - did you specify the right host or port?
user@v-k3s-worker-1:~$ kubectl exec --stdin --tty apache-deployment-5df5f65c6f-x96zm -- /bin/bash
The connection to the server localhost:8080 was refused - did you specify the right host or port?
Information from worker-2
user@v-k3s-worker-2:~$ kubectl exec --stdin --tty apache-deployment-5df5f65c6f-qqt4t -- /bin/bash
The connection to the server localhost:8080 was refused - did you specify the right host or port?
user@v-k3s-worker-2:~$ kubectl exec --stdin --tty apache-deployment-5df5f65c6f-hbbp6 -- /bin/bash
The connection to the server localhost:8080 was refused - did you specify the right host or port?
user@v-k3s-worker-2:~$ kubectl exec --stdin --tty apache-deployment-5df5f65c6f-x96zm -- /bin/bash
The connection to the server localhost:8080 was refused - did you specify the right host or port?
Information from worker-3
user@v-k3s-worker-3:~$ kubectl exec --stdin --tty apache-deployment-5df5f65c6f-qqt4t -- /bin/bash
The connection to the server localhost:8080 was refused - did you specify the right host or port?
user@v-k3s-worker-3:~$ kubectl exec --stdin --tty apache-deployment-5df5f65c6f-hbbp6 -- /bin/bash
The connection to the server localhost:8080 was refused - did you specify the right host or port?
user@v-k3s-worker-3:~$ kubectl exec --stdin --tty apache-deployment-5df5f65c6f-x96zm -- /bin/bash
The connection to the server localhost:8080 was refused - did you specify the right host or port?
Sorry if it's a naive question. Please correct me if my understanding is wrong.
I created a pod using this command:
kubectl run nginx --image=nginx --port=8888
My understanding of this command: the nginx (application) container will be exposed/available on port 8888.
kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
nginx 1/1 Running 0 10m 10.244.1.2 node01 <none> <none>
curl -v 10.244.1.2:8888 ===> I am wondering why this failed
* Trying 10.244.1.2:8888...
* TCP_NODELAY set
* connect to 10.244.1.2 port 8888 failed: Connection refused
* Failed to connect to 10.244.1.2 port 8888: Connection refused
* Closing connection 0
curl: (7) Failed to connect to 10.244.1.2 port 8888: Connection refused
curl -v 10.244.1.2 ===> to my surprise, this returned a 200 success response
* Trying 10.244.1.2:80...
* TCP_NODELAY set
* Connected to 10.244.1.2 (10.244.1.2) port 80 (#0)
> GET / HTTP/1.1
> Host: 10.244.1.2
> User-Agent: curl/7.68.0
> Accept: */*
>
* Mark bundle as not supporting multiuse
< HTTP/1.1 200 OK
If the application still listens on the default port 80, I am wondering about the significance of container port 8888.
OK, so maybe it is used to expose the pod to the outside world.
To test that, I went ahead and created a service for the pod:
kubectl expose pod nginx --port=80 --target-port=8888
kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
nginx ClusterIP 10.96.214.161 <none> 80/TCP 13m
$ curl -v 10.96.214.161 ==> here default port (80) didn't work
* Trying 10.96.214.161:80...
* TCP_NODELAY set
* connect to 10.96.214.161 port 80 failed: Connection refused
* Failed to connect to 10.96.214.161 port 80: Connection refused
* Closing connection 0
curl: (7) Failed to connect to 10.96.214.161 port 80: Connection refused
$ curl -v 10.96.214.161:8888 ==> target port didn't work either
* Trying 10.96.214.161:8888...
* TCP_NODELAY set
....waiting forever
Which port do I need to use to make it work? Am I missing anything?
By default, the nginx server listens on port 80; you can see it in their Docker image reference.
With kubectl run nginx --image=nginx --port=8888, what you have done is declare another container port alongside 80, but the server is still listening on port 80.
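In other words, --port only records metadata on the pod; it is roughly equivalent to declaring containerPort in the pod spec (a sketch, assuming the run: nginx label that kubectl run applies by default):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx
  labels:
    run: nginx
spec:
  containers:
  - name: nginx
    image: nginx
    ports:
    - containerPort: 8888  # documentation only; nginx still listens on 80
```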
That is why it did not work when you tried any port other than 80. So use a target port of 80: change --target-port=8888 to --target-port=80.
Or, if you want to change the server port, use a ConfigMap with the pod to pass a custom config to the server.
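The working service, expressed as a manifest instead of kubectl expose, would look roughly like this (a sketch; the run: nginx selector is what kubectl expose derives from the pod's label):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx
spec:
  selector:
    run: nginx
  ports:
  - port: 80        # port on the service's cluster IP
    targetPort: 80  # must match what nginx actually listens on
```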
I would like to access my application via localhost with the kubectl port-forward command, but when I run kubectl port-forward road-dashboard-dev-5cdc465475-jwwgz 8082:8080 I receive the error below.
> Forwarding from 127.0.0.1:8082 -> 8080
> Forwarding from [::1]:8082 -> 8080
> Handling connection for 8082
> Handling connection for 8082
> E0124 14:15:27.173395 4376 portforward.go:400] an error occurred forwarding 8082 -> 8080: error forwarding port 8080 to pod 09a76f6936b313e438bbf5a84bd886b3b3db8f499b5081b66cddc390021556d5, uid : exit status 1: 2020/01/24 11:15:27 socat[9064] E connect(6, AF=2 127.0.0.1:8080, 16): Connection refused
I also tried to connect to the pod in the cluster via exec -it, but that did not work either. What might be the point I am missing?
node@road-dashboard-dev-5cdc465475-jwwgz:/usr/src/app$ curl -v localhost:8080
* Rebuilt URL to: localhost:8080/
* Trying ::1...
* TCP_NODELAY set
* connect to ::1 port 8080 failed: Connection refused
* Trying 127.0.0.1...
* TCP_NODELAY set
* connect to 127.0.0.1 port 8080 failed: Connection refused
* Failed to connect to localhost port 8080: Connection refused
* Closing connection 0
curl: (7) Failed to connect to localhost port 8080: Connection refused
The kubectl get all output is below. I am sure that the container port value is set to 8080.
NAME READY STATUS RESTARTS AGE
pod/road-dashboard-dev-5cdc465475-jwwgz 1/1 Running 0 34m
pod/road-dashboard-dev-5cdc465475-rdk7g 1/1 Running 0 34m
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/road-dashboard-dev NodePort 10.254.61.225 <none> 80:41599/TCP 18h
NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/road-dashboard-dev 2/2 2 2 18h
NAME DESIRED CURRENT READY AGE
replicaset.apps/road-dashboard-dev-5cdc465475 2 2 2 34m
Name: road-dashboard-dev-5cdc465475-jwwgz
Namespace: dev
Priority: 0
PriorityClassName: <none>
Node: c123
Containers:
road-dashboard:
Port: 8080/TCP
Host Port: 0/TCP
State: Running
Started: Fri, 24 Jan 2020 13:42:39 +0300
Ready: True
Restart Count: 0
Environment: <none>
To debug your issue, leave the port-forward command running in the foreground, curl from a second terminal, and see what output you get on the port-forward prompt.
$ kubectl get all -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
pod/nginx 1/1 Running 2 112m 10.244.3.43 k8s-node-3 <none> <none>
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR
service/kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 11d <none>
service/nginx NodePort 10.96.130.207 <none> 80:31316/TCP 20m run=nginx
Example:
$ kubectl port-forward nginx 31000:80
Forwarding from 127.0.0.1:31000 -> 80
Forwarding from [::1]:31000 -> 80
From a second terminal window, curl the port-forward you have set up.
$ curl localhost:31000
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
body {
width: 35em;
margin: 0 auto;
font-family: Tahoma, Verdana, Arial, sans-serif;
}
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>
<p>For online documentation and support please refer to
nginx.org.<br/>
Commercial support is available at
nginx.com.</p>
<p><em>Thank you for using nginx.</em></p>
</body>
</html>
On the first terminal you should see the port-forward prompt report that it is handling a connection, as below; note the new line Handling connection for 31000.
$ kubectl port-forward nginx 31000:80
Forwarding from 127.0.0.1:31000 -> 80
Forwarding from [::1]:31000 -> 80
Handling connection for 31000
So if I have the wrong port forwarding, as below (note I have made the target port 8080 for an nginx container exposing port 80):
$ kubectl port-forward nginx 31000:8080
Forwarding from 127.0.0.1:31000 -> 8080
Forwarding from [::1]:31000 -> 8080
The curl will produce a clear error on the port-forward prompt, indicating the connection was refused by the container when reaching for port 8080, since that is not the correct port, and we get an empty reply back.
$ curl -v localhost:31000
* Rebuilt URL to: localhost:31000/
* Trying 127.0.0.1...
* Connected to localhost (127.0.0.1) port 31000 (#0)
> GET / HTTP/1.1
> Host: localhost:31000
> User-Agent: curl/7.47.0
> Accept: */*
>
* Empty reply from server
* Connection #0 to host localhost left intact
curl: (52) Empty reply from server
$ kubectl port-forward nginx 31000:8080
Forwarding from 127.0.0.1:31000 -> 8080
Forwarding from [::1]:31000 -> 8080
Handling connection for 31000
E0124 11:35:53.390711 10791 portforward.go:400] an error occurred forwarding 31000 -> 8080: error forwarding port 8080 to pod 88e4de4aba522b0beff95c3b632eca654a5c34b0216320a29247bb8574ef0f6b, uid : exit status 1: 2020/01/24 11:35:57 socat[15334] E connect(5, AF=2 127.0.0.1:8080, 16): Connection refused
I created a K8s cluster of 5 VMs (1 master and 4 slaves running Ubuntu 16.04.3 LTS) using kubeadm, and used Flannel to set up networking in the cluster. I was able to successfully deploy an application, and I then exposed it via a NodePort service. From here things got complicated for me.
Before I started, I disabled the default firewalld service on master and the nodes.
As I understand from the K8s Services doc, the type NodePort exposes the service on all nodes in the cluster. However, when I created it, the service was exposed only on 2 nodes out of 4 in the cluster. I am guessing that's not the expected behavior (right?)
For troubleshooting, here are some resource specs:
root@vm-vivekse-003:~# kubectl get nodes
NAME STATUS AGE VERSION
vm-deepejai-00b Ready 5m v1.7.3
vm-plashkar-006 Ready 4d v1.7.3
vm-rosnthom-00f Ready 4d v1.7.3
vm-vivekse-003 Ready 4d v1.7.3 //the master
vm-vivekse-004 Ready 16h v1.7.3
root@vm-vivekse-003:~# kubectl get pods -o wide -n playground
NAME READY STATUS RESTARTS AGE IP NODE
kubernetes-bootcamp-2457653786-9qk80 1/1 Running 0 2d 10.244.3.6 vm-rosnthom-00f
springboot-helloworld-2842952983-rw0gc 1/1 Running 0 1d 10.244.3.7 vm-rosnthom-00f
root@vm-vivekse-003:~# kubectl get svc -o wide -n playground
NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR
sb-hw-svc 10.101.180.19 <nodes> 9000:30847/TCP 5h run=springboot-helloworld
root@vm-vivekse-003:~# kubectl describe svc sb-hw-svc -n playground
Name: sb-hw-svc
Namespace: playground
Labels: <none>
Annotations: <none>
Selector: run=springboot-helloworld
Type: NodePort
IP: 10.101.180.19
Port: <unset> 9000/TCP
NodePort: <unset> 30847/TCP
Endpoints: 10.244.3.7:9000
Session Affinity: None
Events: <none>
root@vm-vivekse-003:~# kubectl get endpoints sb-hw-svc -n playground -o yaml
apiVersion: v1
kind: Endpoints
metadata:
creationTimestamp: 2017-08-09T06:28:06Z
name: sb-hw-svc
namespace: playground
resourceVersion: "588958"
selfLink: /api/v1/namespaces/playground/endpoints/sb-hw-svc
uid: e76d9cc1-7ccb-11e7-bc6a-fa163efaba6b
subsets:
- addresses:
- ip: 10.244.3.7
nodeName: vm-rosnthom-00f
targetRef:
kind: Pod
name: springboot-helloworld-2842952983-rw0gc
namespace: playground
resourceVersion: "473859"
uid: 16d9db68-7c1a-11e7-bc6a-fa163efaba6b
ports:
- port: 9000
protocol: TCP
After some tinkering I realized that on those 2 "faulty" nodes, those services were not available from within those hosts themselves.
Node01 (working):
root@vm-vivekse-004:~# curl 127.0.0.1:30847 //<localhost>:<nodeport>
Hello Docker World!!
root@vm-vivekse-004:~# curl 10.101.180.19:9000 //<cluster-ip>:<port>
Hello Docker World!!
root@vm-vivekse-004:~# curl 10.244.3.7:9000 //<pod-ip>:<port>
Hello Docker World!!
Node02 (working):
root@vm-rosnthom-00f:~# curl 127.0.0.1:30847
Hello Docker World!!
root@vm-rosnthom-00f:~# curl 10.101.180.19:9000
Hello Docker World!!
root@vm-rosnthom-00f:~# curl 10.244.3.7:9000
Hello Docker World!!
Node03 (not working):
root@vm-plashkar-006:~# curl 127.0.0.1:30847
curl: (7) Failed to connect to 127.0.0.1 port 30847: Connection timed out
root@vm-plashkar-006:~# curl 10.101.180.19:9000
curl: (7) Failed to connect to 10.101.180.19 port 9000: Connection timed out
root@vm-plashkar-006:~# curl 10.244.3.7:9000
curl: (7) Failed to connect to 10.244.3.7 port 9000: Connection timed out
Node04 (not working):
root@vm-deepejai-00b:/# curl 127.0.0.1:30847
curl: (7) Failed to connect to 127.0.0.1 port 30847: Connection timed out
root@vm-deepejai-00b:/# curl 10.101.180.19:9000
curl: (7) Failed to connect to 10.101.180.19 port 9000: Connection timed out
root@vm-deepejai-00b:/# curl 10.244.3.7:9000
curl: (7) Failed to connect to 10.244.3.7 port 9000: Connection timed out
I tried netstat and telnet on all 4 slaves. Here's the output:
Node01 (the working host):
root@vm-vivekse-004:~# netstat -tulpn | grep 30847
tcp6 0 0 :::30847 :::* LISTEN 27808/kube-proxy
root@vm-vivekse-004:~# telnet 127.0.0.1 30847
Trying 127.0.0.1...
Connected to 127.0.0.1.
Escape character is '^]'.
Node02 (the working host):
root@vm-rosnthom-00f:~# netstat -tulpn | grep 30847
tcp6 0 0 :::30847 :::* LISTEN 11842/kube-proxy
root@vm-rosnthom-00f:~# telnet 127.0.0.1 30847
Trying 127.0.0.1...
Connected to 127.0.0.1.
Escape character is '^]'.
Node03 (the not-working host):
root@vm-plashkar-006:~# netstat -tulpn | grep 30847
tcp6 0 0 :::30847 :::* LISTEN 7791/kube-proxy
root@vm-plashkar-006:~# telnet 127.0.0.1 30847
Trying 127.0.0.1...
telnet: Unable to connect to remote host: Connection timed out
Node04 (the not-working host):
root@vm-deepejai-00b:/# netstat -tulpn | grep 30847
tcp6 0 0 :::30847 :::* LISTEN 689/kube-proxy
root@vm-deepejai-00b:/# telnet 127.0.0.1 30847
Trying 127.0.0.1...
telnet: Unable to connect to remote host: Connection timed out
Additional info:
From the kubectl get pods output, I can see that the pod is actually deployed on slave vm-rosnthom-00f. I am able to ping this host from all 5 VMs, and curl vm-rosnthom-00f:30847 also works from all the VMs.
I can clearly see that the internal cluster networking is messed up, but I am unsure how to resolve it. The iptables -L output is identical on all the slaves, and even the local loopback (ifconfig lo) is up and running on all of them. I'm completely clueless as to how to fix it!
Use a Service of type NodePort and access the NodePort at the IP address of your master node.
The Service knows which node a pod is running on and redirects the traffic to one of the pods if you have several instances.
Label your pods and use the corresponding selectors in the Service.
If you still run into issues, please post your service and deployment.
To check connectivity, I would suggest using netcat:
nc -zv ip/service port
If the network is OK, it responds: open
Inside the cluster, access the containers like so:
nc -zv servicename.namespace.svc.cluster.local port
Always keep in mind that you have 3 kinds of ports:
The port on which your software is running inside your container.
The port on which you expose that port on the pod (a pod has one IP address, the cluster-IP address, which is used to reach a container on a specific port).
The NodePort, which allows you to access the pod's ports from outside the cluster's network.
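As a concrete illustration, the three layers map onto a Service manifest like this (a sketch using the ports and labels from the question above):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: sb-hw-svc
spec:
  type: NodePort
  selector:
    run: springboot-helloworld
  ports:
  - port: 9000        # port on the service's cluster IP
    targetPort: 9000  # port the software listens on inside the container
    nodePort: 30847   # static port opened on every node
```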
Either your firewall blocks some connections between nodes, or your kube-proxy is not working properly. I guess your services work only on the nodes where pods are running.
If you want to reach the service from any node in the cluster, you need to define the service type as ClusterIP. Since you defined the service type as NodePort, you can connect from the node where the service is running.
My above answer was not correct: based on the documentation, we should be able to connect from any NodeIP:NodePort, but it's not working in my cluster either.
https://kubernetes.io/docs/concepts/services-networking/service/#publishing-services---service-types
NodePort: Exposes the service on each Node's IP at a static port (the
NodePort). A ClusterIP service, to which the NodePort service will
route, is automatically created. You'll be able to contact the
NodePort service, from outside the cluster, by requesting
<NodeIP>:<NodePort>.
IP forwarding was not set on one of my nodes. After enabling it, I was able to connect to my service using NodeIP:NodePort:
sysctl -w net.ipv4.ip_forward=1
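Note that a sysctl -w setting does not survive a reboot; to make it persistent, you can put it in a sysctl configuration file (a sketch; the file name is arbitrary):

```
# /etc/sysctl.d/99-kubernetes.conf
net.ipv4.ip_forward = 1
```

Then apply it with sysctl --system.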