How can I expose a StatefulSet with a load balancer? - kubernetes

I am currently trying to create a cluster of X pods, each of which has its own persistent volume. To do that I've created a StatefulSet with X replicas and a PersistentVolumeClaim template. This part is working.
The problem is that it seems to be impossible to expose those pods with a LoadBalancer in the same way as a Deployment (because of the uniqueness of pods in a StatefulSet).
So far I've tried to expose it like a simple Deployment, which does not work, and the only way I've found is to expose each pod one by one (I haven't tested it, but I saw it suggested elsewhere), which does not scale well...
I'm not running Kubernetes on any cloud provider platform, so please avoid provider-specific command lines.

The problem is that it seems to be impossible to expose those pods with a LoadBalancer in the same way as a Deployment (because of the uniqueness of pods in a StatefulSet).
Why not? Here is my StatefulSet with default Nginx
$ k -n test get statefulset
NAME DESIRED CURRENT AGE
web 2 2 5d
$ k -n test get pods
web-0 1/1 Running 0 5d
web-1 1/1 Running 0 5d
Here is my Service of type LoadBalancer, which effectively behaves like a NodePort in the case of Minikube:
$ k -n test get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
nginx LoadBalancer 10.110.22.74 <pending> 80:32710/TCP 5d
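Since this is Minikube, the EXTERNAL-IP stays <pending>. If you do want an address reachable from outside the cluster, either of these should give you one (a small sketch, same namespace and Service name as above):
minikube service nginx -n test --url   # prints a URL for the NodePort backing this LoadBalancer
minikube tunnel                        # or, in a separate shell: assigns the pending LoadBalancer IP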
Let's run a pod with curl and make some requests to the ClusterIP:
$ kubectl -n test run -i --tty tools --image=ellerbrock/alpine-bash-curl-ssl -- bash
bash-4.4$ curl 10.110.22.74 &> /dev/null
bash-4.4$ curl 10.110.22.74 &> /dev/null
bash-4.4$ curl 10.110.22.74 &> /dev/null
bash-4.4$ curl 10.110.22.74 &> /dev/null
Let's check out Nginx logs:
$ k -n test logs web-0
172.17.0.7 - - [18/Apr/2019:23:35:04 +0000] "GET / HTTP/1.1" 200 612 "-" "curl/7.61.0"
172.17.0.7 - - [18/Apr/2019:23:35:05 +0000] "GET / HTTP/1.1" 200 612 "-" "curl/7.61.0"
172.17.0.7 - - [18/Apr/2019:23:35:17 +0000] "GET / HTTP/1.1" 200 612 "-" "curl/7.61.0"
$ k -n test logs web-1
172.17.0.7 - - [18/Apr/2019:23:35:15 +0000] "GET / HTTP/1.1" 200 612 "-" "curl/7.61.0"
172.17.0.7 is my pod with curl:
NAME READY STATUS RESTARTS AGE IP NODE
tools-654cfc5cdc-8zttt 1/1 Running 1 5d 172.17.0.7 minikube
Actually, a plain ClusterIP is enough for load balancing across a StatefulSet's pods, because the Service keeps a list of Endpoints:
$ k -n test get endpoints
NAME ENDPOINTS AGE
nginx 172.17.0.5:80,172.17.0.6:80 5d
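If you prefer a one-liner to repeated manual curls, something like this, run from the same tools pod created above, spreads a few requests across the Service (the short name nginx resolves because the pod lives in the test namespace):
for i in 1 2 3 4; do curl -s -o /dev/null -w "%{http_code}\n" http://nginx; done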
YAMLs:
apiVersion: apps/v1beta1
kind: StatefulSet
metadata:
  name: web
spec:
  serviceName: "nginx"
  replicas: 2
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: k8s.gcr.io/nginx-slim:0.8
        ports:
        - containerPort: 80
          name: web
---
apiVersion: v1
kind: Service
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  type: LoadBalancer
  ports:
  - port: 80
    name: web
  selector:
    app: nginx
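If you ever do need to address one specific replica (the per-pod concern from the question), the usual pattern is a headless Service (clusterIP: None) used as the StatefulSet's governing service, which gives every pod a stable DNS name such as web-0.<service>.test.svc.cluster.local. A minimal sketch, with a hypothetical name that spec.serviceName would have to point at:
apiVersion: v1
kind: Service
metadata:
  name: nginx-headless   # hypothetical name; the StatefulSet's spec.serviceName must match it
spec:
  clusterIP: None        # headless: DNS returns the individual pod IPs instead of a virtual IP
  ports:
  - port: 80
    name: web
  selector:
    app: nginx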

Related

Communication Between Two Services in Kubernetes Cluster Using Ingress as API Gateway

I am having problems trying to get communication between two services in a Kubernetes cluster. We are using a Kong ingress object as an 'API gateway' to route HTTP calls from a simple Angular frontend to a .NET Core 3.1 API controller backend.
In front of these two ClusterIP services sits an ingress controller that takes external HTTP(S) calls into our Kubernetes cluster and serves the frontend service. This ingress is shown here:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: ingress-nginx
  namespace: kong
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/ssl-redirect: "false"
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - host: app.***.*******.com   # << Obfuscated
    http:
      paths:
      - path: /
        backend:
          serviceName: frontend-service
          servicePort: 80
The first service is called 'frontend-service', a simple Angular 9 frontend that allows me to type in http strings and submit those strings to the backend.
The manifest yaml file for this is shown below. Note that the image name is obfuscated for various reasons.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend
  namespace: kong
  labels:
    app: frontend
spec:
  replicas: 1
  selector:
    matchLabels:
      app: frontend
  template:
    metadata:
      labels:
        app: frontend
    spec:
      imagePullSecrets:
      - name: regcred
      containers:
      - name: frontend
        image: ***********/*******************:****   # << Obfuscated
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  namespace: kong
  name: frontend-service
spec:
  type: ClusterIP
  selector:
    app: frontend
  ports:
  - port: 80
    targetPort: 80
    protocol: TCP
The second service is a simple .NET Core 3.1 API interface that prints back some text when the controller is reached. The backend service is called 'dataapi' and has one simple Controller in it called ValuesController.
The manifest yaml file for this is shown below.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: dataapi
  namespace: kong
spec:
  replicas: 1
  selector:
    matchLabels:
      app: dataapi
  template:
    metadata:
      labels:
        app: dataapi
    spec:
      imagePullSecrets:
      - name: regcred
      containers:
      - name: dataapi
        image: ***********/*******************:****   # << Obfuscated
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: dataapi
  namespace: kong
  labels:
    app: dataapi
spec:
  ports:
  - port: 80
    name: http
    targetPort: 80
  selector:
    app: dataapi
We are using a kong ingress as a proxy to redirect incoming http calls to the dataapi service. This manifest file is shown below:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: kong-gateway
  namespace: kong
spec:
  ingressClassName: kong
  rules:
  - http:
      paths:
      - path: /dataapi
        pathType: Prefix
        backend:
          service:
            name: dataapi
            port:
              number: 80
Performing a 'kubectl get all' produces the following output:
kubectl get all
NAME READY STATUS RESTARTS AGE
pod/dataapi-dbc8bbb69-mzmdc 1/1 Running 0 2d2h
pod/frontend-5d5ffcdfb7-kqxq9 1/1 Running 0 65m
pod/ingress-kong-56f8f44fd5-rwr9j 2/2 Running 0 6d
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/dataapi ClusterIP 10.128.72.137 <none> 80/TCP,443/TCP 2d2h
service/frontend-service ClusterIP 10.128.44.109 <none> 80/TCP 2d
service/kong-proxy LoadBalancer 10.128.246.165 XX.XX.XX.XX 80:31289/TCP,443:31202/TCP 6d
service/kong-validation-webhook ClusterIP 10.128.138.44 <none> 443/TCP 6d
NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/dataapi 1/1 1 1 2d2h
deployment.apps/frontend 1/1 1 1 2d
deployment.apps/ingress-kong 1/1 1 1 6d
NAME DESIRED CURRENT READY AGE
replicaset.apps/dataapi-dbc8bbb69 1 1 1 2d2h
replicaset.apps/frontend-59bf9c75dc 0 0 0 25h
replicaset.apps/ingress-kong-56f8f44fd5 1 1 1 6d
and 'kubectl get ingresses' gives:
NAME            CLASS    HOSTS (Obfuscated)                                                      ADDRESS        PORTS   AGE
ingress-nginx   <none>   ***.******.com,**.********.com,**.****.com,**.******.com + 1 more...    xx.xx.xxx.xx   80      6d
kong-gateway    kong     *                                                                       xx.xx.xxx.xx   80      2d2h
From the frontend, the expectation is that constructing the http string:
http://kong-proxy/dataapi/api/values
will enter our 'values' controller in the backend and return the text string from that controller.
Both services are running on the same kubernetes cluster, here using Linode. Our thinking is that it is a 'within cluster' communication between two services both of type ClusterIP.
The error reported in the Chrome console is:
zone-evergreen.js:2828 GET http://kong-proxy/dataapi/api/values net::ERR_NAME_NOT_RESOLVED
Note that we found a StackOverflow issue similar to ours, and the suggestion there was to add 'default.svc.cluster.local' to the http string as follows:
http://kong-proxy.default.svc.cluster.local/dataapi/api/values
This did not work. We also substituted kong, which is the namespace of the service, for default like this:
http://kong-proxy.kong.svc.cluster.local/dataapi/api/values
yielding the same errors as above.
Is there a critical step I am missing? Any advice is greatly appreciated!
*************** UPDATE from Eric Gagnon's response(s) **************
Again, thank you Eric for responding. Here is what my colleague and I have tried per your suggestions.
Pod dns misconfiguration: check if pod's first nameserver equals 'kube-dns' svc ip and if search start with kong.svc.cluster.local:
kubectl exec -i -t -n kong frontend-simple-deployment-7b8b9cfb44-f2shk -- cat /etc/resolv.conf
nameserver 10.128.0.10
search kong.svc.cluster.local svc.cluster.local cluster.local members.linode.com
options ndots:5
kubectl get -n kube-system svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kube-dns ClusterIP 10.128.0.10 <none> 53/UDP,53/TCP,9153/TCP 55d
kubectl describe -n kube-system svc kube-dns
Name: kube-dns
Namespace: kube-system
Labels: k8s-app=kube-dns
kubernetes.io/cluster-service=true
kubernetes.io/name=KubeDNS
Annotations: lke.linode.com/caplke-version: v1.19.9-001
prometheus.io/port: 9153
prometheus.io/scrape: true
Selector: k8s-app=kube-dns
Type: ClusterIP
IP: 10.128.0.10
Port: dns 53/UDP
TargetPort: 53/UDP
Endpoints: 10.2.4.10:53,10.2.4.14:53
Port: dns-tcp 53/TCP
TargetPort: 53/TCP
Endpoints: 10.2.4.10:53,10.2.4.14:53
Port: metrics 9153/TCP
TargetPort: 9153/TCP
Endpoints: 10.2.4.10:9153,10.2.4.14:9153
Session Affinity: None
Events: <none>
App Not using pod dns: in Node, output dns.getServers() to console
I do not understand where and how to do this. We tried to add DNS handling directly inside our Angular frontend app, but found that it is not possible.
Kong-proxy doesn't like something: set logging debug, hit the app a bunch of times, and grep logs.
We have tried two tests here. First, our kong-proxy service is reachable from an ingress controller. Note that this is not our simple frontend app; it is nothing more than a proxy that passes an HTTP string to a public gateway we have set up. This does work. We have exposed it as:
http://gateway.cwg.stratbore.com/test/api/test
["Successfully pinged Test controller!!"]
kubectl logs -n kong ingress-kong-56f8f44fd5-rwr9j | grep test
10.2.4.11 - - [16/Apr/2021:16:03:42 +0000] "GET /test/api/test HTTP/1.1" 200 52 "-" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/89.0.4389.128 Safari/537.36"
So this works.
But when we try to do it from a simple frontend interface running in the same cluster as our backend,
it does not work with the text shown in the text box, and this command does not show anything new:
kubectl logs -n kong ingress-kong-56f8f44fd5-rwr9j | grep test
The front end comes back with an error.
But if we do add this http text:
The kong-ingress pod is hit:
kubectl logs -n kong ingress-kong-56f8f44fd5-rwr9j | grep test
10.2.4.11 - - [16/Apr/2021:16:03:42 +0000] "GET /test/api/test HTTP/1.1" 200 52 "-" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/89.0.4389.128 Safari/537.36"
10.2.4.11 - - [17/Apr/2021:16:55:50 +0000] "GET /test/api/test HTTP/1.1" 200 52 "http://app-basic.cwg.stratbore.com/" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/89.0.4389.128 Safari/537.36"
but the frontend gets an error back.
So at this point, we have tried a lot of things to get our frontend app to successfully send an HTTP request to our backend and get a response back, and we are unsuccessful. I have also tried various configurations of the nginx.conf file that is packaged with our frontend app, but no luck there either.
I am about to package all of this up in a github project. Thanks.
Chris,
I haven't used linode or kong and don't know what your frontend actually does, so I'll just point out what I can see:
The simplest dns check is to curl (or ping, dig, etc.):
http://[dataapi's pod ip]:80 from a host node
http://[kong-proxy svc's internal ip]/dataapi/api/values from a host node (or another pod - see below)
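A sketch of those two checks, assuming the labels and names from the manifests above (substitute the real IPs printed by the first two commands):
kubectl -n kong get pod -l app=dataapi -o wide     # shows the dataapi pod IP
kubectl -n kong get svc kong-proxy                 # shows the proxy Service's cluster IP
curl -v http://<dataapi-pod-ip>:80/api/values      # straight to the pod (adjust the path to whatever the controller serves)
curl -v http://<kong-proxy-cluster-ip>/dataapi/api/values   # through the proxy Service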
default path matching on nginx ingress controller is pathPrefix, so your nginx ingress with path: / and nginx.ingress.kubernetes.io/rewrite-target: / actually matches everything and rewrites to /. This may not be an issue if you properly specify all your ingresses so they take priority over "/".
you said 'using a kong ingress as a proxy to redirect incoming', just want to make sure you're proxying (not redirecting the client).
Is chrome just relaying its upstream error from frontend-service? An external client shouldn't be able to resolve the cluster's urls (unless you've joined your local machine to the cluster's network or done some other fancy trick). By default, dns only works within the cluster.
Cluster dns generally follows [service name].[namespace name].svc.cluster.local. If cluster dns is working, then using curl, ping, wget, etc. from a pod in the cluster and pointing it at that svc will send it to the cluster svc ip, not an external ip.
is your dataapi service configured to respond to /dataapi/api/values or does it not care what the uri is?
If you don't have any network policies restricting traffic within a namespace, you should be able to create a test pod in the same namespace, and curl the service dns and the pod ip's directly:
apiVersion: v1
kind: Pod
metadata:
  name: curl-test
  namespace: kong
spec:
  containers:
  - name: curl-test
    image: buildpack-deps
    imagePullPolicy: Always
    command:
    - "curl"
    - "-v"
    - "http://dataapi:80/dataapi/api/values"
  #nodeSelector:
  #  kubernetes.io/hostname: [a more different node's hostname]
The pod should attempt dns resolution from the cluster. So it should find dataapi's svc ip and curl port 80 path /dataapi/api/values. Service IP's are virtual so they aren't actually 'reachable'. Instead, iptables routes them to the pod ip, which has an actual network endpoint and IS addressable.
once it completes, just check the logs: kubectl logs curl-test, and then delete it.
If this fails, the nature of the failure in the logs should tell you if it's a dns or link issue. If it works, then you probably don't have a cluster dns issue. But it's possible you have an inter-node communication issue. To test this, you can run the same manifest as above, but uncomment the node selector field to force it to run on a different node than your kong-proxy pod. It's a manual method, but it's quick for troubleshooting. Just rinse and repeat as needed for other nodes.
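To pick a hostname for that nodeSelector, a quick sketch (nothing here is specific to your setup beyond the kong namespace):
kubectl -n kong get pods -o wide               # the NODE column shows where the kong-proxy/ingress pod runs
kubectl get nodes -L kubernetes.io/hostname    # hostnames you can plug into the nodeSelector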
Of course, it may not be any of this, but hopefully this helps troubleshoot.
After a lot of help from Eric G (thank you!) on this, and reading this previous StackOverflow, I finally solved the issue. As the answer in this link illustrates, our frontend pod was serving up our application in a web browser which knows NOTHING about Kubernetes clusters.
As the link suggests, we added another rule to our nginx ingress to successfully route our HTTP requests to the proper service:
- host: gateway.*******.com
  http:
    paths:
    - path: /
      pathType: Prefix
      backend:
        service:
          name: gateway-service
          port:
            number: 80
Then from our Angular frontend, we sent our HTTP requests as follows:
...
http.get<string>("http://gateway.*******.com/api/name_of_controller");
...
And we were finally able to communicate with our backend service the way we wanted. Both frontend and backend in the same Kubernetes Cluster.

Error when trying to expose a docker container in minikube

I am using Minikube to learn about Docker and Kubernetes, but I have come across a problem.
I am following along with the examples in Kubernetes in Action, and I am trying to run a pod whose image I have pulled from my Docker Hub account, but I cannot make this pod visible.
If I run
kubectl get pod
I can see that the pod is present.
NAME READY STATUS RESTARTS AGE
kubia 1/1 Running 1 6d22h
However when I do the first step to create a service
kubectl expose rc kubia --type=LoadBalancer --name kubia-http service "kubia-http" exposed
I am getting this error returned
Error from server (NotFound): replicationcontrollers "kubia" not found
Error from server (NotFound): replicationcontrollers "service" not found
Error from server (NotFound): replicationcontrollers "kubia-http" not found
Error from server (NotFound): replicationcontrollers "exposed" not found
Any ideas why I am getting this error and what I need to do to correct it?
I am using minikube v1.13.1 on mac Mojave (v10.14.6), and I can't upgrade because I am using a company supplied machine, and all updates are controlled by HQ.
In this book, the command used is kubectl run kubia --image=luksa/kubia --port=8080 --generator=run/v1, which used to create a ReplicationController back in the days when the book was written; however, that generator is now deprecated.
Now the kubectl run command creates a standalone pod without a ReplicationController. So to expose it you should run:
kubectl expose pod kubia --type=LoadBalancer --name kubia-http
In order to create replicated pods, it is recommended to use a Deployment. To create one using the CLI you can simply run
kubectl create deployment <name_of_deployment> --image=<image_to_be_used>
It will create a Deployment and one pod, which can then be exposed similarly to the previous pod:
kubectl expose deployment kubia --type=LoadBalancer --name kubia-http
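On Minikube the LoadBalancer's EXTERNAL-IP will typically stay <pending>; a quick sketch for getting a reachable URL anyway (service name as above):
kubectl get svc kubia-http           # EXTERNAL-IP shows <pending> without a tunnel
minikube service kubia-http --url    # prints a host:port you can open or curl directly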
ReplicationControllers are an older concept than Services and Deployments in Kubernetes; check out this answer.
A service template looks like the following:
apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  selector:
    app: App
  ports:
  - protocol: TCP
    port: 80
    targetPort: 8080
Then after saving the service config into a file you do kubectl apply -f <filename>
Checkout more at: https://kubernetes.io/docs/concepts/services-networking/service/#defining-a-service
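For example (my-service.yaml is just an assumed filename), after applying you can confirm the selector actually matches some pods by checking the endpoints:
kubectl apply -f my-service.yaml
kubectl get endpoints my-service    # an empty ENDPOINTS column means the selector matches no pods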
kubia.yaml
https://kubernetes.io/ko/docs/concepts/workloads/controllers/deployment/
apiVersion: apps/v1
kind: Deployment
metadata:
  name: kubia
spec:
  replicas: 3
  selector:
    matchLabels:
      app: kubia
  template:
    metadata:
      labels:
        app: kubia
    spec:
      containers:
      - name: kubia
        image: seunggab/kubia:latest
        ports:
        - containerPort: 8080
shell
https://minikube.sigs.k8s.io/docs/handbook/accessing/#using-minikube-tunnel
kubectl apply -f kubia.yaml
kubectl expose deployment kubia --type=LoadBalancer --port 8080 --name kubia-http
minikube tunnel &
curl 127.0.0.1:8080
If you want to change the number of replicas, edit kubia.yaml (3 -> 5):
apiVersion: apps/v1
kind: Deployment
metadata:
  name: kubia
spec:
  replicas: 5
  selector:
    matchLabels:
      app: kubia
  template:
    metadata:
      labels:
        app: kubia
    spec:
      containers:
      - name: kubia
        image: seunggab/kubia:latest
        ports:
        - containerPort: 8080
re-apply
kubectl apply -f kubia.yaml
# as-is
NAME READY STATUS RESTARTS AGE
kubia-5f896dc5d5-qp7wl 1/1 Running 0 20s
kubia-5f896dc5d5-rqqm5 1/1 Running 0 20s
kubia-5f896dc5d5-vqgj9 1/1 Running 0 20s
# to-be
NAME READY STATUS RESTARTS AGE
kubia-5f896dc5d5-fsd49 0/1 ContainerCreating 0 6s
kubia-5f896dc5d5-qp7wl 1/1 Running 0 3m35s
kubia-5f896dc5d5-rqqm5 1/1 Running 0 3m35s
kubia-5f896dc5d5-vqgj9 1/1 Running 0 3m35s
kubia-5f896dc5d5-x84fr 1/1 Running 0 6s
ref: https://github.com/seunggabi/kubernetes-in-action/wiki/2.-Kubernetes

Kubernetes: Not able to communicate within two services (different pod, same namespace)

I am not able to communicate between two services.
post-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: python-data-deployment
  labels:
spec:
  replicas: 1
  selector:
    matchLabels:
      app: python-web-selector
      tier: backend
  template:
    metadata:
      labels:
        app: python-web-selector
        tier: backend
    spec:
      containers:
      - name: python-web-pod
        image: sakshiarora2012/python-backend:v10
        ports:
        - containerPort: 5000
post-deployment2.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: python-data-deployment2
  labels:
spec:
  replicas: 1
  selector:
    matchLabels:
      app: python-web-selector2
      tier: backend
  template:
    metadata:
      labels:
        app: python-web-selector2
        tier: backend
    spec:
      containers:
      - name: python-web-pod2
        image: sakshiarora2012/python-backend:v8
        ports:
        - containerPort: 5000
post-service.yml
apiVersion: v1
kind: Service
metadata:
  name: python-data-service
spec:
  selector:
    app: python-web-selector
    tier: backend
  ports:
  - port: 5000
    nodePort: 30400
  type: NodePort
post-service2.yml
apiVersion: v1
kind: Service
metadata:
  name: python-data-service2
spec:
  selector:
    app: python-web-selector2
    tier: backend
  ports:
  - port: 5000
  type: ClusterIP
When I try to ping from one container to another, it does not work:
root@python-data-deployment-7bd65dc685-htxmj:/project# ping python-data-service.default.svc.cluster.local
PING python-data-service.default.svc.cluster.local (10.107.11.236) 56(84) bytes of data.
^C
--- python-data-service.default.svc.cluster.local ping statistics ---
7 packets transmitted, 0 received, 100% packet loss, time 139ms
If I look at the DNS entries, they show:
sakshiarora@Sakshis-MacBook-Pro Student_Registration % kubectl exec -i -t dnsutils -- nslookup python-data-service
Server: 10.96.0.10
Address: 10.96.0.10#53
Name: python-data-service.default.svc.cluster.local
Address: 10.107.11.236
sakshiarora@Sakshis-MacBook-Pro Student_Registration %
sakshiarora@Sakshis-MacBook-Pro Student_Registration % kubectl exec -i -t dnsutils -- nslookup python-data-service2
Server: 10.96.0.10
Address: 10.96.0.10#53
Name: python-data-service2.default.svc.cluster.local
Address: 10.103.97.40
sakshiarora@Sakshis-MacBook-Pro Student_Registration % kubectl get pod -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
dnsutils 1/1 Running 0 5m54s 172.17.0.9 minikube <none> <none>
python-data-deployment-7bd65dc685-htxmj 1/1 Running 0 47m 172.17.0.6 minikube <none> <none>
python-data-deployment2-764744b97d-mc9gm 1/1 Running 0 43m 172.17.0.8 minikube <none> <none>
python-db-deployment-d54f6b657-rfs2b 1/1 Running 0 44h 172.17.0.7 minikube <none> <none>
sakshiarora@Sakshis-MacBook-Pro Student_Registration % kubectl describe svc python-data-service
Name: python-data-service
Namespace: default
Labels: <none>
Annotations: kubectl.kubernetes.io/last-applied-configuration:
{"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"name":"python-data-service","namespace":"default"},"spec":{"ports":[{"no...
Selector: app=python-web-selector,tier=backend
Type: NodePort
IP: 10.107.11.236
Port: <unset> 5000/TCP
TargetPort: 5000/TCP
NodePort: <unset> 30400/TCP
Endpoints: 172.17.0.6:5000
Session Affinity: None
External Traffic Policy: Cluster
Events: <none>
sakshiarora@Sakshis-MacBook-Pro Student_Registration % kubectl describe svc python-data-service2
Name: python-data-service2
Namespace: default
Labels: <none>
Annotations: kubectl.kubernetes.io/last-applied-configuration:
{"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"name":"python-data-service2","namespace":"default"},"spec":{"ports":[{"p...
Selector: app=python-web-selector2,tier=backend
Type: ClusterIP
IP: 10.103.97.40
Port: <unset> 5000/TCP
TargetPort: 5000/TCP
Endpoints: 172.17.0.8:5000
Session Affinity: None
Events: <none>
sakshiarora@Sakshis-MacBook-Pro Student_Registration %
I think that if the DNS entry pointed to an IP in the 172.17.0.x range it would work, but I am not sure why that is not what shows up in the DNS entry. Any pointers?
If you want to access python-data-service from outside the cluster using the NodePort, and you are using minikube, you should be able to do so with curl $(minikube service python-data-service --url) from anywhere outside the cluster, i.e. from your own machine.
If you want to communicate between two microservices within the cluster, simply use a ClusterIP type service instead of a NodePort type.
To identify whether it is a service issue or a pod issue, use the pod IP directly in the curl command. From the output of kubectl describe svc python-data-service, the pod IP behind python-data-service is 172.17.0.6, so try curl 172.17.0.6:5000/getdata
In order to start debugging your services I would suggest the following steps:
Check that your service 1 is accessible as a Pod:
kubectl run test1 -it --rm=true --image=busybox --restart=Never -n default -- wget -O - http://172.17.0.6:5000
Check that your service 2 is accessible as a Pod:
kubectl run test2 -it --rm=true --image=busybox --restart=Never -n default -- wget -O - 172.17.0.8:5000
Then, check that your service 1 is accessible as a Service using the corresponding cluster IP and then DNS Name:
kubectl run test2 -it --rm=true --image=busybox --restart=Never -n default -- wget -O - 10.107.11.236:5000
kubectl run test2 -it --rm=true --image=busybox --restart=Never -n default -- wget -O - http://python-data-service:5000
Then, check that your service 2 is accessible as a Service using the corresponding cluster IP and then DNS Name:
kubectl run test2 -it --rm=true --image=busybox --restart=Never -n default -- wget -O - 10.103.97.40:5000
kubectl run test2 -it --rm=true --image=busybox --restart=Never -n default -- wget -O - http://python-data-service2:5000
Then, if needed, check that your first service is accessible through its node port (you would need to know the IP address of the node where the service has been exposed; for instance, in minikube this should work:)
wget -O - http://192.168.99.101:30400
From your Service manifests I can recommend, as a good practice, specifying both port and targetPort, as you can see at
https://canterafonseca.eu/kubernetes/certification/application/developer/cncf/k8s/cloud/native/computing/ckad/deployments/services/preparation-k8s-ckad-exam-part4-services.html#-services
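A sketch of what that would look like for the first service (here targetPort is assumed to equal the containerPort, 5000):
ports:
- port: 5000        # port the Service listens on
  targetPort: 5000  # containerPort of the backing pods
  nodePort: 30400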
On the other hand, if you only need to expose one of the services to the outside world, you can create a headless service (see also my blog post above).
Ping doesn't work on Service ClusterIP addresses because they are virtual addresses created by iptables rules that redirect packets to the endpoints (pods).
You should be able to ping a pod, but not a service.
You can use curl or wget
For example wget -qO- POD_IP:80
or You can try
wget -qO- http://your-service-name:port/yourpath
curl POD_IP:port_number
Are you able to connect to your pods at all? Maybe try port-forward to see if you can connect, and then check the connectivity between the two pods (a sketch follows below).
Lastly, check whether there is a default-deny network policy set; maybe you have some restrictions at the network level:
kubectl get networkpolicy -n <namespace>
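A sketch of the port-forward check mentioned above (service and port taken from the question's manifests):
kubectl port-forward service/python-data-service 5000:5000
curl http://127.0.0.1:5000/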
Try looking at the logs using kubectl logs PODNAME so that you know what's happening. At first sight, I think you need to expose the ports of both services: kubectl port-forward svc/yourService PORT:PORT.

Kubernetes ingress (hostNetwork=true), can't reach service by node IP - GCP

I am trying to expose a Deployment using an Ingress whose controller DaemonSet has hostNetwork=true, which should allow me to skip the additional LoadBalancer layer and expose my service directly on the Kubernetes node's external IP. Unfortunately I can't reach the Ingress controller from the external network.
I am running Kubernetes version 1.11.16-gke.2 on GCP.
I set up my fresh cluster like this:
gcloud container clusters get-credentials gcp-cluster
kubectl create serviceaccount --namespace kube-system tiller
kubectl create clusterrolebinding tiller-cluster-rule --clusterrole=cluster-admin --serviceaccount=kube-system:tiller
helm init --service-account tiller --upgrade
helm install --name ingress --namespace ingress-nginx --set rbac.create=true,controller.kind=DaemonSet,controller.service.type=ClusterIP,controller.hostNetwork=true stable/nginx-ingress
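One quick sanity check (a sketch, names as created by the chart above): with hostNetwork=true the controller pod's IP should be the node's own IP.
kubectl -n ingress-nginx get pods -o wide    # with hostNetwork=true the pod IP column matches the node IP
kubectl -n ingress-nginx get ds              # confirms the controller is running as a DaemonSet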
I run the deployment:
cat <<EOF | kubectl create -f -
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-node
spec:
  selector:
    matchLabels:
      app: hello-node
  template:
    metadata:
      labels:
        app: hello-node
    spec:
      containers:
      - name: hello-node
        image: gcr.io/google-samples/node-hello:1.0
        ports:
        - containerPort: 8080
EOF
Then I create service:
cat <<EOF | kubectl create -f -
apiVersion: v1
kind: Service
metadata:
  name: hello-node
spec:
  ports:
  - port: 80
    targetPort: 8080
  selector:
    app: hello-node
EOF
and ingress resource:
cat <<EOF | kubectl create -f -
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: nginx
  name: hello-node-single-ingress
spec:
  backend:
    serviceName: hello-node
    servicePort: 80
EOF
I get the node external IP:
12:50 $ kubectl get nodes -o json | jq '.items[] | .status .addresses[] | select(.type=="ExternalIP") | .address'
"35.197.204.75"
Check if ingress is running:
12:50 $ kubectl get ing
NAME HOSTS ADDRESS PORTS AGE
hello-node-single-ingress * 35.197.204.75 80 8m
12:50 $ kubectl get pods --namespace ingress-nginx
NAME READY STATUS RESTARTS AGE
ingress-nginx-ingress-controller-7kqgz 1/1 Running 0 23m
ingress-nginx-ingress-default-backend-677b99f864-tg6db 1/1 Running 0 23m
12:50 $ kubectl get svc --namespace ingress-nginx
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
ingress-nginx-ingress-controller ClusterIP 10.43.250.102 <none> 80/TCP,443/TCP 24m
ingress-nginx-ingress-default-backend ClusterIP 10.43.255.43 <none> 80/TCP 24m
Then trying to connect from the external network:
curl 35.197.204.75
Unfortunately it times out
On Kubernetes Github there is a page regarding ingress-nginx (host-netork: true) setup:
https://kubernetes.github.io/ingress-nginx/deploy/baremetal/#via-the-host-network
which mentions:
"This approach does not leverage any Service object to expose the NGINX Ingress controller. If the ingress-nginx Service exists in the target cluster, it is recommended to delete it."
I've tried to follow that and delete ingress-nginx services:
kubectl delete svc --namespace ingress-nginx ingress-nginx-ingress-controller ingress-nginx-ingress-default-backend
but this doesn't help.
Any ideas how to set up the Ingress on the node's external IP? What am I doing wrong? The amount of confusion over running Ingress reliably without the LB overwhelms me. Any help is much appreciated!
EDIT:
When I create another service accessing my deployment via NodePort:
cat <<EOF | kubectl create -f -
apiVersion: v1
kind: Service
metadata:
  name: hello-node2
spec:
  ports:
  - port: 80
    targetPort: 8080
  type: NodePort
  selector:
    app: hello-node
EOF
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
hello-node ClusterIP 10.47.246.91 <none> 80/TCP 2m
hello-node2 NodePort 10.47.248.51 <none> 80:31151/TCP 6s
I still can't access my service e.g. using: curl 35.197.204.75:31151.
However when I create 3rd service with LoadBalancer type:
cat <<EOF | kubectl create -f -
apiVersion: v1
kind: Service
metadata:
  name: hello-node3
spec:
  ports:
  - port: 80
    targetPort: 8080
  type: LoadBalancer
  selector:
    app: hello-node
EOF
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
hello-node ClusterIP 10.47.246.91 <none> 80/TCP 7m
hello-node2 NodePort 10.47.248.51 <none> 80:31151/TCP 4m
hello-node3 LoadBalancer 10.47.250.47 35.189.106.111 80:31367/TCP 56s
I can access my service using the external LB: 35.189.106.111 IP.
The problem was missing firewall rules on GCP.
Found the answer: https://stackoverflow.com/a/42040506/2263395
Running:
gcloud compute firewall-rules create myservice --allow tcp:80,tcp:30301
Where 80 is the ingress port and 30301 is the NodePort. In production you would probably use just the ingress port.
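To double-check which ports actually need opening, and that the rule took effect (names as used above), a quick sketch:
kubectl get svc hello-node2                        # shows the assigned NodePort (31151 in this case)
gcloud compute firewall-rules describe myservice   # confirms the rule exists and which ports it allows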

how to get pod to pod ping on different nodes working?

I would like to be able to ping from one pod to another. That works if the pods are on the same host. It does not work if the pods are on different hosts.
$ kubectl get pod,svc -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE
pod/bbtest-5949c4d8c5-259wx 1/1 Running 1 2d 192.168.114.158 gordon-dm1.sdsc.edu <none>
pod/busybox-7cd98849ff-m75qv 0/1 Running 0 3m 192.168.78.30 gordon-dm3.sdsc.edu <none>
pod/nginx-64f497f8fd-j4qml 1/1 Running 0 20m 192.168.114.163 gordon-dm1.sdsc.edu <none>
pod/nginx-64f497f8fd-tw4vb 1/1 Running 0 22m 192.168.209.32 gordon-dm4.sdsc.edu <none>
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR
service/kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 17d <none>
$ kubectl run busybox --rm -ti --image busybox /bin/sh
/ # ping 192.168.114.163
PING 192.168.114.163 (192.168.114.163): 56 data bytes
^C
--- 192.168.114.163 ping statistics ---
4 packets transmitted, 0 packets received, 100% packet loss
/ #
I set up flannel but it doesn't make a difference. I tried felixconfiguration, only to get an error: resource does not exist: FelixConfiguration(default)
Any help getting pod-to-pod communication to work?
Best practice is to use a Service, open the specific nginx ports that need to receive connections, and use the service hostname.
Use curl -I <service-name>.<namespace> for testing.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-nginx
spec:
  selector:
    matchLabels:
      run: my-nginx
  replicas: 2
  template:
    metadata:
      labels:
        run: my-nginx
    spec:
      containers:
      - name: my-nginx
        image: nginx
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: my-nginx
  labels:
    run: my-nginx
spec:
  ports:
  - port: 80
    protocol: TCP
  selector:
    run: my-nginx
Result:
/ # curl -I my-nginx.default
HTTP/1.1 200 OK
Server: nginx/1.19.6
Date: Sun, 03 Jan 2021 17:44:26 GMT
Content-Type: text/html
Content-Length: 612
Last-Modified: Tue, 15 Dec 2020 13:59:38 GMT
Connection: keep-alive
ETag: "5fd8c14a-264"
Accept-Ranges: bytes
P.S. I used kubectl run alpine --rm -ti --image alpine /bin/sh and apk add curl