I have the following headless service in my Kubernetes cluster:
apiVersion: v1
kind: Service
metadata:
  labels:
    app: foobar
  name: foobar
spec:
  clusterIP: None
  clusterIPs:
  - None
  internalTrafficPolicy: Cluster
  ipFamilies:
  - IPv4
  ipFamilyPolicy: SingleStack
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80
  selector:
    app: foobar
  sessionAffinity: ClientIP
  sessionAffinityConfig:
    clientIP:
      timeoutSeconds: 10800
  type: ClusterIP
Behind it are a couple of pods managed by a StatefulSet.
Let's try to reach my pods individually.
Running an alpine pod to contact my pods:
> kubectl run alpine -it --tty --image=alpine -- sh
Adding curl to fetch the webpage:
alpine#> apk add curl
I can curl each of my pods:
alpine#> curl -s pod-1.foobar
hello from pod 1
alpine#> curl -s pod-2.foobar
hello from pod 2
It works just as expected.
Now I want to have a service that will loadbalance between my pods.
Let's try to use that same foobar service:
alpine#> curl -s foobar
hello from pod 1
alpine#> curl -s foobar
hello from pod 2
It works well. Or at least almost: in my headless service I have specified sessionAffinity, so as soon as I curl a pod once, I should keep sticking to it.
I've tried the exact same test with a normal (not headless) service, and this time it works as expected: it load balances between pods on the first requests BUT then sticks to the same pod afterwards.
Why doesn't sessionAffinity work on a headless service?
The affinity capability is provided by kube-proxy: only connections established through the proxy can have the client IP "stick" to a particular pod for a period of time. In the headless case, your client is given a list of pod IPs and it is up to your client application to select which IP to connect to. Because the order of IPs in the list is not always the same, a typical application that always picks the first IP will end up connecting to a backend pod more or less at random.
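You can see this directly from the alpine pod used above. A hedged illustration (the exact output depends on your DNS add-on): a headless service resolves straight to the pod IPs, so the client sees every backend and kube-proxy never gets a chance to apply affinity:
alpine#> nslookup foobar
# returns one A record per ready pod (pod-1, pod-2, ...), in varying order;
# the client, not kube-proxy, picks which IP to connect to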
Related
Do I still need to expose a pod via a ClusterIP service?
There are 3 pods: main, front, and api. I need to allow ingress and egress connections to the main pod only from the api and front pods. I also created service-main, a service that exposes the main pod on port 80.
I don't know how to test it. I tried:
k exec main -it -- sh
nc -z -v -w 5 service-main 80
and
k exec main -it -- sh
curl front:80
The main.yaml pod:
apiVersion: v1
kind: Pod
metadata:
  labels:
    app: main
    item: c18
  name: main
spec:
  containers:
  - image: busybox
    name: main
    command:
    - /bin/sh
    - -c
    - sleep 1d
The front.yaml:
apiVersion: v1
kind: Pod
metadata:
  labels:
    app: front
  name: front
spec:
  containers:
  - image: busybox
    name: front
    command:
    - /bin/sh
    - -c
    - sleep 1d
The api.yaml
apiVersion: v1
kind: Pod
metadata:
  labels:
    app: api
  name: api
spec:
  containers:
  - image: busybox
    name: api
    command:
    - /bin/sh
    - -c
    - sleep 1d
The main-to-front-networkpolicy.yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: front-end-policy
spec:
  podSelector:
    matchLabels:
      app: main
  policyTypes:
  - Ingress
  - Egress
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app: front
    ports:
    - port: 8080
  egress:
  - to:
    - podSelector:
        matchLabels:
          app: front
    ports:
    - port: 8080
What am I doing wrong? Do I still need to expose the main pod via a service? Shouldn't the network policy take care of this already?
Also, do I need to specify containerPort: 80 in the main pod? How do I test connectivity and ensure that ingress/egress works only between the main pod and the api and front pods?
I tried the lab from the CKAD prep course; it had 2 pods: secure-pod and web-pod. There was an issue with connectivity, and the solution was to create a network policy and test it using netcat from inside web-pod's container:
k exec web-pod -it -- sh
nc -z -v -w 1 secure-service 80
connection open
UPDATE: ideally I want answers to these:
A clear explanation of the difference between a Service and a NetworkPolicy.
If both a Service and a NetworkPolicy exist, in what order is the traffic/request evaluated? Does it go through the NetworkPolicy first and then the Service, or vice versa?
If I want the front and api pods to send/receive traffic to main, do I need separate services exposing the front and api pods?
Network policies and services are two different and independent Kubernetes resources.
Service is:
An abstract way to expose an application running on a set of Pods as a network service.
Good explanation from the Kubernetes docs:
Kubernetes Pods are created and destroyed to match the state of your cluster. Pods are nonpermanent resources. If you use a Deployment to run your app, it can create and destroy Pods dynamically.
Each Pod gets its own IP address, however in a Deployment, the set of Pods running in one moment in time could be different from the set of Pods running that application a moment later.
This leads to a problem: if some set of Pods (call them "backends") provides functionality to other Pods (call them "frontends") inside your cluster, how do the frontends find out and keep track of which IP address to connect to, so that the frontend can use the backend part of the workload?
Enter Services.
Also another good explanation in this answer.
For production you should use workload resources instead of creating pods directly:
Pods are generally not created directly and are created using workload resources. See Working with Pods for more information on how Pods are used with workload resources.
Here are some examples of workload resources that manage one or more Pods:
Deployment
StatefulSet
DaemonSet
And use services to make requests to your application.
Network policies are used to control traffic flow:
If you want to control traffic flow at the IP address or port level (OSI layer 3 or 4), then you might consider using Kubernetes NetworkPolicies for particular applications in your cluster.
Network policies target pods, not services (an abstraction). Check this answer and this one.
Regarding your examples - your network policy is correct (as I tested it below). The problem is that your cluster may not be compatible:
For Network Policies to take effect, your cluster needs to run a network plugin which also enforces them. Project Calico or Cilium are plugins that do so. This is not the default when creating a cluster!
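A quick, hedged way to check whether such a plugin is running (pod names vary between installations):
kubectl get pods -n kube-system | grep -Ei 'calico|cilium'
# no matching pods usually means no NetworkPolicy-enforcing plugin is installed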
Test on a kubeadm cluster with the Calico plugin: I created pods similar to yours, but I changed the container part:
spec:
  containers:
  - name: main
    image: nginx
    command: ["/bin/sh","-c"]
    args: ["sed -i 's/listen .*/listen 8080;/g' /etc/nginx/conf.d/default.conf && exec nginx -g 'daemon off;'"]
    ports:
    - containerPort: 8080
So the NGINX app is available on port 8080.
Let's check the pods' IPs:
user#shell:~$ kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
api 1/1 Running 0 48m 192.168.156.61 example-ubuntu-kubeadm-template-2 <none> <none>
front 1/1 Running 0 48m 192.168.156.56 example-ubuntu-kubeadm-template-2 <none> <none>
main 1/1 Running 0 48m 192.168.156.52 example-ubuntu-kubeadm-template-2 <none> <none>
Let's exec into the running main pod and try to make a request to the api pod (192.168.156.61):
root#main:/# curl 192.168.156.61:8080
<!DOCTYPE html>
...
<title>Welcome to nginx!</title>
It is working.
After applying your network policy:
user#shell:~$ kubectl apply -f main-to-front.yaml
networkpolicy.networking.k8s.io/front-end-policy created
user#shell:~$ kubectl exec -it main -- bash
root#main:/# curl 192.168.156.61:8080
...
It's not working anymore, which means the network policy was applied successfully: egress from main is now allowed only to the front pod.
A nice way to get more information about an applied network policy is the kubectl describe command:
user#shell:~$ kubectl describe networkpolicy front-end-policy
Name:         front-end-policy
Namespace:    default
Created on:   2022-01-26 15:17:58 +0000 UTC
Labels:       <none>
Annotations:  <none>
Spec:
  PodSelector:     app=main
  Allowing ingress traffic:
    To Port: 8080/TCP
    From:
      PodSelector: app=front
  Allowing egress traffic:
    To Port: 8080/TCP
    To:
      PodSelector: app=front
  Policy Types: Ingress, Egress
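To double-check the policy from the other side, you can repeat the test from the front and api pods. This is only a sketch, assuming curl is available in those images (as it was in the main pod above) and using main's IP 192.168.156.52 from the table above:
user#shell:~$ kubectl exec -it front -- curl -I 192.168.156.52:8080   # allowed: front matches the ingress rule
user#shell:~$ kubectl exec -it api -- curl -I 192.168.156.52:8080     # should hang and time out: api is not selected by the policy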
I am trying to create an application in Kubernetes (Minikube) and expose its service to other applications in the same cluster, but I get connection refused when I try to access this service from the Kubernetes node.
This application just listens on the HTTP address 127.0.0.1:9897 and sends a response.
Below is my yaml file:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: exporter-test
  namespace: datenlord-monitoring
  labels:
    app: exporter-test
spec:
  replicas: 1
  selector:
    matchLabels:
      app: exporter-test
  template:
    metadata:
      labels:
        app: exporter-test
    spec:
      containers:
      - name: prometheus
        image: 34342/hello_world
        ports:
        - containerPort: 9897
---
apiVersion: v1
kind: Service
metadata:
  name: exporter-test-service
  namespace: datenlord-monitoring
  annotations:
    prometheus.io/scrape: 'true'
    prometheus.io/port: '9897'
spec:
  selector:
    app: exporter-test
  type: NodePort
  ports:
  - port: 8080
    targetPort: 9897
    nodePort: 30001
After I apply this YAML file, the pod and the service deploy correctly, and I am sure the pod itself works correctly, since when I log into the pod with
kubectl exec -it exporter-test-* -- sh and just run curl 127.0.0.1:9897, I get the correct response.
Also, if I run kubectl port-forward exporter-test-* -n datenlord-monitoring 8080:9897, I get the correct response from localhost:8080. So the application should work well.
However, when I try to access this service from another application in the same K8s cluster via exporter-test-service.datenlord-monitoring.svc:30001, or just run curl nodeIp:30001 on a k8s node, or curl clusterIp:8080 on a k8s node, I get Connection refused.
Has anyone had the same issue before? I'd appreciate any help. Thanks!
You are mixing two things here. The NodePort is the port on which the application is available from outside your cluster. Inside your cluster you need to access your service via the service port, not the NodePort.
Try changing exporter-test-service.datenlord-monitoring.svc:30001 to exporter-test-service.datenlord-monitoring.svc:8080
Welcome to the community!
There are no issues with the behaviour you observed.
In short, a Kubernetes cluster (which is minikube in this case) has its own isolated network with internal DNS.
One way to access your service on the node: you specified a nodePort for your service, which made the service accessible on localhost:30001. You can check it by running the following on your host:
$ kubectl get svc -n datenlord-monitoring
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
exporter-test-service NodePort 10.111.191.159 <none> 8080:30001/TCP 2m45s
# Test:
curl -I localhost:30001
HTTP/1.1 200 OK
Another way to expose the service to the host network is to use minikube tunnel (run it in another console). You'll need to change the service type from NodePort to LoadBalancer:
$ kubectl get svc -n datenlord-monitoring
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
exporter-test-service LoadBalancer 10.111.191.159 10.111.191.159 8080:30001/TCP 18m
# Test:
$ curl -I 10.111.191.159:8080
HTTP/1.1 200 OK
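If you don't want to edit the manifest, one possible way to switch the type is kubectl patch (a sketch using the service name and namespace from your YAML):
kubectl patch svc exporter-test-service -n datenlord-monitoring -p '{"spec": {"type": "LoadBalancer"}}'
# then, in another console:
minikube tunnel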
Why some of the options don't work:
Connection to the service by its DNS name + NodePort: the NodePort is used to link the host IP and NodePort to the service port inside the Kubernetes cluster. The internal DNS is not accessible outside the Kubernetes cluster (unless you add the IPs to /etc/hosts on your host machine).
Inside the cluster you should use the internal DNS with the internal service port, which is 8080 in your case. You can check how this works with a separate container in the same namespace (e.g. the curlimages/curl image) and get the following:
$ kubectl exec -it curl -n datenlord-monitoring -- curl -I exporter-test-service:8080
HTTP/1.1 200 OK
Or from the pod in a different namespace:
$ kubectl exec -it curl-default-ns -- curl -I exporter-test-service.datenlord-monitoring.svc:8080
HTTP/1.1 200 OK
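For reference, a hedged sketch of how such throwaway test pods can be created (the names curl and curl-default-ns are just illustrative, and the curlimages/curl image is assumed to provide sleep):
kubectl run curl -n datenlord-monitoring --image=curlimages/curl --command -- sleep infinity
kubectl run curl-default-ns -n default --image=curlimages/curl --command -- sleep infinity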
I've attached useful links below which will help you understand this difference.
Edit: DNS inside deployed pod
$ kubectl exec -it exporter-test-xxxxxxxx-yyyyy -n datenlord-monitoring -- bash
root#exporter-test-74cf9f94ff-fmcqp:/# cat /etc/resolv.conf
nameserver 10.96.0.10
search datenlord-monitoring.svc.cluster.local svc.cluster.local cluster.local
options ndots:5
Useful links:
DNS for pods and services
Service types
Accessing apps in Minikube
You need to change 127.0.0.1:9897 to 0.0.0.0:9897 so that the application listens on all interfaces and accepts incoming requests from outside the pod's loopback.
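A hedged way to confirm which address the process is bound to from inside the pod (the placeholder pod name is illustrative, and the image may not ship netstat; ss or installing net-tools are alternatives):
kubectl exec -it exporter-test-<pod-suffix> -n datenlord-monitoring -- netstat -tlnp
# 127.0.0.1:9897 -> reachable only inside the pod's own network namespace
# 0.0.0.0:9897   -> reachable from other pods and via the Service/NodePort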
I am new to Kubernetes and I'm working on deploying an application within a new Kubernetes cluster.
Currently, the service running has multiple pods that need to communicate with each other. I'm looking for a general approach to debugging the issue, rather than getting into the specifics of the service, as the question would become much too specific.
The pods within the cluster are throwing an error:
err="Get \"http://testpod.mynamespace.svc.cluster.local:8080/": dial tcp 10.10.80.100:8080: connect: connection refused"
Both pods are in the same cluster.
What are the best steps to take to debug this?
I have tried running:
kubectl exec -it testpod --namespace mynamespace -- cat /etc/resolv.conf
And this returns:
search mynamespace.svc.cluster.local svc.cluster.local cluster.local us-east-2.compute.internal
Which I found here: https://kubernetes.io/docs/concepts/services-networking/dns-pod-service/
First of all, the following pattern:
my-svc.my-namespace.svc.cluster-domain.example
is applicable only to FQDNs of Services, not Pods, which have the following form:
pod-ip-address.my-namespace.pod.cluster-domain.example
e.g.:
172-17-0-3.default.pod.cluster.local
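As a hedged illustration using the IP from your error message (assuming nslookup is available in the image and the cluster DNS has the common "pods insecure" setting):
kubectl exec -it testpod --namespace mynamespace -- nslookup 10-10-80-100.mynamespace.pod.cluster.local
# with that setting, this resolves back to 10.10.80.100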
So you're actually querying the cluster DNS about the FQDN of the Service named testpod, not the FQDN of the Pod. Judging by the fact that it resolves successfully, such a Service already exists in your cluster but is most probably misconfigured. The fact that you're getting the error message connection refused can mean the following:
your Service FQDN testpod.mynamespace.svc.cluster.local has been successfully resolved
(otherwise you would receive something like curl: (6) Could not resolve host: testpod.default.svc.cluster.local)
you've successfully reached your testpod Service
(otherwise, i.e. if it existed but wasn't listening on the 8080 port you're trying to connect to, you would receive a timeout, e.g. curl: (7) Failed to connect to testpod.default.svc.cluster.local port 8080: Connection timed out)
you've reached the Pod exposed by the testpod Service (you've been successfully redirected to it by the testpod Service)
but once the Pod is reached, you're trying to connect to an incorrect port, and that's why the connection is refused by the server
My best guess is that your Pod in fact listens on a different port, like 80, but you exposed it via the ClusterIP Service by specifying only the --port value, e.g. with:
kubectl expose pod testpod --port=8080
In such a case both --port (the port of the Service) and --target-port (the port of the Pod) get the same value. In other words, you've created a Service like the one below:
apiVersion: v1
kind: Service
metadata:
  name: testpod
spec:
  ports:
  - protocol: TCP
    port: 8080
    targetPort: 8080
And you probably should have exposed it either this way:
kubectl expose pod testpod --port=8080 --target-port=80
or with the following yaml manifest:
apiVersion: v1
kind: Service
metadata:
  name: testpod
spec:
  ports:
  - protocol: TCP
    port: 8080
    targetPort: 80
Of course your targetPort may be different from 80, but connection refused in such a case can mean only one thing: the target HTTP server (running in a Pod) refuses the connection on port 8080 (most probably because it isn't listening on it). You didn't specify what image you're using, whether it's a standard nginx webserver or something based on your custom image. But if it's nginx and wasn't configured differently, it listens on port 80.
For further debugging, you can attach to your Pod:
kubectl exec -it testpod --namespace mynamespace -- /bin/sh
and if the netstat command is not present (the most likely scenario), run:
apt update && apt install net-tools
and then check with netstat -ntlp which port your container is listening on.
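A complementary check from outside the Pod is to compare the Service's targetPort with the ports actually exposed by its endpoints (a sketch using the names from your question):
kubectl get svc testpod -n mynamespace -o yaml    # look at ports: port / targetPort
kubectl get endpoints testpod -n mynamespace      # shows the <podIP>:<targetPort> pairs the Service forwards to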
I hope this helps you solve your issue. In case of any doubts, don't hesitate to ask.
I have a timeout problem with my site hosted on a Kubernetes cluster provided by DigitalOcean.
u#macbook$ curl -L fork.example.com
curl: (7) Failed to connect to fork.example.com port 80: Operation timed out
I have tried everything listed on the Debug Services page. I use a k8s service named df-stats-site.
u#pod$ nslookup df-stats-site
Server: 10.245.0.10
Address: 10.245.0.10#53
Name: df-stats-site.deepfork.svc.cluster.local
Address: 10.245.16.96
It gives the same output when I do it from the node:
u#node$ nslookup df-stats-site.deepfork.svc.cluster.local 10.245.0.10
Server: 10.245.0.10
Address: 10.245.0.10#53
Name: df-stats-site.deepfork.svc.cluster.local
Address: 10.245.16.96
With the help of the "Does the Service work by IP?" part of that page, I tried the following command and got the expected output.
u#node$ curl 10.245.16.96
*correct response*
Which should mean that everything is fine with DNS and the service. I confirmed that kube-proxy is running with the following command:
u#node$ ps auxw | grep kube-proxy
root 4194 0.4 0.1 101864 17696 ? Sl Jul04 13:56 /hyperkube proxy --config=...
But something seems to be wrong with my iptables rules:
u#node$ iptables-save | grep df-stats-site
(unfortunately, I was not able to copy the output from node, see the screenshot below)
It is recommended to restart kube-proxy with the -v flag set to 4, but I don't know how to do that with a DigitalOcean-provided cluster.
That's the configuration I use:
apiVersion: v1
kind: Service
metadata:
  name: df-stats-site
spec:
  ports:
  - port: 80
    targetPort: 8002
  selector:
    app: df-stats-site
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: df-stats-site
  annotations:
    kubernetes.io/ingress.class: nginx
    certmanager.k8s.io/cluster-issuer: letsencrypt-prod
spec:
  tls:
  - hosts:
    - fork.example.com
    secretName: letsencrypt-prod
  rules:
  - host: fork.example.com
    http:
      paths:
      - backend:
          serviceName: df-stats-site
          servicePort: 80
Also, I have a NGINX Ingress Controller set up with the help of this answer.
I must note that it worked fine before. I'm not sure what caused this, but restarting the cluster would be great, though I don't know how to do it without removing all the resources.
The solution for me was to add HTTP and HTTPS inbound rules in the Firewall (these are missing by default).
For DigitalOcean provided Kubernetes cluster, you can open it at https://cloud.digitalocean.com/networking/firewalls/.
UPDATE: Make sure to create a new firewall record rather than editing an existing one. Otherwise, your rules will be automatically removed in a couple of hours/days, because DigitalOcean Kubernetes manages that firewall and periodically restores its own set of rules.
ClusterIP services are only accessible from within the cluster. If you want to access them from outside the cluster, they need to be configured as NodePort or LoadBalancer.
If you are just trying to test something locally, you can use kubectl port-forward to forward a port on your local machine to a ClusterIP service on a remote cluster. Here's an example of creating a deployment from an image, exposing it as a ClusterIP service, then accessing it via kubectl port-forward:
$ kubectl create deployment hello-world --image=rancher/hello-world --replicas=2
$ kubectl expose deployment hello-world --type=ClusterIP --port=8080 --target-port=80
$ kubectl port-forward svc/hello-world 8080:8080
This service is now accessible from my local computer at http://127.0.0.1:8080
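A quick check from another shell (assuming nothing else is already bound to local port 8080):
curl -I http://127.0.0.1:8080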
I use minikube to create a local Kubernetes cluster.
I create a ReplicationController via the webapp-rc.yaml file:
apiVersion: v1
kind: ReplicationController
metadata:
  name: webapp
spec:
  replicas: 2
  template:
    metadata:
      name: webapp
      labels:
        app: webapp
    spec:
      containers:
      - name: webapp
        image: tomcat
        ports:
        - containerPort: 8080
and I print the pods' IPs to stdout:
kubectl get pods -l app=webapp -o yaml | grep podIP
podIP: 172.17.0.18
podIP: 172.17.0.1
and I want to access a pod using curl:
curl 172.17.0.18:8080
But this gives me: curl: (52) Empty reply from server
I know I can access my application in the Docker container in the pod via a service.
I found this code in a book, but the book does not give the context for executing it.
Using minikube, how can I access a pod via its pod IP using curl from the host machine?
update 1
I found a way using kubectl proxy:
➜ ~ kubectl proxy
Starting to serve on 127.0.0.1:8001
and then I can access the pod via curl like this:
curl http://localhost:8001/api/v1/namespaces/default/pods/webapp-jkdwz/proxy/
The pod name webapp-jkdwz can be found with the command kubectl get pods -l app=webapp.
update 2
minikube ssh - log into the minikube VM
and then I can use curl <podIP>:<podPort>, which in my case is curl 172.17.0.18:8080
First of all, the tomcat image exposes port 8080, not 80, so the correct YAML would be:
apiVersion: v1
kind: ReplicationController
metadata:
  name: webapp
spec:
  replicas: 2
  template:
    metadata:
      name: webapp
      labels:
        app: webapp
    spec:
      containers:
      - name: webapp
        image: tomcat
        ports:
        - containerPort: 8080
minikube runs inside a virtual machine, so curl 172.17.0.18:8080 will only work from inside that virtual machine.
You can always create a service to expose your apps:
kubectl expose rc webapp --type=NodePort
And use the following command to get the URL:
minikube service webapp --url
If you need to query a specific pod, use port forwarding:
kubectl port-forward <POD NAME> 8080
Or just ssh into minikube's virtual machine and query from there.
That command is correct, but it only works from a machine that has access to the overlay network. (In the case of minikube, the host machine does not have that access by default.)
You can set up a proxy to your pod with:
kubectl port-forward [name of your pod] [pod port]
Thereafter you can (from another shell):
curl 127.0.0.1:port/path
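For example, with the webapp pod from the question above (the pod name webapp-jkdwz comes from kubectl get pods -l app=webapp, and tomcat serves on 8080):
kubectl port-forward webapp-jkdwz 8080:8080
# from another shell:
curl 127.0.0.1:8080/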
See also: https://kubernetes.io/docs/tasks/access-application-cluster/port-forward-access-application-cluster/#forward-a-local-port-to-a-port-on-the-pod