Kubernetes Service gets Connection Refused

I am trying to create an application in Kubernetes (Minikube) and expose its service to other applications in the same cluster, but I get connection refused when I try to access this service from a Kubernetes node.
The application just listens on HTTP at 127.0.0.1:9897 and sends a response.
Below is my yaml file:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: exporter-test
  namespace: datenlord-monitoring
  labels:
    app: exporter-test
spec:
  replicas: 1
  selector:
    matchLabels:
      app: exporter-test
  template:
    metadata:
      labels:
        app: exporter-test
    spec:
      containers:
      - name: prometheus
        image: 34342/hello_world
        ports:
        - containerPort: 9897
---
apiVersion: v1
kind: Service
metadata:
  name: exporter-test-service
  namespace: datenlord-monitoring
  annotations:
    prometheus.io/scrape: 'true'
    prometheus.io/port: '9897'
spec:
  selector:
    app: exporter-test
  type: NodePort
  ports:
  - port: 8080
    targetPort: 9897
    nodePort: 30001
After I apply this yaml file, the pod and the service are deployed correctly, and I am sure the pod itself works, because when I log into the pod with
kubectl exec -it exporter-test-* -- sh and run curl 127.0.0.1:9897, I get the correct response.
Also, if I run kubectl port-forward exporter-test-* -n datenlord-monitoring 8080:9897, I get the correct response from localhost:8080. So the application should work well.
However, when I try to access this service from another application in the same K8s cluster via exporter-test-service.datenlord-monitoring.svc:30001, or run curl nodeIp:30001 on a k8s node, or run curl clusterIp:8080 on a k8s node, I get Connection refused.
Has anyone had the same issue before? I appreciate any help! Thanks!

You are mixing two things here. The NodePort is the port the application is reachable on from outside your cluster. Inside your cluster you need to access your service via the service port, not the NodePort.
Try changing exporter-test-service.datenlord-monitoring.svc:30001 to exporter-test-service.datenlord-monitoring.svc:8080
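A quick way to verify that in-cluster path (a sketch, assuming the curlimages/curl image can be pulled in your cluster):
$ kubectl run -it --rm curl-test --image=curlimages/curl -n datenlord-monitoring --restart=Never -- curl -I exporter-test-service:8080
# if this is still refused, check that the app listens on 0.0.0.0 rather than 127.0.0.1 (see the answer further below)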

Welcome to the community!
There is no issue with the behaviour you observed.
In short, a Kubernetes cluster (Minikube in this case) has its own isolated network with an internal DNS.
One way to access your service on the node: you specified a nodePort for your service, which makes the service accessible on localhost:30001. You can check it by running this on your host:
$ kubectl get svc -n datenlord-monitoring
NAME                    TYPE       CLUSTER-IP       EXTERNAL-IP   PORT(S)          AGE
exporter-test-service   NodePort   10.111.191.159   <none>        8080:30001/TCP   2m45s

# Test:
$ curl -I localhost:30001
HTTP/1.1 200 OK
Another way to expose the service to the host network is to use minikube tunnel (run it in another console). You'll need to change the service type from NodePort to LoadBalancer:
$ kubectl get svc -n datenlord-monitoring
NAME                    TYPE           CLUSTER-IP       EXTERNAL-IP      PORT(S)          AGE
exporter-test-service   LoadBalancer   10.111.191.159   10.111.191.159   8080:30001/TCP   18m

# Test:
$ curl -I 10.111.191.159:8080
HTTP/1.1 200 OK
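If you go this route, one way to switch the existing service type without re-editing the YAML (a small sketch using kubectl patch) is:
$ kubectl patch svc exporter-test-service -n datenlord-monitoring -p '{"spec": {"type": "LoadBalancer"}}'
$ minikube tunnel   # keep this running in a separate console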
Why some of the options don't work:
Connection to the service by its DNS name + NodePort. The NodePort links the host IP and the NodePort to the service port inside the Kubernetes cluster. The internal DNS is not accessible outside the Kubernetes cluster (unless you add the IPs to /etc/hosts on your host machine).
Inside the cluster you should use the internal DNS name with the internal service port, which is 8080 in your case. You can check how this works with a separate container in the same namespace (e.g. the curlimages/curl image) and get the following:
$ kubectl exec -it curl -n datenlord-monitoring -- curl -I exporter-test-service:8080
HTTP/1.1 200 OK
Or from the pod in a different namespace:
$ kubectl exec -it curl-default-ns -- curl -I exporter-test-service.datenlord-monitoring.svc:8080
HTTP/1.1 200 OK
I've attached useful links below which will help you understand this difference.
Edit: DNS inside the deployed pod
$ kubectl exec -it exporter-test-xxxxxxxx-yyyyy -n datenlord-monitoring -- bash
root@exporter-test-74cf9f94ff-fmcqp:/# cat /etc/resolv.conf
nameserver 10.96.0.10
search datenlord-monitoring.svc.cluster.local svc.cluster.local cluster.local
options ndots:5
Useful links:
DNS for pods and services
Service types
Accessing apps in Minikube

You need to change 127.0.0.1:9897 to 0.0.0.0:9897 so that the application listens on all interfaces, not only on the loopback interface. Binding to 127.0.0.1 inside the container means nothing is listening on the pod IP, so traffic forwarded by the Service (kube-proxy) is refused, while kubectl port-forward still works because it connects via the pod's loopback.
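A quick way to confirm the bind address from inside the pod (a sketch; it assumes netstat is available in the container image):
$ kubectl exec -it exporter-test-* -n datenlord-monitoring -- netstat -tln
# a line like "tcp 0 0 127.0.0.1:9897 ... LISTEN" means loopback-only; after the fix it should show 0.0.0.0:9897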

Related

Testing locally k8s distributed system

I'm new to k8s and I'm trying to build a distributed system. The idea is that a stateful pod will be spawned for each user.
The main services are two Python applications, MothershipService and Ship. MothershipService's purpose is to keep track of the ship-per-user mapping, do health checks, etc. Ship runs some (untrusted) user code.
MothershipService ----> Ship-user1 --- vol1
                  \
                   '--> Ship-user2 --- vol2
I can manage fine to get the ship service up:
> kubectl get all -o wide
NAME         READY   STATUS    RESTARTS   AGE   IP            NODE       NOMINATED NODE   READINESS GATES
pod/ship-0   1/1     Running   0          7d    10.244.0.91   minikube   <none>           <none>

NAME                 TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)    AGE   SELECTOR
service/ship         ClusterIP   None         <none>        8000/TCP   7d    app=ship
service/kubernetes   ClusterIP   10.96.0.1    <none>        443/TCP    7d    <none>

NAME                    READY   AGE   CONTAINERS   IMAGES
statefulset.apps/ship   1/1     7d    ship         ship
My question is how do I go about testing this via curl or a browser? These are all backend services so NodePort seems not the right approach since none of this should be accessible to the public. Eventually I will build a test-suite for all this and deploy on GKE.
ship.yml (pseudo-spec)
kind: Service
metadata:
  name: ship
spec:
  ports:
  - port: 8000
    name: ship
  clusterIP: None   # headless service
  ..
---
kind: StatefulSet
metadata:
  name: ship
spec:
  serviceName: "ship"
  replicas: 1
  template:
    spec:
      containers:
      - name: ship
        image: ship
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 8000
          name: ship
  ..
One possibility is to use the kubectl port-forward command to expose the pod port locally on your system. For example, if I use this deployment to run a simple web server listening on port 8000:
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: example
  name: example
spec:
  selector:
    matchLabels:
      app: example
  template:
    metadata:
      labels:
        app: example
    spec:
      containers:
      - args:
        - --port
        - "8000"
        image: docker.io/alpinelinux/darkhttpd
        name: web
        ports:
        - containerPort: 8000
          name: http
I can expose that on my local system by running:
kubectl port-forward deploy/example 8000:8000
As long as that port-forward command is running, I can point my browser (or curl) at http://localhost:8000 to access the service.
Alternately, I can use kubectl exec to run commands (like curl or wget) inside the pod:
kubectl exec -it deploy/example -- wget -O- http://127.0.0.1:8000
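Applied to the ship StatefulSet from the question, the same idea would look like this (a sketch; the pod name ship-0 is taken from the kubectl get all output above):
kubectl port-forward pod/ship-0 8000:8000
# then point curl or a browser at http://localhost:8000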
Example process on how to create a Kubernetes Service object that exposes an external IP address:
Creating a service for an application running in five pods:
Run a Hello World application in your cluster:
kubectl run hello-world --replicas=5 --labels="run=load-balancer-example" --image=gcr.io/google-samples/node-hello:1.0 --port=8080
(Note: on kubectl v1.18+ kubectl run only creates a single Pod and the --replicas flag has been removed; use kubectl create deployment hello-world --image=gcr.io/google-samples/node-hello:1.0 followed by kubectl scale deployment hello-world --replicas=5 instead.)
The preceding command creates a Deployment object and an associated ReplicaSet object. The ReplicaSet has five Pods, each of which runs the Hello World application.
Display information about the Deployment:
kubectl get deployments hello-world
kubectl describe deployments hello-world
Display information about your ReplicaSet objects:
kubectl get replicasets
kubectl describe replicasets
Create a Service object that exposes the deployment:
kubectl expose deployment hello-world --type=LoadBalancer --name=my-service
Display information about the Service:
kubectl get services my-service
The output is similar to this:
NAME         CLUSTER-IP     EXTERNAL-IP      PORT(S)    AGE
my-service   10.3.245.137   104.198.205.71   8080/TCP   54s
Note: If the external IP address is shown as <pending>, wait for a minute and enter the same command again.
Display detailed information about the Service:
kubectl describe services my-service
The output is similar to this:
Name: my-service
Namespace: default
Labels: run=load-balancer-example
Selector: run=load-balancer-example
Type: LoadBalancer
IP: 10.3.245.137
LoadBalancer Ingress: 104.198.205.71
Port: <unset> 8080/TCP
NodePort: <unset> 32377/TCP
Endpoints: 10.0.0.6:8080,10.0.1.6:8080,10.0.1.7:8080 + 2 more...
Session Affinity: None
Events:
Make a note of the external IP address exposed by your service. In this example, the external IP address is 104.198.205.71. Also note the value of Port. In this example, the port is 8080.
In the preceding output, you can see that the service has several endpoints: 10.0.0.6:8080,10.0.1.6:8080,10.0.1.7:8080 + 2 more. These are internal addresses of the pods that are running the Hello World application. To verify these are pod addresses, enter this command:
kubectl get pods --output=wide
The output is similar to this:
NAME                           ...   IP         NODE
hello-world-2895499144-1jaz9   ...   10.0.1.6   gke-cluster-1-default-pool-e0b8d269-1afc
hello-world-2895499144-2e5uh   ...   10.0.1.8   gke-cluster-1-default-pool-e0b8d269-1afc
hello-world-2895499144-9m4h1   ...   10.0.0.6   gke-cluster-1-default-pool-e0b8d269-5v7a
hello-world-2895499144-o4z13   ...   10.0.1.7   gke-cluster-1-default-pool-e0b8d269-1afc
hello-world-2895499144-segjf   ...   10.0.2.5   gke-cluster-1-default-pool-e0b8d269-cpuc
Use the external IP address to access the Hello World application:
curl http://<external-ip>:<port>
where <external-ip> is the external IP address of your Service, and <port> is the value of Port in your Service description.
The response to a successful request is a hello message:
Hello Kubernetes!
Please refer to How to Use external IP in GKE and Exposing an External IP Address to Access an Application in a Cluster for more information.

Why sessionAffinity doesn't work on a headless service

I have the following headless service in my Kubernetes cluster:
apiVersion: v1
kind: Service
metadata:
  labels:
    app: foobar
  name: foobar
spec:
  clusterIP: None
  clusterIPs:
  - None
  internalTrafficPolicy: Cluster
  ipFamilies:
  - IPv4
  ipFamilyPolicy: SingleStack
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80
  selector:
    app: foobar
  sessionAffinity: ClientIP
  sessionAffinityConfig:
    clientIP:
      timeoutSeconds: 10800
  type: ClusterIP
Behind it are running a couple of pods managed by a StatefulSet.
Let's try to reach my pods individually.
Running an alpine pod to contact my pods:
> kubectl run alpine -it --tty --image=alpine -- sh
Adding curl to fetch a webpage:
alpine#> apk add curl
I can curl into each of my pods:
alpine#> curl -s pod-1.foobar
hello from pod 1
alpine#> curl -s pod-2.foobar
hello from pod 2
It works just as expected.
Now I want a service that will load-balance between my pods.
Let's try to use that same foobar service:
alpine#> curl -s foobar
hello from pod 1
alpine#> curl -s foobar
hello from pod 2
It works just as well. At least almost: in my headless service I have specified sessionAffinity, so as soon as I curl a pod once, I should stick to it.
I've tried the exact same test with a normal service (not headless) and that time it works as expected: it load-balances between pods on the first run BUT then sticks to the same pod afterwards.
Why doesn't sessionAffinity work on a headless service?
The affinity capability is provided by kube-proxy; only connections established through the proxy can have the client IP "stick" to a particular pod for a period of time. In the case of a headless service, your client is given a list of pod IPs and it is up to your client app to select which IP to connect to. Because the order of IPs in the list is not always the same, a typical app that always picks the first IP will end up connecting to the backend pods more or less randomly.
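You can see this from the alpine pod used in the question: a headless service resolves to one A record per backing pod, and no proxy sits in between to enforce affinity (the addresses below are only illustrative):
alpine#> nslookup foobar
Name:      foobar.default.svc.cluster.local
Address 1: 10.244.0.12 pod-1.foobar.default.svc.cluster.local
Address 2: 10.244.0.13 pod-2.foobar.default.svc.cluster.local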

Can you expose your local minikube cluster to be accessible from a browser without editing /etc/hosts?

I am following this tutorial on how to expose your local cluster for external access.
I only need to be able to check my application from a browser, without exposing the app to the Internet.
> kubectl get service web
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
web NodePort 10.98.217.114 <none> 8080:32718/TCP 10m
> minikube service web --url
http://192.168.49.2:32718
Followed the guide until the /etc/hosts part. I set up the ingress:
> kubectl get ingress
NAME CLASS HOSTS ADDRESS PORTS AGE
example-ingress nginx hello-world.info 192.168.49.2 80 96s
For various reasons I cannot edit the /etc/hosts file on my Windows machine; it says another process is using it. However, neither 192.168.49.2 nor http://192.168.49.2:32718 in the browser returns anything, and neither does curl 192.168.49.2 (with or without :32718). I don't think that should be expected, as the hosts file merely maps hello-world.info to the IP; I should be able to access my app with just the IP. What am I missing here?
Kubectl v1.24.1 (kustomize v4.5.4, server v1.23.3), Minikube v1.25.2, Windows 10, Minikube with the Docker driver.
Okay, my solution to the problem was this: port-forward to the ingress-controller pod (not to the Ingress object itself, because that doesn't seem to be possible).
Sample ingress file for a service named "web":
# example-ingress.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: web
            port:
              number: 8080
Run it with
kubectl apply -f example-ingress.yaml
Check that it's running
> kubectl get ingress
NAME CLASS HOSTS ADDRESS PORTS AGE
example-ingress nginx * 192.168.49.2 80 23h
If you ssh into minikube (minikube ssh), you can curl 192.168.49.2:80 and it returns the proper output.
Output of the nginx-controller pods:
> kubectl get pod -n ingress-nginx
NAME                                       READY   STATUS      RESTARTS      AGE
ingress-nginx-admission-create-56gbc       0/1     Completed   0             46h
ingress-nginx-admission-patch-fqf92        0/1     Completed   0             46h
ingress-nginx-controller-cc8496874-7znt5   1/1     Running     4 (39m ago)   46h
Forward port to it:
> kubectl port-forward ingress-nginx-controller-cc8496874-7znt5 8080:80 -n ingress-nginx
Then check out localhost:8080. If it returns an nginx 404, your Ingress.yaml setup is probably wrong. Otherwise it works for me.
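As a small variation (not from the original answer): you can also port-forward to the ingress-nginx controller Service instead of a specific pod, so the forward survives pod restarts (this assumes the standard ingress-nginx service name):
> kubectl port-forward -n ingress-nginx svc/ingress-nginx-controller 8080:80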

k3s - Can't access my service based on service name

I have created a service like this:
apiVersion: v1
kind: Service
metadata:
  name: amen-sc
spec:
  ports:
  - name: http
    port: 3030
    targetPort: 8000
  selector:
    component: scc-worker
I am able to access this service from within the pods of the same cluster (and namespace) using the IP address I get from kubectl get svc, but I am not able to access it using the service name, e.g. curl amen-sc:3030.
Please advise what could possibly be wrong.
I intend to expose certain pods only within my cluster and access them using the service-name:port format.
Make sure you have the DNS service configured and the corresponding pods running:
kubectl get svc -n kube-system -l k8s-app=kube-dns
and
kubectl get pods -n kube-system -l k8s-app=kube-dns
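If those look healthy, you can also test resolution from inside the cluster (a sketch; the nameserver address is only illustrative, k3s typically uses 10.43.0.10):
kubectl run -it --rm dnstest --image=busybox:1.36 --restart=Never -- nslookup amen-sc
# Server:    10.43.0.10
# Name:      amen-sc.default.svc.cluster.local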

Kubernetes service is reachable from node but not from my machine

I have a timeout problem with my site hosted on Kubernetes cluster provided by DigitalOcean.
u#macbook$ curl -L fork.example.com
curl: (7) Failed to connect to fork.example.com port 80: Operation timed out
I have tried everything listed on the Debug Services page. I use a k8s service named df-stats-site.
u#pod$ nslookup df-stats-site
Server: 10.245.0.10
Address: 10.245.0.10#53
Name: df-stats-site.deepfork.svc.cluster.local
Address: 10.245.16.96
It gives the same output when I do it from a node:
u#node$ nslookup df-stats-site.deepfork.svc.cluster.local 10.245.0.10
Server: 10.245.0.10
Address: 10.245.0.10#53
Name: df-stats-site.deepfork.svc.cluster.local
Address: 10.245.16.96
With the help of Does the Service work by IP? part of the page, I tried the following command and got the expected output.
u#node$ curl 10.245.16.96
*correct response*
Which should mean that everything is fine with DNS and the service. I confirmed that kube-proxy is running with the following command:
u#node$ ps auxw | grep kube-proxy
root 4194 0.4 0.1 101864 17696 ? Sl Jul04 13:56 /hyperkube proxy --config=...
But I have something wrong with the iptables rules:
u#node$ iptables-save | grep df-stats-site
(unfortunately, I was not able to copy the output from the node)
It is recommended to restart kube-proxy with the -v flag set to 4, but I don't know how to do that with a DigitalOcean-provided cluster.
That's the configuration I use:
apiVersion: v1
kind: Service
metadata:
  name: df-stats-site
spec:
  ports:
  - port: 80
    targetPort: 8002
  selector:
    app: df-stats-site
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: df-stats-site
  annotations:
    kubernetes.io/ingress.class: nginx
    certmanager.k8s.io/cluster-issuer: letsencrypt-prod
spec:
  tls:
  - hosts:
    - fork.example.com
    secretName: letsencrypt-prod
  rules:
  - host: fork.example.com
    http:
      paths:
      - backend:
          serviceName: df-stats-site
          servicePort: 80
Also, I have an NGINX Ingress Controller set up with the help of this answer.
I must note that it worked fine before. I'm not sure what caused this, but restarting the cluster would be great, though I don't know how to do that without removing all the resources.
The solution for me was to add HTTP and HTTPS inbound rules in the Firewall (these are missing by default).
For a DigitalOcean-provided Kubernetes cluster, you can open it at https://cloud.digitalocean.com/networking/firewalls/.
UPDATE: Make sure to create a new firewall record rather than editing the existing one. Otherwise, your rules will be automatically removed in a couple of hours/days, because DigitalOcean Kubernetes maintains its own set of rules in that firewall.
ClusterIP services are only accessible from within the cluster. If you want to access one from outside the cluster, it needs to be configured as NodePort or LoadBalancer.
If you are just trying to test something locally, you can use kubectl port-forward to forward a port on your local machine to a ClusterIP service in a remote cluster. Here's an example of creating a deployment from an image, exposing it as a ClusterIP service, and then accessing it via kubectl port-forward:
$ kubectl run --image=rancher/hello-world hello-world --replicas 2
$ kubectl expose deployment hello-world --type=ClusterIP --port=8080 --target-port=80
$ kubectl port-forward svc/hello-world 8080:8080
This service is now accessible from my local computer at http://127.0.0.1:8080
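If you instead want the service reachable without a port-forward (a sketch, reusing the hello-world service created above), you could change its type to NodePort and use the node's IP with the assigned port:
$ kubectl patch svc hello-world -p '{"spec": {"type": "NodePort"}}'
$ kubectl get svc hello-world   # the PORT(S) column shows the assigned node port, e.g. 8080:3xxxx/TCP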