Kubernetes Service Endpoints health check

I've created a Kubernetes Service whose backends aren't part of the cluster but a fixed set of external nodes with fixed IPs, so I've also created an Endpoints resource with the same name:
apiVersion: v1
kind: Service
metadata:
  name: elk-svc
spec:
  ports:
  - port: 9200
    targetPort: 9200
    protocol: TCP
---
kind: Endpoints
apiVersion: v1
metadata:
  name: elk-svc
subsets:
- addresses:
  - { ip: 172.21.0.40 }
  - { ip: 172.21.0.41 }
  - { ip: 172.21.0.42 }
  ports:
  - port: 9200
Description of Service and Endpoints:
$ kubectl describe svc elk-svc
Name: elk-svc
Namespace: default
Labels: <none>
Annotations: kubectl.kubernetes.io/last-applied-configuration={"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"name":"elk-svc","namespace":"default"},"spec":{"ports":[{"port":9200,"protocol":"TCP"...
Selector: <none>
Type: ClusterIP
IP: 10.233.17.18
Port: <unset> 9200/TCP
Endpoints: 172.21.0.40:9200,172.21.0.41:9200,172.21.0.42:9200
Session Affinity: None
Events: <none>
$ kubectl describe ep elk-svc
Name: elk-svc
Namespace: default
Labels: <none>
Annotations: kubectl.kubernetes.io/last-applied-configuration={"apiVersion":"v1","kind":"Endpoints","metadata":{"annotations":{},"name":"elk-svc","namespace":"default"},"subsets":[{"addresses":[{"ip":"172.21.0.40"...
Subsets:
Addresses: 172.21.0.40,172.21.0.41,172.21.0.42
NotReadyAddresses: <none>
Ports:
Name Port Protocol
---- ---- --------
<unset> 9200 TCP
Events: <none>
My pods are able to communicate with Elasticsearch using the internal cluster IP 10.233.17.18. Everything works fine!
My question: is there any way to have some kind of health check mechanism for the Service I've created, so that if one of my Elasticsearch nodes goes down (e.g. 172.21.0.40), the Service notices and no longer routes traffic to that node, only to the others? Is that possible?
Thanks.

This is not supported in k8s; there is no built-in health check for manually managed Endpoints.
For more background, see this issue raised for exactly this requirement:
https://github.com/kubernetes/kubernetes/issues/77738#issuecomment-491560980
For this use case the usual practice is to put a load balancer such as HAProxy in front of the nodes.

My suggestion would be to put a reverse proxy such as nginx or HAProxy in front of the Elasticsearch nodes and let it do the health checks for those nodes; a sketch follows below.
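A minimal sketch of that approach, run inside the cluster: an HAProxy Deployment whose configuration health-checks the three node IPs from the question. The names, labels, replica count and image tag are illustrative assumptions, not part of the original setup; with this in place, elk-svc would use a selector (app: es-proxy) instead of a manually managed Endpoints object.
apiVersion: v1
kind: ConfigMap
metadata:
  name: es-proxy-cfg            # hypothetical name
data:
  haproxy.cfg: |
    defaults
      mode tcp
      timeout connect 5s
      timeout client  30s
      timeout server  30s
    frontend es_in
      bind *:9200
      default_backend es_nodes
    backend es_nodes
      balance roundrobin
      # 'check' runs periodic TCP health checks; failing servers are
      # taken out of rotation until they recover
      server es1 172.21.0.40:9200 check
      server es2 172.21.0.41:9200 check
      server es3 172.21.0.42:9200 check
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: es-proxy
spec:
  replicas: 2
  selector:
    matchLabels:
      app: es-proxy
  template:
    metadata:
      labels:
        app: es-proxy
    spec:
      containers:
      - name: haproxy
        image: haproxy:2.4      # assumed image tag
        ports:
        - containerPort: 9200
        volumeMounts:
        - name: cfg
          mountPath: /usr/local/etc/haproxy/haproxy.cfg
          subPath: haproxy.cfg
      volumes:
      - name: cfg
        configMap:
          name: es-proxy-cfg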

Related

AKS load balancer service accessible on wrong port

I have a simple ASP.NET core Web API. It works locally. I deployed it in Azure AKS using the following yaml:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: sa-be
spec:
  selector:
    matchLabels:
      name: sa-be
  template:
    metadata:
      labels:
        name: sa-be
    spec:
      nodeSelector:
        kubernetes.io/os: linux
      containers:
      - name: sa-be
        image: francotiveron/anapi:latest
        resources:
          limits:
            memory: "64Mi"
            cpu: "250m"
---
apiVersion: v1
kind: Service
metadata:
  name: sa-be-s
spec:
  type: LoadBalancer
  selector:
    name: sa-be
  ports:
  - port: 8080
    targetPort: 80
The result is:
> kubectl get service sa-be-s
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
sa-be-s LoadBalancer 10.0.157.200 20.53.188.247 8080:32533/TCP 4h55m
> kubectl describe service sa-be-s
Name: sa-be-s
Namespace: default
Labels: <none>
Annotations: <none>
Selector: name=sa-be
Type: LoadBalancer
IP Families: <none>
IP: 10.0.157.200
IPs: <none>
LoadBalancer Ingress: 20.53.188.247
Port: <unset> 8080/TCP
TargetPort: 80/TCP
NodePort: <unset> 32533/TCP
Endpoints: 10.244.2.5:80
Session Affinity: None
External Traffic Policy: Cluster
Events: <none>
I expected to reach the Web API at http://20.53.188.247:32533/; instead it is reachable only at http://20.53.188.247:8080/.
Can someone explain:
Is this the expected behaviour?
If yes, what is the use of the NodePort (32533)?
Yes, this is expected.
Explained here: Kubernetes Service Ports - please read the full article to understand what is going on in the background.
The LoadBalancer part:
apiVersion: v1
kind: Service
metadata:
  name: sa-be-s
spec:
  type: LoadBalancer
  selector:
    name: sa-be
  ports:
  - port: 8080
    targetPort: 80
port is the port the cloud load balancer will listen on (8080 in our
example) and targetPort is the port the application is listening on in
the Pods/containers. Kubernetes works with your cloud’s APIs to create
a load balancer and everything needed to get traffic hitting the load
balancer on port 8080 all the way back to the Pods/containers in your
cluster listening on targetPort 80.
Now the main point:
Behind the scenes, many implementations create NodePorts to glue the cloud load balancer to the cluster. The usual traffic flow is: the client hits the load balancer on port 8080, the load balancer forwards to the NodePort (32533) on one of the nodes, and kube-proxy delivers the traffic to a Pod on targetPort 80. The NodePort is an implementation detail of that glue, which is why you call the external IP on 8080 rather than on 32533.
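To make the three ports concrete, here is the same Service with the NodePort pinned explicitly (a sketch; 32533 is just the value the cluster happened to auto-assign, and any free port in the default 30000-32767 range works):
apiVersion: v1
kind: Service
metadata:
  name: sa-be-s
spec:
  type: LoadBalancer
  selector:
    name: sa-be
  ports:
  - port: 8080        # what the cloud load balancer listens on
    targetPort: 80    # what the container listens on
    nodePort: 32533   # what kube-proxy opens on every node to glue the LB to the Pods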

Kubernetes Service is configured but I get connection refused

I have a Service configured on my Kubernetes cluster, but when I try to curl ip:port I get connection refused.
The following Service is configured:
apiVersion: v1
kind: Service
metadata:
  name: frontend
  namespace: production
spec:
  type: NodePort
  selector:
    app: frontend
  ports:
  - name: control-center-web
    port: 8008
    protocol: TCP
    targetPort: 8008
$ kubectl describe svc frontend
Name: frontend
Namespace: production
Labels: <none>
Annotations: kubectl.kubernetes.io/last-applied-configuration:
{"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"name":"frontend","namespace":"production"},"spec":{"ports":[{"name":...
Selector: app=frontend
Type: NodePort
IP: <ip>
Port: control-center-web 8008/TCP
TargetPort: 8008/TCP
NodePort: control-center-web 7851/TCP
Endpoints: <none>
Session Affinity: None
External Traffic Policy: Cluster
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Type 14m service-controller LoadBalancer -> NodePort
Why do I keep getting connection refused for the IP and port that I took from the svc?
The Endpoints field of the Service shows <none> instead of the IPs of the Pods. This happens when the Service selector app: frontend does not match the labels in the Pod spec; see the sketch below.
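For illustration, a Deployment whose Pod labels do match that selector might look like this (the image name is a placeholder; only the app: frontend labels matter here):
apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend
  namespace: production
spec:
  replicas: 1
  selector:
    matchLabels:
      app: frontend        # must match spec.selector of the Service
  template:
    metadata:
      labels:
        app: frontend      # the Pod labels the Service selects on
    spec:
      containers:
      - name: frontend
        image: frontend:latest   # placeholder image
        ports:
        - containerPort: 8008    # the Service targetPort
Once Pods with those labels are running, kubectl describe svc frontend should list their IPs under Endpoints instead of <none>.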

Unable to forward traffic using NodePort

I have an application running inside a Minikube K8s cluster. It's a simple REST endpoint. The issue is that after deployment I am unable to access the application from my local computer using the http://{node ip}:{node port} endpoint.
However, if I do:
kubectl port-forward (actual pod name) 8000:8000
The application becomes accessible at: 127.0.0.1:8000 from my local desktop.
Is this the right way? I believe it isn't, as I am directly forwarding my traffic to the Pod, and this port forwarding won't survive once the Pod is deleted.
What am I missing here, and what is the right way to resolve this?
I have also configured a NodePort service, which should handle this, but it doesn't seem to be working:
apiVersion: v1
kind: Service
metadata:
  labels:
    app: rest-api
  name: rest-api-np
  namespace: rest-api-namespace
spec:
  type: NodePort
  ports:
  - port: 8000
  selector:
    app: rest-api
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: rest-api
  name: rest-api-deployment
  namespace: rest-api-namespace
spec:
  replicas: 1
  selector:
    matchLabels:
      app: rest-api
  strategy: {}
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: rest-api
    spec:
      containers:
      - image: oneImage:latest
        name: rest-api
You are having issues because your Service is placed in the default namespace while your Deployment is in the rest-api-namespace namespace.
I deployed your YAML files, and when describing the Service there were no endpoints:
➜ k describe svc rest-api-np
Name: rest-api-np
Namespace: default
Labels: app=rest-api
Annotations: <none>
Selector: app=rest-api
Type: NodePort
IP: 10.100.111.228
Port: <unset> 8000/TCP
TargetPort: 8000/TCP
NodePort: <unset> 31668/TCP
Endpoints: <none>
Session Affinity: None
External Traffic Policy: Cluster
Events: <none>
The solution is to create the Service in the same namespace as the Deployment. Once you do that, an IP address and port will appear in the Endpoints field:
➜ k describe svc -n rest-api-namespace rest-api-np
Name: rest-api-np
Namespace: rest-api-namespace
Labels: app=rest-api
Annotations: <none>
Selector: app=rest-api
Type: NodePort
IP: 10.99.49.24
Port: <unset> 8000/TCP
TargetPort: 8000/TCP
NodePort: <unset> 32116/TCP
Endpoints: 172.18.0.3:8000
Session Affinity: None
External Traffic Policy: Cluster
Events: <none>
An alternative is to add the endpoints manually:
apiVersion: v1
kind: Endpoints
metadata:
  name: my-service # note that the Endpoints object and the Service need to have the same name
subsets:
- addresses:
  - ip: 192.0.2.42 # IP of the pod
  ports:
  - port: 8000
Since port forwarding works, the rest-api Service is properly connected to your Deployment. In that case the Service can be reached the following way.
First find out the Minikube IP:
minikube ip
Then the node port of your service:
kubectl get service rest-api-np
Once you have these two details, just open http://(minikube-ip):(node-port)
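Alternatively, minikube can print that URL in one step (assuming the Service sits in rest-api-namespace as above):
minikube service rest-api-np -n rest-api-namespace --url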

How do I create a LoadBalancer service over Pods created by a ReplicaSet/Deployment

I'm using a ReplicaSet to manage my pods and I try to expose these pods with a service. The Pods created by a ReplicaSet have randomized names.
NAME READY STATUS RESTARTS AGE
master 2/2 Running 0 20m
worker-4szkz 2/2 Running 0 21m
worker-hwnzt 2/2 Running 0 21m
I try to expose these Pods with a Service, since some policies keep me from using hostNetwork=true. I'm able to expose them by creating a NodePort service for each Pod with kubectl expose pod worker-xxxxx --type=NodePort.
This is clearly not a flexible way. I wonder how to create a Service (LoadBalancer type maybe?) to access to all the replicas dynamically in my ReplicaSet. If that comes with a Deployment that would be perfect too.
Thanks for any help and advice!
Edit:
I put a label on my ReplicaSet and a NodePort type Service called worker selecting that label. But I'm not able to ping worker in any of my pods. What's the correct way of doing this?
Below is what kubectl describe service worker gives. As the Endpoints show, the pods are picked up.
Name: worker
Namespace: default
Annotations: <none>
Selector: tag=worker
Type: NodePort
IP: 10.106.45.174
Port: port1 29999/TCP
TargetPort: 29999/TCP
NodePort: port1 31934/TCP
Endpoints: 10.32.0.3:29999,10.40.0.2:29999
Port: port2 29996/TCP
TargetPort: 29996/TCP
NodePort: port2 31881/TCP
Endpoints: 10.32.0.3:29996,10.40.0.2:29996
Port: port3 30001/TCP
TargetPort: 30001/TCP
NodePort: port3 31877/TCP
Endpoints: 10.32.0.3:30001,10.40.0.2:30001
Session Affinity: None
External Traffic Policy: Cluster
Events: <none>
I believe you can improve this a bit by using a Deployment instead of a ReplicaSet (this is now the standard way), i.e. you could have a Deployment as follows:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.7.9
        ports:
        - containerPort: 80
Then your service to match this would be:
apiVersion: v1
kind: Service
metadata:
  name: nginx-service
spec:
  # This is the important part, as this is what is used to route to
  # the pods created by your deployment
  selector:
    app: nginx
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
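Since the question asks specifically for a LoadBalancer, the same Service only needs a type field added (a sketch; on a cluster without a cloud provider integration the external IP will stay pending and NodePort is the usual fallback):
apiVersion: v1
kind: Service
metadata:
  name: nginx-service
spec:
  type: LoadBalancer    # ask the cloud provider for an external load balancer
  selector:
    app: nginx
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80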

Set static external IP for my load balancer on GKE

I am trying to set up a static external IP for my load balancer on GKE but having no luck. Here is my Kubernetes service config file:
kind: Service
apiVersion: v1
metadata:
  name: myAppService
spec:
  selector:
    app: myApp
  ports:
  - protocol: TCP
    port: 3001
    targetPort: 3001
  type: LoadBalancer
  loadBalancerIP: *********
This doesn't work. I expect to see my external IP as ********* but it just says pending:
➜ git:(master) kubectl get services
NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ********* <none> 443/TCP 5m
myAppService ********* <pending> 3001:30126/TCP 5m
More details:
➜ git:(master) kubectl describe services
Name: kubernetes
Namespace: default
Labels: component=apiserver
provider=kubernetes
Annotations: <none>
Selector: <none>
Type: ClusterIP
IP: *********
Port: https 443/TCP
Endpoints: *********
Session Affinity: ClientIP
Events: <none>
Name: myAppService
Namespace: default
Labels: <none>
Annotations: <none>
Selector: app=myApp
Type: LoadBalancer
IP: *********
Port: <unset> 3001/TCP
NodePort: <unset> 30126/TCP
Endpoints:
Session Affinity: None
Events:
FirstSeen LastSeen Count From SubObjectPath Type Reason Message
--------- -------- ----- ---- ------------- -------- ------ -------
5m 20s 7 service-controller Normal CreatingLoadBalancer Creating load balancer
5m 19s 7 service-controller Warning CreatingLoadBalancerFailed Error creating load balancer (will retry): Failed to create load balancer for service default/myAppService: Cannot EnsureLoadBalancer() with no hosts
Any ideas?
I've encountered the same problem, but after reading the docs carefully it turned out that I was just reserving the static IP incorrectly.
A Service of type LoadBalancer creates a network load balancer, which is regional. Therefore the static IP address you reserve also needs to be regional (in the region of your cluster).
When I switched to a regional address, everything worked fine for me.
This got me stuck as well; I hope someone finds this helpful.
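Concretely, the reservation just needs the --region flag instead of --global (a sketch; the address name and region are placeholders, use your cluster's region):
gcloud compute addresses create my-lb-ip --region us-central1
gcloud compute addresses describe my-lb-ip --region us-central1
The reserved address then goes into spec.loadBalancerIP of the Service, as in the question's manifest.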
In addition to what Dirk said, if you happen to reserve a global static IP address as opposed to a regional one, you need to use an Ingress, as described in the documentation: Configuring Domain Names with Static IP Addresses, specifically step 2b.
So basically you reserve the static IP:
gcloud compute addresses create helloweb-ip --global
and add an Ingress:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: helloweb
  # this is where you add your reserved ip
  annotations:
    kubernetes.io/ingress.global-static-ip-name: helloweb-ip
  labels:
    app: hello
spec:
  backend:
    serviceName: helloweb-backend
    servicePort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: helloweb-backend
  labels:
    app: hello
spec:
  type: NodePort
  selector:
    app: hello
    tier: web
  ports:
  - port: 8080
    targetPort: 8080
The doc also describes how to assign a static IP if you choose type "LoadBalancer", under step 2a.
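For completeness, that step 2a variant is just a plain LoadBalancer Service with loadBalancerIP set (a sketch; the IP below is a placeholder for a regional address reserved as described above):
apiVersion: v1
kind: Service
metadata:
  name: helloweb
  labels:
    app: hello
spec:
  type: LoadBalancer
  loadBalancerIP: 203.0.113.10   # placeholder: your reserved regional static IP
  selector:
    app: hello
    tier: web
  ports:
  - port: 8080
    targetPort: 8080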