Kubernetes: Unable to explicitly set endpoint for a service

Hi Kubernetes Experts,
I have an application cluster running in an Azure Kubernetes Service (AKS) cluster. There are 3 pods inside the application cluster. The app is designed so that each pod listens on a different port: for example, pod 1 listens on 31090, pod 2 on 31091, and pod 3 on 31092.
This application needs to be reachable from outside the network, so I need to create a separate load balancer service for each of the pods.
In the service, I cannot use the app name/label as the selector, because the service would then distribute traffic across all 3 pods in a round-robin way. As described above, a given port (say 31090) is open on only one pod, so external connections to the load balancer IP would fail 2/3 of the time.
So I am trying to create 3 different load balancer services, one per pod, without specifying a selector, and then assigning an endpoint to each of them individually.
The approach is explained here:
In Kubernetes, how does one select a pod by name in a service selector?
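In short, the approach pairs a selector-less Service with a manually created Endpoints object, roughly like this (a sketch with illustrative names; as I understand it, if the Service port has a name, the Endpoints port must carry the same name for the two to be matched up):
---
apiVersion: v1
kind: Service
metadata:
  name: mypod-service
spec:
  type: LoadBalancer
  ports:
  - name: b0
    protocol: TCP
    port: 31090
    targetPort: 31090
---
apiVersion: v1
kind: Endpoints
metadata:
  name: mypod-service   # must match the Service name exactly
subsets:
- addresses:
  - ip: 10.240.1.32     # the pod IP
  ports:
  - name: b0            # must match the Service port name
    port: 31090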
But after the Endpoints object is created, the service shows its endpoints as blank. See below.
First I created only the service:
---
apiVersion: v1
kind: Service
metadata:
  name: myservice
spec:
  ports:
  - protocol: TCP
    port: 31090
    targetPort: 31090
    name: b0
  type: LoadBalancer
After this, the service shows its endpoints as "<none>". So far, so good.
kubectl describe service myservice
Name: myservice
Namespace: confluent
Labels: <none>
Annotations: <none>
Selector: <none>
Type: LoadBalancer
IP: 10.0.184.1
Port: b0 31090/TCP
TargetPort: 31090/TCP
NodePort: b0 31354/TCP
**Endpoints: <none>**
Session Affinity: None
External Traffic Policy: Cluster
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal EnsuringLoadBalancer 3s service-controller Ensuring load balancer
Then I created the Endpoints object. I have made sure the names match between the service and the endpoint, including any spaces or tabs. But the endpoints field in the service description shows "" (blank), and this is why I am unable to reach the app from the outside network. Telnet to the external IP and port just keeps trying.
---
apiVersion: v1
kind: Endpoints
metadata:
  name: myservice
subsets:
- addresses:
  - ip: 10.240.1.32
  ports:
  - port: 31090
kubectl describe service myservice
Name: myservice
Namespace: confluent
Labels: <none>
Annotations: <none>
Selector: <none>
Type: LoadBalancer
IP: 10.0.184.1
LoadBalancer Ingress: 20.124.49.192
Port: b0 31090/TCP
TargetPort: 31090/TCP
NodePort: b0 31354/TCP
**Endpoints:**
Session Affinity: None
External Traffic Policy: Cluster
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal EnsuringLoadBalancer 3m22s service-controller Ensuring load balancer
Normal EnsuredLoadBalancer 3m10s service-controller Ensured load balancer
Only this service, the one with no selector, is failing. All my other external load balancer services are working fine; they all reach their pods, and they all use the app label as the selector.
Here is the pod IP. I have verified that the pod is listening on port 31090.
kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
ck21-cp-kafka-0 2/2 Running 2 (76m ago) 78m **10.240.1.32** aks-agentpool-26199219-vmss000013 <none> <none>
Can someone please help me here?
Thanks !

Related

Kubernetes application URL location?

I have run a basic example project and can confirm it is running, but I cannot identify its URL.
kubectl describe service gives me:
NAME READY STATUS RESTARTS AGE
frontend-6c8b5cc5b-v9jlb 1/1 Running 0 26s
PS D:\git\helm3\lab1_kubectl_version1\yaml> kubectl describe service
Name: frontend
Namespace: default
Labels: name=frontend
Annotations: <none>
Selector: app=frontend
Type: ClusterIP
IP Family Policy: SingleStack
IP Families: IPv4
IP: 10.108.59.44
IPs: 10.108.59.44
Port: <unset> 80/TCP
TargetPort: 4200/TCP
Endpoints: 10.1.0.38:4200
Session Affinity: None
Events: <none>
Name: kubernetes
Namespace: default
Labels: component=apiserver
provider=kubernetes
Annotations: <none>
Should I be able to hit this locally or not? The demo suggests yes but no URL is given and anything I attempt fails.
From outside, you do not have any way to connect to your service, since its type is set to ClusterIP. If you want to expose your service directly, you should set it to either type LoadBalancer or NodePort. For more info about these types, check this link.
However, your service has an internal URL (which works within the cluster; for example, if you exec into a pod and curl that URL, you will get a response), and that is: <your service>.<your namespace>.svc.cluster.local
Instead of <your service>, put the name of the service, and instead of <your namespace>, the namespace in which that service resides. The rest of the URL is the same for all services.
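As a quick check (a sketch, using the frontend service and pod from your output; it assumes curl is available inside the container):
kubectl exec frontend-6c8b5cc5b-v9jlb -- curl -s http://frontend.default.svc.cluster.local
To reach the service from outside instead, you could switch its type, for example:
kubectl patch service frontend -p '{"spec": {"type": "NodePort"}}'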

A sample containerized application in Kubernetes unable to be shown as targets in Prometheus for scraping metrics

My goal is to reproduce the observations in this blog post: https://medium.com/kubernetes-tutorials/monitoring-your-kubernetes-deployments-with-prometheus-5665eda54045
So far I am able to deploy the example rpc-app application in my cluster; the following shows that the two pods for this application are running:
NAMESPACE NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
default rpc-app-deployment-64f456b65-5m7j5 1/1 Running 0 3h23m 10.244.0.15 my-server-ip.company.com <none> <none>
default rpc-app-deployment-64f456b65-9mnfd 1/1 Running 0 3h23m 10.244.0.14 my-server-ip.company.com <none> <none>
The application exposes metrics and is confirmed by:
root@xxxxx:/u01/app/k8s # curl 10.244.0.14:8081/metrics
# HELP go_gc_duration_seconds A summary of the GC invocation durations.
# TYPE go_gc_duration_seconds summary
go_gc_duration_seconds{quantile="0"} 0
go_gc_duration_seconds{quantile="0.25"} 0
...
rpc_durations_seconds{service="uniform",quantile="0.5"} 0.0001021102787270781
rpc_durations_seconds{service="uniform",quantile="0.9"} 0.00018233200374804932
rpc_durations_seconds{service="uniform",quantile="0.99"} 0.00019828258205623097
rpc_durations_seconds_sum{service="uniform"} 6.817882693745326
rpc_durations_seconds_count{service="uniform"} 68279
My Prometheus pod is running in the same cluster. However, I am unable to see any rpc_* metrics in Prometheus.
monitoring prometheus-deployment-599bbd9457-pslwf 1/1 Running 0 30m 10.244.0.21 my-server-ip.company.com <none> <none>
In the Prometheus GUI:
Clicking Status -> Service Discovery, I got:
Service Discovery
rpc-metrics (0 / 3 active targets)
Clicking Status -> Targets shows nothing (0 targets).
Clicking Status -> Configuration:
The content can be seen at: https://gist.github.com/denissun/14835468be3dbef7bc924032767b9d7f
I am really new to Prometheus/Kubernetes monitoring; I'd appreciate your help troubleshooting this issue.
Update 1 - I created the service:
# cat rpc-app-service.yaml
apiVersion: v1
kind: Service
metadata:
  name: rpc-app-service
  labels:
    app: rpc-app
spec:
  ports:
  - name: web
    port: 8081
    targetPort: 8081
    protocol: TCP
    nodePort: 32325
  selector:
    app: rpc-app
  type: NodePort
# kubectl get service rpc-app-service
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
rpc-app-service NodePort 10.110.204.119 <none> 8081:32325/TCP 9h
Did you create the Kubernetes Service to expose the Deployment?
kubectl create -f rpc-app-service.yaml
The Prometheus configuration watches for Service endpoints, not Deployments or Pods.
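For illustration, an endpoints-based scrape job in prometheus.yml looks roughly like this (a sketch; the job name rpc-metrics comes from your Service Discovery screenshot, and the label filter on app=rpc-app is an assumption based on your service):
scrape_configs:
- job_name: 'rpc-metrics'
  kubernetes_sd_configs:
  - role: endpoints          # discover Service endpoints, not Deployments or Pods
  relabel_configs:
  - source_labels: [__meta_kubernetes_service_label_app]
    action: keep             # keep only endpoints of services labeled app=rpc-app
    regex: rpc-app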
Have a look at the Prometheus Operator. It's slightly more involved than running a Prometheus Deployment in your cluster but it represents a state-of-the-art deployment of Prometheus with some elegant abstractions such as PodMonitors and ServiceMonitors.
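With the Operator, a ServiceMonitor for your service might look roughly like this (a sketch; it assumes the Operator's CRDs are installed, and the release label is whatever your Prometheus instance is configured to select):
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: rpc-app
  labels:
    release: prometheus    # assumption: the label your Prometheus selects monitors by
spec:
  selector:
    matchLabels:
      app: rpc-app         # matches the label on rpc-app-service
  endpoints:
  - port: web              # the named port from the Service above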

Ingress-nginx controller does not run ELB properly in AWS EC2 Kubernetes Cluster

I created a cluster with the kops utility on AWS EC2. Right now I am trying to configure the ingress-nginx controller so that it routes all traffic in my cluster. I need it to handle HTTP, HTTPS, and WebSocket connections. Based on this guide, I applied all the required manifests:
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/nginx-0.28.0/deploy/static/mandatory.yaml
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/nginx-0.28.0/deploy/static/provider/aws/service-l4.yaml
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/nginx-0.28.0/deploy/static/provider/aws/patch-configmap-l4.yaml
When I get all items in the ingress-nginx namespace with kubectl -n ingress-nginx get all:
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/ingress-nginx LoadBalancer 100.71.94.9 a7d3fe1383e344c1d8cb2de671xxxxxx-810xxxxxx.eu-central-1.elb.amazonaws.com 80:32389/TCP,443:31803/TCP 16m
When I open the AWS console -> EC2 -> Load Balancers, I can see that the ELB has been created, but each node shows an OutOfService status under the "Instances" tab. So I can't reach my ELB URL: a7d3fe1383e344c1d8cb2de671xxxxxx-810xxxxxx.eu-central-1.elb.amazonaws.com:
Here are some more details about the service, from kubectl -n ingress-nginx describe service/ingress-nginx:
Name: ingress-nginx
Namespace: ingress-nginx
Labels: app.kubernetes.io/name=ingress-nginx
app.kubernetes.io/part-of=ingress-nginx
Annotations: kubectl.kubernetes.io/last-applied-configuration:
{"apiVersion":"v1","kind":"Service","metadata":{"annotations":{"service.beta.kubernetes.io/aws-load-balancer-connection-idle-timeout":"60"...
service.beta.kubernetes.io/aws-load-balancer-connection-idle-timeout: 60
service.beta.kubernetes.io/aws-load-balancer-proxy-protocol: *
Selector: app.kubernetes.io/name=ingress-nginx,app.kubernetes.io/part-of=ingress-nginx
Type: LoadBalancer
IP: 100.71.94.9
LoadBalancer Ingress: a7d3fe1383e344c1d8cb2de671xxxxxx-810xxxxxx.eu-central-1.elb.amazonaws.com
Port: http 80/TCP
TargetPort: http/TCP
NodePort: http 32389/TCP
Endpoints: <none>
Port: https 443/TCP
TargetPort: https/TCP
NodePort: https 31803/TCP
Endpoints: <none>
Session Affinity: None
External Traffic Policy: Cluster
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal EnsuringLoadBalancer 15m service-controller Ensuring load balancer
Normal EnsuredLoadBalancer 15m service-controller Ensured load balancer
Have I missed something?
UPD #1
If I do the same things in an EKS cluster, everything works well and the ingress controller appears on each node. Any ideas?
You need to add a security group rule on the EC2 instances (the Kubernetes worker nodes) where nginx is deployed, allowing ports 80 and 443 from the security group that was created for the ELB.
Edit:
The endpoints section of the service/ingress-nginx service does not contain the IPs of the nginx pods. Hence, when the ELB sends a health check request, the request cannot reach the pods, so the health check fails and the ELB marks the backend as OutOfService.
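For example, with the AWS CLI (a sketch; sg-workernodes and sg-elb are placeholders for your worker-node and ELB security group IDs):
aws ec2 authorize-security-group-ingress --group-id sg-workernodes --protocol tcp --port 80 --source-group sg-elb
aws ec2 authorize-security-group-ingress --group-id sg-workernodes --protocol tcp --port 443 --source-group sg-elb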

Kubernetes service architecture

Within the same kubernetes cluster,
Can I have multiple StatefulSets attached to one headless service, or should each StatefulSet have its own headless service? What are the pros and cons of doing this?
Can I mix standard and headless services in the same cluster? Specifically, I would like to use a LoadBalancer service to load-balance headless services. Can I define a service of type LoadBalancer and have headless services (ClusterIP = None) attached to it? If yes, how can I achieve this?
Here is my intended architecture:
Load Balancer Service
- Headless Service (Database-service)
- MySql
- BlazeGraph
- Headless Service (Web / Tomcat)
- Web Service (RESTful / GraphQL)
Any advice and insight is appreciated.
My setup
My service and the statefulsets attached to it have different labels.
database-service: app=database
mysqlset: app=mysql
My pods
khteh@khteh-T580:~ 2007 $ k get pods -l app=mysql -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE
mysql-0 1/1 Running 1 18h 10.1.1.4 khteh-t580 <none>
khteh@khteh-T580:~ 2008 $ k get pods -l app=blazegraph -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE
blazegraph-0 1/1 Running 1 18h 10.1.1.254 khteh-t580 <none>
khteh@khteh-T580:~ 2009 $ k describe service database-service
Name: database-service
Namespace: default
Labels: app=database
Annotations: kubectl.kubernetes.io/last-applied-configuration:
{"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"labels":{"app":"database"},"name":"database-service","namespace":"defaul...
Selector: app=database,tier=database
Type: ClusterIP
IP: None
Port: mysql 3306/TCP
TargetPort: 3306/TCP
Endpoints: <none>
Port: blazegraph 9999/TCP
TargetPort: 9999/TCP
Endpoints: <none>
Session Affinity: None
Events: <none>
Notice that the service's Endpoints field is <none>. I am not sure this is the right setup.
You should use a headless Service whenever you want to automatically discover all the pods under the service, as opposed to a regular Service, where you get a ClusterIP instead. As an illustration from the above-mentioned example, here is the difference between DNS entries for a Service (with ClusterIP) and a headless Service (without ClusterIP):
With a standard service you get the clusterIP value:
kubectl exec zookeeper-0 -- nslookup zookeeper
Server: 10.0.0.10
Address: 10.0.0.10#53
Name: zookeeper.default.svc.cluster.local
Address: 10.0.0.213
With a headless service you get the IP of each pod:
kubectl exec zookeeper-0 -- nslookup zookeeper
Server: 10.0.0.10
Address: 10.0.0.10#53
Name: zookeeper.default.svc.cluster.local
Address: 172.17.0.6
Name: zookeeper.default.svc.cluster.local
Address: 172.17.0.7
Name: zookeeper.default.svc.cluster.local
Address: 172.17.0.8
Now, if you attach two StatefulSets to a single headless service, it will return the addresses of the pods in both StatefulSets, and there will be no way to differentiate the pods of the two applications. See the following article to understand why headless services are used.
A headless service allows developers to reduce coupling to the Kubernetes system by letting them do discovery their own way. For such services no clusterIP is allocated, kube-proxy doesn't handle them, and the platform does no load balancing or proxying for them. So if you define clusterIP: None in your service, no load balancing will be done on the Kubernetes end.
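For reference, a minimal headless Service looks like this (a sketch; the name and label are illustrative):
apiVersion: v1
kind: Service
metadata:
  name: mysql
spec:
  clusterIP: None    # this is what makes the service headless
  selector:
    app: mysql
  ports:
  - port: 3306
    targetPort: 3306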
Hope this helps.
EDIT:
I did a little experiment to answer your queries: I created two StatefulSets running a MySQL database, named mysql and mysql2, with 1 replica each. They have their own PVs and PVCs, but are bound to only a single headless service.
[root@ip-10-0-1-235 centos]# kubectl get pods -l app=mysql -o wide
NAME READY STATUS RESTARTS AGE IP NODE
mysql-0 1/1 Running 0 4m 192.168.13.21 ip-10-0-1-235.ec2.internal
mysql2-0 1/1 Running 0 3m 192.168.13.22 ip-10-0-1-235.ec2.internal
Now you can see the single headless service attached to both pods:
[root@ip-10-0-1-235 centos]# kubectl describe svc mysql
Name: mysql
Namespace: default
Labels: <none>
Annotations: <none>
Selector: app=mysql
Type: ClusterIP
IP: None
Port: <unset> 3306/TCP
TargetPort: 3306/TCP
Endpoints: 192.168.13.21:3306,192.168.13.22:3306
Session Affinity: None
Events: <none>
Now when you look up the service from some other pod, it returns the IP addresses of both pods:
[root@rtp-worker-0 /]# nslookup mysql
Server: 10.96.0.10
Address: 10.96.0.10#53
Name: mysql.default.svc.cluster.local
Address: 192.168.13.21
Name: mysql.default.svc.cluster.local
Address: 192.168.13.22
Now it is impossible to identify which address (pod) belongs to which StatefulSet. I tried to identify the StatefulSet using its metadata name, but couldn't:
[root@rtp-worker-0 /]# nslookup mysql2.mysql.default.svc.cluster.local
Server: 10.96.0.10
Address: 10.96.0.10#53
** server can't find mysql2.mysql.default.svc.cluster.local: NXDOMAIN
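As a side note, StatefulSet pods do get per-pod DNS records, but of the form <pod-name>.<service-name>.<namespace>.svc.cluster.local, and only when the StatefulSet's serviceName field points at that headless service. So it is the pod name, not the StatefulSet name, that resolves; something like the following might work where the lookup above did not (untested in this setup):
nslookup mysql2-0.mysql.default.svc.cluster.local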
Hope it clarifies.

ingress-nginx No IP Address

I've created a test k8s cluster using kubespray (3 nodes, VirtualBox CentOS VM based) and have been trying to follow the guide for setting up nginx ingress, but I never seem to get an external address assigned to my service.
I can see that the ingress controller is apparently installed:
[root@k8s-01 ~]# kubectl get pods --all-namespaces -l app=ingress-nginx
NAMESPACE NAME READY STATUS RESTARTS AGE
ingress-nginx nginx-ingress-controller-58c9df5856-v6hml 1/1 Running 0 28m
And following the prerequisites docs, I have set up the http-svc sample service:
[root@k8s-01 ~]# kubectl get po
NAME READY STATUS RESTARTS AGE
http-svc-794dc89f5-f2vlx 1/1 Running 0 27m
[root@k8s-01 ~]# kubectl get svc http-svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
http-svc LoadBalancer 10.233.25.131 <pending> 80:30055/TCP 27m
[root@k8s-01 ~]# kubectl describe svc http-svc
Name: http-svc
Namespace: default
Labels: app=http-svc
Annotations: <none>
Selector: app=http-svc
Type: LoadBalancer
IP: 10.233.25.131
Port: http 80/TCP
TargetPort: 8080/TCP
NodePort: http 30055/TCP
Endpoints: 10.233.65.5:8080
Session Affinity: None
External Traffic Policy: Cluster
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Type 27m service-controller ClusterIP -> LoadBalancer
As far as I know, I should see a LoadBalancer Ingress entry, but the external IP for the service still appears to be pending. Something isn't working, but I'm at a loss as to where to diagnose what has gone wrong.
Since you are creating your cluster locally, exposing your service as type LoadBalancer will not provision a load balancer for you. Use type LoadBalancer if you are creating your cluster in a cloud environment such as AWS or GKE; in AWS it will auto-provision a load balancer (ELB) and assign an external IP for the service.
To make your service work with the current settings and environment, change your service type from LoadBalancer to NodePort.
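For example (a sketch, using the http-svc service from the question):
kubectl patch svc http-svc -p '{"spec": {"type": "NodePort"}}'
kubectl get svc http-svc
You can then reach the service at http://<any-node-ip>:<assigned-node-port>.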