A sample containerized application in Kubernetes does not show up as a target in Prometheus for metrics scraping - kubernetes

My goal is to reproduce the observations in this blog post: https://medium.com/kubernetes-tutorials/monitoring-your-kubernetes-deployments-with-prometheus-5665eda54045
So far I have been able to deploy the example rpc-app application in my cluster; the following shows that the two pods for this application are running:
NAMESPACE NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
default rpc-app-deployment-64f456b65-5m7j5 1/1 Running 0 3h23m 10.244.0.15 my-server-ip.company.com <none> <none>
default rpc-app-deployment-64f456b65-9mnfd 1/1 Running 0 3h23m 10.244.0.14 my-server-ip.company.com <none> <none>
The application exposes metrics, as confirmed by:
root@xxxxx:/u01/app/k8s # curl 10.244.0.14:8081/metrics
# HELP go_gc_duration_seconds A summary of the GC invocation durations.
# TYPE go_gc_duration_seconds summary
go_gc_duration_seconds{quantile="0"} 0
go_gc_duration_seconds{quantile="0.25"} 0
...
rpc_durations_seconds{service="uniform",quantile="0.5"} 0.0001021102787270781
rpc_durations_seconds{service="uniform",quantile="0.9"} 0.00018233200374804932
rpc_durations_seconds{service="uniform",quantile="0.99"} 0.00019828258205623097
rpc_durations_seconds_sum{service="uniform"} 6.817882693745326
rpc_durations_seconds_count{service="uniform"} 68279
My Prometheus pod is running in the same cluster. However, I am unable to see any rpc_* metrics in Prometheus.
monitoring prometheus-deployment-599bbd9457-pslwf 1/1 Running 0 30m 10.244.0.21 my-server-ip.company.com <none> <none>
In the Prometheus GUI,
clicking Status -> Service Discovery, I get:
Service Discovery
rpc-metrics (0 / 3 active targets)
Clicking Status -> Targets shows nothing (0 targets).
Clicking Status -> Configuration,
the content can be seen at: https://gist.github.com/denissun/14835468be3dbef7bc924032767b9d7f
I am really new to Prometheus/Kubernetes monitoring; I would appreciate your help troubleshooting this issue.
Update 1 - I created the service:
# cat rpc-app-service.yaml
apiVersion: v1
kind: Service
metadata:
  name: rpc-app-service
  labels:
    app: rpc-app
spec:
  ports:
  - name: web
    port: 8081
    targetPort: 8081
    protocol: TCP
    nodePort: 32325
  selector:
    app: rpc-app
  type: NodePort
# kubectl get service rpc-app-service
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
rpc-app-service NodePort 10.110.204.119 <none> 8081:32325/TCP 9h

Did you create the Kubernetes Service to expose the Deployment?
kubectl create -f rpc-app-service.yaml
The Prometheus configuration watches for Service endpoints, not Deployments or Pods.
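As a hedged sketch of what that usually looks like (the job name and relabel rule here are illustrative, not copied from the gist above), a scrape job using endpoints discovery keeps only the endpoints whose backing Service carries the expected label:
scrape_configs:
- job_name: rpc-metrics
  kubernetes_sd_configs:
  - role: endpoints                 # discovers the Endpoints behind each Service
  relabel_configs:
  - source_labels: [__meta_kubernetes_service_label_app]
    action: keep                    # drop anything whose Service lacks app=rpc-app
    regex: rpc-app
Without a Service (and therefore Endpoints) in front of the pods, this discovery finds nothing, which matches the 0 active targets you are seeing.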
Have a look at the Prometheus Operator. It's slightly more involved than running a Prometheus Deployment in your cluster but it represents a state-of-the-art deployment of Prometheus with some elegant abstractions such as PodMonitors and ServiceMonitors.
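For illustration only, assuming the Operator's CRDs are installed and reusing the labels from the Service above, a ServiceMonitor might look like this; note that the port is referenced by its name ("web"), not its number:
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: rpc-app
spec:
  selector:
    matchLabels:
      app: rpc-app        # matches the labels on rpc-app-service
  endpoints:
  - port: web             # the named port on the Service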

Related

Kubernetes: Unable to explicitly set endpoint for a service

Hi Kubernetes Experts,
I have an application cluster running in an Azure Kubernetes cluster. There are 3 pods inside the application cluster. The app is designed in such a way that each pod listens on a different port. For example, pod 1 listens on 31090, pod 2 on 31091 and pod 3 on 31092.
This application needs to be reachable from outside the network. At this point, I need to create a separate load balancer service for each of the pods.
In the service, I cannot use the app name/label as the selector, because the service then distributes traffic between all 3 pods in a round-robin way. As you can see above, a given port (say 31090) is served by only one pod, so external connections to the load balancer IP fail 2/3 of the time.
So I am trying to create 3 different load balancer services, one per pod, without specifying a selector, and later assigning an endpoint to each of them individually.
The approach is explained here:
In Kubernetes, how does one select a pod by name in a service selector?
But after the endpoint is created, the service shows endpoint as blank. See below.
First I created only the service
---
apiVersion: v1
kind: Service
metadata:
  name: myservice
spec:
  ports:
  - protocol: TCP
    port: 31090
    targetPort: 31090
    name: b0
  type: LoadBalancer
After this, the service shows Endpoints as <none>. So far, so good.
kubectl describe service myservice
Name: myservice
Namespace: confluent
Labels: <none>
Annotations: <none>
Selector: <none>
Type: LoadBalancer
IP: 10.0.184.1
Port: b0 31090/TCP
TargetPort: 31090/TCP
NodePort: b0 31354/TCP
**Endpoints: <none>**
Session Affinity: None
External Traffic Policy: Cluster
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal EnsuringLoadBalancer 3s service-controller Ensuring load balancer
Then I created the endpoint. I have made sure the names match between the service and the endpoint, including any spaces or tabs. But the endpoints in the service description show "" (blank), and this is why I am unable to get to the app from the outside network. Telnet to the external IP and port just keeps trying.
---
apiVersion: v1
kind: Endpoints
metadata:
  name: myservice
subsets:
- addresses:
  - ip: 10.240.1.32
  ports:
  - port: 31090
kubectl describe service myservice
Name: myservice
Namespace: confluent
Labels: <none>
Annotations: <none>
Selector: <none>
Type: LoadBalancer
IP: 10.0.184.1
LoadBalancer Ingress: 20.124.49.192
Port: b0 31090/TCP
TargetPort: 31090/TCP
NodePort: b0 31354/TCP
**Endpoints:**
Session Affinity: None
External Traffic Policy: Cluster
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal EnsuringLoadBalancer 3m22s service-controller Ensuring load balancer
Normal EnsuredLoadBalancer 3m10s service-controller Ensured load balancer
Only this service (the one using no selector) is failing. All my other external load balancer services are working fine; they all reach the pods, and they all use the app label as the selector.
Here is the pod IP. I have ensured port 31090 is listening inside the pod.
kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
ck21-cp-kafka-0 2/2 Running 2 (76m ago) 78m **10.240.1.32** aks-agentpool-26199219-vmss000013 <none> <none>
Can someone please help me here?
Thanks !

Kubernetes service architecture

Within the same kubernetes cluster,
Can I have multiple StatefulSets attached to one headless service, or should each StatefulSet have its own headless service? What are the pros and cons of doing this?
Can I mix standard and headless services in the same cluster? Specifically, I would like to use a LoadBalancer service to load balance headless services. Can I define a service of type LoadBalancer and have headless services (ClusterIP = None) attached to it? If yes, how can I achieve this?
Here is my intended architecture:
Load Balancer Service
- Headless Service (Database-service)
- MySql
- BlazeGraph
- Headless Service (Web / Tomcat)
- Web Service (RESTful / GraphQL)
Any advice and insight is appreciated.
My setup
My service and the statefulsets attached to it have different labels.
database-service: app=database
mysqlset: app=mysql
My pods
khteh@khteh-T580:~ 2007 $ k get pods -l app=mysql -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE
mysql-0 1/1 Running 1 18h 10.1.1.4 khteh-t580 <none>
khteh@khteh-T580:~ 2008 $ k get pods -l app=blazegraph -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE
blazegraph-0 1/1 Running 1 18h 10.1.1.254 khteh-t580 <none>
khteh@khteh-T580:~ 2009 $ k describe service database-service
Name: database-service
Namespace: default
Labels: app=database
Annotations: kubectl.kubernetes.io/last-applied-configuration:
{"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"labels":{"app":"database"},"name":"database-service","namespace":"defaul...
Selector: app=database,tier=database
Type: ClusterIP
IP: None
Port: mysql 3306/TCP
TargetPort: 3306/TCP
Endpoints: <none>
Port: blazegraph 9999/TCP
TargetPort: 9999/TCP
Endpoints: <none>
Session Affinity: None
Events: <none>
Notice the service Endpoints is <none>. I am not sure this is the right setup.
You should use a Headless Service in any case where you want to automatically discover all pods under the service, as opposed to a regular Service, where you get a ClusterIP instead. As an illustration of the above-mentioned example, here is the difference between DNS entries for a Service (with ClusterIP) and a Headless Service (without ClusterIP):
For a standard service you will get the clusterIP value:
kubectl exec zookeeper-0 -- nslookup zookeeper
Server: 10.0.0.10
Address: 10.0.0.10#53
Name: zookeeper.default.svc.cluster.local
Address: 10.0.0.213
For a headless service you will get the IP of each pod:
kubectl exec zookeeper-0 -- nslookup zookeeper
Server: 10.0.0.10
Address: 10.0.0.10#53
Name: zookeeper.default.svc.cluster.local
Address: 172.17.0.6
Name: zookeeper.default.svc.cluster.local
Address: 172.17.0.7
Name: zookeeper.default.svc.cluster.local
Address: 172.17.0.8
Now, if you connect two StatefulSets to a single headless service, it will return the addresses of the pods in both StatefulSets. There will be no way to differentiate the pods of the two applications if you create two StatefulSets and one headless service for them. See the following article to understand why headless services are used.
Headless services allow developers to reduce coupling to the Kubernetes system by letting them do discovery their own way. For such services, no clusterIP is allocated, kube-proxy doesn't handle them, and the platform does no load balancing or proxying for them. So, if you define clusterIP: None in your service, no load balancing will be done on the Kubernetes end.
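As a minimal sketch (names are illustrative), a headless Service is simply a Service with clusterIP set to None:
apiVersion: v1
kind: Service
metadata:
  name: mysql
spec:
  clusterIP: None       # headless: DNS returns pod IPs instead of a virtual IP
  selector:
    app: mysql
  ports:
  - port: 3306
    targetPort: 3306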
Hope this helps.
EDIT:
I did a little experiment to answer your queries: I created two StatefulSets of a MySQL database, named mysql and mysql2, with 1 replica each. They have their own PVs and PVCs, but are bound to only a single headless service.
[root@ip-10-0-1-235 centos]# kubectl get pods -l app=mysql -o wide
NAME READY STATUS RESTARTS AGE IP NODE
mysql-0 1/1 Running 0 4m 192.168.13.21 ip-10-0-1-235.ec2.internal
mysql2-0 1/1 Running 0 3m 192.168.13.22 ip-10-0-1-235.ec2.internal
Now you can see the single headless service attached to both pods:
[root@ip-10-0-1-235 centos]# kubectl describe svc mysql
Name: mysql
Namespace: default
Labels: <none>
Annotations: <none>
Selector: app=mysql
Type: ClusterIP
IP: None
Port: <unset> 3306/TCP
TargetPort: 3306/TCP
Endpoints: 192.168.13.21:3306,192.168.13.22:3306
Session Affinity: None
Events: <none>
Now when you look up the service from some other pod, it returns the IP addresses of both pods:
[root@rtp-worker-0 /]# nslookup mysql
Server: 10.96.0.10
Address: 10.96.0.10#53
Name: mysql.default.svc.cluster.local
Address: 192.168.13.21
Name: mysql.default.svc.cluster.local
Address: 192.168.13.22
Now it is impossible to identify which address (pod) belongs to which StatefulSet. I tried to identify the StatefulSet using its metadata name, but couldn't:
[root@rtp-worker-0 /]# nslookup mysql2.mysql.default.svc.cluster.local
Server: 10.96.0.10
Address: 10.96.0.10#53
** server can't find mysql2.mysql.default.svc.cluster.local: NXDOMAIN
Hope it clarifies.
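For completeness, a rough sketch (illustrative names and a placeholder password) of the usual pattern of giving each StatefulSet its own headless Service via spec.serviceName, so that pod DNS names such as mysql2-0.mysql2.default.svc.cluster.local stay unambiguous:
apiVersion: v1
kind: Service
metadata:
  name: mysql2
spec:
  clusterIP: None
  selector:
    app: mysql2
  ports:
  - port: 3306
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: mysql2
spec:
  serviceName: mysql2        # pods get DNS entries under this headless Service
  replicas: 1
  selector:
    matchLabels:
      app: mysql2
  template:
    metadata:
      labels:
        app: mysql2
    spec:
      containers:
      - name: mysql
        image: mysql:8.0
        env:
        - name: MYSQL_ROOT_PASSWORD
          value: changeme    # placeholder for the sketch
        ports:
        - containerPort: 3306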

Accessing Kubernetes dashboard on Compute instance in Oracle Cloud

I have deployed Kubernetes and the dashboard onto a compute instance in Oracle Cloud.
I have the dashboard installed along with Grafana on my compute instance.
NAME READY STATUS RESTARTS AGE
po/etcd-mst-instance1 1/1 Running 0 1h
po/heapster-7856f6b566-rkfx5 1/1 Running 0 1h
po/kube-apiserver-mst-instance1 1/1 Running 0 1h
po/kube-controller-manager-mst-instance1 1/1 Running 0 1h
po/kube-dns-d879d6bcb-b9zjf 3/3 Running 0 1h
po/kube-flannel-ds-lgklw 1/1 Running 0 1h
po/kube-proxy-g6vxm 1/1 Running 0 1h
po/kube-scheduler-mst-instance1 1/1 Running 0 1h
po/kubernetes-dashboard-dd5c889c-6vphq 1/1 Running 0 1h
po/monitoring-grafana-5d4d76cd65-p7n5l 1/1 Running 0 1h
po/monitoring-influxdb-787479f6fd-8qkg2 1/1 Running 0 1h
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
svc/heapster ClusterIP 10.98.200.184 <none> 80/TCP 1h
svc/kube-dns ClusterIP 10.96.0.10 <none> 53/UDP,53/TCP 1h
svc/kubernetes-dashboard ClusterIP 10.107.155.3 <none> 443/TCP 1h
svc/monitoring-grafana ClusterIP 10.96.130.226 <none> 80/TCP 1h
svc/monitoring-influxdb ClusterIP 10.105.163.213 <none> 8086/TCP 1h
I am trying to access the dashboard via SSH and ran the below on my local computer:
ssh -L localhost:8001:172.31.4.117:6443 opc#xxxxxxxx
However, it gives me this error:
Permission denied (publickey,gssapi-keyex,gssapi-with-mic).
I'm not sure what the best way to access the dashboard is. I am new to k8s and still at a beginner stage, so I would like some advice. I have also tried running kubectl proxy on my local computer, but when I try to access 127.0.0.1 it gives me this error:
I0804 17:01:28.902675 77193 logs.go:41] http: proxy error: dial tcp [::1]:8080: connect: connection refused
I would really appreciate any help. Thank you.
Kubernetes includes a web dashboard that can be used for basic management operations.
Once Dashboard is installed on your Kubernetes cluster, it can be accessed in a few different ways.
I prefer to use the kubectl proxy from the command line to access Kubernetes Dashboard.
kubectl handles authentication with the API server for you and forwards traffic between
your cluster (with the Dashboard deployed inside) and your web browser.
Please note that this works for a locally running web browser, as the proxy listens on
localhost.
From the command line:
kubectl proxy
Next, start browsing this address:
http://localhost:8001/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/
In case the Kubernetes API server is exposed and accessible, you may try:
https://<master-ip>:<apiserver-port>/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/
where master-ip is the IP address of your Kubernetes master node where the API is running.
On a single-node setup, another way is to use a NodePort configuration to access the Dashboard.
I found it in the dashboard wiki.
Here is a sample configuration to consider and adapt to your needs:
apiVersion: v1
...
  name: kubernetes-dashboard
  namespace: kube-system
  resourceVersion: "343478"
  selfLink: /api/v1/namespaces/kube-system/services/kubernetes-dashboard-head
spec:
  clusterIP: <your-cluster-ip>
  externalTrafficPolicy: Cluster
  ports:
  - port: 443
    protocol: TCP
    targetPort: 8443
  selector:
    k8s-app: kubernetes-dashboard
  sessionAffinity: None
  type: NodePort
After applying the configuration, check the exposed https port using the command:
kubectl -n kube-system get service kubernetes-dashboard
If it returns, for example, 31707, you can start your browser with:
https://<master-ip>:31707
I was inspired by the Web UI (Dashboard) guide and the accessing-dashboard wiki.

ingress-nginx No IP Address

I've created a test k8s cluster using Kubespray (3 nodes, VirtualBox CentOS VM based) and have
been trying to follow the guide for setting up nginx ingress, but I never seem to get an external address assigned to my service.
I can see that the ingress controller is apparently installed:
[root@k8s-01 ~]# kubectl get pods --all-namespaces -l app=ingress-nginx
NAMESPACE NAME READY STATUS RESTARTS AGE
ingress-nginx nginx-ingress-controller-58c9df5856-v6hml 1/1 Running 0 28m
And following the prerequisites docs, I have set up the http-svc sample service:
[root@k8s-01 ~]# kubectl get po
NAME READY STATUS RESTARTS AGE
http-svc-794dc89f5-f2vlx 1/1 Running 0 27m
[root@k8s-01 ~]# kubectl get svc http-svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
http-svc LoadBalancer 10.233.25.131 <pending> 80:30055/TCP 27m
[root@k8s-01 ~]# kubectl describe svc http-svc
Name: http-svc
Namespace: default
Labels: app=http-svc
Annotations: <none>
Selector: app=http-svc
Type: LoadBalancer
IP: 10.233.25.131
Port: http 80/TCP
TargetPort: 8080/TCP
NodePort: http 30055/TCP
Endpoints: 10.233.65.5:8080
Session Affinity: None
External Traffic Policy: Cluster
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Type 27m service-controller ClusterIP -> LoadBalancer
As far as I know, I should see a LoadBalancer Ingress entry, but the external IP for the service still appears to be pending, so something isn't working, and I'm at a loss as to where to start diagnosing what has gone wrong.
Since you are creating your cluster locally, exposing your service as type LoadBalancer will not provision a load balancer for you. Use the type LoadBalancer if you are creating your cluster in a cloud environment such as AWS or GKE. In AWS it will auto-provision a load balancer (ELB) and assign an external IP for the service.
To make your service work with the current settings and environment, change your service type from LoadBalancer to NodePort.
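For example, a minimal way to switch the existing service in place (the service name and the node port in the comment are taken from your output above):
kubectl patch svc http-svc -p '{"spec": {"type": "NodePort"}}'
kubectl get svc http-svc    # the PORT(S) column shows the assigned node port, e.g. 80:30055/TCP
The service is then reachable on any node's IP at that node port, for example http://<node-ip>:30055/.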

How to publicly expose Traefik ingress controller on Google Cloud Container Engine?

I've been trying to use Traefik as an Ingress Controller on Google Cloud's container engine.
I got my http deployment/service up and running (when I exposed it with a normal LoadBalancer, it was answering fine).
I then removed the LoadBalancer, and followed this tutorial: https://docs.traefik.io/user-guide/kubernetes/
So I got a new traefik-ingress-controller deployment and service, and an ingress for traefik's ui which I can access through the kubectl proxy.
I then created my ingress for my http service, but here comes my issue: I can't find a way to expose it externally.
I want it to be accessible by anybody via an external IP.
What am I missing?
Here is the output of kubectl get --export all:
NAME READY STATUS RESTARTS AGE
po/mywebservice-3818647231-gr3z9 1/1 Running 0 23h
po/mywebservice-3818647231-rn4fw 1/1 Running 0 1h
po/traefik-ingress-controller-957212644-28dx6 1/1 Running 0 1h
NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE
svc/mywebservice 10.51.254.147 <none> 80/TCP 1d
svc/kubernetes 10.51.240.1 <none> 443/TCP 1d
svc/traefik-ingress-controller 10.51.248.165 <nodes> 80:31447/TCP,8080:32481/TCP 25m
svc/traefik-web-ui 10.51.248.65 <none> 80/TCP 3h
NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
deploy/mywebservice 2 2 2 2 1d
deploy/traefik-ingress-controller 1 1 1 1 3h
NAME DESIRED CURRENT READY AGE
rs/mywebservice-3818647231 2 2 2 23h
rs/traefik-ingress-controller-957212644 1 1 1 3h
You need to expose the Traefik service. Set the service spec type to LoadBalancer. Try the service file below, which I've used previously:
apiVersion: v1
kind: Service
metadata:
  name: traefik
spec:
  type: LoadBalancer
  selector:
    app: traefik
    tier: proxy
  ports:
  - port: 80
    targetPort: 80
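Once that manifest is applied, you can watch for GKE to assign the external address (the file name here is an assumption):
kubectl apply -f traefik-service.yaml
kubectl get svc traefik -w    # wait for EXTERNAL-IP to go from <pending> to a public address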