I created a custom exporter in Python using the prometheus-client package, then created the artifacts needed to expose the metric as a target in Prometheus deployed on Kubernetes.
But I am unable to see the metric as a target despite following all available instructions.
Help in finding the problem is appreciated.
Here is a summary of what I did.
Installed Prometheus using Helm on the K8s cluster, in the prometheus namespace
Created a Python program that uses the prometheus-client package to expose a metric (a minimal sketch of it follows this list)
Built an image of the exporter and pushed it to Docker Hub
Created a Deployment for the exporter image, in the prom-test namespace
Created a Service, a ServiceMonitor, and a ServiceMonitorSelector
Created a ServiceAccount, Role, and binding to enable access to the endpoint
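For reference, the exporter itself is roughly of this shape (a minimal sketch only, not the actual program; the metric name is invented here, and port 5000 matches the Deployment below):

from prometheus_client import start_http_server, Gauge
import random, time

# Hypothetical metric name; the real exporter defines its own.
test_app_metric = Gauge('test_app_metric', 'Example metric exposed by the custom exporter')

if __name__ == '__main__':
    start_http_server(5000)   # serves /metrics on port 5000
    while True:
        test_app_metric.set(random.random())
        time.sleep(5)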
The Kubernetes manifests follow.
Service & Deployment
apiVersion: v1
kind: Service
metadata:
name: test-app-exporter
namespace: prom-test
labels:
app: test-app-exporter
spec:
type: ClusterIP
selector:
app: test-app-exporter
ports:
- name: http
protocol: TCP
port: 6000
targetPort: 5000
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: test-app-exporter
namespace: prom-test
spec:
selector:
matchLabels:
app: test-app-exporter
template:
metadata:
labels:
app: test-app-exporter
spec:
#serviceAccount: test-app-exporter-sa
containers:
- name: test-app-exporter
image: index.docker.io/cbotlagu/test-app-exporter:2
imagePullPolicy: Always
resources:
limits:
memory: "128Mi"
cpu: "500m"
ports:
- name: http
containerPort: 5000
imagePullSecrets:
- name: myregistrykey
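Since the ServiceMonitor below scrapes the Service through its named port, it may be worth confirming that the Service actually resolves to the exporter pod (hedged checks, using the names above):

kubectl -n prom-test describe svc test-app-exporter
kubectl -n prom-test get endpoints test-app-exporter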
Service account and role binding
apiVersion: v1
kind: ServiceAccount
metadata:
name: test-app-exporter-sa
namespace: prom-test
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: test-app-exporter-binding
subjects:
- kind: ServiceAccount
name: test-app-exporter-sa
namespace: prom-test
roleRef:
kind: ClusterRole
name: test-app-exporter-role
apiGroup: rbac.authorization.k8s.io
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: test-app-exporter-role
rules:
- apiGroups: [""]
resources:
- nodes
- nodes/metrics
- services
- endpoints
- pods
verbs: ["get", "list", "watch"]
- apiGroups: [""]
resources:
- configmaps
verbs: ["get"]
- apiGroups:
- networking.k8s.io
resources:
- ingresses
verbs: ["get", "list", "watch"]
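A hedged spot-check of what the binding grants (service-account name taken from the manifests above; running this requires impersonation rights):

kubectl auth can-i list endpoints -n prom-test --as=system:serviceaccount:prom-test:test-app-exporter-sa
kubectl auth can-i get nodes/metrics --as=system:serviceaccount:prom-test:test-app-exporter-sa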
Service Monitor & Selector
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
name: test-app-exporter-sm
namespace: prometheus
labels:
app: test-app-exporter
release: prometheus
spec:
selector:
matchLabels:
# Target app service
app: test-app-exporter
endpoints:
- port: http
interval: 15s
path: /metrics
namespaceSelector:
matchNames:
- default
- prom-test
#any: true
---
apiVersion: monitoring.coreos.com/v1
kind: Prometheus
metadata:
name: service-monitor-selector
namespace: prometheus
spec:
serviceAccountName: test-app-exporter-sa
serviceMonitorSelector:
matchLabels:
app: test-app-exporter-sm
release: prometheus
resources:
requests:
memory: 400Mi
I am now able to get the target identified by Prometheus. However, even though the endpoint can be reached from within the cluster as well as via the node IP, Prometheus reports the target as down.
In addition, I am unable to see any other targets.
[Prometheus UI screenshot]
Any help is greatly appreciated.
Following is my changed code
Deployment & Service
apiVersion: v1
kind: Namespace
metadata:
name: prom-test
---
apiVersion: v1
kind: Service
metadata:
name: test-app-exporter
namespace: prom-test
labels:
app: test-app-exporter
spec:
type: NodePort
selector:
app: test-app-exporter
ports:
- name: http
protocol: TCP
port: 5000
targetPort: 5000
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: test-app-exporter
namespace: prom-test
spec:
replicas: 1
selector:
matchLabels:
app: test-app-exporter
template:
metadata:
labels:
app: test-app-exporter
spec:
serviceAccountName: rel-1-kube-prometheus-stac-operator
containers:
- name: test-app-exporter
image: index.docker.io/cbotlagu/test-app-exporter:2
imagePullPolicy: Always
resources:
limits:
memory: "128Mi"
cpu: "500m"
ports:
- name: http
containerPort: 5000
imagePullSecrets:
- name: myregistrykey
Roles & Role Bindings
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
name: pod-reader
namespace: prom-test
rules:
- apiGroups:
- ""
resources:
- nodes/metrics
- endpoints
- pods
- services
verbs:
- get
- list
- watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
name: pod-reader-from-prom-test
namespace: prom-test
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: Role
name: pod-reader
subjects:
- kind: ServiceAccount
name: rel-1-kube-prometheus-stac-operator
namespace: monitoring
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
name: monitoring-role
namespace: monitoring
rules:
- apiGroups:
- ""
resources:
- nodes/metrics
- endpoints
- pods
- services
verbs:
- get
- list
- watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
name: pod-reader-from-prom-test
namespace: monitoring
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: Role
name: monitoring-role
subjects:
- kind: ServiceAccount
name: rel-1-kube-prometheus-stac-operator
namespace: monitoring
Service Monitor
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
name: test-app-exporter-sm
namespace: monitoring
labels:
app: test-app-exporter
release: prometheus
spec:
selector:
matchLabels:
# Target app service
app: test-app-exporter
endpoints:
- port: http
interval: 15s
path: /metrics
namespaceSelector:
matchNames:
- prom-test
- monitoring
---
apiVersion: monitoring.coreos.com/v1
kind: Prometheus
metadata:
name: service-monitor-selector
namespace: monitoring
spec:
serviceAccountName: rel-1-kube-prometheus-stac-operator
serviceMonitorSelector:
matchLabels:
app: test-app-exporter
release: prometheus
resources:
requests:
memory: 400Mi
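To narrow down why the target shows as down, a few hedged checks based on the manifests above (names and namespaces taken from them):

kubectl -n prom-test get endpoints test-app-exporter
kubectl -n prom-test port-forward svc/test-app-exporter 5000:5000
# in a second shell, confirm the exporter answers on the scraped path:
curl http://localhost:5000/metrics
# confirm the ServiceMonitor labels match the Prometheus serviceMonitorSelector:
kubectl -n monitoring get servicemonitor test-app-exporter-sm --show-labels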
I have deployed Kibana in AKS with server.basepath set to /logs, since I want it served under a subpath. I am trying to access the Kibana service through the nginx ingress controller, but it returns 503 Service Unavailable even though the Service and Pod are running. Please help me with this.
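Before involving the ingress, a quick hedged check that Kibana itself answers under the base path (service name and namespace taken from the manifests below):

kubectl -n kube-logging port-forward svc/kibana 5601:80
# in a second shell:
curl -i http://localhost:5601/logs/api/status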
Kibana Deployment:
apiVersion: apps/v1
kind: Deployment
metadata:
name: kibana
namespace: kube-logging
labels:
app.kubernetes.io/name: kibana
helm.sh/chart: kibana-0.1.0
app.kubernetes.io/instance: icy-coral
app.kubernetes.io/managed-by: Tiller
spec:
replicas: 1
selector:
matchLabels:
app.kubernetes.io/name: kibana
app.kubernetes.io/instance: icy-coral
template:
metadata:
labels:
app.kubernetes.io/name: kibana
app.kubernetes.io/instance: icy-coral
spec:
containers:
- name: kibana
image: "docker.elastic.co/kibana/kibana:7.6.0"
imagePullPolicy: IfNotPresent
ports:
- name: http
containerPort: 5601
protocol: TCP
env:
- name: ELASTICSEARCH_URL
value: http://elasticsearch:9200
- name: SERVER_BASEPATH
value: /logs
- name: SERVER_REWRITEBASEPATH
value: "true"
resources:
limits:
cpu: 1000m
requests:
cpu: 100m
Kibana service:
apiVersion: v1
kind: Service
metadata:
name: kibana
namespace: kube-logging
labels:
app.kubernetes.io/name: kibana
helm.sh/chart: kibana-0.1.0
app.kubernetes.io/instance: icy-coral
app.kubernetes.io/managed-by: Tiller
spec:
type: ClusterIP
ports:
- port: 80
targetPort: 5601
protocol: TCP
name: http
selector:
app.kubernetes.io/name: kibana
app.kubernetes.io/instance: icy-coral
Kibana Ingress:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: kibana
labels:
app.kubernetes.io/name: kibana
helm.sh/chart: kibana-0.1.0
app.kubernetes.io/instance: icy-coral
app.kubernetes.io/managed-by: Tiller
annotations:
ingress.kubernetes.io/send-timeout: "600"
kubernetes.io/ingress.class: nginx
nginx.ingress.kubernetes.io/proxy-connect-timeout: "600"
nginx.ingress.kubernetes.io/proxy-read-timeout: "1800"
nginx.ingress.kubernetes.io/proxy-send-timeout: "1800"
nginx.ingress.kubernetes.io/rewrite-target: /$1
nginx.ingress.kubernetes.io/ssl-redirect: "false"
spec:
rules:
- host: ""
http:
paths:
- path: /logs/?(.*)
backend:
serviceName: kibana
servicePort: 80
Ensure Kibana is running:
kubectl -n kube-logging logs deploy/kibana
Check that the endpoints of the service are not empty:
kubectl -n kube-logging describe svc kibana
Check that the ingress is correctly configured:
kubectl describe ingress kibana
Check the ingress-controller logs:
kubectl logs -n <ingress-namespace> nginx-ingress-controller-.....
Update:
An Ingress can only reference Services in its own namespace, so try moving the Ingress to the kube-logging namespace.
See: https://github.com/kubernetes/kubernetes/issues/17088
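A minimal sketch of the same Ingress moved into the kube-logging namespace (other annotations as in the question):

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: kibana
  namespace: kube-logging
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/rewrite-target: /$1
spec:
  rules:
  - http:
      paths:
      - path: /logs/?(.*)
        backend:
          serviceName: kibana
          servicePort: 80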
Environment:
I have:
1- NGINX Ingress controller: nginx 1.15.9, controller image 0.23.0
2- Kubernetes version:
Client Version: version.Info{Major:"1", Minor:"13",
GitVersion:"v1.13.4",
GitCommit:"c27b913fddd1a6c480c229191a087698aa92f0b1",
GitTreeState:"clean", BuildDate:"2019-02-28T13:37:52Z",
GoVersion:"go1.11.5", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"13",
GitVersion:"v1.13.4",
GitCommit:"c27b913fddd1a6c480c229191a087698aa92f0b1",
GitTreeState:"clean", BuildDate:"2019-02-28T13:30:26Z",
GoVersion:"go1.11.5", Compiler:"gc", Platform:"linux/amd64"}
Cloud provider or hardware configuration: Virtual Machines on KVM
OS (e.g. from /etc/os-release):
NAME="CentOS Linux" VERSION="7 (Core)" ID="centos" ID_LIKE="rhel
fedora" VERSION_ID="7" PRETTY_NAME="CentOS Linux 7 (Core)"
ANSI_COLOR="0;31" CPE_NAME="cpe:/o:centos:centos:7"
HOME_URL="https://www.centos.org/"
BUG_REPORT_URL="https://bugs.centos.org/"
CENTOS_MANTISBT_PROJECT="CentOS-7" CENTOS_MANTISBT_PROJECT_VERSION="7"
REDHAT_SUPPORT_PRODUCT="centos" REDHAT_SUPPORT_PRODUCT_VERSION="7"
Kernel (e.g. uname -a):
Linux node01 3.10.0-957.5.1.el7.x86_64 #1 SMP Fri Feb 1 14:54:57 UTC
2019 x86_64 x86_64 x86_64 GNU/Linux
Install tools: kubeadm
More details:
CNI : WEAVE
Setup:
2 Resilient HA Proxy, 3 Masters, 2 infra, and worker nodes.
I am exposing all the services as node ports, where the HA-Proxy re-assign them to a public virtual IP.
Dedicated project hosted on the infra node carrying the monitoring and logging tools (Grafana, Prometheus, EFK, etc)
Backend NFS storage as persistent storage
What happened:
I want to be able to use an external name rather than node ports, so instead of accessing Grafana, for instance, via the VIP plus port 3000, I want to access it via http://grafana.wild-card-dns-zone
Deployment
I created a new namespace called ingress and deployed the following:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: nginx-ingress-controller
namespace: ingress
labels:
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx
spec:
replicas: 2
selector:
matchLabels:
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx
template:
metadata:
labels:
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx
annotations:
prometheus.io/port: "10254"
prometheus.io/scrape: "true"
name: nginx-ingress
spec:
serviceAccountName: nginx-ingress-serviceaccount
nodeSelector:
node-role.kubernetes.io/infra: infra
terminationGracePeriodSeconds: 60
containers:
- name: nginx-ingress-controller
image: quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.23.0
readinessProbe:
httpGet:
path: /healthz
port: 10254
scheme: HTTP
periodSeconds: 10
successThreshold: 1
timeoutSeconds: 10
livenessProbe:
httpGet:
path: /healthz
port: 10254
scheme: HTTP
initialDelaySeconds: 10
periodSeconds: 10
successThreshold: 1
timeoutSeconds: 10
args:
- /nginx-ingress-controller
- --default-backend-service=ingress/ingress-controller-nginx-ingress-default-backend
- --configmap=$(POD_NAMESPACE)/nginx-configuration
- --tcp-services-configmap=$(POD_NAMESPACE)/tcp-services
- --udp-services-configmap=$(POD_NAMESPACE)/udp-services
- --publish-service=$(POD_NAMESPACE)/ingress-nginx
- --annotations-prefix=nginx.ingress.kubernetes.io
- --v3
securityContext:
allowPrivilegeEscalation: true
capabilities:
drop:
- ALL
add:
- NET_BIND_SERVICE
# www-data -> 33
runAsUser: 33
env:
- name: POD_NAME
valueFrom:
fieldRef:
fieldPath: metadata.name
- name: POD_NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.namespace
ports:
- name: http
containerPort: 80
- name: https
containerPort: 443
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
annotations:
deployment.kubernetes.io/revision: "1"
generation: 1
labels:
app: nginx-ingress
chart: nginx-ingress-1.3.1
component: default-backend
name: ingress-controller-nginx-ingress-default-backend
namespace: ingress
spec:
replicas: 1
revisionHistoryLimit: 10
selector:
matchLabels:
app: nginx-ingress
component: default-backend
release: ingress-controller
strategy:
rollingUpdate:
maxSurge: 1
maxUnavailable: 1
type: RollingUpdate
template:
metadata:
creationTimestamp: null
labels:
app: nginx-ingress
component: default-backend
release: ingress-controller
spec:
nodeSelector:
node-role.kubernetes.io/infra: infra
containers:
- image: k8s.gcr.io/defaultbackend:1.4
imagePullPolicy: IfNotPresent
livenessProbe:
failureThreshold: 3
httpGet:
path: /healthz
port: 8080
scheme: HTTP
initialDelaySeconds: 30
periodSeconds: 10
successThreshold: 1
timeoutSeconds: 5
name: nginx-ingress-default-backend
ports:
- containerPort: 8080
name: http
protocol: TCP
resources: {}
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: File
dnsPolicy: ClusterFirst
restartPolicy: Always
schedulerName: default-scheduler
securityContext: {}
terminationGracePeriodSeconds: 60
---
apiVersion: v1
kind: ConfigMap
metadata:
name: nginx-configuration
namespace: ingress
labels:
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx
---
kind: ConfigMap
apiVersion: v1
metadata:
name: tcp-services
namespace: ingress
labels:
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx
---
kind: ConfigMap
apiVersion: v1
metadata:
name: udp-services
namespace: ingress
labels:
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx
---
apiVersion: v1
kind: Service
metadata:
name: ingress-nginx
namespace: ingress
spec:
type: NodePort
ports:
- port: 80
targetPort: 80
protocol: TCP
name: http
- port: 443
targetPort: 443
protocol: TCP
name: https
selector:
name: ingress-nginx
---
apiVersion: v1
kind: Service
metadata:
labels:
app: nginx-ingress
chart: nginx-ingress-1.3.1
component: default-backend
name: ingress-controller-nginx-ingress-default-backend
namespace: ingress
spec:
ports:
- name: http
port: 80
protocol: TCP
targetPort: http
selector:
app: nginx-ingress
component: default-backend
release: ingress-controller
sessionAffinity: None
type: ClusterIP
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: nginx-ingress-serviceaccount
namespace: ingress
labels:
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
name: nginx-ingress-clusterrole
namespace: ingress
labels:
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx
rules:
- apiGroups:
- ""
resources:
- configmaps
- endpoints
- nodes
- pods
- secrets
verbs:
- list
- watch
- apiGroups:
- ""
resources:
- nodes
verbs:
- get
- apiGroups:
- ""
resources:
- services
verbs:
- get
- list
- watch
- apiGroups:
- "extensions"
resources:
- ingresses
verbs:
- get
- list
- watch
- apiGroups:
- ""
resources:
- events
verbs:
- create
- patch
- apiGroups:
- "extensions"
resources:
- ingresses/status
verbs:
- update
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: Role
metadata:
name: nginx-ingress-role
namespace: ingress
labels:
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx
rules:
- apiGroups:
- ""
resources:
- configmaps
- pods
- secrets
- namespaces
verbs:
- get
- apiGroups:
- ""
resources:
- configmaps
resourceNames:
# Defaults to "<election-id>-<ingress-class>"
# Here: "<ingress-controller-leader>-<nginx>"
# This has to be adapted if you change either parameter
# when launching the nginx-ingress-controller.
- "ingress-controller-leader-nginx"
verbs:
- get
- update
- apiGroups:
- ""
resources:
- configmaps
verbs:
- create
- apiGroups:
- ""
resources:
- endpoints
verbs:
- get
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
name: nginx-ingress-clusterrolebinding
labels:
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: nginx-ingress-clusterrole
subjects:
- kind: ServiceAccount
name: nginx-ingress-serviceaccount
namespace: ingress
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: RoleBinding
metadata:
name: nginx-ingress-rolebinding
namespace: ingress
labels:
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: Role
name: nginx-ingress-role
subjects:
- kind: ServiceAccount
name: nginx-ingress-serviceaccount
namespace: ingress
INGRESS SETUP:
services
apiVersion: v1
kind: Service
metadata:
creationTimestamp: "2019-03-25T16:03:01Z"
labels:
app: jaeger
app.kubernetes.io/component: query
app.kubernetes.io/instance: jeager
app.kubernetes.io/managed-by: jaeger-operator
app.kubernetes.io/name: jeager-query
app.kubernetes.io/part-of: jaeger
name: jeager-query
namespace: monitoring-logging
resourceVersion: "3055947"
selfLink: /api/v1/namespaces/monitoring-logging/services/jeager-query
uid: 778550f0-4f17-11e9-9078-001a4a16021e
spec:
externalName: jaeger.example.com
ports:
- port: 16686
protocol: TCP
targetPort: 16686
selector:
app: jaeger
app.kubernetes.io/component: query
app.kubernetes.io/instance: jeager
app.kubernetes.io/managed-by: jaeger-operator
app.kubernetes.io/name: jeager-query
app.kubernetes.io/part-of: jaeger
sessionAffinity: None
type: ExternalName
status:
loadBalancer: {}
apiVersion: v1
kind: Service
metadata:
creationTimestamp: "2019-03-25T15:40:30Z"
labels:
app: grafana
chart: grafana-2.2.4
heritage: Tiller
release: grafana
name: grafana
namespace: monitoring-logging
resourceVersion: "3053698"
selfLink: /api/v1/namespaces/monitoring-logging/services/grafana
uid: 51b9d878-4f14-11e9-9078-001a4a16021e
spec:
externalName: grafana.example.com
ports:
- name: http
port: 3000
protocol: TCP
targetPort: 3000
selector:
app: grafana
release: grafana
sessionAffinity: None
type: ExternalName
status:
loadBalancer: {}
INGRESS
Ingress 1
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
annotations:
ingress.kubernetes.io/service-upstream: "true"
creationTimestamp: "2019-03-25T21:13:56Z"
generation: 1
labels:
app: jaeger
app.kubernetes.io/component: query-ingress
app.kubernetes.io/instance: jeager
app.kubernetes.io/managed-by: jaeger-operator
app.kubernetes.io/name: jeager-query
app.kubernetes.io/part-of: jaeger
name: jaeger-query
namespace: monitoring-logging
resourceVersion: "3111683"
selfLink: /apis/extensions/v1beta1/namespaces/monitoring-logging/ingresses/jaeger-query
uid: e6347f6b-4f42-11e9-9e8e-001a4a16021c
spec:
rules:
- host: jaeger.example.com
http:
paths:
- backend:
serviceName: jeager-query
servicePort: 16686
status:
loadBalancer: {}
Ingress 2
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
annotations:
kubectl.kubernetes.io/last-applied-configuration: |
{"apiVersion":"extensions/v1beta1","kind":"Ingress","metadata":{"annotations":{},"labels":{"app":"grafana"},"name":"grafana","namespace":"monitoring-logging"},"spec":{"rules":[{"host":"grafana.example.com","http":{"paths":[{"backend":{"serviceName":"grafana","servicePort":3000}}]}}]}}
creationTimestamp: "2019-03-25T17:52:40Z"
generation: 1
labels:
app: grafana
name: grafana
namespace: monitoring-logging
resourceVersion: "3071719"
selfLink: /apis/extensions/v1beta1/namespaces/monitoring-logging/ingresses/grafana
uid: c89d7f34-4f26-11e9-8c10-001a4a16021d
spec:
rules:
- host: grafana.example.com
http:
paths:
- backend:
serviceName: grafana
servicePort: 3000
status:
loadBalancer: {}
Endpoints
Endpoint 1
apiVersion: v1
kind: Endpoints
metadata:
creationTimestamp: "2019-03-25T15:40:30Z"
labels:
app: grafana
chart: grafana-2.2.4
heritage: Tiller
release: grafana
name: grafana
namespace: monitoring-logging
resourceVersion: "3050562"
selfLink: /api/v1/namespaces/monitoring-logging/endpoints/grafana
uid: 51bb1f9c-4f14-11e9-9e8e-001a4a16021c
subsets:
- addresses:
- ip: 10.42.0.15
nodeName: kuinfra01.example.com
targetRef:
kind: Pod
name: grafana-b44b4f867-bcq2x
namespace: monitoring-logging
resourceVersion: "1386975"
uid: 433e3d21-4827-11e9-9e8e-001a4a16021c
ports:
- name: http
port: 3000
protocol: TCP
Endpoint 2
apiVersion: v1
kind: Endpoints
metadata:
creationTimestamp: "2019-03-25T16:03:01Z"
labels:
app: jaeger
app.kubernetes.io/component: service-query
app.kubernetes.io/instance: jeager
app.kubernetes.io/managed-by: jaeger-operator
app.kubernetes.io/name: jeager-query
app.kubernetes.io/part-of: jaeger
name: jeager-query
namespace: monitoring-logging
resourceVersion: "3114702"
selfLink: /api/v1/namespaces/monitoring-logging/endpoints/jeager-query
uid: 7786d833-4f17-11e9-9e8e-001a4a16021c
subsets:
- addresses:
- ip: 10.35.0.3
nodeName: kunode02.example.com
targetRef:
kind: Pod
name: jeager-query-7d9775d8f7-2hwdn
namespace: monitoring-logging
resourceVersion: "3114693"
uid: fdac9771-4f49-11e9-9e8e-001a4a16021c
ports:
- name: query
port: 16686
protocol: TCP
I am able to curl the endpoints from inside the ingress-controller pod:
# kubectl exec -it nginx-ingress-controller-5dd67f88cc-z2g8s -n ingress -- /bin/bash
www-data#nginx-ingress-controller-5dd67f88cc-z2g8s:/etc/nginx$ curl -k https://localhost
Found.
www-data#nginx-ingress-controller-5dd67f88cc-z2g8s:/etc/nginx$ curl http://localhost
<html>
<head><title>308 Permanent Redirect</title></head>
<body>
<center><h1>308 Permanent Redirect</h1></center>
<hr><center>nginx/1.15.9</center>
</body>
</html>
www-data#nginx-ingress-controller-5dd67f88cc-z2g8s:/etc/nginx$ exit
But from outside, when I try to reach jaeger.example.com or grafana.example.com, I get 502 Bad Gateway and the following error log:
10.39.0.0 - [10.39.0.0] - - [25/Mar/2019:16:40:32 +0000] "GET /search HTTP/1.1" 502 559 "-" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/73.0.3683.86 Safari/537.36" 514 0.001 [monitoring-logging-jeager-query-16686] vip:16686, vip:16686, vip:16686 0, 0, 0 0.001, 0.000, 0.000 502, 502, 502 b7c813286fccf27fffa03eb6564edfd1
2019/03/25 16:40:32 [error] 2816#2816: *4617326 connect() failed (111: Connection refused) while connecting to upstream, client: 10.39.0.0, server: _, request: "GET /favicon.ico HTTP/1.1", upstream: "http://vip:16686/favicon.ico", host: "jeager.example.com", referrer: "http://jeager.example.com/search"
2019/03/25 16:40:32 [error] 2816#2816: *4617326 connect() failed (111: Connection refused) while connecting to upstream, client: 10.39.0.0, server: _, request: "GET /favicon.ico HTTP/1.1", upstream: "http://vip:16686/favicon.ico", host: "jeager.example.com", referrer: "http://jeager.example.com/search"
2019/03/25 16:40:32 [error] 2816#2816: *4617326 connect() failed (111: Connection refused) while connecting to upstream, client: 10.39.0.0, server: _, request: "GET /favicon.ico HTTP/1.1", upstream: "http://vip:16686/favicon.ico", host: "jeager.example.com", referrer: "http://jeager.example.com/search"
10.39.0.0 - [10.39.0.0] - - [25/Mar/2019:16:40:32 +0000] "GET /favicon.ico HTTP/1.1" 502 559 "http://jeager.example.com/search" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/73.0.3683.86 Safari/537.36" 494 0.001 [monitoring-logging-jeager-query-16686] vip:16686, vip:16686, vip:16686 0, 0, 0 0.000, 0.001, 0.000 502, 502, 502 9e582912614e67dfee6be1f679de5933
I0325 16:40:32.497868 8 socket.go:225] skiping metric for host jeager.example.com that is not being served
I0325 16:40:32.497886 8 socket.go:225] skiping metric for host jeager.example.com that is not being served
First, thanks to cookiedough for the clue that helped with the service issue. I later hit a problem creating the service using an external name, but I found my mistake thanks to the user "Long" on Slack: I was using a Service of type ExternalName when it should have been type ClusterIP. Here are the steps to solve the problem (note: the HTTPS issue is a separate problem):
1- Create a wildcard DNS zone pointing to the public IP
2- Create the new Service as type ClusterIP
3- In the namespace of the service, create an Ingress using the following example (YAML):
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  labels:
    app: grafana
  name: grafana
  namespace: grafana-namespace
spec:
  rules:
  - host: grafana.example.com
    http:
      paths:
      - backend:
          serviceName: grafana
          servicePort: 3000
4- kubectl apply -f grafana-ingress.yaml
Now you can reach Grafana at http://grafana.example.com
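To verify from outside once the wildcard DNS record resolves to the public VIP (hostname taken from the example above; the VIP placeholder is illustrative):

curl -I http://grafana.example.com/
# or, without DNS, against the VIP directly:
curl -I -H 'Host: grafana.example.com' http://<public-vip>/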
Environment: Ubuntu 18.06 bare metal, cluster set up with kubeadm (single node)
I want to access the cluster via port 80. Currently I can access it via the NodePort (domain.com:31668/) but not via port 80. I am using MetalLB. Do I need something else to handle incoming traffic?
So the current topology would be:
LoadBalancer > Ingress Controller > Ingress > Service
kubectl -n ingress-nginx describe service/ingress-nginx:
Name: ingress-nginx
Namespace: ingress-nginx
Labels: app.kubernetes.io/name=ingress-nginx
app.kubernetes.io/part-of=ingress-nginx
Annotations: <none>
Selector: app.kubernetes.io/name=ingress-nginx,app.kubernetes.io/part-of=ingress-nginx
Type: LoadBalancer
IP: 10.99.6.137
LoadBalancer Ingress: 192.168.1.240
Port: http 80/TCP
TargetPort: 80/TCP
NodePort: http 31668/TCP
Endpoints: 192.168.0.8:80
Port: https 443/TCP
TargetPort: 443/TCP
NodePort: https 30632/TCP
Endpoints: 192.168.0.8:443
Session Affinity: None
External Traffic Policy: Cluster
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal IPAllocated 35m metallb-controller Assigned IP "192.168.1.240"
Since this is a bare-metal environment, I am using MetalLB.
MetalLB config:
apiVersion: v1
kind: ConfigMap
metadata:
namespace: metallb-system
name: config
data:
config: |
address-pools:
- name: default
protocol: layer2
addresses:
- 192.168.1.240-192.168.1.250
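With this layer2 pool, the ingress-nginx Service of type LoadBalancer should receive an address from 192.168.1.240-250; a hedged check:

kubectl -n metallb-system get pods
kubectl -n ingress-nginx get svc ingress-nginx -o wide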
Ingress controller YAML:
apiVersion: v1
kind: Namespace
metadata:
  name: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: nginx-configuration
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: tcp-services
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: udp-services
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nginx-ingress-serviceaccount
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
  name: nginx-ingress-clusterrole
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
rules:
- apiGroups:
- ""
resources:
- configmaps
- endpoints
- nodes
- pods
- secrets
verbs:
- list
- watch
- apiGroups:
- ""
resources:
- nodes
verbs:
- get
- apiGroups:
- ""
resources:
- services
verbs:
- get
- list
- watch
- apiGroups:
- "extensions"
resources:
- ingresses
verbs:
- get
- list
- watch
- apiGroups:
- ""
resources:
- events
verbs:
- create
- patch
- apiGroups:
- "extensions"
resources:
- ingresses/status
verbs:
- update
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: Role
metadata:
  name: nginx-ingress-role
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
rules:
- apiGroups:
- ""
resources:
- configmaps
- pods
- secrets
- namespaces
verbs:
- get
- apiGroups:
- ""
resources:
- configmaps
resourceNames:
# Defaults to "<election-id>-<ingress-class>"
# Here: "<ingress-controller-leader>-<nginx>"
# This has to be adapted if you change either parameter
# when launching the nginx-ingress-controller.
- "ingress-controller-leader-nginx"
verbs:
- get
- update
- apiGroups:
- ""
resources:
- configmaps
verbs:
- create
- apiGroups:
- ""
resources:
- endpoints
verbs:
- get
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: RoleBinding
metadata:
  name: nginx-ingress-role-nisa-binding
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: nginx-ingress-role
subjects:
- kind: ServiceAccount
name: nginx-ingress-serviceaccount
namespace: ingress-nginx
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: nginx-ingress-clusterrole-nisa-binding
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: nginx-ingress-clusterrole
subjects:
- kind: ServiceAccount
name: nginx-ingress-serviceaccount
namespace: ingress-nginx
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: nginx-ingress-controller
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app.kubernetes.io/name: ingress-nginx
      app.kubernetes.io/part-of: ingress-nginx
  template:
metadata:
labels:
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx
annotations:
prometheus.io/port: "10254"
prometheus.io/scrape: "true"
spec:
serviceAccountName: nginx-ingress-serviceaccount
containers:
- name: nginx-ingress-controller
image: quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.21.0
args:
- /nginx-ingress-controller
- --configmap=$(POD_NAMESPACE)/nginx-configuration
- --tcp-services-configmap=$(POD_NAMESPACE)/tcp-services
- --udp-services-configmap=$(POD_NAMESPACE)/udp-services
- --publish-service=$(POD_NAMESPACE)/ingress-nginx
- --annotations-prefix=nginx.ingress.kubernetes.io
securityContext:
capabilities:
drop:
- ALL
add:
- NET_BIND_SERVICE
# www-data -> 33
runAsUser: 33
env:
- name: POD_NAME
valueFrom:
fieldRef:
fieldPath: metadata.name
- name: POD_NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.namespace
ports:
- name: http
containerPort: 80
- name: https
containerPort: 443
livenessProbe:
failureThreshold: 3
httpGet:
path: /healthz
port: 10254
scheme: HTTP
initialDelaySeconds: 10
periodSeconds: 10
successThreshold: 1
timeoutSeconds: 1
readinessProbe:
failureThreshold: 3
httpGet:
path: /healthz
port: 10254
scheme: HTTP
periodSeconds: 10
successThreshold: 1
timeoutSeconds: 1
---
Output of curl -v http://192.168.1.240 (executed on the server):
* Rebuilt URL to: http://192.168.1.240/
* Trying 192.168.1.240...
* TCP_NODELAY set
* Connected to 192.168.1.240 (192.168.1.240) port 80 (#0)
> GET / HTTP/1.1
> Host: 192.168.1.240
> User-Agent: curl/7.61.0
> Accept: */*
>
< HTTP/1.1 404 Not Found
< Server: nginx/1.15.6
< Date: Thu, 27 Dec 2018 19:03:28 GMT
< Content-Type: text/html
< Content-Length: 153
< Connection: keep-alive
<
<html>
<head><title>404 Not Found</title></head>
<body>
<center><h1>404 Not Found</h1></center>
<hr><center>nginx/1.15.6</center>
</body>
</html>
* Connection #0 to host 192.168.1.240 left intact
kubectl describe ingress articleservice-ingress
Name: articleservice-ingress
Namespace: default
Address: 192.168.1.240
Default backend: default-http-backend:80 (<none>)
Rules:
Host Path Backends
---- ---- --------
host.com
/articleservice articleservice:31001 (<none>)
Annotations:
nginx.ingress.kubernetes.io/rewrite-target: /
Events: <none>
curl -vH 'host: elpsit.com' http://192.168.1.240/articleservice/system/ipaddr
I can reach the ingress as expected from inside the server.
I have installed Argo in my own namespace in a central Kubernetes cluster in my organization.
After installation, when the Argo "workflow-controller" tries to fetch its ConfigMap via the API server, I get a timeout error.
time="2018-08-15T01:24:40Z" level=fatal msg="Get https://192.168.0.1:443/api/v1/namespaces/2304613691/configmaps/workflow-controller-configmap: dial tcp 192.168.0.1:443: i/o timeout\ngithub.com/argoproj/argo/errors.Wrap\n\t/root/go/src/github.com/argoproj/argo/errors/errors.go:87\ngithub.com/argoproj/argo/errors.InternalWrapError\n\t/root/go/src/github.com/argoproj/argo/errors/errors.go:70\ngithub.com/argoproj/argo/workflow/controller.(*WorkflowController).ResyncConfig\n\t/root/go/src/github.com/argoproj/argo/workflow/controller/controller.go:295\nmain.Run\n\t/root/go/src/github.com/argoproj/argo/cmd/workflow-controller/main.go:96\ngithub.com/argoproj/argo/vendor/github.com/spf13/cobra.(*Command).execute\n\t/root/go/src/github.com/argoproj/argo/vendor/github.com/spf13/cobra/command.go:750\ngithub.com/argoproj/argo/vendor/github.com/spf13/cobra.(*Command).ExecuteC\n\t/root/go/src/github.com/argoproj/argo/vendor/github.com/spf13/cobra/command.go:831\ngithub.com/argoproj/argo/vendor/github.com/spf13/cobra.(*Command).Execute\n\t/root/go/src/github.com/argoproj/argo/vendor/github.com/spf13/cobra/command.go:784\nmain.main\n\t/root/go/src/github.com/argoproj/argo/cmd/workflow-controller/main.go:68\nruntime.main\n\t/usr/local/go/src/runtime/proc.go:195\nruntime.goexit\n\t/usr/local/go/src/runtime/asm_amd64.s:2337"
It is trying to access the following URL from within the pod container: https://192.168.0.1:443/api/v1/namespaces/2304613691/configmaps/workflow-controller-configmap
I have also modified the Kubernetes host config to point at kubernetes.default and added an allow-all ingress and egress network policy.
But the exception is still there.
time="2018-08-16T18:23:55Z" level=fatal msg="Get https://kubernetes.default:443/api/v1/namespaces/2304613691/configmaps/workflow-controller-configmap: dial tcp: i/o timeout\ngithub.com/argoproj/argo/errors.Wrap\n\t/root/go/src/github.com/argoproj/argo/errors/errors.go:87\ngithub.com/argoproj/argo/errors.InternalWrapError\n\t/root/go/src/github.com/argoproj/argo/errors/errors.go:70\ngithub.com/argoproj/argo/workflow/controller.(*WorkflowController).ResyncConfig\n\t/root/go/src/github.com/argoproj/argo/workflow/controller/controller.go:295\nmain.Run\n\t/root/go/src/github.com/argoproj/argo/cmd/workflow-controller/main.go:96\ngithub.com/argoproj/argo/vendor/github.com/spf13/cobra.(*Command).execute\n\t/root/go/src/github.com/argoproj/argo/vendor/github.com/spf13/cobra/command.go:750\ngithub.com/argoproj/argo/vendor/github.com/spf13/cobra.(*Command).ExecuteC\n\t/root/go/src/github.com/argoproj/argo/vendor/github.com/spf13/cobra/command.go:831\ngithub.com/argoproj/argo/vendor/github.com/spf13/cobra.(*Command).Execute\n\t/root/go/src/github.com/argoproj/argo/vendor/github.com/spf13/cobra/command.go:784\nmain.main\n\t/root/go/src/github.com/argoproj/argo/cmd/workflow-controller/main.go:68\nruntime.main\n\t/usr/local/go/src/runtime/proc.go:195\nruntime.goexit\n\t/usr/local/go/src/runtime/asm_amd64.s:2337"
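To separate RBAC problems from network reachability, it can help to hit the API endpoint from a throwaway pod in the same namespace (the image name here is just an example):

kubectl -n 2304613691 run apicheck --rm -it --restart=Never --image=curlimages/curl -- curl -sk https://kubernetes.default:443/version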
apiVersion: v1
items:
- apiVersion: v1
kind: ServiceAccount
metadata:
name: argo
namespace: 2304613691
- apiVersion: v1
kind: ServiceAccount
metadata:
name: argo-ui
namespace: 2304613691
kind: List
---
apiVersion: v1
items:
- apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
name: argo-role
namespace: 2304613691
rules:
- apiGroups:
- ""
resources:
- pods
- pods/exec
verbs:
- create
- get
- list
- watch
- update
- patch
- apiGroups:
- ""
resources:
- configmaps
verbs:
- get
- watch
- list
- apiGroups:
- ""
resources:
- persistentvolumeclaims
verbs:
- create
- delete
- apiGroups:
- argoproj.io
resources:
- workflows
verbs:
- get
- list
- watch
- update
- patch
- apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
name: argo-ui-role
namespace: 2304613691
rules:
- apiGroups:
- ""
resources:
- pods
- pods/exec
- pods/log
verbs:
- get
- list
- watch
- apiGroups:
- ""
resources:
- secrets
verbs:
- get
- apiGroups:
- argoproj.io
resources:
- workflows
verbs:
- get
- list
- watch
kind: List
---
apiVersion: v1
items:
- apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
name: argo-binding
namespace: "2304613691"
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: Role
name: argo-role
subjects:
- kind: ServiceAccount
name: argo
namespace: "2304613691"
- apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
name: argo-ui-binding
namespace: "2304613691"
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: Role
name: argo-ui-role
subjects:
- kind: ServiceAccount
name: argo-ui
namespace: "2304613691"
kind: List
---
apiVersion: v1
items:
- apiVersion: extensions/v1beta1
kind: Deployment
metadata:
annotations:
deployment.kubernetes.io/revision: "1"
generation: 1
name: workflow-controller
namespace: 2304613691
spec:
progressDeadlineSeconds: 600
replicas: 1
revisionHistoryLimit: 10
selector:
matchLabels:
app: workflow-controller
strategy:
rollingUpdate:
maxSurge: 25%
maxUnavailable: 25%
type: RollingUpdate
template:
metadata:
labels:
app: workflow-controller
spec:
containers:
- args:
- --configmap
- workflow-controller-configmap
command:
- workflow-controller
env:
- name: ARGO_NAMESPACE
valueFrom:
fieldRef:
apiVersion: v1
fieldPath: metadata.namespace
image: <our repo>/sample-agupta34/workflow-controller:v2.1.1
imagePullPolicy: IfNotPresent
name: workflow-controller
resources: {}
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: File
dnsPolicy: ClusterFirst
restartPolicy: Always
schedulerName: default-scheduler
securityContext: {}
serviceAccount: argo
serviceAccountName: argo
terminationGracePeriodSeconds: 30
- apiVersion: extensions/v1beta1
kind: Deployment
metadata:
annotations:
deployment.kubernetes.io/revision: "1"
generation: 1
name: argo-ui
namespace: 2304613691
spec:
progressDeadlineSeconds: 600
replicas: 1
revisionHistoryLimit: 10
selector:
matchLabels:
app: argo-ui
strategy:
rollingUpdate:
maxSurge: 25%
maxUnavailable: 25%
type: RollingUpdate
template:
metadata:
labels:
app: argo-ui
spec:
containers:
- env:
- name: ARGO_NAMESPACE
valueFrom:
fieldRef:
apiVersion: v1
fieldPath: metadata.namespace
- name: IN_CLUSTER
value: "true"
- name: ENABLE_WEB_CONSOLE
value: "false"
- name: BASE_HREF
value: /
image: <our repo>/sample-agupta34/argoui:v2.1.1
imagePullPolicy: IfNotPresent
name: argo-ui
resources: {}
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: File
dnsPolicy: ClusterFirst
restartPolicy: Always
schedulerName: default-scheduler
securityContext: {}
serviceAccount: argo-ui
serviceAccountName: argo-ui
terminationGracePeriodSeconds: 30
kind: List
---
apiVersion: v1
data:
config: |
artifactRepository: {}
executorImage: <our repo>/sample-agupta34/argoexec:v2.1.1
kind: ConfigMap
metadata:
name: workflow-controller-configmap
namespace: 2304613691
---
apiVersion: v1
kind: Service
metadata:
name: argo-ui
namespace: 2304613691
labels:
app: argo-ui
spec:
ports:
- port: 80
protocol: TCP
targetPort: 8001
selector:
app: argo-ui
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: argo-ui
namespace: 2304613691
annotations:
kubernetes.io/ingress.class: "netscaler.v2"
netscaler.applecloud.io/insecure-backend: "true"
spec:
backend:
serviceName: argo-ui
servicePort: 80
---
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
name: argo-and-argo-ui-netpol
spec:
podSelector:
matchLabels:
app: workflow-controller
app: argo-ui
ingress:
- {}
egress:
- {}
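For comparison, an allow-all policy in both directions is typically written with an empty podSelector and explicit policyTypes; a minimal sketch (not the manifest above):

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-all
  namespace: "2304613691"
spec:
  podSelector: {}
  policyTypes:
  - Ingress
  - Egress
  ingress:
  - {}
  egress:
  - {}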