AKS not creating KUBERNETES_SERVICE_HOST and KUBERNETES_SERVICE_PORT environment variables - kubernetes

I have a Deployment and Service in AKS, with a linked ServiceAccount that allows the pods to get, watch and list services.
In an AKS deployment this used to create the KUBERNETES_SERVICE_HOST and KUBERNETES_SERVICE_PORT environment variables within the pods. Now, it seems, it doesn't.
The only thing that's changed with that particular service/deployment was a series of cluster updates, at some point during which it seems to have stopped working.
We've tried redeploying, and deleting and recreating the Service, but nothing seems to work.
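For reference, this is how we're checking for the variables (a sketch; it assumes the pods run in the test-services namespace used by the manifests below):
kubectl exec -n test-services deploy/open-api -- printenv | grep '^KUBERNETES_'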
Here is the Deployment yaml:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: open-api
  labels:
    name: open-api
    app: test-services
spec:
  selector:
    matchLabels:
      name: open-api
      app: test-services
  strategy:
    rollingUpdate:
      maxUnavailable: 1
      maxSurge: 1
  minReadySeconds: 60
  replicas: 1
  template:
    metadata:
      labels:
        name: open-api
        app: test-services
    spec:
      serviceAccountName: open-api-account
      containers:
      - name: open-api
        image: open-api
        terminationMessagePolicy: FallbackToLogsOnError
        ports:
        - containerPort: 80
        resources:
          requests:
            memory: "70Mi"
            cpu: "50m"
          limits:
            memory: "150Mi"
            cpu: "100m"
        readinessProbe:
          httpGet:
            path: /pingz
            port: 80
          initialDelaySeconds: 10
          periodSeconds: 3
        env:
        - name: "ASPNETCORE_ENVIRONMENT"
          value: "$ENV_VAR"
Here's the yaml for the Service:
apiVersion: v1
kind: Service
metadata:
  name: open-api
  labels:
    name: open-api
    app: test-services
spec:
  type: ClusterIP
  ports:
  - port: 80
  selector:
    name: open-api
    app: test-services
Here's the yaml for the ServiceAccount:
apiVersion: v1
kind: ServiceAccount
metadata:
  name: open-api-account
  namespace: test-services
automountServiceAccountToken: false
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  namespace: test-services
  name: open-api-service-reader
rules:
- apiGroups: [""] # "" indicates the core API group
  resources: ["services"]
  verbs: ["get", "watch", "list"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: open-api-service-reader
  namespace: test-services
subjects:
- kind: ServiceAccount
  # Reference to ServiceAccount kind's `metadata.name`
  name: open-api-account
  # Reference to ServiceAccount kind's `metadata.namespace`
  namespace: test-services
roleRef:
  kind: ClusterRole
  name: open-api-service-reader
  apiGroup: rbac.authorization.k8s.io

These variables seem to be added automatically for pods that exist in kube-system. Not sure if this can be extended to other namespaces.
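If nothing else works, a hedged workaround is to set the variables explicitly on the pod spec, so in-cluster clients that rely on them keep working; the values below are assumptions based on the default in-cluster API server address:
spec:
  template:
    spec:
      containers:
      - name: open-api
        env:
        - name: KUBERNETES_SERVICE_HOST
          value: "kubernetes.default.svc"  # assumed; resolves via cluster DNS
        - name: KUBERNETES_SERVICE_PORT
          value: "443"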

Related

Kubernetes - error: Metrics not available for pod

Problem statement
The Kubernetes metrics server starts, but doesn't collect metrics.
When running $ kubectl top pods it returns error: Metrics not available for pod <namespace>/<deployment>.
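Two quick checks that help narrow this down (a sketch; both commands run against the current context):
kubectl get apiservice v1beta1.metrics.k8s.io   # Available should be True
kubectl get --raw /apis/metrics.k8s.io/v1beta1/nodes   # raw query against the metrics API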
Artifacts
I'm using the following metrics.yml
apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    k8s-app: metrics-server
  name: metrics-server
  namespace: metrics-server
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  labels:
    k8s-app: metrics-server
    rbac.authorization.k8s.io/aggregate-to-admin: "true"
    rbac.authorization.k8s.io/aggregate-to-edit: "true"
    rbac.authorization.k8s.io/aggregate-to-view: "true"
  name: system:aggregated-metrics-reader
rules:
- apiGroups:
  - metrics.k8s.io
  resources:
  - pods
  - nodes
  verbs:
  - get
  - list
  - watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  labels:
    k8s-app: metrics-server
  name: system:metrics-server
rules:
- apiGroups:
  - ""
  resources:
  - nodes/metrics
  verbs:
  - get
- apiGroups:
  - ""
  resources:
  - pods
  - nodes
  - nodes/stats
  - namespaces
  - configmaps
  verbs:
  - get
  - list
  - watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  labels:
    k8s-app: metrics-server
  name: metrics-server-auth-reader
  namespace: metrics-server
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: extension-apiserver-authentication-reader
subjects:
- kind: ServiceAccount
  name: metrics-server
  namespace: metrics-server
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  labels:
    k8s-app: metrics-server
  name: metrics-server:system:auth-delegator
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:auth-delegator
subjects:
- kind: ServiceAccount
  name: metrics-server
  namespace: metrics-server
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  labels:
    k8s-app: metrics-server
  name: system:metrics-server
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:metrics-server
subjects:
- kind: ServiceAccount
  name: metrics-server
  namespace: metrics-server
---
apiVersion: v1
kind: Service
metadata:
  labels:
    k8s-app: metrics-server
  name: metrics-server
  namespace: metrics-server
spec:
  ports:
  - name: https
    port: 443
    protocol: TCP
    targetPort: https
  selector:
    k8s-app: metrics-server
---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    k8s-app: metrics-server
  name: metrics-server
  namespace: metrics-server
spec:
  selector:
    matchLabels:
      k8s-app: metrics-server
  strategy:
    rollingUpdate:
      maxUnavailable: 0
  template:
    metadata:
      labels:
        k8s-app: metrics-server
    spec:
      hostNetwork: true
      containers:
      - args:
        - /metrics-server
        - --cert-dir=/tmp
        - --secure-port=4443
        - --kubelet-preferred-address-types=InternalIP
        - --kubelet-use-node-status-port
        - --kubelet-insecure-tls
        image: k8s.gcr.io/metrics-server/metrics-server:v0.6.2
        imagePullPolicy: IfNotPresent
        livenessProbe:
          failureThreshold: 3
          httpGet:
            path: /livez
            port: https
            scheme: HTTPS
          periodSeconds: 10
        name: metrics-server
        ports:
        - containerPort: 4443
          name: https
          protocol: TCP
        readinessProbe:
          failureThreshold: 3
          httpGet:
            path: /readyz
            port: https
            scheme: HTTPS
          initialDelaySeconds: 20
          periodSeconds: 10
        resources:
          requests:
            cpu: 100m
            memory: 200Mi
        securityContext:
          allowPrivilegeEscalation: false
          readOnlyRootFilesystem: true
          runAsNonRoot: true
          runAsUser: 1000
        volumeMounts:
        - mountPath: /tmp
          name: tmp-dir
      nodeSelector:
        kubernetes.io/os: linux
      priorityClassName: system-cluster-critical
      serviceAccountName: metrics-server
      volumes:
      - emptyDir: {}
        name: tmp-dir
---
apiVersion: apiregistration.k8s.io/v1
kind: APIService
metadata:
  labels:
    k8s-app: metrics-server
  name: v1beta1.metrics.k8s.io
spec:
  group: metrics.k8s.io
  groupPriorityMinimum: 100
  insecureSkipTLSVerify: true
  service:
    name: metrics-server
    namespace: metrics-server
  version: v1beta1
  versionPriority: 100
Once deployed on my local machine, the deployment looks like this:
$ kubectl describe deployment.apps/metrics-server -n metrics-server
Name:                   metrics-server
Namespace:              metrics-server
CreationTimestamp:      Tue, 13 Dec 2022 12:38:39 +0100
Labels:                 k8s-app=metrics-server
Annotations:            deployment.kubernetes.io/revision: 1
Selector:               k8s-app=metrics-server
Replicas:               1 desired | 1 updated | 1 total | 1 available | 0 unavailable
StrategyType:           RollingUpdate
MinReadySeconds:        0
RollingUpdateStrategy:  0 max unavailable, 25% max surge
Pod Template:
  Labels:           k8s-app=metrics-server
  Service Account:  metrics-server
  Containers:
   metrics-server:
    Image:      k8s.gcr.io/metrics-server/metrics-server:v0.6.2
    Port:       4443/TCP
    Host Port:  4443/TCP
    Args:
      /metrics-server
      --cert-dir=/tmp
      --secure-port=4443
      --kubelet-preferred-address-types=InternalIP
      --kubelet-use-node-status-port
      --kubelet-insecure-tls
    Requests:
      cpu:        100m
      memory:     200Mi
    Liveness:     http-get https://:https/livez delay=0s timeout=1s period=10s #success=1 #failure=3
    Readiness:    http-get https://:https/readyz delay=20s timeout=1s period=10s #success=1 #failure=3
    Environment:  <none>
    Mounts:
      /tmp from tmp-dir (rw)
  Volumes:
   tmp-dir:
    Type:       EmptyDir (a temporary directory that shares a pod's lifetime)
    Medium:
    SizeLimit:  <unset>
  Priority Class Name:  system-cluster-critical
Conditions:
  Type           Status  Reason
  ----           ------  ------
  Available      True    MinimumReplicasAvailable
  Progressing    True    NewReplicaSetAvailable
OldReplicaSets:  <none>
NewReplicaSet:   metrics-server-5988cd75cb (1/1 replicas created)
Events:
  Type    Reason             Age   From                   Message
  ----    ------             ----  ----                   -------
  Normal  ScalingReplicaSet  2m2s  deployment-controller  Scaled up replica set metrics-server-5988cd75cb to 1
Findings
This exact same setup (without --kubelet-insecure-tls) is working on another, DigitalOcean-hosted cluster, which has the following Kubernetes version:
$ kubectl version
Client Version: version.Info{Major:"1", Minor:"23+", GitVersion:"v1.23.14-dispatcher-dirty", GitCommit:"35498c8a8d141664928467fda116cd500d09bc21", GitTreeState:"dirty", BuildDate:"2022-11-16T20:14:00Z", GoVersion:"go1.17.13", Compiler:"gc", Platform:"windows/amd64"}
Server Version: version.Info{Major:"1", Minor:"23", GitVersion:"v1.23.9", GitCommit:"c1de2d70269039fe55efb98e737d9a29f9155246", GitTreeState:"clean", BuildDate:"2022-07-13T14:19:57Z", GoVersion:"go1.17.11", Compiler:"gc", Platform:"linux/amd64"}
My machine, where the metrics server isn't working, has the version:
$ kubectl version
Client Version: version.Info{Major:"1", Minor:"23+", GitVersion:"v1.23.14-dispatcher-dirty", GitCommit:"35498c8a8d141664928467fda116cd500d09bc21", GitTreeState:"dirty", BuildDate:"2022-11-16T20:14:00Z", GoVersion:"go1.17.13", Compiler:"gc", Platform:"windows/amd64"}
Server Version: version.Info{Major:"1", Minor:"24", GitVersion:"v1.24.0", GitCommit:"4ce5a8954017644c5420bae81d72b09b735c21f0", GitTreeState:"clean", BuildDate:"2022-05-03T13:38:19Z", GoVersion:"go1.18.1", Compiler:"gc", Platform:"linux/amd64"}
Possible solutions?
Try to install the same Kubernetes version on my local machine? --> Tried this, but Docker Desktop doesn't let me choose the server version.
I think I found the solution. I overlooked the following comment: https://github.com/kubernetes-sigs/metrics-server/issues/1061#issuecomment-1239227118
It turns out that Docker Desktop 4.12 has this fixed. I was still on 4.9 but running a new Kubernetes version (1.24), where Dockershim has been removed. Several bugs have been fixed since 4.9. Updating Docker Desktop to 4.15 (currently the latest version) fixed the issue for me.
So to summarize (a shell sketch follows this list):
Update Docker Desktop to 4.12 or later
Reset the Kubernetes cluster
Check if the cluster was upgraded using kubectl version
Apply metrics.yml
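A rough shell version of those steps (the file name matches the manifest below):
kubectl version        # confirm the server version after the reset
kubectl apply -f metrics.yml
kubectl top pods -A    # should start returning numbers after a minute or two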
This is the final metrics.yml
apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    k8s-app: metrics-server
  name: metrics-server
  namespace: metrics-server
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  labels:
    k8s-app: metrics-server
    rbac.authorization.k8s.io/aggregate-to-admin: "true"
    rbac.authorization.k8s.io/aggregate-to-edit: "true"
    rbac.authorization.k8s.io/aggregate-to-view: "true"
  name: system:aggregated-metrics-reader
rules:
- apiGroups:
  - metrics.k8s.io
  resources:
  - pods
  - nodes
  verbs:
  - get
  - list
  - watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  labels:
    k8s-app: metrics-server
  name: system:metrics-server
rules:
- apiGroups:
  - ""
  resources:
  - nodes/metrics
  verbs:
  - get
- apiGroups:
  - ""
  resources:
  - pods
  - nodes
  - nodes/stats
  - namespaces
  - configmaps
  verbs:
  - get
  - list
  - watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  labels:
    k8s-app: metrics-server
  name: metrics-server-auth-reader
  namespace: metrics-server
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: extension-apiserver-authentication-reader
subjects:
- kind: ServiceAccount
  name: metrics-server
  namespace: metrics-server
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  labels:
    k8s-app: metrics-server
  name: metrics-server:system:auth-delegator
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:auth-delegator
subjects:
- kind: ServiceAccount
  name: metrics-server
  namespace: metrics-server
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  labels:
    k8s-app: metrics-server
  name: system:metrics-server
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:metrics-server
subjects:
- kind: ServiceAccount
  name: metrics-server
  namespace: metrics-server
---
apiVersion: v1
kind: Service
metadata:
  labels:
    k8s-app: metrics-server
  name: metrics-server
  namespace: metrics-server
spec:
  ports:
  - name: https
    port: 443
    protocol: TCP
    targetPort: https
  selector:
    k8s-app: metrics-server
---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    k8s-app: metrics-server
  name: metrics-server
  namespace: metrics-server
spec:
  selector:
    matchLabels:
      k8s-app: metrics-server
  strategy:
    rollingUpdate:
      maxUnavailable: 0
  template:
    metadata:
      labels:
        k8s-app: metrics-server
    spec:
      containers:
      - args:
        - --cert-dir=/tmp
        - --secure-port=4443
        - --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname
        - --kubelet-use-node-status-port
        - --metric-resolution=40s
        - --kubelet-insecure-tls
        image: k8s.gcr.io/metrics-server/metrics-server:v0.6.2
        imagePullPolicy: IfNotPresent
        livenessProbe:
          failureThreshold: 3
          httpGet:
            path: /livez
            port: https
            scheme: HTTPS
          periodSeconds: 10
        name: metrics-server
        ports:
        - containerPort: 4443
          name: https
          protocol: TCP
        readinessProbe:
          failureThreshold: 3
          httpGet:
            path: /readyz
            port: https
            scheme: HTTPS
          initialDelaySeconds: 20
          periodSeconds: 10
        resources:
          requests:
            cpu: 100m
            memory: 200Mi
        securityContext:
          allowPrivilegeEscalation: false
          readOnlyRootFilesystem: true
          runAsNonRoot: true
          runAsUser: 1000
        volumeMounts:
        - mountPath: /tmp
          name: tmp-dir
      nodeSelector:
        kubernetes.io/os: linux
      priorityClassName: system-cluster-critical
      serviceAccountName: metrics-server
      volumes:
      - emptyDir: {}
        name: tmp-dir
---
apiVersion: apiregistration.k8s.io/v1
kind: APIService
metadata:
  labels:
    k8s-app: metrics-server
  name: v1beta1.metrics.k8s.io
spec:
  group: metrics.k8s.io
  groupPriorityMinimum: 100
  insecureSkipTLSVerify: true
  service:
    name: metrics-server
    namespace: metrics-server
  version: v1beta1
  versionPriority: 100

Unable to find Prometheus custom app exporter as a target in Prometheus server deployed in Kubernetes

I created a custom exporter in Python using the prometheus-client package, then created the necessary artifacts to expose the metric as a target in Prometheus deployed on Kubernetes.
But I am unable to see the metric as a target, despite following all available instructions.
Help in finding the problem is appreciated.
Here is a summary of what I did:
Installed Prometheus using Helm on the K8s cluster, in the namespace prometheus
Created a Python program with the prometheus-client package to create a metric
Created and pushed an image of the exporter to Docker Hub
Created a deployment against the metrics image, in a namespace prom-test
Created a Service, ServiceMonitor, and a ServiceMonitorSelector
Created a service account, role and binding to enable access to the endpoint
Following is the code.
Service & Deployment
apiVersion: v1
kind: Service
metadata:
  name: test-app-exporter
  namespace: prom-test
  labels:
    app: test-app-exporter
spec:
  type: ClusterIP
  selector:
    app: test-app-exporter
  ports:
  - name: http
    protocol: TCP
    port: 6000
    targetPort: 5000
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: test-app-exporter
  namespace: prom-test
spec:
  selector:
    matchLabels:
      app: test-app-exporter
  template:
    metadata:
      labels:
        app: test-app-exporter
    spec:
      #serviceAccount: test-app-exporter-sa
      containers:
      - name: test-app-exporter
        image: index.docker.io/cbotlagu/test-app-exporter:2
        imagePullPolicy: Always
        resources:
          limits:
            memory: "128Mi"
            cpu: "500m"
        ports:
        - name: http
          containerPort: 5000
      imagePullSecrets:
      - name: myregistrykey
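Before looking at the operator side, it's worth verifying the exporter is reachable through the Service (a sketch; names and ports come from the manifests above):
kubectl -n prom-test port-forward svc/test-app-exporter 6000:6000 &
curl -s http://localhost:6000/metrics | head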
Service account and role binding
apiVersion: v1
kind: ServiceAccount
metadata:
  name: test-app-exporter-sa
  namespace: prom-test
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: test-app-exporter-binding
subjects:
- kind: ServiceAccount
  name: test-app-exporter-sa
  namespace: prom-test
roleRef:
  kind: ClusterRole
  name: test-app-exporter-role
  apiGroup: rbac.authorization.k8s.io
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: test-app-exporter-role
rules:
- apiGroups: [""]
  resources:
  - nodes
  - nodes/metrics
  - services
  - endpoints
  - pods
  verbs: ["get", "list", "watch"]
- apiGroups: [""]
  resources:
  - configmaps
  verbs: ["get"]
- apiGroups:
  - networking.k8s.io
  resources:
  - ingresses
  verbs: ["get", "list", "watch"]
Service Monitor & Selector
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: test-app-exporter-sm
  namespace: prometheus
  labels:
    app: test-app-exporter
    release: prometheus
spec:
  selector:
    matchLabels:
      # Target app service
      app: test-app-exporter
  endpoints:
  - port: http
    interval: 15s
    path: /metrics
  namespaceSelector:
    matchNames:
    - default
    - prom-test
    #any: true
---
apiVersion: monitoring.coreos.com/v1
kind: Prometheus
metadata:
  name: service-monitor-selector
  namespace: prometheus
spec:
  serviceAccountName: test-app-exporter-sa
  serviceMonitorSelector:
    matchLabels:
      # note: this does not match the ServiceMonitor's labels above
      # (app: test-app-exporter), so the monitor won't be selected
      app: test-app-exporter-sm
      release: prometheus
  resources:
    requests:
      memory: 400Mi
I am able to get the target identified by Prometheus, but even though the endpoint can be reached from within the cluster as well as from the node IP, Prometheus says the target is down. In addition, I am unable to see any other targets.
[Prom-UI screenshot]
Any help is greatly appreciated
Following is my changed code
Deployment & Service
apiVersion: v1
kind: Namespace
metadata:
  name: prom-test
---
apiVersion: v1
kind: Service
metadata:
  name: test-app-exporter
  namespace: prom-test
  labels:
    app: test-app-exporter
spec:
  type: NodePort
  selector:
    app: test-app-exporter
  ports:
  - name: http
    protocol: TCP
    port: 5000
    targetPort: 5000
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: test-app-exporter
  namespace: prom-test
spec:
  replicas: 1
  selector:
    matchLabels:
      app: test-app-exporter
  template:
    metadata:
      labels:
        app: test-app-exporter
    spec:
      serviceAccountName: rel-1-kube-prometheus-stac-operator
      containers:
      - name: test-app-exporter
        image: index.docker.io/cbotlagu/test-app-exporter:2
        imagePullPolicy: Always
        resources:
          limits:
            memory: "128Mi"
            cpu: "500m"
        ports:
        - name: http
          containerPort: 5000
      imagePullSecrets:
      - name: myregistrykey
Roles & RoleBindings
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: pod-reader
  namespace: prom-test
rules:
- apiGroups:
  - ""
  resources:
  - nodes/metrics
  - endpoints
  - pods
  - services
  verbs:
  - get
  - list
  - watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: pod-reader-from-prom-test
  namespace: prom-test
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: pod-reader
subjects:
- kind: ServiceAccount
  name: rel-1-kube-prometheus-stac-operator
  namespace: monitoring
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: monitoring-role
  namespace: monitoring
rules:
- apiGroups:
  - ""
  resources:
  - nodes/metrics
  - endpoints
  - pods
  - services
  verbs:
  - get
  - list
  - watch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: pod-reader-from-prom-test
  namespace: monitoring
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: monitoring-role
subjects:
- kind: ServiceAccount
  name: rel-1-kube-prometheus-stac-operator
  namespace: monitoring
Service Monitor
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: test-app-exporter-sm
  namespace: monitoring
  labels:
    app: test-app-exporter
    release: prometheus
spec:
  selector:
    matchLabels:
      # Target app service
      app: test-app-exporter
  endpoints:
  - port: http
    interval: 15s
    path: /metrics
  namespaceSelector:
    matchNames:
    - prom-test
    - monitoring
---
apiVersion: monitoring.coreos.com/v1
kind: Prometheus
metadata:
  name: service-monitor-selector
  namespace: monitoring
spec:
  serviceAccountName: rel-1-kube-prometheus-stac-operator
  serviceMonitorSelector:
    matchLabels:
      app: test-app-exporter
      release: prometheus
  resources:
    requests:
      memory: 400Mi
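To confirm whether the ServiceMonitor was actually picked up, port-forwarding to the Prometheus instance and opening its targets page works; the service name below is an assumption (the operator typically creates one called prometheus-operated):
kubectl -n monitoring port-forward svc/prometheus-operated 9090:9090
# then open http://localhost:9090/targets in a browser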

istio-ingressgateway: Readiness probe failed: HTTP probe failed with statuscode: 503

I am trying to set up Istio 1.5.1 in a minikube Kubernetes cluster, following the official Knative documentation for setting up Istio without sidecar injection. I'm facing an issue with the istio-ingressgateway service, which shows the external IP of the ingressgateway service as <pending>.
I have gone through other answers posted here as well as on many other forums, but none of them helped in my case.
Using minikube v1.9.1 with driver=none
helm v2.16.5
kubectl v1.18.0
I am getting the following output for:
kubectl get pods --namespace istio-system
NAME                                   READY   STATUS    RESTARTS   AGE
istio-ingressgateway-b599cccd9-qnp5l   1/1     Running   0          60s
istio-pilot-b67ccb85-mfllc             1/1     Running   0          60s
kubectl get svc --namespace istio-system
NAME                   TYPE           CLUSTER-IP       EXTERNAL-IP   PORT(S)                                                                                                                                      AGE
istio-ingressgateway   LoadBalancer   10.104.37.189    <pending>     15020:30168/TCP,80:31380/TCP,443:31390/TCP,31400:31400/TCP,15029:32576/TCP,15030:31080/TCP,15031:31767/TCP,15032:31812/TCP,15443:30660/TCP   74s
istio-pilot            ClusterIP      10.100.224.212   <none>        15010/TCP,15011/TCP,8080/TCP,15014/TCP                                                                                                       74s
On describing the ingress pod I get a warning: Readiness probe failed: HTTP probe failed with statuscode: 503.
Can someone help me solve this issue?
Thanks!
Updating with output from trying out the answer:
kubectl apply -f metallb.yaml
podsecuritypolicy.policy/controller created
podsecuritypolicy.policy/speaker created
serviceaccount/controller created
serviceaccount/speaker created
clusterrole.rbac.authorization.k8s.io/metallb-system:controller created
clusterrole.rbac.authorization.k8s.io/metallb-system:speaker created
role.rbac.authorization.k8s.io/config-watcher created
role.rbac.authorization.k8s.io/pod-lister created
clusterrolebinding.rbac.authorization.k8s.io/metallb-system:controller created
clusterrolebinding.rbac.authorization.k8s.io/metallb-system:speaker created
rolebinding.rbac.authorization.k8s.io/config-watcher created
rolebinding.rbac.authorization.k8s.io/pod-lister created
daemonset.apps/speaker created
deployment.apps/controller created
$ kubectl get pods -n metallb-system
No resources found in metallb-system namespace.
After applying the yaml file it shows that everything is created, but no pods are deployed under the metallb-system namespace.
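For reference, this is how I'm checking that nothing came up (namespace from the manifest below):
kubectl get daemonset,deployment -n metallb-system
kubectl get events -n metallb-system --sort-by=.lastTimestamp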
Minikube may not provide an external IP or load balancer; you might have to use MetalLB in minikube.
MetalLB: https://metallb.universe.tf/
You can also check this out for reference: https://medium.com/@emirmujic/istio-and-metallb-on-minikube-242281b1134b
This is also a good reference: https://gist.github.com/diegopacheco/9ed4fd9b9a0f341e94e0eb791169ecf9
MetalLB YAML:
apiVersion: v1
kind: Namespace
metadata:
  name: metallb-system
  labels:
    app: metallb
---
apiVersion: v1
kind: ServiceAccount
metadata:
  namespace: metallb-system
  name: controller
  labels:
    app: metallb
---
apiVersion: v1
kind: ServiceAccount
metadata:
  namespace: metallb-system
  name: speaker
  labels:
    app: metallb
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: metallb-system:controller
  labels:
    app: metallb
rules:
- apiGroups: [""]
  resources: ["services"]
  verbs: ["get", "list", "watch", "update"]
- apiGroups: [""]
  resources: ["services/status"]
  verbs: ["update"]
- apiGroups: [""]
  resources: ["events"]
  verbs: ["create", "patch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: metallb-system:speaker
  labels:
    app: metallb
rules:
- apiGroups: [""]
  resources: ["services", "endpoints", "nodes"]
  verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: metallb-system
  name: config-watcher
  labels:
    app: metallb
rules:
- apiGroups: [""]
  resources: ["configmaps"]
  verbs: ["get", "list", "watch"]
- apiGroups: [""]
  resources: ["events"]
  verbs: ["create"]
---
## Role bindings
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: metallb-system:controller
  labels:
    app: metallb
subjects:
- kind: ServiceAccount
  name: controller
  namespace: metallb-system
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: metallb-system:controller
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: metallb-system:speaker
  labels:
    app: metallb
subjects:
- kind: ServiceAccount
  name: speaker
  namespace: metallb-system
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: metallb-system:speaker
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  namespace: metallb-system
  name: config-watcher
  labels:
    app: metallb
subjects:
- kind: ServiceAccount
  name: controller
- kind: ServiceAccount
  name: speaker
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: config-watcher
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  namespace: metallb-system
  name: speaker
  labels:
    app: metallb
    component: speaker
spec:
  selector:
    matchLabels:
      app: metallb
      component: speaker
  template:
    metadata:
      labels:
        app: metallb
        component: speaker
      annotations:
        prometheus.io/scrape: "true"
        prometheus.io/port: "7472"
    spec:
      serviceAccountName: speaker
      terminationGracePeriodSeconds: 0
      hostNetwork: true
      containers:
      - name: speaker
        image: metallb/speaker:v0.7.1
        imagePullPolicy: IfNotPresent
        args:
        - --port=7472
        - --config=config
        env:
        - name: METALLB_NODE_NAME
          valueFrom:
            fieldRef:
              fieldPath: spec.nodeName
        ports:
        - name: monitoring
          containerPort: 7472
        resources:
          limits:
            cpu: 100m
            memory: 100Mi
        securityContext:
          allowPrivilegeEscalation: false
          readOnlyRootFilesystem: true
          capabilities:
            drop:
            - all
            add:
            - net_raw
---
apiVersion: apps/v1
kind: Deployment
metadata:
  namespace: metallb-system
  name: controller
  labels:
    app: metallb
    component: controller
spec:
  revisionHistoryLimit: 3
  selector:
    matchLabels:
      app: metallb
      component: controller
  template:
    metadata:
      labels:
        app: metallb
        component: controller
      annotations:
        prometheus.io/scrape: "true"
        prometheus.io/port: "7472"
    spec:
      serviceAccountName: controller
      terminationGracePeriodSeconds: 0
      securityContext:
        runAsNonRoot: true
        runAsUser: 65534 # nobody
      containers:
      - name: controller
        image: metallb/controller:v0.7.1
        imagePullPolicy: IfNotPresent
        args:
        - --port=7472
        - --config=config
        ports:
        - name: monitoring
          containerPort: 7472
        resources:
          limits:
            cpu: 100m
            memory: 100Mi
        securityContext:
          allowPrivilegeEscalation: false
          capabilities:
            drop:
            - all
          readOnlyRootFilesystem: true
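Note that the speaker and controller above are started with --config=config, but the manifest doesn't include that ConfigMap, and MetalLB won't assign addresses without one. A minimal layer2 config for this MetalLB version might look like the following; the address range is an assumption and should be a free range on the minikube host's network:
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2
      addresses:
      - 192.168.99.100-192.168.99.110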

argo workflow-controller can't connect to Kubernetes APIServer

I have installed Argo in my own namespace in a central Kubernetes cluster in my organization.
After installation, when the Argo workflow-controller tries to fetch its configmap from the API server, I get a timeout error.
time="2018-08-15T01:24:40Z" level=fatal msg="Get https://192.168.0.1:443/api/v1/namespaces/2304613691/configmaps/workflow-controller-configmap: dial tcp 192.168.0.1:443: i/o timeout\ngithub.com/argoproj/argo/errors.Wrap\n\t/root/go/src/github.com/argoproj/argo/errors/errors.go:87\ngithub.com/argoproj/argo/errors.InternalWrapError\n\t/root/go/src/github.com/argoproj/argo/errors/errors.go:70\ngithub.com/argoproj/argo/workflow/controller.(*WorkflowController).ResyncConfig\n\t/root/go/src/github.com/argoproj/argo/workflow/controller/controller.go:295\nmain.Run\n\t/root/go/src/github.com/argoproj/argo/cmd/workflow-controller/main.go:96\ngithub.com/argoproj/argo/vendor/github.com/spf13/cobra.(*Command).execute\n\t/root/go/src/github.com/argoproj/argo/vendor/github.com/spf13/cobra/command.go:750\ngithub.com/argoproj/argo/vendor/github.com/spf13/cobra.(*Command).ExecuteC\n\t/root/go/src/github.com/argoproj/argo/vendor/github.com/spf13/cobra/command.go:831\ngithub.com/argoproj/argo/vendor/github.com/spf13/cobra.(*Command).Execute\n\t/root/go/src/github.com/argoproj/argo/vendor/github.com/spf13/cobra/command.go:784\nmain.main\n\t/root/go/src/github.com/argoproj/argo/cmd/workflow-controller/main.go:68\nruntime.main\n\t/usr/local/go/src/runtime/proc.go:195\nruntime.goexit\n\t/usr/local/go/src/runtime/asm_amd64.s:2337"
It is trying to access the following URL from within the pod container: https://192.168.0.1:443/api/v1/namespaces/2304613691/configmaps/workflow-controller-configmap
I have also modified the Kubernetes host config to point at kubernetes.default and added an allow-all ingress and egress network policy, but the exception is still there.
time="2018-08-16T18:23:55Z" level=fatal msg="Get https://kubernetes.default:443/api/v1/namespaces/2304613691/configmaps/workflow-controller-configmap: dial tcp: i/o timeout\ngithub.com/argoproj/argo/errors.Wrap\n\t/root/go/src/github.com/argoproj/argo/errors/errors.go:87\ngithub.com/argoproj/argo/errors.InternalWrapError\n\t/root/go/src/github.com/argoproj/argo/errors/errors.go:70\ngithub.com/argoproj/argo/workflow/controller.(*WorkflowController).ResyncConfig\n\t/root/go/src/github.com/argoproj/argo/workflow/controller/controller.go:295\nmain.Run\n\t/root/go/src/github.com/argoproj/argo/cmd/workflow-controller/main.go:96\ngithub.com/argoproj/argo/vendor/github.com/spf13/cobra.(*Command).execute\n\t/root/go/src/github.com/argoproj/argo/vendor/github.com/spf13/cobra/command.go:750\ngithub.com/argoproj/argo/vendor/github.com/spf13/cobra.(*Command).ExecuteC\n\t/root/go/src/github.com/argoproj/argo/vendor/github.com/spf13/cobra/command.go:831\ngithub.com/argoproj/argo/vendor/github.com/spf13/cobra.(*Command).Execute\n\t/root/go/src/github.com/argoproj/argo/vendor/github.com/spf13/cobra/command.go:784\nmain.main\n\t/root/go/src/github.com/argoproj/argo/cmd/workflow-controller/main.go:68\nruntime.main\n\t/usr/local/go/src/runtime/proc.go:195\nruntime.goexit\n\t/usr/local/go/src/runtime/asm_amd64.s:2337"
apiVersion: v1
items:
- apiVersion: v1
  kind: ServiceAccount
  metadata:
    name: argo
    namespace: "2304613691"  # quoted; an unquoted all-digit namespace parses as a number
- apiVersion: v1
  kind: ServiceAccount
  metadata:
    name: argo-ui
    namespace: "2304613691"
kind: List
---
apiVersion: v1
items:
- apiVersion: rbac.authorization.k8s.io/v1
  kind: Role
  metadata:
    name: argo-role
    namespace: "2304613691"
  rules:
  - apiGroups:
    - ""
    resources:
    - pods
    - pods/exec
    verbs:
    - create
    - get
    - list
    - watch
    - update
    - patch
  - apiGroups:
    - ""
    resources:
    - configmaps
    verbs:
    - get
    - watch
    - list
  - apiGroups:
    - ""
    resources:
    - persistentvolumeclaims
    verbs:
    - create
    - delete
  - apiGroups:
    - argoproj.io
    resources:
    - workflows
    verbs:
    - get
    - list
    - watch
    - update
    - patch
- apiVersion: rbac.authorization.k8s.io/v1
  kind: Role
  metadata:
    name: argo-ui-role
    namespace: "2304613691"
  rules:
  - apiGroups:
    - ""
    resources:
    - pods
    - pods/exec
    - pods/log
    verbs:
    - get
    - list
    - watch
  - apiGroups:
    - ""
    resources:
    - secrets
    verbs:
    - get
  - apiGroups:
    - argoproj.io
    resources:
    - workflows
    verbs:
    - get
    - list
    - watch
kind: List
---
apiVersion: v1
items:
- apiVersion: rbac.authorization.k8s.io/v1
  kind: RoleBinding
  metadata:
    name: argo-binding
    namespace: "2304613691"
  roleRef:
    apiGroup: rbac.authorization.k8s.io
    kind: Role
    name: argo-role
  subjects:
  - kind: ServiceAccount
    name: argo
    namespace: "2304613691"
- apiVersion: rbac.authorization.k8s.io/v1
  kind: RoleBinding
  metadata:
    name: argo-ui-binding
    namespace: "2304613691"
  roleRef:
    apiGroup: rbac.authorization.k8s.io
    kind: Role
    name: argo-ui-role
  subjects:
  - kind: ServiceAccount
    name: argo-ui
    namespace: "2304613691"
kind: List
---
apiVersion: v1
items:
- apiVersion: extensions/v1beta1
  kind: Deployment
  metadata:
    annotations:
      deployment.kubernetes.io/revision: "1"
    generation: 1
    name: workflow-controller
    namespace: "2304613691"
  spec:
    progressDeadlineSeconds: 600
    replicas: 1
    revisionHistoryLimit: 10
    selector:
      matchLabels:
        app: workflow-controller
    strategy:
      rollingUpdate:
        maxSurge: 25%
        maxUnavailable: 25%
      type: RollingUpdate
    template:
      metadata:
        labels:
          app: workflow-controller
      spec:
        containers:
        - args:
          - --configmap
          - workflow-controller-configmap
          command:
          - workflow-controller
          env:
          - name: ARGO_NAMESPACE
            valueFrom:
              fieldRef:
                apiVersion: v1
                fieldPath: metadata.namespace
          image: <our repo>/sample-agupta34/workflow-controller:v2.1.1
          imagePullPolicy: IfNotPresent
          name: workflow-controller
          resources: {}
          terminationMessagePath: /dev/termination-log
          terminationMessagePolicy: File
        dnsPolicy: ClusterFirst
        restartPolicy: Always
        schedulerName: default-scheduler
        securityContext: {}
        serviceAccount: argo
        serviceAccountName: argo
        terminationGracePeriodSeconds: 30
- apiVersion: extensions/v1beta1
  kind: Deployment
  metadata:
    annotations:
      deployment.kubernetes.io/revision: "1"
    generation: 1
    name: argo-ui
    namespace: "2304613691"
  spec:
    progressDeadlineSeconds: 600
    replicas: 1
    revisionHistoryLimit: 10
    selector:
      matchLabels:
        app: argo-ui
    strategy:
      rollingUpdate:
        maxSurge: 25%
        maxUnavailable: 25%
      type: RollingUpdate
    template:
      metadata:
        labels:
          app: argo-ui
      spec:
        containers:
        - env:
          - name: ARGO_NAMESPACE
            valueFrom:
              fieldRef:
                apiVersion: v1
                fieldPath: metadata.namespace
          - name: IN_CLUSTER
            value: "true"
          - name: ENABLE_WEB_CONSOLE
            value: "false"
          - name: BASE_HREF
            value: /
          image: <our repo>/sample-agupta34/argoui:v2.1.1
          imagePullPolicy: IfNotPresent
          name: argo-ui
          resources: {}
          terminationMessagePath: /dev/termination-log
          terminationMessagePolicy: File
        dnsPolicy: ClusterFirst
        restartPolicy: Always
        schedulerName: default-scheduler
        securityContext: {}
        serviceAccount: argo-ui
        serviceAccountName: argo-ui
        terminationGracePeriodSeconds: 30
kind: List
---
apiVersion: v1
data:
  config: |
    artifactRepository: {}
    executorImage: <our repo>/sample-agupta34/argoexec:v2.1.1
kind: ConfigMap
metadata:
  name: workflow-controller-configmap
  namespace: "2304613691"
---
apiVersion: v1
kind: Service
metadata:
  name: argo-ui
  namespace: "2304613691"
  labels:
    app: argo-ui
spec:
  ports:
  - port: 80
    protocol: TCP
    targetPort: 8001
  selector:
    app: argo-ui
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: argo-ui
  namespace: "2304613691"
  annotations:
    kubernetes.io/ingress.class: "netscaler.v2"
    netscaler.applecloud.io/insecure-backend: "true"
spec:
  backend:
    serviceName: argo-ui
    servicePort: 80
---
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: argo-and-argo-ui-netpol
spec:
  podSelector:
    # note: the original had two `app:` keys under matchLabels, which is not
    # valid YAML; matchExpressions can select both apps instead
    matchExpressions:
    - key: app
      operator: In
      values: ["workflow-controller", "argo-ui"]
  ingress:
  - {}
  egress:
  - {}
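For comparison, an allow-all policy that avoids label selection entirely would be a sketch like this (it selects every pod in the namespace and explicitly lists both policy types):
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-all
  namespace: "2304613691"
spec:
  podSelector: {}
  policyTypes: ["Ingress", "Egress"]
  ingress:
  - {}
  egress:
  - {}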

Couldn't access my Kubernetes service via a traefik reverse proxy

I deployed a Kubernetes cluster (1.8.8) on an OpenStack cloud platform (1 master with a public IP address, 3 nodes). I want to use Traefik (latest version, 1.6.1) as a reverse proxy for accessing my services.
Traefik deployed fine as a DaemonSet and I can access its GUI on port 8081. My Prometheus ingress appears correctly in the Traefik interface, but I can't access my Prometheus server UI.
Could you tell me what I am doing wrong? Did I miss something?
Thanks
The Ingress for my Prometheus:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: prometheus-ingress
  annotations:
    kubernetes.io/ingress.class: traefik
    traefik.frontend.rule.type: pathprefixstrip
spec:
  rules:
  - http:
      paths:
      - path: /prometheus
        backend:
          serviceName: prometheus-svc
          servicePort: prom
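For what it's worth, this is roughly how I test access (the master's public IP is a placeholder). Note that with pathprefixstrip Traefik strips /prometheus before forwarding, and Prometheus generates absolute links, so serving it under a sub-path usually also requires its --web.route-prefix or --web.external-url flag to be set:
curl -v http://<master-public-ip>/prometheus/graph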
My DaemonSet and the rest of the Traefik manifests are below:
apiVersion: v1
kind: Namespace
metadata:
  name: traefik
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: traefik-ingress-controller
  namespace: traefik
---
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  name: traefik-ingress-controller
  namespace: traefik
  labels:
    k8s-app: traefik-ingress-lb
    kubernetes.io/cluster-service: "true"
spec:
  template:
    metadata:
      labels:
        k8s-app: traefik-ingress-lb
        name: traefik-ingress-lb
    spec:
      hostNetwork: true # workaround
      serviceAccountName: traefik-ingress-controller
      terminationGracePeriodSeconds: 60
      tolerations:
      - key: node-role.kubernetes.io/master
        effect: NoSchedule
      containers:
      - image: traefik:v1.6.1
        name: traefik-ingress-lb
        imagePullPolicy: Always
        volumeMounts:
        - mountPath: "/config"
          name: "config"
        resources:
          requests:
            cpu: 100m
            memory: 20Mi
        args:
        - --kubernetes
        - --configfile=/config/traefik.toml
      volumes:
      - name: config
        configMap:
          name: traefik-conf
---
apiVersion: v1
kind: Service
metadata:
  name: traefik-web-ui
  namespace: traefik
spec:
  selector:
    k8s-app: traefik-ingress-lb
  ports:
  - port: 80
    targetPort: 8080
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: traefik-web-ui
  namespace: traefik
  annotations:
    kubernetes.io/ingress.class: traefik
    traefik.frontend.rule.type: pathprefixstrip
spec:
  rules:
  - host: example.com
    http:
      paths:
      - backend:
          serviceName: traefik-web-ui
          servicePort: 80
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: traefik-ingress-controller
rules:
- apiGroups:
  - ""
  resources:
  - services
  - endpoints
  - secrets
  verbs:
  - get
  - list
  - watch
- apiGroups:
  - extensions
  resources:
  - ingresses
  verbs:
  - get
  - list
  - watch
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: traefik-ingress-controller
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: traefik-ingress-controller
subjects:
- kind: ServiceAccount
  name: traefik-ingress-controller
  # note: the ServiceAccount above lives in the `traefik` namespace, so the
  # binding must reference it there (the original said kube-system, which
  # would leave the controller without RBAC access)
  namespace: traefik
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: traefik-conf
  namespace: traefik
data:
  traefik.toml: |-
    defaultEntryPoints = ["http"]
    [entryPoints]
      [entryPoints.http]
        address = ":80"
    [web]
    address = ":8081"