Minikube Ingress: unexpected error reading configmap kube-system/tcp-services: configmap kube-system/tcp-services was not found - minikube

I am running minikube with the configuration below.
Environment:
minikube version: v0.25.2
macOS version: 10.12.6
DriverName: virtualbox
ISO: minikube-v0.25.1.iso
I created an Ingress resource to map the service messy-chimp-emauser to the path /.
But when I roll out changes to minikube, I get the following logs in the nginx-ingress-controller pod:
5 controller.go:811] service default/messy-chimp-emauser does not have any active endpoints
5 controller.go:245] unexpected error reading configmap kube-system/tcp-services: configmap kube-system/tcp-services was not found
5 controller.go:245] unexpected error reading configmap kube-system/udp-services: configmap kube-system/udp-services was not found
And hence I am getting an HTTP 503 when trying to access the service from a browser.
Steps to reproduce
STEP 1
minikube addons enable ingress
STEP 2
kubectl create -f kube-resources.yml
(the actual image has been replaced with k8s.gcr.io/echoserver:1.4)
kube-resources.yml
apiVersion: v1
kind: Service
metadata:
  name: messy-chimp-emauser
  labels:
    app: messy-chimp-emauser
    chart: emauser-0.1.0
    release: messy-chimp
    heritage: Tiller
spec:
  type: ClusterIP
  ports:
    - port: 80
      targetPort: http
      protocol: TCP
      name: emauser
  selector:
    app: messy-chimp-emauser
    release: messy-chimp
---
apiVersion: apps/v1beta2
kind: Deployment
metadata:
  name: messy-chimp-emauser
  labels:
    app: emauser
    chart: emauser-0.1.0
    release: messy-chimp
    heritage: Tiller
spec:
  replicas: 1
  selector:
    matchLabels:
      app: emauser
      release: messy-chimp
  template:
    metadata:
      labels:
        app: emauser
        release: messy-chimp
    spec:
      containers:
        - name: emauser
          image: "k8s.gcr.io/echoserver:1.4"
          imagePullPolicy: IfNotPresent
          ports:
            - containerPort: 80
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: messy-chimp-ema-chart
  labels:
    app: ema-chart
    chart: ema-chart-0.1.0
    release: messy-chimp
    heritage: Tiller
spec:
  rules:
    - http:
        paths:
          - path: /
            backend:
              serviceName: messy-chimp-emauser
              servicePort: emauser
Any suggestions on this would be appreciated.
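A possible workaround for the two ConfigMap errors (not part of the original post; it assumes the controller is configured to read kube-system/tcp-services and kube-system/udp-services, as the log lines indicate) is to create the empty ConfigMaps the controller is looking for:
kubectl create configmap tcp-services -n kube-system
kubectl create configmap udp-services -n kube-system
The "does not have any active endpoints" message is a separate issue: in the manifests above, the Service selector (app: messy-chimp-emauser) does not match the pod label (app: emauser), and targetPort: http names a container port that the Deployment never declares.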

Related

Prometheus failing to pick up formatted metrics that service discovery can find

I installed kube-prometheus-stack via helm:
kubectl create namespace monitoring && \
helm repo add prometheus-community https://prometheus-community.github.io/helm-charts && \
helm repo update && \
helm install -n monitoring prometheus-stack prometheus-community/kube-prometheus-stack
Then I proceeded to deploy a FastAPI application with a metrics endpoint that Prometheus is supposed to scrape. The /metrics endpoint works fine, as set up below:
from starlette_prometheus import metrics, PrometheusMiddleware
from fastapi import FastAPI

app = FastAPI()

# Add Prometheus metrics as middleware and expose them at /metrics
app.add_middleware(PrometheusMiddleware)
app.add_route("/metrics", metrics)
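A quick way to verify the endpoint from inside the cluster (a generic check, assuming the Deployment name and port 8000 from the manifests below):
kubectl -n code-detector-demo port-forward deploy/code-detector 8000:8000
curl http://localhost:8000/metrics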
I expected both the target and service discovery to work, but only service discovery appears to be working.
Prometheus Operator version:
2.36.1
Kubernetes version information:
Client Version: v1.24.2
Kubernetes cluster kind:
Minikube v1.25.2-1
Here are the manifests for the deployed application:
Namespace and deployment
apiVersion: v1
kind: Namespace
metadata:
  name: code-detector-demo
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: code-detector
  namespace: code-detector-demo
  labels:
    release: prometheus-stack
spec:
  replicas: 1
  selector:
    matchLabels:
      app: code-detector
      release: prometheus-stack
  template:
    metadata:
      labels:
        app: code-detector
        release: prometheus-stack
    spec:
      containers:
        - name: code-detector
          image: <MY-IMAGE:TAG>
          resources:
            limits:
              memory: 512Mi
              cpu: 1000m
          ports:
            - containerPort: 8000
Service
apiVersion: v1
kind: Service
metadata:
  annotations:
    prometheus.io/scrape: "true"
  labels:
    release: prometheus-stack
  name: code-detector-service
  namespace: code-detector-demo
spec:
  selector:
    app: code-detector
    release: prometheus-stack
  ports:
    - name: code-detector-port
      port: 8000
      targetPort: 8000
ServiceMonitor
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: code-detector-servicemonitor
  # same namespace that Prometheus is running in
  namespace: monitoring
  labels:
    app: code-detector
    release: prometheus-stack
spec:
  selector:
    matchLabels:
      app: code-detector
      release: prometheus-stack
  endpoints:
    - path: /metrics
      port: code-detector-port
      interval: 15s
  namespaceSelector:
    matchNames:
      - code-detector-demo # namespace where the app is running
Service discovery shows the expected labels, but the Targets pane doesn't show any picked-up scrape job.
What could I be doing wrong?
You have in your ServiceMonitor:
spec:
  selector:
    matchLabels:
      app: code-detector
      release: prometheus-stack
which basically means "select the endpoints of every Service that carries both labels app=code-detector and release=prometheus-stack".
Your Service doesn't have the label app=code-detector.
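A minimal sketch of the fix, assuming the ServiceMonitor is kept as-is: add the missing label to the Service's metadata (the ServiceMonitor matches the labels on the Service object itself, not those in its spec.selector):
apiVersion: v1
kind: Service
metadata:
  annotations:
    prometheus.io/scrape: "true"
  labels:
    app: code-detector        # added: required by the ServiceMonitor's matchLabels
    release: prometheus-stack
  name: code-detector-service
  namespace: code-detector-demo
spec:
  selector:
    app: code-detector
    release: prometheus-stack
  ports:
    - name: code-detector-port
      port: 8000
      targetPort: 8000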

visual studio kubernetes project 503 error in azure

I have created a Kubernetes project in Visual Studio 2019 with the default template. This template creates a WeatherForecast controller.
After that I published it to my ACR.
I used this command to create the AKS:
az aks create -n $MYAKS -g $MYRG --generate-ssh-keys --z 1 -s Standard_B2s --attach-acr /subscriptions/mysubscriptionguid/resourcegroups/$MYRG/providers/Microsoft.ContainerRegistry/registries/$MYACR
And I enabled HTTP application routing via the Azure portal.
I deployed it to Azure Kubernetes Service (Standard_B2s) with the following deployment.yaml:
# deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: kubernetes1-deployment
  labels:
    app: kubernetes1-deployment
spec:
  replicas: 2
  selector:
    matchLabels:
      app: kubernetes1
  template:
    metadata:
      labels:
        app: kubernetes1
    spec:
      containers:
        - name: kubernetes1
          image: mycontainername.azurecr.io/kubernetes1:latest
          ports:
            - containerPort: 80
#service.yaml
apiVersion: v1
kind: Service
metadata:
  name: kubernetes1
spec:
  type: ClusterIP
  selector:
    app: kubernetes1
  ports:
    - port: 80         # SERVICE exposed port
      name: http       # SERVICE port name
      protocol: TCP    # The protocol the SERVICE will listen to
      targetPort: http # Port to forward to in the POD
#ingress.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: kubernetes1
  annotations:
    kubernetes.io/ingress.class: addon-http-application-routing
spec:
  rules:
    - host: kubernetes1.<uuid (removed for this post)>.westeurope.aksapp.io # Which host is allowed to enter the cluster
      http:
        paths:
          - backend: # How the ingress will handle the requests
              service:
                name: kubernetes1 # Which service the request will be forwarded to
                port:
                  name: http # Which port in that service
            path: / # Which path is this rule referring to
            pathType: Prefix # See more at https://kubernetes.io/docs/concepts/services-networking/ingress/#path-types
But when I go to kubernetes1.<uuid>.westeurope.aksapp.io or kubernetes1.<uuid>.westeurope.aksapp.io/WeatherForecast I get the following error:
503 Service Temporarily Unavailable
nginx/1.15.3
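A generic first check for a 503 like this (not from the original post) is whether the Service actually has endpoints; an empty list usually means the selector or a named targetPort does not resolve to any pod:
kubectl get endpoints kubernetes1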
It's working now. For other people who have the same problem: I updated my deployment config from:
# deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: kubernetes1-deployment
  labels:
    app: kubernetes1-deployment
spec:
  replicas: 2
  selector:
    matchLabels:
      app: kubernetes1
  template:
    metadata:
      labels:
        app: kubernetes1
    spec:
      containers:
        - name: kubernetes1
          image: mycontainername.azurecr.io/kubernetes1:latest
          ports:
            - containerPort: 80
to:
# deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: kubernetes1
spec:
  selector: # Define the wrapping strategy
    matchLabels: # Match all pods with the defined labels
      app: kubernetes1 # Labels follow the `name: value` template
  template: # This is the template of the pod inside the deployment
    metadata:
      labels:
        app: kubernetes1
    spec:
      nodeSelector:
        kubernetes.io/os: linux
      containers:
        - image: mycontainername.azurecr.io/kubernetes1:latest
          name: kubernetes1
          resources:
            requests:
              cpu: 100m
              memory: 128Mi
            limits:
              cpu: 250m
              memory: 256Mi
          ports:
            - containerPort: 80
              name: http
I don't know exactly which line solved the problem. Feel free to comment if you know which line it was.
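One plausible candidate (an inference from the manifests above, not confirmed in the original answer): service.yaml forwards traffic to targetPort: http, a named port, but the first deployment never names its container port, so the Service has nothing to resolve; the working version declares the name. The decisive fragment would then be:
ports:
  - containerPort: 80
    name: http # matches the Service's targetPort: http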

Error deploying a pod in a kubernetes cluster

I'm trying to deploy this YAML in my Kubernetes cluster, onto one specific node:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment-1
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx
          ports:
            - containerPort: 80
But when I try to deploy it with the command below, I get this error message:
pi#k8s-master-rasp4:~ $ kubectl apply -f despliegue-nginx.yaml -l kubernetes.io/hostname=k8s-worker-1
error: no objects passed to apply
Does anyone know where the problem could be?
Thanks
You can't use a label selector (-l) with kubectl apply this way: the flag filters the objects in the file by their labels, and since none of your objects carry kubernetes.io/hostname=k8s-worker-1, nothing is applied, hence "no objects passed to apply".
Use nodeSelector to assign pods to specific nodes:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment-1
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      nodeSelector:
        kubernetes.io/hostname: k8s-worker-1 # <-- updated here!
      containers:
        - name: nginx
          image: nginx
          ports:
            - containerPort: 80
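Then apply the file without the -l flag (assuming the filename from the question):
kubectl apply -f despliegue-nginx.yaml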

Why am I getting 502 errors on my ALB endpoints targeting EKS-hosted services

I am building a service in EKS that has two deployments, two services (NodePort), and a single ingress.
I am using the aws-alb-ingress-controller.
When I run kubectl port-forward POD 8080:80, it does show me my working pods.
But when I hit the endpoints generated by the ALB, I get 502 errors.
When I look at the registered targets of the target group, I see the message: Health checks failed with these codes: [502]
Here is my complete yaml.
---
#Example game deployment and service
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: "example-game"
  namespace: "example-app"
spec:
  replicas: 5
  template:
    metadata:
      labels:
        app: "example-game"
    spec:
      containers:
        - image: alexwhen/docker-2048
          imagePullPolicy: Always
          name: "example-game"
          ports:
            - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: "service-example-game"
  namespace: "example-app"
spec:
  ports:
    - port: 80
      targetPort: 80
      protocol: TCP
  type: NodePort
  selector:
    app: "example-app"
---
#Example nginxdemo Deployment and Service
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: "example-nginxdemo"
  namespace: "example-app"
spec:
  replicas: 5
  template:
    metadata:
      labels:
        app: "example-nginxdemo"
    spec:
      containers:
        - image: nginxdemos/hello
          imagePullPolicy: Always
          name: "example-nginxdemo"
          ports:
            - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: "service-example-nginxdemo"
  namespace: "example-app"
spec:
  ports:
    - port: 80
      targetPort: 80
      protocol: TCP
  type: NodePort
  selector:
    app: "example-app"
---
#Shared ALB ingress
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: "example-ingress"
  namespace: "example-app"
  annotations:
    kubernetes.io/ingress.class: alb
    alb.ingress.kubernetes.io/healthcheck-protocol: HTTP
    Alb.ingress.kubernetes.io/healthcheck-port: traffic-port
    alb.ingress.kubernetes.io/healthcheck-path: /
    # alb.ingress.kubernetes.io/scheme: internal
    # alb.ingress.kubernetes.io/load-balancer-attributes: routing.http2.enabled=true
  labels:
    app: example-app
spec:
  rules:
    - http:
        paths:
          - path: /game/*
            backend:
              serviceName: "service-example-game"
              servicePort: 80
          - path: /nginxdemo/*
            backend:
              serviceName: "service-example-nginxdemo"
              servicePort: 80
I don't know why, but it turns out that the label given to the ingress has to be unique.
When I changed the label from 'example-app' to 'example-app-ingress' it just started working.
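Worth noting (an observation on the manifests above, not part of the original answer): both Services select app: "example-app", while the pods are labelled app: "example-game" and app: "example-nginxdemo", so the Services would have no endpoints to register as targets. A selector matching the actual pod labels would look like:
selector:
  app: "example-game" # for service-example-game; "example-nginxdemo" for the other Service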

Traefik Ingress bad rule

I am working with Kubernetes on Google Cloud. I am trying to set up Traefik as the ingress for the cluster. I based the code on the official docs (https://docs.traefik.io/user-guide/kubernetes/), but I get an error with the PathPrefixStrip rule.
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: auth-api
  labels:
    app: auth-api
spec:
  replicas: 2
  selector:
    matchLabels:
      app: auth-api
  template:
    metadata:
      labels:
        app: auth-api
        version: v0.0.1
    spec:
      containers:
        - name: auth-api
          image: gcr.io/r10c-dev/auth-api:v0.1
          ports:
            - containerPort: 3000
          env:
            - name: AMQP_SERVICE
              value: broker:5672
            - name: CACHE_SERVICE
              value: session-cache
---
apiVersion: v1
kind: Service
metadata:
  name: auth-api
spec:
  ports:
    - name: http
      targetPort: 80
      port: 3000
  type: NodePort
  selector:
    app: auth-api
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: main-ingress
  annotations:
    kubernetes.io/ingress.class: traefik
    traefik.frontend.rule.type: PathPrefixStrip
    kubernetes.io/ingress.global-static-ip-name: "web-static-ip"
spec:
  rules:
    - http:
        paths:
          - path: /auth
            backend:
              serviceName: auth-api
              servicePort: http
In the GKE console it seems the deployment is linked to the service and the ingress, but when I try to access the IP, the server returns a 502 error.
I am also using a static IP:
gcloud compute addresses create web-static-ip --global
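One thing that stands out in the manifests above (an observation, not from the original post): the container listens on 3000, but the Service sets targetPort: 80 while exposing port: 3000, so forwarded traffic ends up on a port nothing listens on. A Service matching the container port would look like:
apiVersion: v1
kind: Service
metadata:
  name: auth-api
spec:
  type: NodePort
  selector:
    app: auth-api
  ports:
    - name: http
      port: 3000       # port the Service exposes in-cluster
      targetPort: 3000 # port the auth-api container actually listens on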