Ingress route pointing to the wrong service - kubernetes

I've set up k3s v1.20.4+k3s1 with Klipper LB and the nginx ingress 3.24.0 Helm chart.
I'm following this article, but I'm stumbling on a very odd issue where my ingress hosts point to the wrong service.
Here is my configuration:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-routing
spec:
  rules:
  - host: echo1.stage.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: echo1
            port:
              number: 80
  - host: echo2.stage.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: echo2
            port:
              number: 80
---
apiVersion: v1
kind: Service
metadata:
  name: echo1
spec:
  ports:
  - port: 80
    targetPort: 5678
  selector:
    app: echo1
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: echo1
spec:
  selector:
    matchLabels:
      app: echo1
  replicas: 2
  template:
    metadata:
      labels:
        app: echo1
    spec:
      containers:
      - name: echo1
        image: hashicorp/http-echo
        args:
        - "-text=This is echo1"
        ports:
        - containerPort: 5678
---
apiVersion: v1
kind: Service
metadata:
  name: echo2
spec:
  ports:
  - port: 80
    targetPort: 5678
  selector:
    app: echo2
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: echo2
spec:
  selector:
    matchLabels:
      app: echo2
  replicas: 1
  template:
    metadata:
      labels:
        app: echo2
    spec:
      containers:
      - name: echo2
        image: hashicorp/http-echo
        args:
        - "-text=This is the new (echo2)"
        ports:
        - containerPort: 5678
And my Cloudflare DNS records (no DNS proxy activated):
;; A Records
api.stage.example.com. 1 IN A 162.15.166.240
echo1.stage.example.com. 1 IN A 162.15.166.240
echo2.stage.example.com. 1 IN A 162.15.166.240
But when I do a curl on echo1.stage.example.com multiple times, here is what I get:
$ curl echo1.stage.example.com
This is echo1
$ curl echo1.stage.example.com
<html>
<head><title>502 Bad Gateway</title></head>
<body>
<center><h1>502 Bad Gateway</h1></center>
<hr><center>nginx</center>
</body>
</html>
$ curl echo1.stage.example.com
This is the new (echo2)
Sometimes I get a bad gateway; other times the echo1.stage.example.com domain points to the service assigned to echo2.stage.example.com. Is this because of the LB, or a bad configuration on my end? Thanks!
EDIT: It's not coming from the LB. I just switched to MetalLB and I still get the same issue.

OK, I found the issue. It was actually not related to the config I previously posted, but to my kustomization.yaml, where I had:
commonLabels:
  app: myapp
Just removing that commonLabels entry solved the issue.
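For context on why that breaks routing: kustomize's commonLabels transformer rewrites not only resource labels but also Service selectors and Deployment matchLabels. A minimal sketch of the effect, assuming the kustomization covered both echo apps:

```yaml
# kustomization.yaml (sketch)
commonLabels:
  app: myapp   # overwrites the existing app: echo1 / app: echo2 keys

# After `kustomize build`, both Services end up with the same selector:
#
#   spec:
#     selector:
#       app: myapp
#
# Each Service then matches the pods of echo1 AND echo2 (and any other
# workload carrying app: myapp), so requests are load-balanced across
# all of them -- which explains the mixed responses and 502s above.
```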

Related

Kubernetes nginx-ingress controller always 401 http response

I'm researching Kubernetes and trying to configure the nginx-ingress controller, so I created a YAML config file for it like this:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-srv
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/use-regex: 'true'
spec:
  rules:
  - host: acme.com
    http:
      paths:
      - path: /api/platforms
        pathType: Prefix
        backend:
          service:
            name: platforms-clusterip-service
            port:
              number: 80
      - path: /api/c/platforms
        pathType: Prefix
        backend:
          service:
            name: command-clusterip-service
            port:
              number: 80
And created relevant services like this:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: platforms-depl
spec:
  replicas: 1
  selector:
    matchLabels:
      app: platformservice
  template:
    metadata:
      labels:
        app: platformservice
    spec:
      containers:
      - name: platformservice
        image: yuriyborovskyi91/platformsservice:latest
---
apiVersion: v1
kind: Service
metadata:
  name: platforms-clusterip-service
spec:
  type: ClusterIP
  selector:
    app: platformservice
  ports:
  - name: platformservice
    protocol: TCP
    port: 80
    targetPort: 80
And added acme.com to the Windows hosts file, like:
127.0.0.1 acme.com
But when trying to access http://acme.com/api/platforms or any other API route, I receive a 401 HTTP error and am confused by it, because I didn't configure any authorization; I'm using all default settings. If I call my service by NodePort, everything works fine.
Output of my kubectl get services, where the service I'm trying to access is running:
and the response of kubectl get services --namespace=ingress-nginx:
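One thing worth double-checking in setups like the one above: on recent Kubernetes versions the kubernetes.io/ingress.class annotation is deprecated in favour of spec.ingressClassName, and an Ingress that no controller claims is silently ignored. A minimal sketch, assuming the controller registered an IngressClass named nginx:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-srv
  annotations:
    nginx.ingress.kubernetes.io/use-regex: 'true'
spec:
  ingressClassName: nginx   # replaces the deprecated ingress.class annotation
  rules:
  - host: acme.com
    http:
      paths:
      - path: /api/platforms
        pathType: Prefix
        backend:
          service:
            name: platforms-clusterip-service
            port:
              number: 80
```

You can list the classes the cluster actually knows about with kubectl get ingressclass.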

GCE ingress resource is taking too long to receive an IP address in GKE cluster. What could be the reason?

I am trying to deploy a sample application on a GKE cluster. All the resources are created successfully except the ingress resource, which takes around 15-20 minutes to receive an IP address. By this time the application times out and ends up in an errored state. The ideal time to assign the IP address is 2-3 minutes. Can anyone help with how to debug the issue?
This is happening in one specific cluster, while the same ingress gets its IP within 2 minutes in other GKE clusters.
Below are the manifest files I am using to deploy the app:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-deployment
spec:
  selector:
    matchLabels:
      app: hello
  template:
    metadata:
      labels:
        app: hello
    spec:
      containers:
      - name: hello
        image: gcr.io/google-samples/hello-app:2.0
        ports:
        - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: hello-service
spec:
  type: ClusterIP
  selector:
    app: hello
  ports:
  - port: 8000
    targetPort: 8080
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: zone-printer-deployment
spec:
  selector:
    matchLabels:
      app: zone-printer
  template:
    metadata:
      labels:
        app: zone-printer
    spec:
      containers:
      - name: zone-printer
        image: gcr.io/google-samples/zone-printer:0.2
        ports:
        - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: zone-printer-service
spec:
  type: ClusterIP
  selector:
    app: zone-printer
  ports:
  - port: 9000
    targetPort: 8080
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: awesome-ingress
  annotations:
    kubernetes.io/ingress.class: "gce-internal"
spec:
  defaultBackend:
    service:
      name: hello-service
      port:
        number: 8000
  rules:
  - http:
      paths:
      - path: /*
        pathType: ImplementationSpecific
        backend:
          service:
            name: hello-service
            port:
              number: 8000
      - path: /zone-printer
        pathType: ImplementationSpecific
        backend:
          service:
            name: zone-printer-service
            port:
              number: 9000
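One frequent cause of a gce-internal ingress stalling without an address: internal GKE ingress requires container-native load balancing, so the backing Services need network endpoint groups (NEGs). A sketch of the annotation for one of the Services above, assuming a VPC-native cluster:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: hello-service
  annotations:
    cloud.google.com/neg: '{"ingress": true}'  # enable NEGs; required for gce-internal
spec:
  type: ClusterIP
  selector:
    app: hello
  ports:
  - port: 8000
    targetPort: 8080
```

If the annotation is missing (or the cluster is not VPC-native), the ingress controller cannot create the backends, and kubectl describe ingress awesome-ingress will usually surface the underlying event.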

Kubernetes ingress path based working well

Hello, I am new to Kubernetes and I need some help.
I want to use Kubernetes ingress paths for my two different Nuxt projects.
The first / path works well, but my
second /v1 path does not load resources like .css and .js.
My first deployment and service yaml file:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-1
  labels:
    app: nginx1
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx1
  template:
    metadata:
      labels:
        app: nginx1
    spec:
      containers:
      - name: nginx1
        image: st/itmr:latest # can't show image
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: nginx1-svc
spec:
  ports:
  - port: 80
    targetPort: 80
    protocol: TCP
    name: http
  selector:
    app: nginx1
My second deployment and service yaml file:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx2
  labels:
    app: nginx2
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx2
  template:
    metadata:
      labels:
        app: nginx2
    spec:
      containers:
      - name: nginx2
        image: st/itpd:latest # can't show image
        ports:
        - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: nginx2-svc
spec:
  ports:
  - port: 8080
    targetPort: 80
    protocol: TCP
    name: http
  selector:
    app: nginx2
And here is my ingress yaml file:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: nginx
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - host: some.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: nginx1-svc
            port:
              number: 80
      - path: /v1
        pathType: Prefix
        backend:
          service:
            name: nginx2-svc
            port:
              number: 8080
I thought using nginx.ingress.kubernetes.io/rewrite-target: /$1 would work for me, but it doesn't.
I don't know where the problem is, so please help me.
To clarify, I am posting a community wiki answer.
The problem here was resolved by switching the project path.
See more about ingress paths here.
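For reference, the usual way to make a /v1 prefix work with rewrite-target in ingress-nginx is a regex capture group, so the prefix is stripped before the request reaches the backend. A sketch of that rule (not the poster's final config):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: nginx
  annotations:
    nginx.ingress.kubernetes.io/use-regex: "true"
    nginx.ingress.kubernetes.io/rewrite-target: /$2   # $2 = second capture group
spec:
  rules:
  - host: some.example.com
    http:
      paths:
      - path: /v1(/|$)(.*)   # /v1/foo.css is rewritten to /foo.css
        pathType: ImplementationSpecific
        backend:
          service:
            name: nginx2-svc
            port:
              number: 8080
```

Note that the rewrite only changes the request path: relative asset URLs inside the returned HTML still point at /, so the app itself usually also needs its base path configured (for Nuxt, router.base).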

Why am I getting 502 errors on my ALB end points, targeted at EKS hosted services

I am building a service in EKS that has two deployments, two services (NodePort), and a single ingress.
I am using the aws-alb-ingress-controller.
When I run kubectl port-forward POD 8080:80, it does show me my working pods.
When I hit the endpoints generated by the ALB, I get 502 errors.
When I look at the registered targets of the target group, I see the message: Health checks failed with these codes: [502]
Here is my complete yaml.
---
# Example game deployment and service
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: "example-game"
  namespace: "example-app"
spec:
  replicas: 5
  template:
    metadata:
      labels:
        app: "example-game"
    spec:
      containers:
      - image: alexwhen/docker-2048
        imagePullPolicy: Always
        name: "example-game"
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: "service-example-game"
  namespace: "example-app"
spec:
  ports:
  - port: 80
    targetPort: 80
    protocol: TCP
  type: NodePort
  selector:
    app: "example-app"
---
# Example nginxdemo deployment and service
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: "example-nginxdemo"
  namespace: "example-app"
spec:
  replicas: 5
  template:
    metadata:
      labels:
        app: "example-nginxdemo"
    spec:
      containers:
      - image: nginxdemos/hello
        imagePullPolicy: Always
        name: "example-nginxdemo"
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: "service-example-nginxdemo"
  namespace: "example-app"
spec:
  ports:
  - port: 80
    targetPort: 80
    protocol: TCP
  type: NodePort
  selector:
    app: "example-app"
---
# Shared ALB ingress
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: "example-ingress"
  namespace: "example-app"
  annotations:
    kubernetes.io/ingress.class: alb
    alb.ingress.kubernetes.io/healthcheck-protocol: HTTP
    Alb.ingress.kubernetes.io/healthcheck-port: traffic-port
    alb.ingress.kubernetes.io/healthcheck-path: /
    # alb.ingress.kubernetes.io/scheme: internal
    # alb.ingress.kubernetes.io/load-balancer-attributes: routing.http2.enabled=true
  labels:
    app: example-app
spec:
  rules:
  - http:
      paths:
      - path: /game/*
        backend:
          serviceName: "service-example-game"
          servicePort: 80
      - path: /nginxdemo/*
        backend:
          serviceName: "service-example-nginxdemo"
          servicePort: 80
I don't know why, but it turns out that the label given to the ingress has to be unique.
When I changed the label from 'example-app' to 'example-app-ingress', it just started working.
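Worth noting for future readers: in the manifest above, both Services select app: "example-app", while the pod templates are labelled app: "example-game" and app: "example-nginxdemo", so neither Service matches its own pods and the target group has no healthy endpoints, which is consistent with the failing health checks. A sketch of a corrected selector for one of the Services:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: "service-example-game"
  namespace: "example-app"
spec:
  type: NodePort
  selector:
    app: "example-game"   # must match the pod template labels, not an app-wide label
  ports:
  - port: 80
    targetPort: 80
    protocol: TCP
```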

K8s ingress with 2 domains both listening on port 80

I am trying to replicate name-based virtual hosting with two docker images in one deployment. Unfortunately I am only able to get one running, due to a port conflict:
2019/03/19 20:37:52 [ERR] Error starting server: listen tcp :5678: bind: address already in use
Is it really not possible to have two images listening on the same port as part of the same deployment? Or am I going wrong elsewhere?
Minimal example adapted from here
# set up ingress
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/mandatory.yaml
# set up load balancer
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/provider/cloud-generic.yaml
# spin up two containers in one deployment, same container port
kubectl apply -f test.yaml
test.yaml:
apiVersion: v1
kind: Service
metadata:
  name: echo1
spec:
  ports:
  - port: 80
    targetPort: 5678
  selector:
    app: echo1
---
apiVersion: v1
kind: Service
metadata:
  name: echo2
spec:
  ports:
  - port: 80
    targetPort: 5678
  selector:
    app: echo2
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: echo12
spec:
  selector:
    matchLabels:
      app: echo12
  replicas: 1
  template:
    metadata:
      labels:
        app: echo12
    spec:
      containers:
      - name: echo1
        image: hashicorp/http-echo
        args:
        - "-text=echo1"
        ports:
        - containerPort: 5678
      - name: echo2
        image: hashicorp/http-echo
        args:
        - "-text=echo2"
        ports:
        - containerPort: 5678
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: echo-ingress
spec:
  rules:
  - host: echo1.example.com
    http:
      paths:
      - backend:
          serviceName: echo1
          servicePort: 80
  - host: echo2.example.com
    http:
      paths:
      - backend:
          serviceName: echo2
          servicePort: 80
Update:
If I add a separate deployment, it works. Is this by design, or is there any way I can achieve this in one deployment? (Reason: I'd like to be able to reset all deployed domains at once.)
Problem 1: Creating two different service backends in one pod of one deployment. This is not what pods are designed for. If you want to expose multiple services, you should have (at least) one pod backing each service; two containers in the same pod share its network namespace, which is why the second http-echo fails to bind port 5678. Deployments wrap around pods by adding replication and liveness options. In your case, you should have one deployment (which creates one or more pods answering one echo request) for each corresponding service.
Problem 2: You are not linking your services to your backends properly. The services are trying to select the labels app=echo1 and app=echo2, but in your deployment the pods are labelled app=echo12. Consequently, the services simply won't find any active endpoints.
To address the above problems, try this below:
apiVersion: v1
kind: Service
metadata:
  name: echo1
spec:
  ports:
  - port: 80
    targetPort: 5678
  selector:
    app: echo1
---
apiVersion: v1
kind: Service
metadata:
  name: echo2
spec:
  ports:
  - port: 80
    targetPort: 5678
  selector:
    app: echo2
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: echo1
spec:
  selector:
    matchLabels:
      app: echo1
  replicas: 1
  template:
    metadata:
      labels:
        app: echo1
    spec:
      containers:
      - name: echo1
        image: hashicorp/http-echo
        args:
        - "-text=echo1"
        ports:
        - containerPort: 5678
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: echo2
spec:
  selector:
    matchLabels:
      app: echo2
  replicas: 1
  template:
    metadata:
      labels:
        app: echo2
    spec:
      containers:
      - name: echo2
        image: hashicorp/http-echo
        args:
        - "-text=echo2"
        ports:
        - containerPort: 5678
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: echo-ingress
spec:
  rules:
  - host: echo1.example.com
    http:
      paths:
      - backend:
          serviceName: echo1
          servicePort: 80
  - host: echo2.example.com
    http:
      paths:
      - backend:
          serviceName: echo2
          servicePort: 80
I have tested the above in my own cluster and verified that it is working (with different ingress URLs, of course). Hope this helps!
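Regarding the update's wish to reset all deployed domains at once: you can keep the separate Deployments but give them a shared grouping label (part-of: echo-demo is a hypothetical name), then operate on the group with a label selector, e.g. kubectl delete deployment -l part-of=echo-demo. A sketch for one of the Deployments:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: echo1
  labels:
    part-of: echo-demo   # hypothetical group label shared by all echo deployments
spec:
  selector:
    matchLabels:
      app: echo1
  replicas: 1
  template:
    metadata:
      labels:
        app: echo1
    spec:
      containers:
      - name: echo1
        image: hashicorp/http-echo
        args: ["-text=echo1"]
        ports:
        - containerPort: 5678
```

Keeping everything in one file separated by --- gives the same one-shot apply/delete workflow without forcing the containers into one pod.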