EKS + ALB Ingress not working across different manifests - kubernetes

I'm trying to use the same ALB for multiple services, but when I define a new entry or rule in a manifest other than the main one, it is not added to the AWS ALB.
The only rules that appear are the ones I created in the alb-ingress.yaml manifest, the rules from app.yaml don't appear.
ALB screenshot: https://prnt.sc/IfpfbHGvkkFi
alb-ingress.yaml:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: alb-ingress
  namespace: dev
  annotations:
    alb.ingress.kubernetes.io/load-balancer-name: eks-test
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/group.name: "alb-dev"
    alb.ingress.kubernetes.io/listen-ports: '[{"HTTPS":443}, {"HTTP":80}]'
    alb.ingress.kubernetes.io/ssl-redirect: '443'
    alb.ingress.kubernetes.io/healthcheck-path: /health
spec:
  ingressClassName: alb
  rules:
    - host: ""
      http:
        paths:
          - path: /status
            pathType: Prefix
            backend:
              service:
                name: status
                port:
                  number: 80
app.yaml:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app-deployment
  namespace: dev
spec:
  template:
    metadata:
      name: app
      labels:
        app: app
    spec:
      containers:
        - name: app
          image: k8s.gcr.io/e2e-test-images/echoserver:2.5
          ports:
            - containerPort: 80
  replicas: 1
  selector:
    matchLabels:
      app: app
---
apiVersion: v1
kind: Service
metadata:
  name: app
  namespace: dev
spec:
  ports:
    - port: 80
      protocol: TCP
  type: NodePort
  selector:
    app: app
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: app
  namespace: dev
  annotations:
    alb.ingress.kubernetes.io/load-balancer-name: eks-test
    alb.ingress.kubernetes.io/group.name: "alb-triercloud-dev"
    alb.ingress.kubernetes.io/group.order: '1'
spec:
  ingressClassName: alb
  rules:
    - host: service-a.devopsbyexample.io
      http:
        paths:
          - path: /
            pathType: Exact
            backend:
              service:
                name: echoserver
                port:
                  number: 80

Each Ingress is bound to an ALB through the alb.ingress.kubernetes.io/group.name annotation: Ingresses that share the same group name are merged into a single ALB, and each contributes its own rules. Your two manifests use different group names ("alb-dev" in alb-ingress.yaml vs. "alb-triercloud-dev" in app.yaml), so the controller treats them as separate groups; make the group names match and the rules from app.yaml will be added to the same ALB.
See here: https://kubernetes-sigs.github.io/aws-load-balancer-controller/v2.4/how-it-works/#ingress-creation
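Ingresses are merged onto one ALB only when their alb.ingress.kubernetes.io/group.name annotations match; in the manifests above, the two group names differ. A minimal sketch of the app Ingress with a matching group name (assuming "alb-dev" from alb-ingress.yaml is the intended group):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: app
  namespace: dev
  annotations:
    # Must match the group.name in alb-ingress.yaml so both
    # Ingresses are merged into the same ALB.
    alb.ingress.kubernetes.io/group.name: "alb-dev"
    alb.ingress.kubernetes.io/group.order: '1'
spec:
  ingressClassName: alb
  rules:
    - host: service-a.devopsbyexample.io
      http:
        paths:
          - path: /
            pathType: Exact
            backend:
              service:
                name: echoserver
                port:
                  number: 80
```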

Related

Kubernetes ingress not routing

I have 2 services and deployments deployed on minikube for local dev. Both are accessible when I run minikube start service. For the sake of simplicity, I have attached the code for only one service.
However, ingress routing is not working.
CoffeeApiDeployment
apiVersion: apps/v1
kind: Deployment
metadata:
  name: coffeeapi-deployment
  labels:
    app: coffeeapi
spec:
  replicas: 1
  selector:
    matchLabels:
      app: coffeeapi
  template:
    metadata:
      labels:
        app: coffeeapi
    spec:
      containers:
        - name: coffeeapi
          image: manigupta31286/coffeeapi:latest
          env:
            - name: ASPNETCORE_URLS
              value: "http://+"
            - name: ASPNETCORE_ENVIRONMENT
              value: "Development"
          ports:
            - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: coffeeapi-service
spec:
  selector:
    app: coffeeapi
  type: NodePort
  ports:
    - protocol: TCP
      port: 8080
      targetPort: 80
      nodePort: 30036
Ingress.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: myapp-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /$1
spec:
  rules:
    - http:
        paths:
          - path: /coffee
            pathType: Prefix
            backend:
              service:
                name: coffeeapi-service
                port:
                  number: 8080
You are missing the ingress class in the spec.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: myapp-ingress
spec:
  ingressClassName: nginx # (or the class you configured)
Using NodePort on your service may also be problematic. At the least, it's not required, since you want the ingress controller to route traffic via the ClusterIP rather than hitting the NodePort directly.
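Putting it together, a sketch of the corrected Ingress (assuming the controller's class is named nginx, as in a default ingress-nginx install):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: myapp-ingress
spec:
  ingressClassName: nginx  # assumption: the installed controller's class name
  rules:
    - http:
        paths:
          - path: /coffee
            pathType: Prefix
            backend:
              service:
                name: coffeeapi-service
                port:
                  number: 8080  # the Service port, which targets container port 80
```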

Unable to access app running on pod inside cluster using nginx Ingress controller

I'm using this nginx ingress controller on a Hetzner server. After installing the ingress controller, I'm able to access the worker node by its IP, but not the app running in a pod inside the cluster. Am I missing something?
Are Ingress and Traefik different things? I'm a bit confused by the terminology.
service file -
apiVersion: v1
kind: Service
metadata:
  name: service-name-xxx
spec:
  selector:
    app: app-name
  ports:
    - protocol: 'TCP'
      port: 80
      targetPort: 4200
  type: LoadBalancer
deployment file -
apiVersion: apps/v1
kind: Deployment
metadata:
  name: deployment-name
  labels:
    app: app-name
spec:
  replicas: 1
  selector:
    matchLabels:
      app: app-name
  template:
    metadata:
      labels:
        app: app-name
    spec:
      imagePullSecrets:
        - name: my-registry-key
      containers:
        - name: container-name
          image: my-private-docker-img
          imagePullPolicy: Always
          ports:
            - containerPort: 4200
ingress file -
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-name
spec:
  rules:
    - host:
      http:
        paths:
          - pathType: Prefix
            path: "/app"
            backend:
              service:
                name: service-name-xxx
                port:
                  number: 4200
I think you have to add the kubernetes.io/ingress.class: "nginx" annotation to your Ingress:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-name
  annotations:
    kubernetes.io/ingress.class: "nginx"
You have set port to 80 and targetPort to 4200 in your Service, so the Ingress should reference port 80:
backend:
  service:
    name: service-name-xxx
    port:
      number: 80  # targetPort 4200 is set in the Service, not here
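Combining both suggestions, a sketch of the corrected Ingress (names taken from the question; the Ingress references the Service port 80, and the Service forwards to targetPort 4200):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-name
  annotations:
    kubernetes.io/ingress.class: "nginx"
spec:
  rules:
    - http:
        paths:
          - pathType: Prefix
            path: "/app"
            backend:
              service:
                name: service-name-xxx
                port:
                  number: 80  # the Service's port, not the containerPort
```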

Configuring websockets using AWS ALB

I'm trying to configure Ingress using an AWS ALB. Through this same Ingress, I am able to access every port I choose from my containers except the one for websockets. Here are my configurations:
apiVersion: apps/v1
kind: Deployment
metadata:
  namespace: ****
  name: ****-deployment
spec:
  selector:
    matchLabels:
      app.kubernetes.io/name: ***
  replicas: 3
  ...
    spec:
      containers:
        - image: **:latest
          imagePullPolicy: Always
          name: ***
          ...
          ports:
            - containerPort: 9944  # This is for the websocket!
Service:
apiVersion: v1
kind: Service
metadata:
  namespace: ****
  name: ***-svc
spec:
  ports:
    - port: 80
      name: websocket
      targetPort: 9944
      protocol: TCP
  type: NodePort
  selector:
    app.kubernetes.io/name: ***
And finally the ingress:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  namespace: ****
  name: ***-ing
  annotations:
    kubernetes.io/ingress.class: alb
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/group.name: wstgroup
    alb.ingress.kubernetes.io/load-balancer-attributes: idle_timeout.timeout_seconds=5000
    alb.ingress.kubernetes.io/auth-session-timeout: '86400'
    alb.ingress.kubernetes.io/target-type: ip
    alb.ingress.kubernetes.io/listen-ports: '[{"HTTP":80}, {"HTTPS":443}]'
    alb.ingress.kubernetes.io/healthcheck-path: /
    alb.ingress.kubernetes.io/healthcheck-port: '80'
    alb.ingress.kubernetes.io/target-group-attributes: stickiness.enabled=true,stickiness.lb_cookie.duration_seconds=600
    alb.ingress.kubernetes.io/certificate-arn: ****
  labels:
    app: **
spec:
  rules:
    - host: **.**.cloud
      http:
        paths:
          - path: /ws/
            backend:
              serviceName: ***-svc
              servicePort: 80
It returns a 502 error most of the time; with curl, here is the result:
Thank you!
EDIT: For info, I'm using K8S with Fargate

Why am I getting 502 errors on my ALB endpoints, targeted at EKS-hosted services

I am building a service in EKS that has two deployments, two services (NodePort) , and a single ingress.
I am using the aws-alb-ingress-controller.
When I run kubectl port-forward POD 8080:80, it does show me my working pods.
When I look at the endpoints generated by the ALB, I get 502 errors.
When I look at the Registered Targets of the target group, I see the message: Health checks failed with these codes: [502]
Here is my complete yaml.
---
# Example game deployment and service
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: "example-game"
  namespace: "example-app"
spec:
  replicas: 5
  template:
    metadata:
      labels:
        app: "example-game"
    spec:
      containers:
        - image: alexwhen/docker-2048
          imagePullPolicy: Always
          name: "example-game"
          ports:
            - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: "service-example-game"
  namespace: "example-app"
spec:
  ports:
    - port: 80
      targetPort: 80
      protocol: TCP
  type: NodePort
  selector:
    app: "example-app"
---
# Example nginxdemo Deployment and Service
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: "example-nginxdemo"
  namespace: "example-app"
spec:
  replicas: 5
  template:
    metadata:
      labels:
        app: "example-nginxdemo"
    spec:
      containers:
        - image: nginxdemos/hello
          imagePullPolicy: Always
          name: "example-nginxdemo"
          ports:
            - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: "service-example-nginxdemo"
  namespace: "example-app"
spec:
  ports:
    - port: 80
      targetPort: 80
      protocol: TCP
  type: NodePort
  selector:
    app: "example-app"
---
# Shared ALB ingress
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: "example-ingress"
  namespace: "example-app"
  annotations:
    kubernetes.io/ingress.class: alb
    alb.ingress.kubernetes.io/healthcheck-protocol: HTTP
    Alb.ingress.kubernetes.io/healthcheck-port: traffic-port
    alb.ingress.kubernetes.io/healthcheck-path: /
    # alb.ingress.kubernetes.io/scheme: internal
    # alb.ingress.kubernetes.io/load-balancer-attributes: routing.http2.enabled=true
  labels:
    app: example-app
spec:
  rules:
    - http:
        paths:
          - path: /game/*
            backend:
              serviceName: "service-example-game"
              servicePort: 80
          - path: /nginxdemo/*
            backend:
              serviceName: "service-example-nginxdemo"
              servicePort: 80
I don't know why, but it turns out that the label given to the ingress has to be unique.
When I changed the label from 'example-app' to 'example-app-ingress', it just started working.
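For illustration, the only change that was needed was on the ingress metadata (a sketch; everything else stays as in the manifest above):

```yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: "example-ingress"
  namespace: "example-app"
  labels:
    app: example-app-ingress  # changed from example-app
```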

Kubernetes Ingress does not work with Traefik

I created a Kubernetes cluster on Google Cloud Platform, then installed Helm/Tiller on the cluster, and then installed Traefik with Helm as the official documentation says to do.
Now I'm trying to create an Ingress for a service, but if I add the annotation kubernetes.io/ingress.class: traefik, the load balancer for the Ingress is not created.
Without the annotation, it works with the default Ingress.
(The service type is NodePort.)
EDIT: I also tried this example on a clean Google Cloud Kubernetes cluster: https://supergiant.io/blog/using-traefik-as-ingress-controller-for-your-kubernetes-cluster/ but it's the same: when I set kubernetes.io/ingress.class: traefik, no load balancer is created for the ingress.
my files are:
animals-svc.yaml:
---
apiVersion: v1
kind: Service
metadata:
  name: bear
spec:
  type: NodePort
  ports:
    - name: http
      targetPort: 80
      port: 80
  selector:
    app: animals
    task: bear
---
apiVersion: v1
kind: Service
metadata:
  name: moose
spec:
  type: NodePort
  ports:
    - name: http
      targetPort: 80
      port: 80
  selector:
    app: animals
    task: moose
---
apiVersion: v1
kind: Service
metadata:
  name: hare
  annotations:
    traefik.backend.circuitbreaker: "NetworkErrorRatio() > 0.5"
spec:
  type: NodePort
  ports:
    - name: http
      targetPort: 80
      port: 80
  selector:
    app: animals
    task: hare
animals-ingress.yaml:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: animals
  annotations:
    kubernetes.io/ingress.class: traefik
    # kubernetes.io/ingress.global-static-ip-name: "my-reserved-global-ip"
    # traefik.ingress.kubernetes.io/frontend-entry-points: http
    # traefik.ingress.kubernetes.io/redirect-entry-point: http
    # traefik.ingress.kubernetes.io/redirect-permanent: "true"
spec:
  rules:
    - host: hare.minikube
      http:
        paths:
          - path: /
            backend:
              serviceName: hare
              servicePort: http
    - host: bear.minikube
      http:
        paths:
          - path: /
            backend:
              serviceName: bear
              servicePort: http
    - host: moose.minikube
      http:
        paths:
          - path: /
            backend:
              serviceName: moose
              servicePort: http
animals-deployment.yaml:
---
kind: Deployment
apiVersion: extensions/v1beta1
metadata:
  name: bear
  labels:
    app: animals
    animal: bear
spec:
  replicas: 2
  selector:
    matchLabels:
      app: animals
      task: bear
  template:
    metadata:
      labels:
        app: animals
        task: bear
        version: v0.0.1
    spec:
      containers:
        - name: bear
          image: supergiantkir/animals:bear
          ports:
            - containerPort: 80
---
kind: Deployment
apiVersion: extensions/v1beta1
metadata:
  name: moose
  labels:
    app: animals
    animal: moose
spec:
  replicas: 2
  selector:
    matchLabels:
      app: animals
      task: moose
  template:
    metadata:
      labels:
        app: animals
        task: moose
        version: v0.0.1
    spec:
      containers:
        - name: moose
          image: supergiantkir/animals:moose
          ports:
            - containerPort: 80
---
kind: Deployment
apiVersion: extensions/v1beta1
metadata:
  name: hare
  labels:
    app: animals
    animal: hare
spec:
  replicas: 2
  selector:
    matchLabels:
      app: animals
      task: hare
  template:
    metadata:
      labels:
        app: animals
        task: hare
        version: v0.0.1
    spec:
      containers:
        - name: hare
          image: supergiantkir/animals:hare
          ports:
            - containerPort: 80
The services are created, but the ingress load balancer is not.
But if I remove the line kubernetes.io/ingress.class: traefik, it works with the default Kubernetes ingress.
Traefik does not create a load balancer for you by default.
As the HTTP(S) load balancing with Ingress documentation mentions:
When you create an Ingress object, the GKE ingress controller creates a Google Cloud Platform HTTP(S) load balancer and configures it according to the information in the Ingress and its associated Services.
This applies only to the GKE ingress controller (gce); more info about gce can be found here: https://github.com/kubernetes/ingress-gce
If you would like to use Traefik as ingress, you have to expose the Traefik service with type: LoadBalancer.
Example:
apiVersion: v1
kind: Service
metadata:
  name: traefik
spec:
  type: LoadBalancer
  selector:
    k8s-app: traefik-ingress-lb
  ports:
    - port: 80
      targetPort: 80
More info, with a lot of explanatory diagrams and a real working example, can be found in the Exposing Kubernetes Services to the internet using Traefik Ingress Controller article.
Hope this helps.
You can try adding more annotations, as below:
traefik.ingress.kubernetes.io/frontend-entry-points: http,https
traefik.ingress.kubernetes.io/redirect-entry-point: https
traefik.ingress.kubernetes.io/redirect-permanent: "true"
Like this,
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: traefik-dashboard-ingress
  namespace: traefik
  annotations:
    kubernetes.io/ingress.class: traefik
    traefik.ingress.kubernetes.io/frontend-entry-points: http,https
    traefik.ingress.kubernetes.io/redirect-entry-point: https
    traefik.ingress.kubernetes.io/redirect-permanent: "true"
spec:
  rules:
    - host: traefik-ui.example.com
      http:
        paths:
          - path: /
            backend:
              serviceName: traefik-dashboard
              servicePort: 8080