I have a Node.js application running on Google Kubernetes Engine (v1.20.8-gke.900).
I want to add custom headers to get the client's region and latitude/longitude, so I referred to this article and this one as well and created the Kubernetes config file below. But when I print the headers, I don't see any of the custom headers.
#k8s.yaml
apiVersion: v1
kind: Namespace
metadata:
  name: my-app-ns-prod
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: npm-app-deployment
  namespace: my-app-ns-prod
  labels:
    app: npm-app-deployment
    tier: backend
spec:
  template:
    metadata:
      name: npm-app-pod
      namespace: my-app-ns-prod
      labels:
        app: npm-app-pod
        tier: backend
    spec:
      containers:
        - name: my-app-container
          image: us.gcr.io/img/my-app:latest
          ports:
            - containerPort: 3000
              protocol: TCP
          envFrom:
            - secretRef:
                name: npm-app-secret
            - configMapRef:
                name: npm-app-configmap
          imagePullPolicy: Always
      imagePullSecrets:
        - name: gcr-regcred
  replicas: 3
  minReadySeconds: 30
  selector:
    matchLabels:
      app: npm-app-pod
      tier: backend
---
apiVersion: v1
kind: Service
metadata:
  name: npm-app-service
  namespace: my-app-ns-prod
  annotations:
    cloud.google.com/backend-config: '{"ports": {"80":"npm-app-backendconfig"}}'
    cloud.google.com/neg: '{"ingress": true}'
spec:
  selector:
    app: npm-app-pod
    tier: backend
  ports:
    - name: http
      protocol: TCP
      port: 80
      targetPort: 3000
    - name: https
      protocol: TCP
      port: 443
      targetPort: 3000
  type: LoadBalancer
---
apiVersion: cloud.google.com/v1
kind: BackendConfig
metadata:
  name: npm-app-backendconfig
  namespace: my-app-ns-prod
spec:
  customRequestHeaders:
    headers:
      - "HTTP-X-Client-CityLatLong:{client_city_lat_long}"
      - "HTTP-X-Client-Region:{client_region}"
      - "HTTP-X-Client-Region-SubDivision:{client_region_subdivision}"
      - "HTTP-X-Client-City:{client_city}"
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-service
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/use-regex: "true"
spec:
  rules:
    - host: app.example.com
      http:
        paths:
          - path: /api/v1
            pathType: Prefix
            backend:
              service:
                name: npm-app-service
                port:
                  number: 80
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: npm-app-configmap
  namespace: my-app-ns-prod
data:
  APP_ID: "My App"
  PORT: "3000"
---
apiVersion: v1
kind: Secret
metadata:
  name: npm-app-secret
  namespace: my-app-ns-prod
type: Opaque
data:
  MONGO_CONNECTION_URI: ""
  SESSION_SECRET: ""
Actually, the issue was with the Ingress controller: I had missed defining the "cloud.google.com/backend-config" annotation. Once I defined it, I was able to get the custom headers. I also switched from nginx to the GKE Ingress controller (gce), but the same thing works with nginx as well.
This is how my final Ingress looks:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-service
  annotations:
    cloud.google.com/backend-config: '{"default": "npm-app-backendconfig"}'
    kubernetes.io/ingress.class: "gce"
spec:
  ...
  ...
Reference: User-defined request headers
I have 2 services and deployments deployed on minikube for local dev. Both are accessible when I run minikube service. For the sake of simplicity I have attached the code for only one service.
However, ingress routing is not working.
CoffeeApiDeployment
apiVersion: apps/v1
kind: Deployment
metadata:
  name: coffeeapi-deployment
  labels:
    app: coffeeapi
spec:
  replicas: 1
  selector:
    matchLabels:
      app: coffeeapi
  template:
    metadata:
      labels:
        app: coffeeapi
    spec:
      containers:
        - name: coffeeapi
          image: manigupta31286/coffeeapi:latest
          env:
            - name: ASPNETCORE_URLS
              value: "http://+"
            - name: ASPNETCORE_ENVIRONMENT
              value: "Development"
          ports:
            - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: coffeeapi-service
spec:
  selector:
    app: coffeeapi
  type: NodePort
  ports:
    - protocol: TCP
      port: 8080
      targetPort: 80
      nodePort: 30036
Ingress.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: myapp-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /$1
spec:
  rules:
    - http:
        paths:
          - path: /coffee
            pathType: Prefix
            backend:
              service:
                name: coffeeapi-service
                port:
                  number: 8080
You are missing the ingress class in the spec.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: myapp-ingress
spec:
  ingressClassName: nginx # (or the class you configured)
Using NodePort on your service may also be problematic. At the very least it's not required, since you want the ingress controller to route traffic via the ClusterIP rather than hitting the NodePort directly.
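For illustration, a minimal sketch of the same service left as the default ClusterIP type (simply dropping the type and nodePort fields from the manifest above) would be:
apiVersion: v1
kind: Service
metadata:
  name: coffeeapi-service
spec:
  selector:
    app: coffeeapi
  ports:
    - protocol: TCP
      port: 8080
      targetPort: 80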
I am trying to run 3 containers in my GKE cluster. I have them exposed via a network load balancer, and on top of that I am using an Ingress so I can reach my services from different domains with SSL certs on them.
Here is the complete manifest:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 3
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: us-east4-docker.pkg.dev/web:e856485 # docker image
          ports:
            - containerPort: 3000
          env:
            - name: NODE_ENV
              value: production
---
# DEPLOYMENT MANIFEST #
apiVersion: apps/v1
kind: Deployment
metadata:
  name: cms
spec:
  replicas: 3
  selector:
    matchLabels:
      app: cms
  template:
    metadata:
      labels:
        app: cms
    spec:
      containers:
        - name: cms
          image: us-east4-docker.pkg.dev/cms:4e1fe2f # docker image
          ports:
            - containerPort: 8055
          env:
            - name: DB
              value: "postgres"
            - name: DB_HOST
              value: 10.142.0.3
            - name: DB_PORT
              value: "5432"
---
# DEPLOYMENT MANIFEST #
apiVersion: apps/v1
kind: Deployment
metadata:
  name: api
spec:
  replicas: 3
  selector:
    matchLabels:
      app: api
  template:
    metadata:
      labels:
        app: api
    spec:
      containers:
        - name: api
          image: us-east4-docker.pkg.dev/api:4e1fe2f # docker image
          ports:
            - containerPort: 8080
          env:
            - name: HOST
              value: "0.0.0.0"
            - name: PORT
              value: "8080"
            - name: NODE_ENV
              value: production
---
# SERVICE MANIFEST #
apiVersion: v1
kind: Service
metadata:
  name: web-lb
  annotations:
    cloud.google.com/neg: '{"ingress": true}'
  labels:
    app: web
spec:
  ports:
    - port: 3000
      protocol: TCP
      targetPort: 3000
  selector:
    app: web
  type: NodePort
---
# SERVICE MANIFEST #
apiVersion: v1
kind: Service
metadata:
  name: cms-lb
  annotations:
    cloud.google.com/neg: '{"ingress": true}'
  labels:
    app: cms
spec:
  ports:
    - port: 8055
      protocol: TCP
      targetPort: 8055
  selector:
    app: cms
  type: NodePort
---
# SERVICE MANIFEST #
apiVersion: v1
kind: Service
metadata:
  name: api-lb
  annotations:
    cloud.google.com/neg: '{"ingress": true}'
  labels:
    app: api
spec:
  ports:
    - port: 8080
      protocol: TCP
      targetPort: 8080
  selector:
    app: api
  type: NodePort
---
apiVersion: v1
data:
  tls.crt: abc
  tls.key: abc
kind: Secret
metadata:
  name: web-cert
type: kubernetes.io/tls
---
apiVersion: v1
data:
  tls.crt: abc
  tls.key: abc
kind: Secret
metadata:
  name: cms-cert
type: kubernetes.io/tls
---
apiVersion: v1
data:
  tls.crt: abc
  tls.key: abc
kind: Secret
metadata:
  name: api-cert
type: kubernetes.io/tls
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress
  annotations:
    # If the class annotation is not specified it defaults to "gce".
    kubernetes.io/ingress.class: "gce"
spec:
  tls:
    - secretName: api-cert
    - secretName: cms-cert
    - secretName: web-cert
  rules:
    - host: web-gke.dev
      http:
        paths:
          - pathType: ImplementationSpecific
            backend:
              service:
                name: web-lb
                port:
                  number: 3000
    - host: cms-gke.dev
      http:
        paths:
          - pathType: ImplementationSpecific
            backend:
              service:
                name: cms-lb
                port:
                  number: 8055
    - host: api-gke.dev
      http:
        paths:
          - pathType: ImplementationSpecific
            backend:
              service:
                name: api-lb
                port:
                  number: 8080
The containers are reachable through the (network) load balancer, but from the Ingress (L7 LB) the health check is failing.
I tried editing the health checks manually from HTTP:80 to TCP:8080/8055/3000 for the 3 services and it works, but eventually the Ingress reverts them back to HTTP health checks and they fail again. I also tried using NodePort instead of LoadBalancer as the service type, but no luck.
Any help?
The first thing I would like to mention is that you should recheck your implementation. From what I see, you are creating an Ingress, which will create a load balancer, and this Ingress is using three services of type LoadBalancer, each of which will also create its own load balancer (I'm assuming the default behaviour, unless you applied the well-known workaround of deleting the service's load balancer manually after it is created).
I don't think this is correct unless you need that design for some reason, so my suggestion is to change your service types to NodePort.
As for answering your question, what you are missing is:
You need to implement a BackendConfig with a custom health check configuration.
1- Create the BackendConfig:
apiVersion: cloud.google.com/v1
kind: BackendConfig
metadata:
  name: api-lb-backendconfig
spec:
  healthCheck:
    checkIntervalSec: INTERVAL
    timeoutSec: TIMEOUT
    healthyThreshold: HEALTH_THRESHOLD
    unhealthyThreshold: UNHEALTHY_THRESHOLD
    type: PROTOCOL
    requestPath: PATH
    port: PORT
2- Use this config in your service(s):
apiVersion: v1
kind: Service
metadata:
  annotations:
    cloud.google.com/backend-config: '{"ports": {
      "PORT_NAME_1":"api-lb-backendconfig"
    }}'
spec:
  ports:
    - name: PORT_NAME_1
      port: PORT_NUMBER_1
      protocol: TCP
      targetPort: TARGET_PORT
Once you apply these configurations, your Ingress's load balancer will be created with the BackendConfig "api-lb-backendconfig".
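For illustration, a hypothetical filled-in version for the api-lb service from the question might look like the sketch below. The interval, threshold, path and port values are assumptions, on the premise that the api container answers health checks on / at port 8080:
apiVersion: cloud.google.com/v1
kind: BackendConfig
metadata:
  name: api-lb-backendconfig
spec:
  healthCheck:
    checkIntervalSec: 15
    timeoutSec: 5
    healthyThreshold: 1
    unhealthyThreshold: 2
    type: HTTP
    requestPath: /
    port: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: api-lb
  annotations:
    cloud.google.com/neg: '{"ingress": true}'
    cloud.google.com/backend-config: '{"ports": {"http":"api-lb-backendconfig"}}'
spec:
  type: NodePort
  selector:
    app: api
  ports:
    - name: http
      port: 8080
      protocol: TCP
      targetPort: 8080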
Consider this documentation page as your reference.
I am building a service in EKS that has two deployments, two services (NodePort), and a single ingress.
I am using the aws-alb-ingress-controller.
When I run kubectl port-forward POD 8080:80 it does show me my working pods.
When I hit the endpoints generated by the ALB, I get 502 errors.
When I look at the registered targets of the target group, I see the message: Health checks failed with these codes: [502].
Here is my complete YAML.
---
# Example game deployment and service
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: "example-game"
  namespace: "example-app"
spec:
  replicas: 5
  template:
    metadata:
      labels:
        app: "example-game"
    spec:
      containers:
        - image: alexwhen/docker-2048
          imagePullPolicy: Always
          name: "example-game"
          ports:
            - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: "service-example-game"
  namespace: "example-app"
spec:
  ports:
    - port: 80
      targetPort: 80
      protocol: TCP
  type: NodePort
  selector:
    app: "example-app"
---
# Example nginxdemo Deployment and Service
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: "example-nginxdemo"
  namespace: "example-app"
spec:
  replicas: 5
  template:
    metadata:
      labels:
        app: "example-nginxdemo"
    spec:
      containers:
        - image: nginxdemos/hello
          imagePullPolicy: Always
          name: "example-nginxdemo"
          ports:
            - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: "service-example-nginxdemo"
  namespace: "example-app"
spec:
  ports:
    - port: 80
      targetPort: 80
      protocol: TCP
  type: NodePort
  selector:
    app: "example-app"
---
# Shared ALB ingress
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: "example-ingress"
  namespace: "example-app"
  annotations:
    kubernetes.io/ingress.class: alb
    alb.ingress.kubernetes.io/healthcheck-protocol: HTTP
    alb.ingress.kubernetes.io/healthcheck-port: traffic-port
    alb.ingress.kubernetes.io/healthcheck-path: /
    # alb.ingress.kubernetes.io/scheme: internal
    # alb.ingress.kubernetes.io/load-balancer-attributes: routing.http2.enabled=true
  labels:
    app: example-app
spec:
  rules:
    - http:
        paths:
          - path: /game/*
            backend:
              serviceName: "service-example-game"
              servicePort: 80
          - path: /nginxdemo/*
            backend:
              serviceName: "service-example-nginxdemo"
              servicePort: 80
I don't know why, but it turns out that the label given to the ingress has to be unique.
When I changed the label from 'example-app' to 'example-app-ingress' it just started working.
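For reference, this is a sketch of how the Ingress metadata looks after that change (annotations and rules unchanged from the manifest above, abbreviated here):
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: "example-ingress"
  namespace: "example-app"
  annotations:
    kubernetes.io/ingress.class: alb
  labels:
    app: example-app-ingress # changed from example-app
spec:
  ...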
I am working with Kubernetes on Google Cloud and am trying to set up Traefik as the Ingress for the cluster. I based the code on the official docs (https://docs.traefik.io/user-guide/kubernetes/), but I get an error with the PathPrefixStrip rule.
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: auth-api
  labels:
    app: auth-api
spec:
  replicas: 2
  selector:
    matchLabels:
      app: auth-api
  template:
    metadata:
      labels:
        app: auth-api
        version: v0.0.1
    spec:
      containers:
        - name: auth-api
          image: gcr.io/r10c-dev/auth-api:v0.1
          ports:
            - containerPort: 3000
          env:
            - name: AMQP_SERVICE
              value: broker:5672
            - name: CACHE_SERVICE
              value: session-cache
---
apiVersion: v1
kind: Service
metadata:
  name: auth-api
spec:
  ports:
    - name: http
      targetPort: 80
      port: 3000
  type: NodePort
  selector:
    app: auth-api
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: main-ingress
  annotations:
    kubernetes.io/ingress.class: traefik
    traefik.frontend.rule.type: PathPrefixStrip
    kubernetes.io/ingress.global-static-ip-name: "web-static-ip"
spec:
  rules:
    - http:
        paths:
          - path: /auth
            backend:
              serviceName: auth-api
              servicePort: http
In the GKE console it looks like the deployment is linked to the service and the ingress, but when I try to access the IP, the server returns a 502 error.
I am also using a static IP:
gcloud compute addresses create web-static-ip --global
I've been applying an authentication policy to my test service using JWT. I followed this guide and it worked as expected. But when I tried using a different pod image, it did not work, even though almost everything is the same.
Is anyone else facing this issue, or does anyone know why it did not work in my case?
Thank you very much!
These are my configuration files:
Deployment
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: hostname
spec:
  replicas: 1
  selector:
    matchLabels:
      app: hostname
      version: v1
  strategy:
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 1
    type: RollingUpdate
  template:
    metadata:
      labels:
        app: hostname
        version: v1
    spec:
      containers:
        - image: rstarmer/hostname:v1
          imagePullPolicy: Always
          name: hostname
          resources: {}
      restartPolicy: Always
Service
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: hostname
  name: hostname
spec:
  ports:
    - name: http
      port: 8001
      targetPort: 80
  selector:
    app: hostname
Gateway
---
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: hostname-gateway
  namespace: foo
spec:
  selector:
    istio: ingressgateway # use Istio default gateway implementation
  servers:
    - port:
        number: 80
        name: http
        protocol: HTTP
      hosts:
        - "*"
VirtualService
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: hostname-vs
  namespace: foo
spec:
  hosts:
    - "*"
  gateways:
    - hostname-gateway
  http:
    - route:
        - destination:
            port:
              number: 8001
            host: hostname.foo.svc.cluster.local
Policy
---
apiVersion: "authentication.istio.io/v1alpha1"
kind: "Policy"
metadata:
  name: "jwt-example"
  namespace: foo
spec:
  targets:
    - name: hostname
  origins:
    - jwt:
        issuer: "testing#secure.istio.io"
        jwksUri: "https://raw.githubusercontent.com/istio/istio/release-1.0/security/tools/jwt/samples/jwks.json"
  principalBinding: USE_ORIGIN
As stated by the OP on the Istio forums, you need to respect the naming convention for the port name of your service.
It can be either "http" or "http2".
For instance, this is valid:
apiVersion: v1
kind: Service
metadata:
  name: somename
  namespace: auth
spec:
  selector:
    app: someapp
  ports:
    - port: 80
      targetPort: 3000
      name: http
And this is not:
apiVersion: v1
kind: Service
metadata:
  name: somename
  namespace: auth
spec:
  selector:
    app: someapp
  ports:
    - port: 80
      targetPort: 3000
Not specifying a name for the port is not valid.
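And by the same convention, naming the port "http2" is equally acceptable (a sketch, assuming the backend actually serves HTTP/2):
apiVersion: v1
kind: Service
metadata:
  name: somename
  namespace: auth
spec:
  selector:
    app: someapp
  ports:
    - port: 80
      targetPort: 3000
      name: http2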