I have the following deployment running in Google Cloud Platform (GCP):
kind: Deployment
apiVersion: extensions/v1beta1
metadata:
  name: mybackend
  labels:
    app: backendweb
spec:
  replicas: 1
  selector:
    matchLabels:
      app: backendweb
      tier: web
  template:
    metadata:
      labels:
        app: backendweb
        tier: web
    spec:
      containers:
      - name: mybackend
        image: eu.gcr.io/teststuff/backend:latest
        ports:
        - containerPort: 8081
This uses the following service:
apiVersion: v1
kind: Service
metadata:
  name: mybackend
  labels:
    app: backendweb
spec:
  type: NodePort
  selector:
    app: backendweb
    tier: web
  ports:
  - port: 8081
    targetPort: 8081
Which uses this ingress:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: ingress
  annotations:
    kubernetes.io/ingress.global-static-ip-name: backend-ip
  labels:
    app: backendweb
spec:
  backend:
    serviceName: mybackend
    servicePort: 8081
When I spin all this up in Google Cloud Platform however, I get the following error message on my ingress:
All backend services are in UNHEALTHY state
I've looked through my pod logs with no indication about the problem. Grateful for any advice!
Most likely this problem is caused by your pod not returning 200 on '/': the GCE load balancer's default health check requests '/' and expects a 200 response. Please check your pod's configuration. If you don't want to return 200 on '/', you can add a readiness probe so the health check uses a different path, like this:
readinessProbe:
  httpGet:
    path: /healthz
    port: 8081
  initialDelaySeconds: 5
  periodSeconds: 5
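For reference, here is a sketch of how that probe could be wired into the Deployment above (the /healthz path is an assumption; use whatever health endpoint your app actually serves on 8081):
      containers:
      - name: mybackend
        image: eu.gcr.io/teststuff/backend:latest
        ports:
        - containerPort: 8081
        readinessProbe:
          httpGet:
            path: /healthz   # assumed health endpoint; must return 200
            port: 8081       # must match the serving containerPort
          initialDelaySeconds: 5
          periodSeconds: 5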
I have set up an application with a StatefulSet:
# Simple deployment used to deploy and manage the app in nigelpoulton/getting-started-k8s:1.0
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: coredeploy
  labels:
    app: core123
spec:
  replicas: 1
  # minReadySeconds: 10
  updateStrategy:
    type: RollingUpdate
    rollingUpdate:
      partition: 0
      # maxSurge: 1
  selector:
    matchLabels:
      app: core123
  serviceName: core123
  template:
    metadata:
      labels:
        app: core123
    spec:
      terminationGracePeriodSeconds: 1
      containers:
      - name: hello
        image: docker-registry.myregistry.com:5000/core_centos:LMS-130022
        imagePullPolicy: Always
        ports:
        - containerPort: 8008
        readinessProbe:
          tcpSocket:
            port: 8008
          periodSeconds: 1
This is my service:
apiVersion: v1
kind: Service
metadata:
  name: service-core
spec:
  selector:
    app: core123
  type: NodePort
  ports:
  - name: nodeportcore
    protocol: TCP
    port: 9988
    targetPort: 8008
This is my ingress:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: test-ingress
spec:
  rules:
  - http:
      paths:
      - path: /
        pathType: ImplementationSpecific
        backend:
          service:
            name: service-core
            port:
              number: 9988
After I apply the ingress manifest, my application runs, but login does not work: the logs show a successful login, yet the browser goes back to the login screen. After checking, I noticed the URLs my application uses when I run it locally on-premise (not in a container; the URLs in the container are the same):
http://localhost:8008/#/public/login
http://localhost:8008/#/user/settings
http://localhost:8008/#/user/dashboard/overview
http://localhost:8008/#/user/history/processing
http://localhost:8008/#/user/policy/template
The URLs start with # followed by the route name, e.g. /public/login, /user/settings, /user/dashboard/overview.
=> My question: how do I set up the ingress correctly to work with my application?
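For context, everything after the # in those URLs is a fragment, which the browser resolves client-side and never sends to the server, so the ingress only ever sees requests for the base path. A minimal sketch (same names as above; assuming the app serves its assets and API from the same port):
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: test-ingress
spec:
  rules:
  - http:
      paths:
      - path: /
        pathType: Prefix   # hash-fragment routes (/#/...) never reach the server
        backend:
          service:
            name: service-core
            port:
              number: 9988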
I am new to Istio and I want to expose three services and route traffic to them based on the port number passed to "website.com:port" or on a subdomain.
Service and deployment config files:
apiVersion: v1
kind: Service
metadata:
  name: visitor-service
  labels:
    app: visitor-service
spec:
  type: NodePort
  ports:
  - port: 8000
    nodePort: 30800
    targetPort: 8000
  selector:
    app: visitor-service
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: visitor-service
spec:
  replicas: 1
  selector:
    matchLabels:
      app: visitor-service
  template:
    metadata:
      labels:
        app: visitor-service
    spec:
      containers:
      - name: visitor-service
        image: visitor-service
        ports:
        - containerPort: 8000
Second service:
apiVersion: v1
kind: Service
metadata:
  name: auth-service
  labels:
    app: auth-service
spec:
  type: NodePort
  ports:
  - port: 3004
    nodePort: 30304
    targetPort: 3004
  selector:
    app: auth-service
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: auth-service
spec:
  replicas: 1
  selector:
    matchLabels:
      app: auth-service
  template:
    metadata:
      labels:
        app: auth-service
    spec:
      containers:
      - name: auth-service
        image: auth-service
        ports:
        - containerPort: 3004
Third one:
apiVersion: v1
kind: Service
metadata:
  name: gateway
  labels:
    app: gateway
spec:
  type: NodePort
  ports:
  - port: 8080
    nodePort: 30808
    targetPort: 8080
  selector:
    app: gateway
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: gateway
spec:
  replicas: 1
  selector:
    matchLabels:
      app: gateway
  template:
    metadata:
      labels:
        app: gateway
    spec:
      containers:
      - name: gateway
        image: gateway
        ports:
        - containerPort: 8080
If someone could help with setting up the gateway and virtual service configuration, that would be great.
It seems like you simply want to expose your applications; for that purpose Istio seems like total overkill, since it comes with a lot of overhead you won't be using.
Regardless of whether you use Istio as your default ingress or any other ingress-controller (nginx, traefik, ...), the following construct applies to all of them:
Expose the ingress-controller via a service of type NodePort or LoadBalancer, depending on your infrastructure. In a cloud environment the latter will most likely work best for you (if on GKE, AKS, EKS, ...); a sketch of such a service follows.
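As a rough sketch, exposing an Istio ingress gateway could look like this (the names and the istio: ingressgateway selector are assumptions based on a default Istio install; your controller's installation usually ships its own version of this Service):
apiVersion: v1
kind: Service
metadata:
  name: istio-ingressgateway
  namespace: istio-system
spec:
  type: LoadBalancer        # gets an external IP from the cloud provider
  selector:
    istio: ingressgateway   # assumed label on the gateway pods
  ports:
  - name: http
    port: 80
    targetPort: 8080        # the gateway pod's HTTP port in a default install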
Once it is exposed, set up a DNS A record pointing to the external IP address. Afterwards you can start configuring your ingress; depending on which ingress-controller you chose, the following YAML may need some adjustments (the example is given for Istio):
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: istio
  name: ingress
spec:
  rules:
  - host: httpbin.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          serviceName: httpbin
          servicePort: 8000
If a request for something like httpbin.example.com comes in to your ingress-controller, it is going to send the request to a service named httpbin on port 8000.
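That assumes a Service named httpbin exists; as a sketch (the targetPort is an assumption, matching the common httpbin image, which listens on 80):
apiVersion: v1
kind: Service
metadata:
  name: httpbin
spec:
  ports:
  - port: 8000       # the port referenced by servicePort in the ingress
    targetPort: 80   # assumed pod port for the httpbin image
  selector:
    app: httpbin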
As can be seen in the YAML posted above, the rules and paths fields take a list (indicated by the - in the next line). To expose multiple services, simply add a new entry to the list, e.g.:
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: istio
  name: ingress
spec:
  rules:
  - host: httpbin.example.com
    http:
      paths:
      - path: /httpbin
        pathType: Prefix
        backend:
          serviceName: httpbin
          servicePort: 8000
      - path: /apache
        pathType: Prefix
        backend:
          serviceName: apache
          servicePort: 8080
This is going to send requests like httpbin.example.com/httpbin/ to httpbin and httpbin.example.com/apache/ to apache.
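Note that with prefix matching the path is forwarded to the backend unchanged, so apache receives requests for /apache/...; if a backend expects paths rooted at /, you need a rewrite, which is controller-specific. With the nginx ingress-controller, for example, it could look like this (capture-group syntax as documented for ingress-nginx's rewrite-target annotation):
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: ingress
  annotations:
    kubernetes.io/ingress.class: nginx
    # rewrites /apache/foo to /foo before it reaches the pod
    nginx.ingress.kubernetes.io/rewrite-target: /$2
spec:
  rules:
  - host: httpbin.example.com
    http:
      paths:
      - path: /apache(/|$)(.*)
        backend:
          serviceName: apache
          servicePort: 8080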
For further information see:
https://istio.io/latest/docs/tasks/traffic-management/ingress/kubernetes-ingress/
https://kubernetes.io/docs/concepts/services-networking/ingress/
I'm currently working through this tutorial:
https://github.com/argoproj/argocd-example-apps/tree/master/guestbook
https://argoproj.github.io/argo-cd/getting_started/#5-register-a-cluster-to-deploy-apps-to-optional
My short-term milestone is to render the guestbook UI in the browser.
I'm trying to connect via Ingress, but it goes wrong.
The error message is:
Status: 502
The server encountered a temporary error and could not complete your request.
I suspect something is wrong with the service or the pod.
guestbook-ui-svc.yaml
apiVersion: v1
kind: Service
metadata:
  name: guestbook-ui-service
spec:
  ports:
  - port: 80
    targetPort: 80
  selector:
    app: guestbook-ui
guestbook-ui-ingress.yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: app-ingress
  labels:
    app: guestbook-ui
spec:
  rules:
  - http:
      paths:
      - path: /
        backend:
          serviceName: guestbook-ui-service
          servicePort: 80
guestbook-ui-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: guestbook-ui
spec:
  replicas: 1
  revisionHistoryLimit: 3
  selector:
    matchLabels:
      app: guestbook-ui
  template:
    metadata:
      labels:
        app: guestbook-ui
    spec:
      containers:
      - image: gcr.io/heptio-images/ks-guestbook-demo:0.2
        name: guestbook-ui
        ports:
        - containerPort: 80
I don't know which part I'm missing; please let me know if anything is ambiguous or if more detail would help.
Thanks in advance!
Use this service instead.
apiVersion: v1
kind: Service
metadata:
  name: guestbook-ui-service
spec:
  type: NodePort
  ports:
  - port: 80
    targetPort: 80
  selector:
    app: guestbook-ui
It has type: NodePort added to it.
There is a really good example of how to deploy an app, expose it via a service, and add an ingress for it in the Kubernetes docs: Deploy a hello, world app.
Also, if you're having trouble understanding the difference between NodePort and ClusterIP, and what Ingress is, I recommend reading Kubernetes NodePort vs LoadBalancer vs Ingress? When should I use what?
I am building a service in EKS that has two deployments, two services (NodePort), and a single ingress.
I am using the aws-alb-ingress-controller.
When I run kubectl port-forward POD 8080:80 it does show me my working pods.
When I look at the endpoints generated by the ALB I get 502 errors.
When I look at the Registered Targets of the target group I see the message: Health checks failed with these codes: [502]
Here is my complete YAML.
---
# Example game deployment and service
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: example-game
  namespace: example-app
spec:
  replicas: 5
  template:
    metadata:
      labels:
        app: example-game
    spec:
      containers:
      - image: alexwhen/docker-2048
        imagePullPolicy: Always
        name: example-game
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: service-example-game
  namespace: example-app
spec:
  ports:
  - port: 80
    targetPort: 80
    protocol: TCP
  type: NodePort
  selector:
    app: example-app
---
# Example nginxdemo deployment and service
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: example-nginxdemo
  namespace: example-app
spec:
  replicas: 5
  template:
    metadata:
      labels:
        app: example-nginxdemo
    spec:
      containers:
      - image: nginxdemos/hello
        imagePullPolicy: Always
        name: example-nginxdemo
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: service-example-nginxdemo
  namespace: example-app
spec:
  ports:
  - port: 80
    targetPort: 80
    protocol: TCP
  type: NodePort
  selector:
    app: example-app
---
# Shared ALB ingress
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: example-ingress
  namespace: example-app
  annotations:
    kubernetes.io/ingress.class: alb
    alb.ingress.kubernetes.io/healthcheck-protocol: HTTP
    Alb.ingress.kubernetes.io/healthcheck-port: traffic-port
    alb.ingress.kubernetes.io/healthcheck-path: /
    # alb.ingress.kubernetes.io/scheme: internal
    # alb.ingress.kubernetes.io/load-balancer-attributes: routing.http2.enabled=true
  labels:
    app: example-app
spec:
  rules:
  - http:
      paths:
      - path: /game/*
        backend:
          serviceName: service-example-game
          servicePort: 80
      - path: /nginxdemo/*
        backend:
          serviceName: service-example-nginxdemo
          servicePort: 80
I don't know why, but it turns out that the label given to the ingress has to be unique.
When I changed the label from 'example-app' to 'example-app-ingress' it just started working.
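In other words, the ingress metadata ended up looking something like this (a sketch based on the description above):
metadata:
  name: example-ingress
  namespace: example-app
  labels:
    app: example-app-ingress   # unique label, no longer shared with the deployments/services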
I have an EKS cluster for which I want:
- 1 Load Balancer per cluster,
- Ingress rules to direct traffic to the right namespace and the right service.
I have been following this guide: https://www.digitalocean.com/community/tutorials/how-to-set-up-an-nginx-ingress-with-cert-manager-on-digitalocean-kubernetes
My deployments:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-world
  namespace: default
spec:
  replicas: 3
  selector:
    matchLabels:
      app: hello-world
  template:
    metadata:
      labels:
        app: hello-world
    spec:
      containers:
      - name: hello-world
        image: IMAGENAME
        ports:
        - containerPort: 8000
          name: hello-world
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: bleble
  namespace: default
spec:
  replicas: 3
  selector:
    matchLabels:
      app: bleble
  template:
    metadata:
      labels:
        app: bleble
    spec:
      containers:
      - name: bleble
        image: IMAGENAME
        ports:
        - containerPort: 8000
          name: bleble
The services for those deployments:
apiVersion: v1
kind: Service
metadata:
  name: hello-world-svc
spec:
  ports:
  - port: 8080
    protocol: TCP
    targetPort: 8000
  selector:
    app: hello-world
  type: NodePort
---
apiVersion: v1
kind: Service
metadata:
  name: bleble-svc
spec:
  ports:
  - port: 8080
    protocol: TCP
    targetPort: 8000
  selector:
    app: bleble
  type: NodePort
My load balancer:
kind: Service
apiVersion: v1
metadata:
  name: ingress-nginx
  namespace: ingress-nginx
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-internal: "true"
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
spec:
  externalTrafficPolicy: Local
  type: LoadBalancer
  selector:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
  ports:
  - name: http
    port: 80
    targetPort: http
My ingress:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: simple-fanout-example
  namespace: default
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - host: internal-lb.aws.com
    http:
      paths:
      - path: /bleble
        backend:
          serviceName: bleble-svc
          servicePort: 80
      - path: /hello-world
        backend:
          serviceName: hello-world-svc
          servicePort: 80
I've set up the Nginx Ingress Controller with this: kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/nginx-0.24.1/deploy/mandatory.yaml
I am unsure why I get a 503 Service Temporarily Unavailable for one service and a 502 for the other... I would guess it's a problem with ports or namespaces? In the guide, they don't define a namespace for the deployment...
All the resources are created correctly, and I think the ingress is actually working, but it's getting confused about where to route.
Thanks for your help!
In general, use externalTrafficPolicy: Cluster instead of Local. You can gain some performance (latency) improvement by using Local, but you need to configure pod allocation carefully, and misconfigurations there lead to 5xx errors. In addition, Cluster is the default option for externalTrafficPolicy.
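Applied to the ingress-nginx Service above, that would be:
spec:
  externalTrafficPolicy: Cluster   # default; traffic may be forwarded to any node,
                                   # so nodes without an ingress pod don't cause 5xx
  type: LoadBalancer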
In your ingress, make sure the backend serviceName matches the actual Service name (bleble-svc, not bleble); please keep them consistent. Also, you need to set servicePort to 8080, since 8080 is what you exposed in your service configuration.
For an internal service like bleble-svc, a ClusterIP is good enough in your case, as it does not need external access.
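Putting that together, the rules section would look something like this:
spec:
  rules:
  - host: internal-lb.aws.com
    http:
      paths:
      - path: /bleble
        backend:
          serviceName: bleble-svc
          servicePort: 8080   # the Service's port, not the pod's targetPort
      - path: /hello-world
        backend:
          serviceName: hello-world-svc
          servicePort: 8080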
Hope this helps.
Found it!
The containerPort in the Deployments was set to 8000, and so was the targetPort of the services, but the person who wrote the Dockerfile exposed port 80, which was the reason it was getting the 502 Bad Gateway!
Thanks a lot as well to @Fei, who has been a fantastic helper!
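So the fix was to point the services at the port the container actually listens on, e.g.:
apiVersion: v1
kind: Service
metadata:
  name: bleble-svc
spec:
  type: NodePort
  ports:
  - port: 8080
    protocol: TCP
    targetPort: 80   # the port the container actually listens on (per the Dockerfile)
  selector:
    app: bleble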