Kubernetes ingress-nginx gives 502 error (Bad Gateway)

I have an EKS cluster for which I want:
- 1 Load Balancer per cluster,
- Ingress rules to direct to the right namespace and the right service.
I have been following this guide: https://www.digitalocean.com/community/tutorials/how-to-set-up-an-nginx-ingress-with-cert-manager-on-digitalocean-kubernetes
My deployments:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-world
  namespace: default
spec:
  replicas: 3
  selector:
    matchLabels:
      app: hello-world
  template:
    metadata:
      labels:
        app: hello-world
    spec:
      containers:
        - name: hello-world
          image: IMAGENAME
          ports:
            - containerPort: 8000
              name: hello-world
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: bleble
  namespace: default
spec:
  replicas: 3
  selector:
    matchLabels:
      app: bleble
  template:
    metadata:
      labels:
        app: bleble
    spec:
      containers:
        - name: bleble
          image: IMAGENAME
          ports:
            - containerPort: 8000
              name: bleble
The Services for those Deployments:
apiVersion: v1
kind: Service
metadata:
  name: hello-world-svc
spec:
  ports:
    - port: 8080
      protocol: TCP
      targetPort: 8000
  selector:
    app: hello-world
  type: NodePort
---
apiVersion: v1
kind: Service
metadata:
  name: bleble-svc
spec:
  ports:
    - port: 8080
      protocol: TCP
      targetPort: 8000
  selector:
    app: bleble
  type: NodePort
My Load balancer:
kind: Service
apiVersion: v1
metadata:
  name: ingress-nginx
  namespace: ingress-nginx
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-internal: "true"
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
spec:
  externalTrafficPolicy: Local
  type: LoadBalancer
  selector:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
  ports:
    - name: http
      port: 80
      targetPort: http
My ingress:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: simple-fanout-example
  namespace: default
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
    - host: internal-lb.aws.com
      http:
        paths:
          - path: /bleble
            backend:
              serviceName: bleble-svc
              servicePort: 80
          - path: /hello-world
            backend:
              serviceName: hello-world-svc
              servicePort: 80
I've set up the NGINX Ingress Controller with this: kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/nginx-0.24.1/deploy/mandatory.yaml
I am unsure why I get a 503 Service Temporarily Unavailable for one service and a 502 for the other... I would guess it's a problem with the ports or the namespace? In the guide, they don't define a namespace for the deployment...
Every resource is created correctly, and I think the ingress is actually working but is getting confused about where to go.
Thanks for your help!

In general, use externalTrafficPolicy: Cluster instead of Local. You can gain some performance (latency) improvement with Local, but you need to configure the pod allocation carefully, and misconfigurations there will give you 5xx errors. In addition, Cluster is the default value for externalTrafficPolicy.
In your Ingress you route /bleble to the service bleble, but your Service is actually named bleble-svc; please make them consistent. Also, you need to set servicePort to 8080, since that is the port you exposed in your Service configuration.
For an internal service like bleble-svc, a ClusterIP Service is good enough in your case, as it does not need external access. See the sketch below.
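For illustration, a minimal sketch of the adjusted manifests under those assumptions, reusing the names and ports from the question (the Ingress backend references the Service's port 8080, and the Service becomes a plain ClusterIP):
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: simple-fanout-example
  namespace: default
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
    - host: internal-lb.aws.com
      http:
        paths:
          - path: /bleble
            backend:
              serviceName: bleble-svc
              servicePort: 8080   # must match the Service port, not the containerPort
          - path: /hello-world
            backend:
              serviceName: hello-world-svc
              servicePort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: bleble-svc
spec:
  type: ClusterIP        # only the ingress controller needs to reach it, from inside the cluster
  ports:
    - port: 8080         # the port the Ingress backend points at
      protocol: TCP
      targetPort: 8000   # the containerPort of the bleble Deployment
  selector:
    app: bleble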
Hope this helps.

Found it!
The containerPort in the Deployments was set to 8000, and the targetPort of the Services as well, but the person who wrote the Dockerfile for the code exposed port 80, which was the reason for the 502 Bad Gateway!
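In other words, the Service has to target the port the container actually listens on. A minimal sketch of the fix, assuming the image really serves on port 80:
apiVersion: v1
kind: Service
metadata:
  name: hello-world-svc
spec:
  type: NodePort
  ports:
    - port: 8080
      protocol: TCP
      targetPort: 80   # must match the port the container actually listens on
  selector:
    app: hello-world
(The containerPort in the Deployment is informational, but it is worth changing it to 80 as well to avoid confusion.)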
Thanks a lot as well to #Fei who has been a fantastic helper!

Related

Expose services via Istio ingress gateway

I am new to Istio and I want to expose three services and route traffic to them based on the port number passed to "website.com:port" or on the subdomain.
Service and Deployment config files:
apiVersion: v1
kind: Service
metadata:
  name: visitor-service
  labels:
    app: visitor-service
spec:
  ports:
    - port: 8000
      nodePort: 30800
      targetPort: 8000
  selector:
    app: visitor-service
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: visitor-service
spec:
  replicas: 1
  selector:
    matchLabels:
      app: visitor-service
  template:
    metadata:
      labels:
        app: visitor-service
    spec:
      containers:
        - name: visitor-service
          image: visitor-service
          ports:
            - containerPort: 8000
second service:
apiVersion: v1
kind: Service
metadata:
  name: auth-service
  labels:
    app: auth-service
spec:
  ports:
    - port: 3004
      nodePort: 30304
      targetPort: 3004
  selector:
    app: auth-service
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: auth-service
spec:
  replicas: 1
  selector:
    matchLabels:
      app: auth-service
  template:
    metadata:
      labels:
        app: auth-service
    spec:
      containers:
        - name: auth-service
          image: auth-service
          ports:
            - containerPort: 3004
Third one:
apiVersion: v1
kind: Service
metadata:
  name: gateway
  labels:
    app: gateway
spec:
  ports:
    - port: 8080
      nodePort: 30808
      targetPort: 8080
  selector:
    app: gateway
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: gateway
spec:
  replicas: 1
  selector:
    matchLabels:
      app: gateway
  template:
    metadata:
      labels:
        app: gateway
    spec:
      containers:
        - name: gateway
          image: gateway
          ports:
            - containerPort: 8080
If someone could help with setting up the Gateway and VirtualService configuration, it would be great.
It seems like you simply want to expose your applications; for that purpose Istio seems like total overkill, since it comes with a lot of overhead that you won't be using.
Regardless of whether you want to use Istio or any other ingress controller (nginx, traefik, ...) as your default ingress, the following construct applies to all of them:
Expose the ingress controller via a Service of type NodePort or LoadBalancer, depending on your infrastructure. In a cloud environment (GKE, AKS, EKS, ...) the latter will most likely work best for you.
Once it is exposed, set up a DNS A record pointing to the external IP address. Afterwards you can start configuring your Ingress; depending on which ingress controller you chose, the following YAML may need some adjustments (the example is given for Istio):
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: istio
  name: ingress
spec:
  rules:
    - host: httpbin.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              serviceName: httpbin
              servicePort: 8000
If a request for something like httpbin.example.com comes in to your ingress controller, it is going to send the request to a service named httpbin on port 8000.
As can be seen in the YAML posted above, the rules and paths fields take lists (indicated by the - on the next line). To expose multiple services, simply add a new entry to the list, e.g.:
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: istio
  name: ingress
spec:
  rules:
    - host: httpbin.example.com
      http:
        paths:
          - path: /httpbin
            pathType: Prefix
            backend:
              serviceName: httpbin
              servicePort: 8000
          - path: /apache
            pathType: Prefix
            backend:
              serviceName: apache
              servicePort: 8080
This is going to send requests like httpbin.example.com/httpbin/ to httpbin and httpbin.example.com/apache/ to apache.
For further information see:
https://istio.io/latest/docs/tasks/traffic-management/ingress/kubernetes-ingress/
https://kubernetes.io/docs/concepts/services-networking/ingress/
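For completeness, a quick way to find the external IP to point the DNS A record at; this assumes a standard Istio installation, where the ingress gateway Service is named istio-ingressgateway in the istio-system namespace:
# Assumes a default Istio install: the ingress gateway Service is called
# istio-ingressgateway and lives in the istio-system namespace.
kubectl -n istio-system get service istio-ingressgateway
# The EXTERNAL-IP column is the address to use for the DNS A record.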

How do I make a forward-proxy server on k8s and ALB(or NLB)?

I created a forward proxy server on EKS pods behind an ALB (created by the AWS Load Balancer Controller). All pods can return a response on port 8118 through the ALB.
The resources like the Pods and the Ingress looked good to me. Then I tested whether the proxy server works with curl -Lx k8s-proxy-sample-domain.ap-uswest-1.elb.amazonaws.com:18118 ipinfo.io
Normally I would get a random IP address back from ipinfo.io, but I didn't... So I also tried port-forwarding, like this:
kubectl port-forward specifi-pod 8118:8118
Then I retried the proxied request against my local address:
curl -Lx localhost:8118 ipinfo.io
In this case it worked. I cannot figure out the reason. What is the difference between going through the ALB and port-forwarding? Should I use an NLB for some reason, or is something misconfigured?
My environment:
k8s version: v1.18.2
node type: fargate
Manifest
Here is my manifest.
---
apiVersion: v1
kind: Namespace
metadata:
  name: tor-proxy
---
apiVersion: apps/v1
kind: Deployment
metadata:
  namespace: tor-proxy
  name: tor-proxy-deployment
spec:
  selector:
    matchLabels:
      app.kubernetes.io/name: tor-proxy
  replicas: 5
  template:
    metadata:
      labels:
        app.kubernetes.io/name: tor-proxy
    spec:
      containers:
        - image: dperson/torproxy
          imagePullPolicy: Always
          name: tor-proxy
          ports:
            - containerPort: 8118
---
apiVersion: v1
kind: Service
metadata:
  labels:
    name: tor-proxy
  name: tor-proxy-service
  namespace: tor-proxy
spec:
  ports:
    - port: 18118
      targetPort: 8118
      protocol: TCP
  type: NodePort
  selector:
    app.kubernetes.io/name: tor-proxy
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  namespace: tor-proxy
  name: tor-proxy-ingress
  annotations:
    kubernetes.io/ingress.class: alb
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/target-type: ip
    alb.ingress.kubernetes.io/listen-ports: '[{"HTTP": 18118}]'
spec:
  rules:
    - http:
        paths:
          - path: /*
            backend:
              serviceName: tor-proxy-service
              servicePort: 18118
Use an NLB, not an ALB: an NLB works at layer 4 and passes the client's connection straight through the proxy toward the target site (preserving the client IP), whereas an ALB is a layer-7 HTTP load balancer and does not handle forward-proxy traffic.
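A minimal sketch of what exposing the proxy through an NLB could look like, reusing the names and ports from the manifests above. The aws-load-balancer-type annotation shown is the one understood by the in-tree AWS cloud provider; if you keep using the AWS Load Balancer Controller (or run on Fargate), the annotations may need adjusting:
apiVersion: v1
kind: Service
metadata:
  name: tor-proxy-service
  namespace: tor-proxy
  annotations:
    # Ask AWS for a Network Load Balancer instead of the default Classic ELB.
    service.beta.kubernetes.io/aws-load-balancer-type: "nlb"
spec:
  type: LoadBalancer
  ports:
    - port: 18118        # port exposed by the NLB
      targetPort: 8118   # port the torproxy container listens on
      protocol: TCP
  selector:
    app.kubernetes.io/name: tor-proxy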

Want to connect via Ingress in the argocd tutorial

I am currently working through these tutorials:
https://github.com/argoproj/argocd-example-apps/tree/master/guestbook
https://argoproj.github.io/argo-cd/getting_started/#5-register-a-cluster-to-deploy-apps-to-optional
My short-term milestone is to render the guestbook UI in a browser.
I'm trying to connect via Ingress, and it goes wrong.
The error message is like this:
Status: 502
The server encountered a temporary error and could not complete your request.
I suppose something's wrong around the Service and the Pod.
guestbook-ui-svc.yaml
apiVersion: v1
kind: Service
metadata:
  name: guestbook-ui-service
spec:
  ports:
    - port: 80
      targetPort: 80
  selector:
    app: guestbook-ui
guestbook-ui-ingress.yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: app-ingress
  labels:
    app: guestbook-ui
spec:
  rules:
    - http:
        paths:
          - path: /
            backend:
              serviceName: guestbook-ui-service
              servicePort: 80
guestbook-ui-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: guestbook-ui
spec:
  replicas: 1
  revisionHistoryLimit: 3
  selector:
    matchLabels:
      app: guestbook-ui
  template:
    metadata:
      labels:
        app: guestbook-ui
    spec:
      containers:
        - image: gcr.io/heptio-images/ks-guestbook-demo:0.2
          name: guestbook-ui
          ports:
            - containerPort: 80
I don't know which part I am missing; please let me know if anything is ambiguous or if you need more detail.
Thanks in advance!
Use this service instead.
apiVersion: v1
kind: Service
metadata:
  name: guestbook-ui-service
spec:
  type: NodePort
  ports:
    - port: 80
      targetPort: 80
  selector:
    app: guestbook-ui
It has type: NodePort added to it.
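After applying it, a quick way to check that the Service has endpoints and that the Ingress picked them up (names taken from the manifests above):
kubectl get service guestbook-ui-service
kubectl get endpoints guestbook-ui-service   # should list the Pod IP on port 80
kubectl describe ingress app-ingress         # the backend should now resolve to those endpoints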
You can find a really good example of how to deploy an app, expose it via a Service, and add an Ingress to it in the Kubernetes docs: Deploy a hello, world app.
Also, if you are having trouble understanding the difference between NodePort and ClusterIP, and what an Ingress is, I recommend reading Kubernetes NodePort vs LoadBalancer vs Ingress? When should I use what?

Kubernetes nginx ingress controller throws error when trying to obtain endpoints for service

I am trying to set up microservices on Kubernetes on Google Cloud Platform. I've created Deployment, ClusterIP Service, and Ingress configuration files.
First, after creating the cluster, I run this command to install the nginx ingress controller:
helm install my-nginx stable/nginx-ingress --set rbac.create=true
I use helm v3.
Then I apply the Deployment and ClusterIP Service configurations:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app-production-deployment
spec:
  replicas: 2
  selector:
    matchLabels:
      component: app-production
  template:
    metadata:
      labels:
        component: app-production
    spec:
      containers:
        - name: app-production
          image: eu.gcr.io/my-project/app:1.0
          ports:
            - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: app-production-cluser-ip-service
spec:
  type: ClusterIP
  selector:
    component: app-production
  ports:
    - port: 80
      targetPort: 80
      protocol: TCP
My ingress config is:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: ingress-service
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
    - host: app.example.com
      http:
        paths:
          - path: /
            backend:
              serviceName: app-production-cluster-ip-service
              servicePort: 80
I get this error in the ingress controller logs on Google Cloud Platform:
Error obtaining Endpoints for Service "default/app-production-cluster-ip-service": no object matching key "default/app-production-cluster-ip-service" in local store
But when I run the kubectl get endpoints command, the output is this:
NAME                               ENDPOINTS                     AGE
app-production-cluser-ip-service   10.60.0.12:80,10.60.1.13:80   17m
I am really not sure what I'm doing wrong.
The service name referenced in the Ingress does not match the Service you created ("cluser" vs. "cluster"). Please recreate the Service with the matching name and check again:
apiVersion: v1
kind: Service
metadata:
  name: app-production-cluster-ip-service
spec:
  type: ClusterIP
  selector:
    component: app-production
  ports:
    - port: 80
      targetPort: 80
      protocol: TCP
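Once the names match, a quick check (using the names from the manifests above) to confirm the controller can resolve the backend:
kubectl get endpoints app-production-cluster-ip-service   # should now exist under the corrected name
kubectl describe ingress ingress-service                  # the "no object matching key" error should be gone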

Configure Ingress Kubernetes - accessible only on single node

I have set up ingress on my Kubernetes cluster, which runs on VMware virtual machines, following the specifications here. All the ports are open and accessible.
https://github.com/nginxinc/kubernetes-ingress/tree/master/examples/complete-example
My master is x.x.x.10 and nodes are x.x.x.12 and x.x.x.13.
After creating the ingress and the controller, I need to find out where the nginx controller runs:
nginx-ingress-rc-kgfmd 1/1 Running 0 21h 172.16.5.5 x.x.x.12
So it usually runs on either x.x.x.12 or x.x.x.13, and then when I do this it hits my web service:
curl --resolve master.federated.fds:80:x.x.x.12 https://master.federated.fds/coffee
where master.federated.fds is the DNS-resolvable name of the master.
I need to make it work without relying on an IP address, using only the DNS-resolvable name, or at least with any of the node IPs.
E.g. http://node2.federated.fds/coffee; when I curl this I get a Connection refused error.
Updating with specifications
apiVersion: v1
kind: Service
metadata:
  name: coffee-svc
  labels:
    app: coffee
spec:
  ports:
    - port: 80
      targetPort: 80
      protocol: TCP
      name: http
      # nodePort: 30080
  type: NodePort
  selector:
    app: coffee
ingress
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: cafe-ingress
spec:
  rules:
    - host: jciamaster.federated.fds
      http:
        paths:
          - path: /tea
            backend:
              serviceName: tea-svc
              servicePort: 80
          - path: /coffee
            backend:
              serviceName: coffee-svc
              servicePort: 80
nginx ingress controller
apiVersion: v1
kind: ReplicationController
metadata:
  name: nginx-ingress-rc
  labels:
    app: nginx-ingress
spec:
  replicas: 1
  selector:
    app: nginx-ingress
  template:
    metadata:
      labels:
        app: nginx-ingress
    spec:
      containers:
        - image: nginxdemos/nginx-ingress:0.8.1
          imagePullPolicy: Always
          name: nginx-ingress
          ports:
            - containerPort: 80
              hostPort: 80
I see that port 80 is listening only on the node where the nginx pod runs and not on any other node. Could someone please let me know how to access the application through all node IPs, or through a URL like jciamaster.federated.fds?
Thanks,
Update:
I tried running the nginx controller with a Service, as suggested by Marc:
apiVersion: v1
kind: ReplicationController
metadata:
  name: nginx-ingress-rc
  labels:
    app: nginx-ingress
spec:
  replicas: 1
  selector:
    app: nginx-ingress
  template:
    metadata:
      labels:
        app: nginx-ingress
    spec:
      containers:
        - image: nginxdemos/nginx-ingress:0.8.1
          imagePullPolicy: Always
          name: nginx-ingress
          ports:
            - containerPort: 80
          # Uncomment the lines below to enable extensive logging and/or customization of
          # NGINX configuration with configmaps
          #args:
          #- -v=3
          #- -nginx-configmaps=default/nginx-config
---
apiVersion: v1
kind: Service
metadata:
  labels:
    name: nginx-ingress-label
  name: nginx-ing-svc
spec:
  ports:
    - port: 80
      targetPort: 80
      protocol: TCP
      name: http
      nodePort: 30000
  type: NodePort
  selector:
    name: nginx-ingress
When I hit http://x.x.x.:30000/coffee it just hangs and does nothing. Am I doing anything wrong?
You can expose the nginx controller Pod with a NodePort Service, then you can access it on all nodes.
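A minimal sketch of such a Service, reusing the names and labels from the manifests above. Note that the selector has to match the pod template label app: nginx-ingress; the Service in the update selects name: nginx-ingress, which matches no pods and would explain the hang:
apiVersion: v1
kind: Service
metadata:
  name: nginx-ing-svc
spec:
  type: NodePort
  selector:
    app: nginx-ingress    # must match the pod template labels of the controller
  ports:
    - name: http
      port: 80
      targetPort: 80
      nodePort: 30000     # every node then answers on <node-ip>:30000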