Kubernetes Nginx Ingress phpmyadmin 502 error

I am trying to solve a rather annoying problem I've run into with Kubernetes: when I try to reach phpMyAdmin on my server, it returns an Nginx 502 Bad Gateway error.
The structure of my cluster is as follows. I use an Nginx ingress LoadBalancer on DigitalOcean to get traffic into the cluster. Traffic then passes through my Ingress (first code block), which routes it across the subdomains.
When traffic goes to the phpmyadmin subdomain, the request is passed to the 'phpmyadmin-service' on service port 8085.
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: nginx-ingress
  annotations:
    kubernetes.io/ingress.class: nginx
    ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - host: "app1.example.com"
    http:
      paths:
      - path: /
        backend:
          serviceName: app1-service
          servicePort: 80
  - host: "phpmyadmin.example.com"
    http:
      paths:
      - path: /
        backend:
          serviceName: phpmyadmin-service
          servicePort: 8085
Then the service receives the request and passes it on to the 'phpmyadmin-deployment' Deployment, which runs the phpmyadmin/phpmyadmin:fpm Docker image.
apiVersion: apps/v1
kind: Deployment
metadata:
  creationTimestamp: null
  name: phpmyadmin-deployment
  labels:
    app: phpmyadmin
spec:
  selector:
    matchLabels:
      app: phpmyadmin
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: phpmyadmin
    spec:
      containers:
      - name: phpmyadmin
        image: phpmyadmin/phpmyadmin:fpm
        ports:
        - containerPort: 8087
        env:
        - name: PMA_ABSOLUTE_URI
          value: 'phpmyadmin.example.com'
        - name: PMA_HOST
          value: mysql
        - name: PMA_PORT
          value: "3306"
        - name: MYSQL_ROOT_PASSWORD
          valueFrom:
            secretKeyRef:
              name: mysql-secrets
              key: rootpw
---
apiVersion: v1
kind: Service
metadata:
  name: phpmyadmin-service
spec:
  type: ClusterIP
  selector:
    app: phpmyadmin
  ports:
  - port: 8085
    targetPort: 8087
So something is giving me the 502 Bad Gateway error, and I can't figure out what it is. Thanks in advance for replying!
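For anyone debugging the same thing, a quick way to narrow down a 502 (a sketch, assuming kubectl access; names taken from the manifests above, namespace and labels assumed from a default ingress-nginx install):
# confirm the Service selects the pod and maps 8085 -> 8087
kubectl describe service phpmyadmin-service
kubectl get endpoints phpmyadmin-service
# watch what the ingress controller logs when the request fails
kubectl logs -n ingress-nginx -l app.kubernetes.io/name=ingress-nginx --tail=20
One thing worth double-checking with the :fpm tag: php-fpm speaks FastCGI (on port 9000 by default), not HTTP, so an HTTP proxy like the NGINX ingress cannot talk to it directly without a web server in front of it.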

Related

Kubernetes nginx-ingress controller always 401 http response

I'm researching Kubernetes and trying to configure the nginx-ingress controller, so I created a YAML config file for it, like this:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-srv
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/use-regex: 'true'
spec:
  rules:
  - host: acme.com
    http:
      paths:
      - path: /api/platforms
        pathType: Prefix
        backend:
          service:
            name: platforms-clusterip-service
            port:
              number: 80
      - path: /api/c/platforms
        pathType: Prefix
        backend:
          service:
            name: command-clusterip-service
            port:
              number: 80
And I created the relevant services like this:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: platforms-depl
spec:
  replicas: 1
  selector:
    matchLabels:
      app: platformservice
  template:
    metadata:
      labels:
        app: platformservice
    spec:
      containers:
      - name: platformservice
        image: yuriyborovskyi91/platformsservice:latest
---
apiVersion: v1
kind: Service
metadata:
  name: platforms-clusterip-service
spec:
  type: ClusterIP
  selector:
    app: platformservice
  ports:
  - name: platformservice
    protocol: TCP
    port: 80
    targetPort: 80
And I added acme.com to the Windows hosts file, like:
127.0.0.1 acme.com
But when trying to access http://acme.com/api/platforms or any other API route, I receive a 401 HTTP error, which confuses me because I didn't configure any authorization; I'm using all default settings. If I call my service via its NodePort, everything works fine.
Output of my kubectl get services, where the service I'm trying to access is running, and the response of kubectl get services --namespace=ingress-nginx:
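A hedged way to see which layer returns the 401 (assuming the controller is reachable on 127.0.0.1, as the hosts-file entry implies):
curl -v -H "Host: acme.com" http://127.0.0.1/api/platforms
The verbose output shows the response headers, which usually reveals whether the 401 comes from nginx itself or from the upstream service; since the NodePort route works, the ingress layer is the first suspect.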

Why aren't a path /dev(/|$)(.*) and a rewrite annotation enough for a redirect in the Nginx ingress?

When I try to use a path other than / together with a rewrite annotation in an ingress, I get a timeout error in the browser. I need to be able to access my frontend at something like example.com/dev, while the frontend's pod receives the requests at /.
I use Azure K8s 1.22.6 and Nginx ingress 4.1.0.
Here are my resources:
apiVersion: v1
kind: Service
metadata:
  name: frontend-service
spec:
  selector:
    app: frontend
  type: LoadBalancer
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: frontend
  template:
    metadata:
      labels:
        app: frontend
    spec:
      containers:
      - name: frontend
        image: ealen/echo-server
        ports:
        - containerPort: 80
        imagePullPolicy: Always
And the Ingress. This configuration works as-is; the lines I actually need are left commented out:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-fa
  # annotations:
  #   nginx.ingress.kubernetes.io/use-regex: "true"
  #   nginx.ingress.kubernetes.io/rewrite-target: /$2
spec:
  ingressClassName: nginx
  rules:
  - http:
      paths:
      #- path: /dev(/|$)(.*)
      - path: /
        pathType: Prefix
        backend:
          service:
            name: frontend-service
            port:
              number: 80
If I use the rewrite annotation with $2 and the path /dev(/|$)(.*), I get ERR_CONNECTION_TIMED_OUT. What am I missing?
The problem was in the frontend Vue application. I added
  publicPath: '/dev/',
to vue.config.js.
P.S. I've skipped the echo service.
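For context, a minimal vue.config.js along those lines might look like this (a sketch; the rest of the file is assumed):
// vue.config.js
module.exports = {
  // serve built assets from /dev/ so they resolve behind the /dev prefix
  publicPath: '/dev/',
}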

Kubernetes Ingress does not work with Traefik

I created a Kubernetes cluster on Google Cloud Platform, installed Helm/Tiller on the cluster, and then installed Traefik with Helm as the official documentation says to do.
Now I'm trying to create an Ingress for a service, but if I add the annotation kubernetes.io/ingress.class: traefik, the load balancer for the Ingress is not created.
Without the annotation, it works with the default Ingress.
(The service type is NodePort.)
EDIT: I also tried this example on a clean Google Cloud Kubernetes cluster: https://supergiant.io/blog/using-traefik-as-ingress-controller-for-your-kubernetes-cluster/ but it's the same: when I use kubernetes.io/ingress.class: traefik, no load balancer is created for the Ingress.
my files are:
animals-svc.yaml:
---
apiVersion: v1
kind: Service
metadata:
  name: bear
spec:
  type: NodePort
  ports:
  - name: http
    targetPort: 80
    port: 80
  selector:
    app: animals
    task: bear
---
apiVersion: v1
kind: Service
metadata:
  name: moose
spec:
  type: NodePort
  ports:
  - name: http
    targetPort: 80
    port: 80
  selector:
    app: animals
    task: moose
---
apiVersion: v1
kind: Service
metadata:
  name: hare
  annotations:
    traefik.backend.circuitbreaker: "NetworkErrorRatio() > 0.5"
spec:
  type: NodePort
  ports:
  - name: http
    targetPort: 80
    port: 80
  selector:
    app: animals
    task: hare
animals-ingress.yaml:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: animals
  annotations:
    kubernetes.io/ingress.class: traefik
    # kubernetes.io/ingress.global-static-ip-name: "my-reserved-global-ip"
    # traefik.ingress.kubernetes.io/frontend-entry-points: http
    # traefik.ingress.kubernetes.io/redirect-entry-point: http
    # traefik.ingress.kubernetes.io/redirect-permanent: "true"
spec:
  rules:
  - host: hare.minikube
    http:
      paths:
      - path: /
        backend:
          serviceName: hare
          servicePort: http
  - host: bear.minikube
    http:
      paths:
      - path: /
        backend:
          serviceName: bear
          servicePort: http
  - host: moose.minikube
    http:
      paths:
      - path: /
        backend:
          serviceName: moose
          servicePort: http
animals-deployment.yaml:
---
kind: Deployment
apiVersion: extensions/v1beta1
metadata:
  name: bear
  labels:
    app: animals
    animal: bear
spec:
  replicas: 2
  selector:
    matchLabels:
      app: animals
      task: bear
  template:
    metadata:
      labels:
        app: animals
        task: bear
        version: v0.0.1
    spec:
      containers:
      - name: bear
        image: supergiantkir/animals:bear
        ports:
        - containerPort: 80
---
kind: Deployment
apiVersion: extensions/v1beta1
metadata:
  name: moose
  labels:
    app: animals
    animal: moose
spec:
  replicas: 2
  selector:
    matchLabels:
      app: animals
      task: moose
  template:
    metadata:
      labels:
        app: animals
        task: moose
        version: v0.0.1
    spec:
      containers:
      - name: moose
        image: supergiantkir/animals:moose
        ports:
        - containerPort: 80
---
kind: Deployment
apiVersion: extensions/v1beta1
metadata:
  name: hare
  labels:
    app: animals
    animal: hare
spec:
  replicas: 2
  selector:
    matchLabels:
      app: animals
      task: hare
  template:
    metadata:
      labels:
        app: animals
        task: hare
        version: v0.0.1
    spec:
      containers:
      - name: hare
        image: supergiantkir/animals:hare
        ports:
        - containerPort: 80
The services are created, but the ingress load balancer is not.
However, if I remove the line kubernetes.io/ingress.class: traefik, it works with the default Kubernetes ingress.
Traefik does not create a load balancer for you by default.
As the HTTP(S) load balancing with Ingress documentation mentions:
"When you create an Ingress object, the GKE ingress controller creates a Google Cloud Platform HTTP(S) load balancer and configures it according to the information in the Ingress and its associated Services."
This all applies to the GKE ingress controller (gce); more info about gce can be found here: https://github.com/kubernetes/ingress-gce
If you would like to use Traefik as your ingress, you have to expose the Traefik service with type: LoadBalancer.
Example:
apiVersion: v1
kind: Service
metadata:
  name: traefik
spec:
  type: LoadBalancer
  selector:
    k8s-app: traefik-ingress-lb
  ports:
  - port: 80
    targetPort: 80
More info, with plenty of explanatory diagrams and a real working example, can be found in the article Exposing Kubernetes Services to the internet using Traefik Ingress Controller.
Hope this helps.
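Once that Service is applied, provisioning the external IP on GKE can take a minute; a standard way to watch for it:
kubectl get service traefik --watch
The load balancer is ready when EXTERNAL-IP changes from <pending> to an address.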
You can try adding more annotations, as below:
traefik.ingress.kubernetes.io/frontend-entry-points: http,https
traefik.ingress.kubernetes.io/redirect-entry-point: https
traefik.ingress.kubernetes.io/redirect-permanent: "true"
Like this:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: traefik-dashboard-ingress
  namespace: traefik
  annotations:
    kubernetes.io/ingress.class: traefik
    traefik.ingress.kubernetes.io/frontend-entry-points: http,https
    traefik.ingress.kubernetes.io/redirect-entry-point: https
    traefik.ingress.kubernetes.io/redirect-permanent: "true"
spec:
  rules:
  - host: traefik-ui.example.com
    http:
      paths:
      - path: /
        backend:
          serviceName: traefik-dashboard
          servicePort: 8080

Kubernetes nginx ingress + oauth2 external auth timing out

I am attempting to protect a service's status page with an oauth2_proxy, using Azure AD as the external auth provider. Currently, if I browse to the public URL of the app (https://sub.domain.com/service/hangfire), I get a 504 gateway timeout, where it should be directing me to authenticate.
I had been mostly following this guide for reference: https://msazure.club/protect-kubernetes-webapps-with-azure-active-directory-aad-authentication/
If I disable the annotations that direct the authentication, I can reach the public status page without a problem. If I browse to https://sub.domain.com/oauth2, I get a prompt to authenticate with my provider, which I would expect. I am not sure where the issue lies in the ingress config, and I was unable to find any similar cases online, on Stack Overflow or otherwise.
In this case, everything (oauth deployment, service, and ingress rules) lives in a 'dev' namespace except the actual ingress deployment, which lives in its own namespace. I don't suspect this makes a difference, but SSL termination is handled by a gateway outside the cluster.
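For what it's worth, the proxy's auth endpoint can be tested through the ingress directly (oauth2_proxy is documented to answer 401 when unauthenticated and 202 with a valid session):
curl -v https://sub.domain.com/oauth2/auth
Any response at all here means the /oauth2 ingress path itself is healthy, and the timeout is specific to the auth-url wiring.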
oauth2 deployment:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: oauth2-proxy
spec:
  replicas: 1
  selector:
    matchLabels:
      app: oauth2-proxy
  template:
    metadata:
      labels:
        app: oauth2-proxy
    spec:
      containers:
      - name: oauth2-proxy
        image: quay.io/pusher/oauth2_proxy:v3.2.0
        imagePullPolicy: IfNotPresent
        args:
        - --provider=azure
        - --email-domain=domain.com
        - --upstream=http://servicename
        - --http-address=0.0.0.0:4180
        - --azure-tenant=id
        - --client-id=id
        - --client-secret=number
        env:
        - name: OAUTH2_PROXY_COOKIE_SECRET
          value: secret
        ports:
        - containerPort: 4180
          protocol: TCP
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: oauth2-proxy
  name: oauth2-proxy
spec:
  ports:
  - name: http
    port: 4180
    protocol: TCP
    targetPort: 4180
  selector:
    app: oauth2-proxy
Ingress rules:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: service-ingress1
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/auth-url: https://sub.domain.com/oauth2/auth"
    nginx.ingress.kubernetes.io/auth-signin: https://sub.domain.com/oauth2/start?rd=$https://sub.domain.com/service/hangfire"
spec:
  rules:
  - host: sub.domain.com
    http:
      paths:
      - path: /service/hangfire
        backend:
          serviceName: service
          servicePort: 80
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: service-oauth2-proxy
  annotations:
    kubernetes.io/ingress.class: nginx
spec:
  rules:
  - host: sub.domain.com
    http:
      paths:
      - path: /oauth2
        backend:
          serviceName: oauth2-proxy
          servicePort: 4180
I am getting 504 errors when I browse to the URL, but I do not see any errors in the ingress pods.
I ended up finding the resolution here: https://github.com/helm/charts/issues/5958. I had to use the internal service address for the auth-url, which I had not seen mentioned anywhere else:
nginx.ingress.kubernetes.io/auth-url: http://oauth2-proxy.development.svc.cluster.local:4180/oauth2/auth
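For reference, the resulting annotation pair looks like this (a sketch; the in-cluster DNS name assumes the proxy's Service is named oauth2-proxy in the development namespace):
nginx.ingress.kubernetes.io/auth-url: http://oauth2-proxy.development.svc.cluster.local:4180/oauth2/auth
nginx.ingress.kubernetes.io/auth-signin: https://sub.domain.com/oauth2/start?rd=$escaped_request_uri
The auth-url is fetched by the ingress controller itself, from inside the cluster, which is why an external hostname there can time out while the internal service address works.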
This is what I've been doing with my OAuth proxy for Azure AD:
annotations:
  kubernetes.io/ingress.class: "nginx"
  ingress.kubernetes.io/ssl-redirect: "true"
  nginx.ingress.kubernetes.io/auth-url: "https://$host/oauth2/auth"
  nginx.ingress.kubernetes.io/auth-signin: "https://$host/oauth2/start?rd=$escaped_request_uri"
And I've been using this OAuth proxy:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: oauth2-proxy
  namespace: kube-system
spec:
  replicas: 1
  selector:
    matchLabels:
      app: oauth2-proxy
  template:
    metadata:
      labels:
        app: oauth2-proxy
    spec:
      containers:
      - env:
        - name: OAUTH2_PROXY_PROVIDER
          value: azure
        - name: OAUTH2_PROXY_AZURE_TENANT
          value: xxx
        - name: OAUTH2_PROXY_CLIENT_ID
          value: yyy
        - name: OAUTH2_PROXY_CLIENT_SECRET
          value: zzz
        - name: OAUTH2_PROXY_COOKIE_SECRET
          value: anyrandomstring
        - name: OAUTH2_PROXY_HTTP_ADDRESS
          value: "0.0.0.0:4180"
        - name: OAUTH2_PROXY_UPSTREAM
          value: "http://where_to_redirect_to:443"
        image: machinedata/oauth2_proxy:latest
        imagePullPolicy: IfNotPresent
        name: oauth2-proxy
        ports:
        - containerPort: 4180
          protocol: TCP
My setup is similar to 4c74356b41's
oauth2-proxy deployment
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  labels:
    app: oauth2-proxy
  name: oauth2-proxy
  namespace: monitoring
spec:
  replicas: 1
  selector:
    matchLabels:
      app: oauth2-proxy
  template:
    metadata:
      labels:
        app: oauth2-proxy
    spec:
      containers:
      - args:
        - --azure-tenant=TENANT-GUID
        - --email-domain=company.com
        - --http-address=0.0.0.0:4180
        - --provider=azure
        - --upstream=file:///dev/null
        env:
        - name: OAUTH2_PROXY_CLIENT_ID
          valueFrom:
            secretKeyRef:
              key: client-id
              name: oauth2-proxy
        - name: OAUTH2_PROXY_CLIENT_SECRET
          valueFrom:
            secretKeyRef:
              key: client-secret
              name: oauth2-proxy
        - name: OAUTH2_PROXY_COOKIE_SECRET
          valueFrom:
            secretKeyRef:
              key: cookie-secret
              name: oauth2-proxy
        image: quay.io/pusher/oauth2_proxy:v3.1.0
        name: oauth2-proxy
oauth2-proxy service
apiVersion: v1
kind: Service
metadata:
  labels:
    app: oauth2-proxy
  name: oauth2-proxy
  namespace: monitoring
spec:
  ports:
  - name: http
    port: 80
    protocol: TCP
    targetPort: http
  selector:
    app: oauth2-proxy
  type: ClusterIP
oauth2-proxy ingress
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: nginx
  labels:
    app: oauth2-proxy
  name: oauth2-proxy
  namespace: monitoring
spec:
  rules:
  - host: myapp.hostname.net
    http:
      paths:
      - backend:
          serviceName: oauth2-proxy
          servicePort: 80
        path: /oauth2
oauth2-proxy configuration
apiVersion: v1
kind: Secret
metadata:
  labels:
    app: oauth2-proxy
  name: oauth2-proxy
  namespace: monitoring
data:
  # Values below are fake
  client-id: AAD_CLIENT_ID
  client-secret: AAD_CLIENT_SECRET
  cookie-secret: COOKIE_SECRET
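Note that values under data: must be base64-encoded. A hedged alternative is to let kubectl do the encoding (names matching the manifests above):
kubectl create secret generic oauth2-proxy \
  --namespace=monitoring \
  --from-literal=client-id=AAD_CLIENT_ID \
  --from-literal=client-secret=AAD_CLIENT_SECRET \
  --from-literal=cookie-secret=COOKIE_SECRET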
Application using AAD Ingress
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/auth-signin: https://$host/oauth2/start?rd=$request_uri
    nginx.ingress.kubernetes.io/auth-url: https://$host/oauth2/auth
  labels:
    app: myapp
  name: myapp
  namespace: monitoring
spec:
  rules:
  - host: myapp.hostname.net
    http:
      paths:
      - backend:
          serviceName: myapp
          servicePort: 80
        path: /
  tls:
  - hosts:
    - myapp.hostname.net
An additional step is to add the redirect URI to the AAD app registration: navigate to your AAD App Registration in the Azure portal > Authentication > add https://myapp.hostname.net/oauth2/callback to Redirect URIs > Save.

Traefik Ingress bad rule

I am working with Kubernetes on Google Cloud, trying to set up Traefik as the Ingress controller for the cluster. I based the code on the official docs (https://docs.traefik.io/user-guide/kubernetes/), but I get an error with the rule for PathPrefixStrip.
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: auth-api
  labels:
    app: auth-api
spec:
  replicas: 2
  selector:
    matchLabels:
      app: auth-api
  template:
    metadata:
      labels:
        app: auth-api
        version: v0.0.1
    spec:
      containers:
      - name: auth-api
        image: gcr.io/r10c-dev/auth-api:v0.1
        ports:
        - containerPort: 3000
        env:
        - name: AMQP_SERVICE
          value: broker:5672
        - name: CACHE_SERVICE
          value: session-cache
---
apiVersion: v1
kind: Service
metadata:
  name: auth-api
spec:
  ports:
  - name: http
    targetPort: 80
    port: 3000
  type: NodePort
  selector:
    app: auth-api
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: main-ingress
  annotations:
    kubernetes.io/ingress.class: traefik
    traefik.frontend.rule.type: PathPrefixStrip
    kubernetes.io/ingress.global-static-ip-name: "web-static-ip"
spec:
  rules:
  - http:
      paths:
      - path: /auth
        backend:
          serviceName: auth-api
          servicePort: http
In the GKE console it seems the deployment is linked to the service and the ingress, but when I try to access the IP, the server returns a 502 error.
Also, I am using a static IP:
gcloud compute addresses create web-static-ip --global
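A hedged first check for the 502 (names from the manifests above): the container listens on 3000, but the Service's targetPort is 80, so the endpoints likely point at a port nothing listens on. Verifying directly:
kubectl get endpoints auth-api
kubectl port-forward deployment/auth-api 3000:3000
curl -v http://localhost:3000/
If the port-forward works while the ingress does not, setting the Service's targetPort to 3000 is the likely fix.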