Ingress returns 503 - Kubernetes

I want to configure the Ingress to work with my domain name.
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: app-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /$2
spec:
  tls:
  - hosts:
    - example.com
    secretName: app-tls
  rules:
  - host: example.com
    http:
      paths:
      - path: /my-api1(/|$)(.*)
        backend:
          serviceName: app1
          servicePort: 80
      - path: /my-api2
        backend:
          serviceName: app2
          servicePort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: my-api
spec:
  selector:
    app: my-api
  ports:
  - name: app1
    port: 3000
    targetPort: 80
  - name: app2
    port: 4000
    targetPort: 80
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: user-api
spec:
  selector:
    matchLabels:
      app: user-api
  template:
    metadata:
      labels:
        app: user-api
    spec:
      hostNetwork: true
      dnsPolicy: ClusterFirstWithHostNet
      containers:
      - name: app1
        image: XXX
        ports:
        - name: app1
          containerPort: 3000
      - name: app2
        image: XXX
        ports:
        - name: app2
          containerPort: 4000
I can reach the app1 service directly via serverIP:3000 (for example, 172.16.111.211:3000/my-api1), but remotely it always returns a 503 status code (curl https://example.com/my-api1).
# kubectl describe ingress app-ingress
Name:             app-ingress
Namespace:        default
Address:          serverIP
Default backend:  default-http-backend:80 (<error: endpoints "default-http-backend" not found>)
TLS:
  app-tls terminates example.com
Rules:
  Host         Path  Backends
  ----         ----  --------
  example.com
               /my-api1(/|$)(.*)   app1:80 (<error: endpoints "app1" not found>)
               /my-api2            app2:80 (<error: endpoints "app2" not found>)

The first issue is that the Service name does not match: you created a Service named my-api, but in the Ingress you refer to backends app1 and app2, which do not exist.
The second issue is that the selector labels of your Deployment and Service do not match: the Deployment's pods are labelled user-api, but the Service selector says my-api.
If you want both applications exposed on port 80 through the Ingress, you have to create two different Services and reference those in the Ingress.
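A quick way to confirm both problems is to check whether the Services the Ingress points at exist and actually have endpoints; missing or empty endpoints are exactly what makes the NGINX ingress controller answer with 503. A minimal check (assuming everything is in the default namespace) could look like this:

kubectl get svc app1 app2                  # do the Services referenced by the Ingress exist?
kubectl get endpoints app1 app2            # do they select any ready pods?
kubectl get pods -l app=user-api -o wide   # are the pods labelled the way the Service selector expects?

If a Service is missing, the Ingress backend name is wrong; if it exists but its endpoints are empty, the selector does not match the pod labels.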
Below should work for your requirement.
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: app-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /$2
spec:
  tls:
  - hosts:
    - example.com
    secretName: app-tls
  rules:
  - host: example.com
    http:
      paths:
      - path: /my-api1(/|$)(.*)
        backend:
          serviceName: app1
          servicePort: 80
      - path: /my-api2
        backend:
          serviceName: app2
          servicePort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: app1
spec:
  selector:
    app: user-api
  ports:
  - name: app1
    port: 80
    targetPort: 3000
---
apiVersion: v1
kind: Service
metadata:
  name: app2
spec:
  selector:
    app: user-api
  ports:
  - name: app2
    port: 80
    targetPort: 4000
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: user-api
spec:
  selector:
    matchLabels:
      app: user-api
  template:
    metadata:
      labels:
        app: user-api
    spec:
      hostNetwork: true
      dnsPolicy: ClusterFirstWithHostNet
      containers:
      - name: app1
        image: XXX
        ports:
        - name: app1
          containerPort: 3000
      - name: app2
        image: XXX
        ports:
        - name: app2
          containerPort: 4000
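For reference, the rewrite-target: /$2 annotation applies to every path in this Ingress: with the path /my-api1(/|$)(.*), the second capture group is what the backend receives, so a request like /my-api1/users reaches app1 as /users (the /my-api2 path has no capture groups, so $2 is empty and it is effectively rewritten to /). Once example.com resolves to the ingress address, a quick sanity check might look like this (the sub-paths are only hypothetical examples):

curl -k https://example.com/my-api1/status   # forwarded to app1 as /status
curl -k https://example.com/my-api2          # forwarded to app2

-k is only needed while the TLS certificate for example.com is self-signed.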

You made a mistake on port and targetPort.
It should be:
apiVersion: v1
kind: Service
metadata:
  name: my-api
spec:
  selector:
    app: my-api
  ports:
  - name: app1
    port: 3000
    targetPort: 3000
  - name: app2
    port: 4000
    targetPort: 4000
---
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: app-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /$2
spec:
  tls:
  - hosts:
    - example.com
    secretName: app-tls
  rules:
  - host: example.com
    http:
      paths:
      - path: /my-api1(/|$)(.*)
        backend:
          serviceName: my-api
          servicePort: app1
      - path: /my-api2
        backend:
          serviceName: my-api
          servicePort: app2
port is the port the Service itself exposes.
targetPort is the port the pod's container is listening on.
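As a side note, targetPort can also reference a named container port instead of a number, which keeps the Service definition stable if the container port ever changes. A minimal sketch, assuming the containerPort names app1 and app2 from the Deployment in the question:

ports:
- name: app1
  port: 3000
  targetPort: app1   # resolved to containerPort 3000 on the pod
- name: app2
  port: 4000
  targetPort: app2   # resolved to containerPort 4000 on the pod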

Related

Keycloak oauth-proxy with mocked server mocklab on Kubernetes

When trying to set up Keycloak OAuth2 on my local machine with minikube (VirtualBox driver), using the mocked server from
https://www.mocklab.io/docs/oauth2-mock/
I get the following error from oauth2-proxy:
[2022/11/04 15:09:14] [provider.go:55] Performing OIDC Discovery...
[2022/11/04 15:09:17] [main.go:60] ERROR: Failed to initialise OAuth2 Proxy: error intiailising provider: could not create provider data: error building OIDC ProviderVerifier: could not get verifier builder: error while discovery OIDC configuration: failed to discover OIDC configuration: error performing request: Get "https://keycloak.192.168.59.103.nip.io/realms/skyrealm/.well-known/openid-configuration": dial tcp 192.168.59.103:443: connect: no route to host
Basic auth with a generated password is working.
minikube version: v1.27.1
Kubernetes version: v1.25.2
Docker version: 20.10.18
Operating System: Ubuntu 22.04
Keycloak deployment
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: keycloak
  namespace: default
  labels:
    app: keycloak
spec:
  replicas: 1
  selector:
    matchLabels:
      app: keycloak
  template:
    metadata:
      labels:
        app: keycloak
    spec:
      containers:
      - name: keycloak
        image: quay.io/keycloak/keycloak:19.0.3
        args: ["start"]
        imagePullPolicy: Always
        env:
        - name: KEYCLOAK_ADMIN
          value: null
          valueFrom:
            secretKeyRef:
              key: keycloakuser
              name: skysecrets
        - name: KEYCLOAK_ADMIN_PASSWORD
          value: null
          valueFrom:
            secretKeyRef:
              key: keycloakpass
              name: skysecrets
        - name: KC_PROXY
          value: "edge"
        - name: KC_HOSTNAME_STRICT
          value: "false"
        ports:
        - name: http
          containerPort: 8080
        readinessProbe:
          httpGet:
            path: /realms/master
            port: 8080
service:
---
apiVersion: v1
kind: Service
metadata:
  name: keycloak
  labels:
    app: keycloak
spec:
  ports:
  - name: http
    port: 8080
    targetPort: 8080
  selector:
    app: keycloak
  type: LoadBalancer
keycloak ingress:
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: keycloak-ingress
  namespace: default
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /$2
    # konghq.com/strip-path: "true"
spec:
  ingressClassName: nginx
  tls:
  - hosts:
    - keycloak.192.168.59.104.nip.io
  rules:
  - host: keycloak.192.168.59.104.nip.io
    http:
      paths:
      - path: /keycloak(/|$)(.*)
        pathType: Prefix
        backend:
          service:
            name: keycloak
            port:
              number: 8080
oauth-proxy setup:
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    k8s-app: oauth2-proxy
  name: oauth2-proxy
  namespace: kube-system
spec:
  replicas: 1
  selector:
    matchLabels:
      k8s-app: oauth2-proxy
  template:
    metadata:
      labels:
        k8s-app: oauth2-proxy
    spec:
      containers:
      - args:
        - --provider=keycloak-oidc
        - --client-id=YYY
        - --client-secret=XXX
        - --redirect-url=https://oauth.mocklab.io/oauth/authorize
        - --oidc-issuer-url=https://keycloak.192.168.59.103.nip.io/realms/skyrealm
        # for supporting PKCE methods from keycloak
        - --code-challenge-method="S256"
        # for error with x509 certificates
        - --ssl-insecure-skip-verify=true
        - --ssl-upstream-insecure-skip-verify=true
        # additional required parameters
        - --email-domain=*
        - --cookie-secure=false
        - --cookie-secret=ZZZ
        image: quay.io/oauth2-proxy/oauth2-proxy:latest
        imagePullPolicy: Always
        name: oauth2-proxy
        ports:
        - containerPort: 4180
          protocol: TCP
---
apiVersion: v1
kind: Service
metadata:
  labels:
    k8s-app: oauth2-proxy
  name: oauth2-proxy
  namespace: kube-system
spec:
  ports:
  - name: http
    port: 4180
    protocol: TCP
    targetPort: 4180
  selector:
    k8s-app: oauth2-proxy
service ingress:
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: sky-oauth-ingress
  namespace: default
  annotations:
    nginx.ingress.kubernetes.io/auth-url: "https://$host/oauth2/auth"
    nginx.ingress.kubernetes.io/auth-signin: "https://$host/oauth2/start?rd=$escaped_request_uri"
    nginx.ingress.kubernetes.io/rewrite-target: /$2
spec:
  ingressClassName: nginx
  rules:
  - http:
      paths:
      - path: /oauth/offer(/|$)(.*)
        pathType: Prefix
        backend:
          service:
            name: sky-offer-service
            port:
              number: 5552
      - path: /oauth/message(/|$)(.*)
        pathType: Prefix
        backend:
          service:
            name: sky-message-service
            port:
              number: 5553
      - path: /oauth/auth(/|$)(.*)
        pathType: Prefix
        backend:
          service:
            name: auth-service
            port:
              number: 9100
I set up a second Ingress for basic auth with a generated password, and it is working:
spec:
  ingressClassName: nginx
  rules:
  - http:
      paths:
      - path: /offer(/|$)(.*)
        pathType: Prefix
        backend:
          service:
            name: sky-offer-service
            port:
              number: 5552
      - path: /message(/|$)(.*)
        pathType: Prefix
        backend:
          service:
            name: sky-message-service
            port:
              number: 5553
      - path: /auth(/|$)(.*)
        pathType: Prefix
        backend:
          service:
            name: auth-service
            port:
              number: 9100
I want to be able to test OAuth2 locally before I deploy it to the cloud.
How do I set this up correctly?
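One way to narrow down the "no route to host" error is to check whether the OIDC discovery URL that oauth2-proxy is configured with is reachable from inside the cluster at all, for example from a throwaway curl pod (a sketch, assuming the curlimages/curl image can be pulled):

kubectl run curl-test --rm -it --restart=Never --image=curlimages/curl -- \
  curl -kv https://keycloak.192.168.59.103.nip.io/realms/skyrealm/.well-known/openid-configuration

Note also that the --oidc-issuer-url above points at 192.168.59.103 while the Keycloak Ingress host is keycloak.192.168.59.104.nip.io, so both addresses are worth checking against the current minikube IP (minikube ip).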

502 Bad Gateway with GKE nginx

I'm trying to forward to my API on /api/ and my UI on /.
My UI is all working fine, and the logs for the nginx ingress and the API (which I've tested with port forwarding) all look fine. But when I curl or browse to IP/api/healthz (which would be the health check for Hasura), I get 502 Bad Gateway.
Service for hasura:
apiVersion: v1
kind: Service
metadata:
  name: hasura-svc
  labels:
    app: hasura-app
spec:
  selector:
    app: hasura-app
  type: NodePort
  ports:
  - port: 8080
    targetPort: 8080
    protocol: TCP
Deployment for Hasura:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hasura-dep
  labels:
    hasuraService: custom
    app: hasura-app
spec:
  selector:
    matchLabels:
      app: hasura-app
  replicas: 1
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: hasura-app
    spec:
      containers:
      - name: hasura-app
        image: hasura/graphql-engine:v1.3.0
        imagePullPolicy: IfNotPresent
        command: ["graphql-engine"]
        args: ["serve", "--enable-console"]
        envFrom:
        - configMapRef:
            name: hasura-cnf
        ports:
        - containerPort: 8080
          protocol: TCP
        resources: {}
nginx ingress:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: nginx
  annotations:
    kubernetes.io/ingress.global-static-ip-name: "gcloud-ip"
spec:
  backend:
    serviceName: nginx
    servicePort: 80
  rules:
  - http:
      paths:
      - path: /api/*
        backend:
          serviceName: hasura-svc
          servicePort: 8080
      - path: /*
        backend:
          serviceName: client
          servicePort: 3333
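A useful first step with a 502 like this is to work out whether it comes from the Hasura backend or from the ingress layer, for example by hitting the Service from inside the cluster and bypassing the ingress entirely (a sketch, assuming the default namespace and that the curlimages/curl image can be pulled):

kubectl get endpoints hasura-svc           # does the Service actually select the Hasura pod?
kubectl run curl-test --rm -it --restart=Never --image=curlimages/curl -- \
  curl -v http://hasura-svc:8080/healthz   # talks to the Service directly, no ingress involved

If the in-cluster request succeeds, the problem is on the ingress side (path handling, health checks, or the backend service/port in the Ingress); if it fails, the Service or pod itself is the culprit.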

Kubernetes routing to a specific pod based on a URL parameter

What I need looks like this:
apiVersion: apps/v1beta1
kind: StatefulSet
metadata:
  name: http
spec:
  serviceName: "nginx-set"
  replicas: 2
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: gcr.io/google_containers/nginx-slim:0.8
        ports:
        - containerPort: 80
          name: http
---
apiVersion: v1
kind: Service
metadata:
  name: nginx-set
  labels:
    app: nginx
spec:
  ports:
  - port: 80
    name: http
  clusterIP: None
  selector:
    app: nginx
Here is the interesting part:
apiVersion: voyager.appscode.com/v1beta1
kind: Ingress
metadata:
  name: test-ingress
  namespace: default
spec:
  rules:
  - host: appscode.example.com
    http:
      paths:
      - path: '/testPath'
        backend:
          hostNames:
          - web-0
          serviceName: nginx-set #! There is no extra service. This
          servicePort: '80'      # is the Statefulset's Headless Service
I'm able to target a specific pod by setting hostNames based on the URL path.
Now I'd like to know whether it's possible in Kubernetes to create a rule like this:
apiVersion: voyager.appscode.com/v1beta1
kind: Ingress
metadata:
  name: test-ingress
  namespace: default
spec:
  rules:
  - host: appscode.example.com
    http:
      paths:
      - path: '/connect/(\d+)'
        backend:
          hostNames:
          - web-(result of regex match with \d+)
          serviceName: nginx-set #! There is no extra service. This
          servicePort: '80'      # is the Statefulset's Headless Service
or whether I have to write a rule for each pod.
Sorry, that isn't possible; the best solution is to create multiple paths, each one referencing one pod:
apiVersion: voyager.appscode.com/v1beta1
kind: Ingress
metadata:
  name: test-ingress
  namespace: default
spec:
  rules:
  - host: appscode.example.com
    http:
      paths:
      - path: '/connect/0'
        backend:
          hostNames:
          - web-0
          serviceName: nginx-set
          servicePort: '80'
      - path: '/connect/1'
        backend:
          hostNames:
          - web-1
          serviceName: nginx-set
          servicePort: '80'
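If you are not tied to Voyager's hostNames feature, a plain-Kubernetes alternative with the same per-path layout is to create one Service per pod, selecting on the statefulset.kubernetes.io/pod-name label that StatefulSet pods get automatically, and point each Ingress path at its own Service. A sketch for web-0 (web-1 would be analogous; the Service name here is just a placeholder):

apiVersion: v1
kind: Service
metadata:
  name: nginx-web-0
spec:
  selector:
    statefulset.kubernetes.io/pod-name: web-0
  ports:
  - name: http
    port: 80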

Cannot access sidecar container in Kubernetes

I have the following hello world deployment.
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: hello-deployment
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: hello
    spec:
      containers:
      - name: hello
        image: hello:v0.0.1
        imagePullPolicy: Always
        args:
        - /hello
        ports:
        - containerPort: 3000
---
apiVersion: v1
kind: Service
metadata:
  name: hello-service
spec:
  ports:
  - port: 80
    targetPort: 3000
    protocol: TCP
  selector:
    app: hello
  type: NodePort
And I have the ingress controller deployed with a sidecar container:
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  labels:
    app: alb-ingress-controller
  name: alb-ingress-controller
  namespace: kube-system
spec:
  replicas: 1
  selector:
    matchLabels:
      app: alb-ingress-controller
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: alb-ingress-controller
    spec:
      containers:
      - name: server
        image: alb-ingress-controller:v0.0.1
        imagePullPolicy: Always
        args:
        - /server
        - --ingress-class=alb
        - --cluster-name=AAA
        - --aws-max-retries=20
        - --healthz-port=10254
        ports:
        - containerPort: 10254
          protocol: TCP
      - name: alb-sidecar
        image: sidecar:v0.0.1
        imagePullPolicy: Always
        args:
        - /sidecar
        - --port=5000
        ports:
        - containerPort: 5000
          protocol: TCP
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      securityContext: {}
      terminationGracePeriodSeconds: 30
      serviceAccountName: alb-ingress
      serviceAccount: alb-ingress
---
apiVersion: v1
kind: Service
metadata:
  name: alb-ingress-controller-service
spec:
  ports:
  - port: 80
    targetPort: 5000
    protocol: TCP
  selector:
    app: alb-ingress-controller
  type: NodePort
And I have the Ingress here:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: test-alb
  annotations:
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/listen-ports: '[{"HTTP":80,"HTTPS": 443}]'
    alb.ingress.kubernetes.io/subnets: AAA
    alb.ingress.kubernetes.io/security-groups: AAA
  labels:
    app: test-alb
spec:
  rules:
  - http:
      paths:
      - path: /hello
        backend:
          serviceName: hello-service
          servicePort: 80
      - path: /alb-sidecar
        backend:
          serviceName: alb-ingress-controller-service
          servicePort: 80
I would expect to reach /alb-sidecar the same way I reach /hello, but only the /hello endpoint works for me; I keep getting 502 Bad Gateway for the /alb-sidecar endpoint. The sidecar container is just a simple web app listening on /alb-sidecar.
Do I need to do anything differently when the sidecar container runs in a different namespace, or how would you run a sidecar next to the ALB ingress controller?
If you created the deployment alb-ingress-controller and the service alb-ingress-controller-service in another namespace, you need to create another Ingress resource in that same namespace, since an Ingress can only reference Services in its own namespace:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: test-alb
  namespace: alb-namespace
  annotations:
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/listen-ports: '[{"HTTP":80,"HTTPS": 443}]'
    alb.ingress.kubernetes.io/subnets: AAA
    alb.ingress.kubernetes.io/security-groups: AAA
  labels:
    app: alb-service
spec:
  rules:
  - http:
      paths:
      - path: /alb-sidecar
        backend:
          serviceName: alb-ingress-controller-service
          servicePort: 80
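It is also worth confirming that the Service really lives in that namespace and has endpoints there, since an Ingress backend without endpoints is the usual cause of a 502 from the controller (the namespace name here is only a placeholder):

kubectl get svc,endpoints alb-ingress-controller-service -n alb-namespace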

Kubernetes nginx subdomain

Host2 (the subdomain) works perfectly; host1 gives the error message 'default backend - 404'. Both Dockerfiles for web1 and web2 work on my local machine, and the YAML files are almost identical except for name variables, nodePort, image location, etc.
Any ideas what could be wrong here?
Ingress config:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: ingress
  annotations:
    kubernetes.io/ingress.global-static-ip-name: test-ip
    kubernetes.io/ingress.class: "nginx"
spec:
  rules:
  - host: host1.com
    http:
      paths:
      - path: /*
        backend:
          serviceName: web1
          servicePort: 80
  - host: sub.host1.com
    http:
      paths:
      - path: /*
        backend:
          serviceName: web2
          servicePort: 80
YAML for web1
apiVersion: v1
kind: Service
metadata:
  name: web1
spec:
  selector:
    app: web1
  type: NodePort
  ports:
  - name: http
    protocol: TCP
    port: 80
    nodePort: 32112
---
apiVersion: apps/v1beta2
kind: Deployment
metadata:
  name: web1-deployment
spec:
  selector:
    matchLabels:
      app: web1
  replicas: 1
  template:
    metadata:
      labels:
        app: web1
    spec:
      terminationGracePeriodSeconds: 60
      containers:
      - name: web1
        image: gcr.io/image..
        imagePullPolicy: Always
        ports:
        - containerPort: 80
        # HTTP Health Check
        livenessProbe:
          httpGet:
            path: /
            port: 80
          initialDelaySeconds: 30
          timeoutSeconds: 5
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: ingress
  annotations:
    kubernetes.io/ingress.class: "nginx"
spec:
  rules:
  - host: host1.com
    http:
      paths:
      - path: /
        backend:
          serviceName: web1
          servicePort: 80
  - host: sub.host1.com
    http:
      paths:
      - path: /
        backend:
          serviceName: web2
          servicePort: 80
I think this should work; I am using the same config. The main difference is path: / instead of path: /*, since the nginx ingress controller does not use the GCE-style /* path syntax. Please correct any indentation issues in the YAML I posted here.