I'm trying to forward requests to my API on /api/ and to my UI on /.
My UI is working fine, and the logs for the nginx ingress and the API (which I've tested with port forwarding) all look fine. But when I curl or browse to IP/api/healthz (which would be the health check for Hasura), I get 502 Bad Gateway.
Service for hasura:
apiVersion: v1
kind: Service
metadata:
name: hasura-svc
labels:
app: hasura-app
spec:
selector:
app: hasura-app
type: NodePort
ports:
- port: 8080
targetPort: 8080
protocol: TCP
Deployment for Hasura:
apiVersion: apps/v1
kind: Deployment
metadata:
name: hasura-dep
labels:
hasuraService: custom
app: hasura-app
spec:
selector:
matchLabels:
app: hasura-app
replicas: 1
template:
metadata:
creationTimestamp: null
labels:
app: hasura-app
spec:
containers:
- name: hasura-app
image: hasura/graphql-engine:v1.3.0
imagePullPolicy: IfNotPresent
command: ["graphql-engine"]
args: ["serve", "--enable-console"]
envFrom:
- configMapRef:
name: hasura-cnf
ports:
- containerPort: 8080
protocol: TCP
resources: {}
nginx ingress:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: nginx
annotations:
kubernetes.io/ingress.global-static-ip-name: "gcloud-ip"
spec:
backend:
serviceName: nginx
servicePort: 80
rules:
- http:
paths:
- path: /api/*
backend:
serviceName: hasura-svc
servicePort: 8080
- path: /*
backend:
serviceName: client
servicePort: 3333
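For reference, the port-forwarding test mentioned above looks roughly like this, together with a check that the Service actually has endpoints (a sketch; the commands assume the default namespace):

kubectl get endpoints hasura-svc
kubectl port-forward deploy/hasura-dep 8080:8080
# in a second terminal:
curl http://localhost:8080/healthz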
When trying to set up Keycloak OAuth2 on my local machine with minikube (VirtualBox driver), using the mocked server from
https://www.mocklab.io/docs/oauth2-mock/
I get this error from oauth2-proxy:
[2022/11/04 15:09:14] [provider.go:55] Performing OIDC Discovery...
[2022/11/04 15:09:17] [main.go:60] ERROR: Failed to initialise OAuth2 Proxy: error intiailising provider: could not create provider data: error building OIDC ProviderVerifier: could not get verifier builder: error while discovery OIDC configuration: failed to discover OIDC configuration: error performing request: Get "https://keycloak.192.168.59.103.nip.io/realms/skyrealm/.well-known/openid-configuration": dial tcp 192.168.59.103:443: connect: no route to host
Basic auth with generated password is working.
minikube version: v1.27.1
Kubernetes version: v1.25.2
Docker version: 20.10.18
Operating System: Ubuntu 22.04
Keycloak deployment:
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: keycloak
namespace: default
labels:
app: keycloak
spec:
replicas: 1
selector:
matchLabels:
app: keycloak
template:
metadata:
labels:
app: keycloak
spec:
containers:
- name: keycloak
image: quay.io/keycloak/keycloak:19.0.3
args: ["start"]
imagePullPolicy: Always
env:
- name: KEYCLOAK_ADMIN
value: null
valueFrom:
secretKeyRef:
key: keycloakuser
name: skysecrets
- name: KEYCLOAK_ADMIN_PASSWORD
value: null
valueFrom:
secretKeyRef:
key: keycloakpass
name: skysecrets
- name: KC_PROXY
value: "edge"
- name: KC_HOSTNAME_STRICT
value: "false"
ports:
- name: http
containerPort: 8080
readinessProbe:
httpGet:
path: /realms/master
port: 8080
Keycloak service:
---
apiVersion: v1
kind: Service
metadata:
name: keycloak
labels:
app: keycloak
spec:
ports:
- name: http
port: 8080
targetPort: 8080
selector:
app: keycloak
type: LoadBalancer
keycloak ingress:
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: keycloak-ingress
namespace: default
annotations:
nginx.ingress.kubernetes.io/rewrite-target: /$2
# konghq.com/strip-path: "true"
spec:
ingressClassName: nginx
tls:
- hosts:
- keycloak.192.168.59.104.nip.io
rules:
- host: keycloak.192.168.59.104.nip.io
http:
paths:
- path: /keycloak(/|$)(.*)
pathType: Prefix
backend:
service:
name: keycloak
port:
number: 8080
oauth2-proxy setup:
apiVersion: apps/v1
kind: Deployment
metadata:
labels:
k8s-app: oauth2-proxy
name: oauth2-proxy
namespace: kube-system
spec:
replicas: 1
selector:
matchLabels:
k8s-app: oauth2-proxy
template:
metadata:
labels:
k8s-app: oauth2-proxy
spec:
containers:
- args:
- --provider=keycloak-oidc
- --client-id=YYY
- --client-secret=XXX
- --redirect-url=https://oauth.mocklab.io/oauth/authorize
- --oidc-issuer-url=https://keycloak.192.168.59.103.nip.io/realms/skyrealm
# for supporting PKCE methods from keycloak
- --code-challenge-method="S256"
# for error with x509 certificates
- --ssl-insecure-skip-verify=true
- --ssl-upstream-insecure-skip-verify=true
# additional required parameters
- --email-domain=*
- --cookie-secure=false
- --cookie-secret=ZZZ
image: quay.io/oauth2-proxy/oauth2-proxy:latest
imagePullPolicy: Always
name: oauth2-proxy
ports:
- containerPort: 4180
protocol: TCP
---
apiVersion: v1
kind: Service
metadata:
labels:
k8s-app: oauth2-proxy
name: oauth2-proxy
namespace: kube-system
spec:
ports:
- name: http
port: 4180
protocol: TCP
targetPort: 4180
selector:
k8s-app: oauth2-proxy
Services ingress (protected by oauth2-proxy):
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: sky-oauth-ingress
namespace: default
annotations:
nginx.ingress.kubernetes.io/auth-url: "https://$host/oauth2/auth"
nginx.ingress.kubernetes.io/auth-signin: "https://$host/oauth2/start?rd=$escaped_request_uri"
nginx.ingress.kubernetes.io/rewrite-target: /$2
spec:
ingressClassName: nginx
rules:
- http:
paths:
- path: /oauth/offer(/|$)(.*)
pathType: Prefix
backend:
service:
name: sky-offer-service
port:
number: 5552
- path: /oauth/message(/|$)(.*)
pathType: Prefix
backend:
service:
name: sky-message-service
port:
number: 5553
- path: /oauth/auth(/|$)(.*)
pathType: Prefix
backend:
service:
name: auth-service
port:
number: 9100
I set up a second ingress for basic auth with a generated password, and it is working:
spec:
ingressClassName: nginx
rules:
- http:
paths:
- path: /offer(/|$)(.*)
pathType: Prefix
backend:
service:
name: sky-offer-service
port:
number: 5552
- path: /message(/|$)(.*)
pathType: Prefix
backend:
service:
name: sky-message-service
port:
number: 5553
- path: /auth(/|$)(.*)
pathType: Prefix
backend:
service:
name: auth-service
port:
number: 9100
I want to be able to test OAuth2 before I deploy it to the cloud.
How do I set this up correctly?
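One check that helps narrow down the "no route to host" error is to run the same OIDC discovery request that oauth2-proxy performs, but from a pod inside the cluster (a sketch; the URL is the one from the error message above):

kubectl run curl-test --rm -it --image=curlimages/curl --restart=Never -- \
  curl -kv https://keycloak.192.168.59.103.nip.io/realms/skyrealm/.well-known/openid-configuration

If this fails the same way, the problem is reachability of the nip.io address from inside the cluster rather than the oauth2-proxy flags themselves.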
I have a Kubernetes cluster with two deployments, ui-service-app and user-service-app. Both deployments are exposed through ClusterIP services, namely ui-service-svc and user-service-svc. In addition, there is an Ingress for accessing both applications from outside the cluster.
Now I want to make an API call from my application inside ui-service-app to user-service-app. Currently I am using ingress-ip/user to do so, but there should be some way to do this internally?
apiVersion: apps/v1
kind: Deployment
metadata:
name: user-service-app
labels:
app: user-service-app
spec:
replicas: 1
selector:
matchLabels:
app: user-service-app
template:
metadata:
labels:
app: user-service-app
spec:
containers:
- name: user-service-app
image: <MY-IMAGE-URL>
imagePullPolicy: Always
ports:
- containerPort: 3000
livenessProbe:
httpGet:
path: /ping
port: 3000
readinessProbe:
httpGet:
path: /ping
port: 3000
---
apiVersion: "v1"
kind: "Service"
metadata:
name: "user-service-svc"
namespace: "default"
labels:
app: "user-service-app"
spec:
type: "ClusterIP"
selector:
app: "user-service-app"
ports:
- protocol: "TCP"
port: 80
targetPort: 3000
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: ui-service-app
labels:
app: ui-service-app
spec:
replicas: 1
selector:
matchLabels:
app: ui-service-app
template:
metadata:
labels:
app: ui-service-app
spec:
containers:
- name: ui-service-app
image: <MY-IMAGE-URL>
imagePullPolicy: Always
ports:
- containerPort: 3000
---
apiVersion: "v1"
kind: "Service"
metadata:
name: "ui-service-svc"
namespace: "default"
labels:
app: "ui-service-app"
spec:
type: "ClusterIP"
selector:
app: "ui-service-app"
ports:
- protocol: "TCP"
port: 80
targetPort: 3000
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: awesome-ingress
annotations:
nginx.ingress.kubernetes.io/rewrite-target: /$2
spec:
ingressClassName: nginx
defaultBackend:
service:
name: ui-service-svc
port:
number: 80
rules:
- http:
paths:
- path: /login
pathType: Prefix
backend:
service:
name: ui-service-svc
port:
number: 80
- path: /user(/|$)(.*)
pathType: Prefix
backend:
service:
name: user-service-svc
port:
number: 80
UPDATE 1:
When I change the URL in the React app to http://user-service-svc, I get an error page.
Use the service name of the associated service.
From any other pod in the same namespace, the hostname user-service-svc will map to the Service you've defined, so http://user-service-svc would connect you to the web server of the user-service-app Deployment (no port specified, because your Service is mapping port 80 to container port 3000).
From another namespace, you can use the hostname <service>.<namespace>.svc.cluster.local, but that's not relevant to what you're doing here.
See the Service documentation for more details.
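For example, from a shell inside the UI pod (a sketch; the pod name is a placeholder, curl is assumed to be available in the image, and /ping is the probe path from the manifests above):

kubectl exec -it <ui-pod-name> -- curl http://user-service-svc/ping
# or, fully qualified (useful from another namespace):
kubectl exec -it <ui-pod-name> -- curl http://user-service-svc.default.svc.cluster.local/ping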
I have got two deployments in my cluster, UI and USER. Both of these are exposed by ClusterIP services, and there is an Ingress that makes both services publicly accessible.
Now when I do "kubectl exec -it UI-POD -- /bin/sh" and then try "ping USER-SERVICE-CLUSTER-IP:PORT", it doesn't work.
All I get is "no packet returned", i.e. a failure message.
Attaching my .yml file
apiVersion: apps/v1
kind: Deployment
metadata:
name: user-service-app
labels:
app: user-service-app
spec:
replicas: 1
selector:
matchLabels:
app: user-service-app
template:
metadata:
labels:
app: user-service-app
spec:
containers:
- name: user-service-app
image: <MY-IMAGE-URL>
imagePullPolicy: Always
ports:
- containerPort: 3000
livenessProbe:
httpGet:
path: /ping
port: 3000
readinessProbe:
httpGet:
path: /ping
port: 3000
---
apiVersion: "v1"
kind: "Service"
metadata:
name: "user-service-svc"
namespace: "default"
labels:
app: "user-service-app"
spec:
type: "ClusterIP"
selector:
app: "user-service-app"
ports:
- protocol: "TCP"
port: 80
targetPort: 3000
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: ui-service-app
labels:
app: ui-service-app
spec:
replicas: 1
selector:
matchLabels:
app: ui-service-app
template:
metadata:
labels:
app: ui-service-app
spec:
containers:
- name: ui-service-app
image: <MY-IMAGE-URL>
imagePullPolicy: Always
ports:
- containerPort: 3000
---
apiVersion: "v1"
kind: "Service"
metadata:
name: "ui-service-svc"
namespace: "default"
labels:
app: "ui-service-app"
spec:
type: "ClusterIP"
selector:
app: "ui-service-app"
ports:
- protocol: "TCP"
port: 80
targetPort: 3000
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: awesome-ingress
annotations:
nginx.ingress.kubernetes.io/rewrite-target: /$2
spec:
ingressClassName: nginx
defaultBackend:
service:
name: ui-service-svc
port:
number: 80
rules:
- http:
paths:
- path: /login
pathType: Prefix
backend:
service:
name: ui-service-svc
port:
number: 80
- path: /user(/|$)(.*)
pathType: Prefix
backend:
service:
name: user-service-svc
port:
number: 80
Ping operates by means of Internet Control Message Protocol (ICMP) packets, which is not what your service is serving; ping also expects a hostname or IP rather than an IP:PORT pair. You can try curl USER-SERVICE-CLUSTER-IP/ping or curl http://user-service-svc/ping from within your UI pod instead.
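For example (a sketch; UI-POD is the placeholder pod name from the question, and curl is assumed to be available in the image):

kubectl exec -it UI-POD -- curl -v http://user-service-svc/ping
# and confirm the Service actually selects the user pod:
kubectl get endpoints user-service-svc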
I have a Kubernetes cluster with a backend service and a security service.
The ingress is defined as follows:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: solidary-life
annotations:
kubernetes.io/ingress.global-static-ip-name: sl-ip
certmanager.k8s.io/acme-http01-edit-in-place: "true"
ingress.kubernetes.io/force-ssl-redirect: "true"
ingress.kubernetes.io/ssl-redirect: "true"
labels:
app: sl
spec:
rules:
- host: app-solidair-vlaanderen.com
http:
paths:
- path: /v0.0.1/*
backend:
serviceName: backend-backend
servicePort: 8080
- path: /auth/*
backend:
serviceName: security-backend
servicePort: 8080
tls:
- secretName: solidary-life-tls
hosts:
- app-solidair-vlaanderen.com
The backend service is configured like this:
apiVersion: apps/v1beta1
kind: Deployment
metadata:
name: backend
labels:
app: sl
spec:
template:
metadata:
labels:
app: sl
tier: web
spec:
containers:
- name: backend-app
image: gcr.io/solidary-life-218713/sv-backend:0.0.6
ports:
- name: http
containerPort: 8080
readinessProbe:
httpGet:
path: /v0.0.1/api/online
port: 8080
---
apiVersion: v1
kind: Service
metadata:
name: backend-backend
labels:
app: sl
spec:
type: NodePort
selector:
app: sl
tier: web
ports:
- port: 8080
targetPort: 8080
and the auth server service:
apiVersion: apps/v1beta1
kind: Deployment
metadata:
name: security
labels:
app: sl-security
spec:
template:
metadata:
labels:
app: sl
tier: web
spec:
containers:
- name: security-app
image: gcr.io/solidary-life-218713/sv-security:0.0.1
ports:
- name: http
containerPort: 8080
- name: management
containerPort: 9090
- name: jgroups-tcp
containerPort: 7600
- name: jgroups-tcp-fd
containerPort: 57600
- name: jgroups-udp
containerPort: 55200
protocol: UDP
- name: jgroups-udp-mc
containerPort: 45688
protocol: UDP
- name: jgroups-udp-fd
containerPort: 54200
protocol: UDP
- name: modcluster
containerPort: 23364
- name: modcluster-udp
containerPort: 23365
protocol: UDP
- name: txn-recovery-ev
containerPort: 4712
- name: txn-status-mgr
containerPort: 4713
readinessProbe:
httpGet:
path: /auth/
port: 8080
---
apiVersion: v1
kind: Service
metadata:
name: security-backend
labels:
app: sl
spec:
type: NodePort
selector:
app: sl
tier: web
ports:
- port: 8080
targetPort: 8080
Now I can go to these URLs:
https://app-solidair-vlaanderen.com/v0.0.1/api/online
https://app-solidair-vlaanderen.com/auth/
Sometimes this works, sometimes I get 404s. This is quite annoying, and since I am quite new to Kubernetes, I can't find the error.
Can it have something to do with the "sl" label that is on both the backend and security service definitions?
Yes. At least that must be the start of the issue, assuming all your services are in the same Kubernetes namespace. Can you use a different label for each?
In essence, you have two Services that are randomly selecting pods belonging to the security Deployment and the backend Deployment. One way to determine where a Service is really sending requests is to look at its endpoints by running:
kubectl -n <your-namespace> <get or describe> ep
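A minimal sketch of one way to separate them (the tier: auth value is illustrative, not from the original manifests): give the security Deployment's pod template its own label, e.g. app: sl, tier: auth, and point the security-backend Service at exactly that, leaving backend-backend on tier: web:

apiVersion: v1
kind: Service
metadata:
  name: security-backend
  labels:
    app: sl
spec:
  type: NodePort
  selector:
    app: sl
    tier: auth   # must match the label added to the security pod template
  ports:
  - port: 8080
    targetPort: 8080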
I have the following hello world deployment.
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: hello-deployment
spec:
replicas: 1
template:
metadata:
labels:
app: hello
spec:
containers:
- name: hello
image: hello:v0.0.1
imagePullPolicy: Always
args:
- /hello
ports:
- containerPort: 3000
---
apiVersion: v1
kind: Service
metadata:
name: hello-service
spec:
ports:
- port: 80
targetPort: 3000
protocol: TCP
selector:
app: hello
type: NodePort
And I have the ingress controller deployed with a sidecar container:
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
labels:
app: alb-ingress-controller
name: alb-ingress-controller
namespace: kube-system
spec:
replicas: 1
selector:
matchLabels:
app: alb-ingress-controller
template:
metadata:
creationTimestamp: null
labels:
app: alb-ingress-controller
spec:
containers:
- name: server
image: alb-ingress-controller:v0.0.1
imagePullPolicy: Always
args:
- /server
- --ingress-class=alb
- --cluster-name=AAA
- --aws-max-retries=20
- --healthz-port=10254
ports:
- containerPort: 10254
protocol: TCP
- name: alb-sidecar
image: sidecar:v0.0.1
imagePullPolicy: Always
args:
- /sidecar
- --port=5000
ports:
- containerPort: 5000
protocol: TCP
dnsPolicy: ClusterFirst
restartPolicy: Always
securityContext: {}
terminationGracePeriodSeconds: 30
serviceAccountName: alb-ingress
serviceAccount: alb-ingress
---
apiVersion: v1
kind: Service
metadata:
name: alb-ingress-controller-service
spec:
ports:
- port: 80
targetPort: 5000
protocol: TCP
selector:
app: alb-ingress-controller
type: NodePort
And I have this Ingress:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: test-alb
annotations:
alb.ingress.kubernetes.io/scheme: internet-facing
alb.ingress.kubernetes.io/listen-ports: '[{"HTTP":80,"HTTPS": 443}]'
alb.ingress.kubernetes.io/subnets: AAA
alb.ingress.kubernetes.io/security-groups: AAA
labels:
app: test-alb
spec:
rules:
- http:
paths:
- path: /hello
backend:
serviceName: hello-service
servicePort: 80
- path: /alb-sidecar
backend:
serviceName: alb-ingress-controller-service
servicePort: 80
I would expect to access /alb-sidecar the same way that I access /hello, but only the /hello endpoint works for me, and I keep getting 502 Bad Gateway for the /alb-sidecar endpoint. The sidecar container is just a simple web app listening on /alb-sidecar.
Do I need to do anything different when the sidecar container runs in a different namespace, or how would you run a sidecar next to the ALB ingress controller?
If you created the deployment alb-ingress-controller and the service alb-ingress-controller-service in another namespace, you need to create another Ingress resource in that exact namespace, because an Ingress can only route to Services in its own namespace.
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: test-alb
namespace: alb-namespace
annotations:
alb.ingress.kubernetes.io/scheme: internet-facing
alb.ingress.kubernetes.io/listen-ports: '[{"HTTP":80,"HTTPS": 443}]'
alb.ingress.kubernetes.io/subnets: AAA
alb.ingress.kubernetes.io/security-groups: AAA
labels:
app: alb-service
spec:
rules:
- http:
paths:
- path: /alb-sidecar
backend:
serviceName: alb-ingress-controller-service
servicePort: 80
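With that in place, the Ingress left in the default namespace would only carry the /hello path (a sketch, reusing the manifests from the question):

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: test-alb
  annotations:
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/listen-ports: '[{"HTTP":80,"HTTPS": 443}]'
    alb.ingress.kubernetes.io/subnets: AAA
    alb.ingress.kubernetes.io/security-groups: AAA
  labels:
    app: test-alb
spec:
  rules:
  - http:
      paths:
      - path: /hello
        backend:
          serviceName: hello-service
          servicePort: 80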