I have a Kubernetes Ingress rule set up as:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-svc
  annotations:
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/target-type: IP
spec:
  ingressClassName: alb
  rules:
    - host: api.example.com
      http:
        paths:
          - path: /api/service1
            pathType: Prefix
            backend:
              service:
                name: svc-service1
                port:
                  number: 3000
          - path: /api/service/service2
            pathType: Prefix
            backend:
              service:
                name: svc-service2
                port:
                  number: 3000
          - path: /api/service/service3
            pathType: Prefix
            backend:
              service:
                name: svc-service3
                port:
                  number: 3000
          - path: /api/service/service4
            pathType: Prefix
            backend:
              service:
                name: svc-service4
                port:
                  number: 3000
When I access api.example.com/api/service1, SERVICE1 returns the expected data. But when I access a path in SERVICE2 that in turn makes a request to SERVICE1, I get a 504 Gateway Time-out error.
I have tried reaching SERVICE1 from SERVICE2 via its ClusterIP, and also via Axios using the public path api.example.com/api/service1, but neither works.
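Rather than calling back out through the ALB hostname, pod-to-pod calls normally use the Service's in-cluster DNS name. A minimal sketch of how that name is formed (assuming everything runs in the default namespace, per the manifests below; the actual request only resolves from inside the cluster):

```python
def service_url(service: str, port: int, namespace: str = "default", path: str = "") -> str:
    """Build the cluster-internal URL for a ClusterIP Service."""
    return f"http://{service}.{namespace}.svc.cluster.local:{port}{path}"

# From a pod in SERVICE2, this is the address to try instead of
# api.example.com/api/service1:
url = service_url("svc-service1", 3000, path="/api/service1")
print(url)  # http://svc-service1.default.svc.cluster.local:3000/api/service1

# The request itself (only works in-cluster):
# import urllib.request
# with urllib.request.urlopen(url, timeout=5) as resp:
#     body = resp.read()
```

The short form http://svc-service1:3000 also works from the same namespace. If that still times out, check that the Service's selector actually matches the pod labels (kubectl get endpoints svc-service1) and that no NetworkPolicy blocks pod-to-pod traffic.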
My services are structured as:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: svc-service1
spec:
  selector:
    matchLabels:
      app: svc-service1
  replicas: 1
  strategy:
    rollingUpdate:
      maxSurge: 25%
      maxUnavailable: 25%
    type: RollingUpdate
  template:
    metadata:
      labels:
        app: svc-service1
    spec:
      containers:
        - name: svc-service1
          image: IMAGE_PATH
          # imagePullPolicy: IfNotPresent
          env:
            - name: ENV_VAR_1
              value: ENV_VAR_VALUE_1
          ports:
            - containerPort: 3000
              protocol: TCP
      # restartPolicy: Always
---
apiVersion: v1
kind: Service
metadata:
  name: svc-service1
spec:
  selector:
    app: svc-service1
  type: ClusterIP
  ports:
    - name: svc-service1
      protocol: TCP
      port: 3000
      targetPort: 3000
When trying to set up Keycloak OAuth2 on my local machine with minikube (VirtualBox driver), using the mock server from https://www.mocklab.io/docs/oauth2-mock/, I get this error:
[2022/11/04 15:09:14] [provider.go:55] Performing OIDC Discovery...
[2022/11/04 15:09:17] [main.go:60] ERROR: Failed to initialise OAuth2 Proxy: error intiailising provider: could not create provider data: error building OIDC ProviderVerifier: could not get verifier builder: error while discovery OIDC configuration: failed to discover OIDC configuration: error performing request: Get "https://keycloak.192.168.59.103.nip.io/realms/skyrealm/.well-known/openid-configuration": dial tcp 192.168.59.103:443: connect: no route to host
Basic auth with generated password is working.
minikube version: v1.27.1
Kubernetes version: v1.25.2
Docker version: 20.10.18
Operating System: Ubuntu 22.04
Keycloak deployment:
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: keycloak
  namespace: default
  labels:
    app: keycloak
spec:
  replicas: 1
  selector:
    matchLabels:
      app: keycloak
  template:
    metadata:
      labels:
        app: keycloak
    spec:
      containers:
        - name: keycloak
          image: quay.io/keycloak/keycloak:19.0.3
          args: ["start"]
          imagePullPolicy: Always
          env:
            - name: KEYCLOAK_ADMIN
              valueFrom:
                secretKeyRef:
                  key: keycloakuser
                  name: skysecrets
            - name: KEYCLOAK_ADMIN_PASSWORD
              valueFrom:
                secretKeyRef:
                  key: keycloakpass
                  name: skysecrets
            - name: KC_PROXY
              value: "edge"
            - name: KC_HOSTNAME_STRICT
              value: "false"
          ports:
            - name: http
              containerPort: 8080
          readinessProbe:
            httpGet:
              path: /realms/master
              port: 8080
Keycloak service:
---
apiVersion: v1
kind: Service
metadata:
  name: keycloak
  labels:
    app: keycloak
spec:
  ports:
    - name: http
      port: 8080
      targetPort: 8080
  selector:
    app: keycloak
  type: LoadBalancer
Keycloak ingress:
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: keycloak-ingress
  namespace: default
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /$2
    # konghq.com/strip-path: "true"
spec:
  ingressClassName: nginx
  tls:
    - hosts:
        - keycloak.192.168.59.104.nip.io
  rules:
    - host: keycloak.192.168.59.104.nip.io
      http:
        paths:
          - path: /keycloak(/|$)(.*)
            pathType: Prefix
            backend:
              service:
                name: keycloak
                port:
                  number: 8080
oauth2-proxy setup:
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    k8s-app: oauth2-proxy
  name: oauth2-proxy
  namespace: kube-system
spec:
  replicas: 1
  selector:
    matchLabels:
      k8s-app: oauth2-proxy
  template:
    metadata:
      labels:
        k8s-app: oauth2-proxy
    spec:
      containers:
        - args:
            - --provider=keycloak-oidc
            - --client-id=YYY
            - --client-secret=XXX
            - --redirect-url=https://oauth.mocklab.io/oauth/authorize
            - --oidc-issuer-url=https://keycloak.192.168.59.103.nip.io/realms/skyrealm
            # for supporting PKCE methods from keycloak
            - --code-challenge-method="S256"
            # for error with x509 certificates
            - --ssl-insecure-skip-verify=true
            - --ssl-upstream-insecure-skip-verify=true
            # additional required parameters
            - --email-domain=*
            - --cookie-secure=false
            - --cookie-secret=ZZZ
          image: quay.io/oauth2-proxy/oauth2-proxy:latest
          imagePullPolicy: Always
          name: oauth2-proxy
          ports:
            - containerPort: 4180
              protocol: TCP
---
apiVersion: v1
kind: Service
metadata:
  labels:
    k8s-app: oauth2-proxy
  name: oauth2-proxy
  namespace: kube-system
spec:
  ports:
    - name: http
      port: 4180
      protocol: TCP
      targetPort: 4180
  selector:
    k8s-app: oauth2-proxy
Service ingress:
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: sky-oauth-ingress
  namespace: default
  annotations:
    nginx.ingress.kubernetes.io/auth-url: "https://$host/oauth2/auth"
    nginx.ingress.kubernetes.io/auth-signin: "https://$host/oauth2/start?rd=$escaped_request_uri"
    nginx.ingress.kubernetes.io/rewrite-target: /$2
spec:
  ingressClassName: nginx
  rules:
    - http:
        paths:
          - path: /oauth/offer(/|$)(.*)
            pathType: Prefix
            backend:
              service:
                name: sky-offer-service
                port:
                  number: 5552
          - path: /oauth/message(/|$)(.*)
            pathType: Prefix
            backend:
              service:
                name: sky-message-service
                port:
                  number: 5553
          - path: /oauth/auth(/|$)(.*)
            pathType: Prefix
            backend:
              service:
                name: auth-service
                port:
                  number: 9100
I set up a second ingress for basic auth with a generated password, and it is working:
spec:
  ingressClassName: nginx
  rules:
    - http:
        paths:
          - path: /offer(/|$)(.*)
            pathType: Prefix
            backend:
              service:
                name: sky-offer-service
                port:
                  number: 5552
          - path: /message(/|$)(.*)
            pathType: Prefix
            backend:
              service:
                name: sky-message-service
                port:
                  number: 5553
          - path: /auth(/|$)(.*)
            pathType: Prefix
            backend:
              service:
                name: auth-service
                port:
                  number: 9100
I want to be able to test OAuth2 before deploying it to the cloud.
How do I set this up correctly?
I have a Kubernetes cluster with two deployments, ui-service-app and user-service-app. Both deployments are exposed through ClusterIP services, namely ui-service-svc and user-service-svc. In addition, there is an Ingress for accessing both applications from outside the cluster.
Now I want to make an API call from my application inside ui-service-app to user-service-app. Currently I am using ingress-ip/user to do so, but there should be some way to do this internally?
apiVersion: apps/v1
kind: Deployment
metadata:
  name: user-service-app
  labels:
    app: user-service-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: user-service-app
  template:
    metadata:
      labels:
        app: user-service-app
    spec:
      containers:
        - name: user-service-app
          image: <MY-IMAGE-URL>
          imagePullPolicy: Always
          ports:
            - containerPort: 3000
          livenessProbe:
            httpGet:
              path: /ping
              port: 3000
          readinessProbe:
            httpGet:
              path: /ping
              port: 3000
---
apiVersion: "v1"
kind: "Service"
metadata:
  name: "user-service-svc"
  namespace: "default"
  labels:
    app: "user-service-app"
spec:
  type: "ClusterIP"
  selector:
    app: "user-service-app"
  ports:
    - protocol: "TCP"
      port: 80
      targetPort: 3000
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: ui-service-app
  labels:
    app: ui-service-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: ui-service-app
  template:
    metadata:
      labels:
        app: ui-service-app
    spec:
      containers:
        - name: ui-service-app
          image: <MY-IMAGE-URL>
          imagePullPolicy: Always
          ports:
            - containerPort: 3000
---
apiVersion: "v1"
kind: "Service"
metadata:
  name: "ui-service-svc"
  namespace: "default"
  labels:
    app: "ui-service-app"
spec:
  type: "ClusterIP"
  selector:
    app: "ui-service-app"
  ports:
    - protocol: "TCP"
      port: 80
      targetPort: 3000
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: awesome-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /$2
spec:
  ingressClassName: nginx
  defaultBackend:
    service:
      name: ui-service-svc
      port:
        number: 80
  rules:
    - http:
        paths:
          - path: /login
            pathType: Prefix
            backend:
              service:
                name: ui-service-svc
                port:
                  number: 80
          - path: /user(/|$)(.*)
            pathType: Prefix
            backend:
              service:
                name: user-service-svc
                port:
                  number: 80
UPDATE 1:
This is the error page I get when I change the URL in the React app to http://user-service-svc.
Use the name of the associated Service.
From any other pod in the same namespace, the hostname user-service-svc resolves to the Service you've defined, so http://user-service-svc connects you to the web server of the user-service-app Deployment (no port needed, because your Service maps port 80 to container port 3000).
From another namespace, you can use the hostname <service>.<namespace>.svc.cluster.local, but that's not relevant to what you're doing here.
See the Service documentation for more details.
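The two hostname forms above can be sketched as follows (names taken from the manifests; they only resolve from inside the cluster):

```python
from typing import Optional

def cluster_dns_name(service: str, namespace: Optional[str] = None) -> str:
    """Short name for use within the same namespace;
    fully qualified name when a namespace is given."""
    if namespace is None:
        return service
    return f"{service}.{namespace}.svc.cluster.local"

print(cluster_dns_name("user-service-svc"))             # user-service-svc
print(cluster_dns_name("user-service-svc", "default"))  # user-service-svc.default.svc.cluster.local
```

One caveat: these names only resolve for code running inside the cluster. If the URL is used by the React app running in a browser, the browser cannot reach http://user-service-svc; browser-originated requests still have to go through the Ingress.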
I have two deployments in my cluster, UI and USER. Both are exposed by ClusterIP services, and an Ingress makes both services publicly accessible.
Now when I do kubectl exec -it UI-POD -- /bin/sh and then try ping USER-SERVICE-CLUSTER-IP:PORT, it doesn't work.
All I get is a failure message: no packets returned.
Attaching my .yml file:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: user-service-app
  labels:
    app: user-service-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: user-service-app
  template:
    metadata:
      labels:
        app: user-service-app
    spec:
      containers:
        - name: user-service-app
          image: <MY-IMAGE-URL>
          imagePullPolicy: Always
          ports:
            - containerPort: 3000
          livenessProbe:
            httpGet:
              path: /ping
              port: 3000
          readinessProbe:
            httpGet:
              path: /ping
              port: 3000
---
apiVersion: "v1"
kind: "Service"
metadata:
  name: "user-service-svc"
  namespace: "default"
  labels:
    app: "user-service-app"
spec:
  type: "ClusterIP"
  selector:
    app: "user-service-app"
  ports:
    - protocol: "TCP"
      port: 80
      targetPort: 3000
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: ui-service-app
  labels:
    app: ui-service-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: ui-service-app
  template:
    metadata:
      labels:
        app: ui-service-app
    spec:
      containers:
        - name: ui-service-app
          image: <MY-IMAGE-URL>
          imagePullPolicy: Always
          ports:
            - containerPort: 3000
---
apiVersion: "v1"
kind: "Service"
metadata:
  name: "ui-service-svc"
  namespace: "default"
  labels:
    app: "ui-service-app"
spec:
  type: "ClusterIP"
  selector:
    app: "ui-service-app"
  ports:
    - protocol: "TCP"
      port: 80
      targetPort: 3000
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: awesome-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /$2
spec:
  ingressClassName: nginx
  defaultBackend:
    service:
      name: ui-service-svc
      port:
        number: 80
  rules:
    - http:
        paths:
          - path: /login
            pathType: Prefix
            backend:
              service:
                name: ui-service-svc
                port:
                  number: 80
          - path: /user(/|$)(.*)
            pathType: Prefix
            backend:
              service:
                name: user-service-svc
                port:
                  number: 80
Ping operates by means of Internet Control Message Protocol (ICMP) packets, which is not what your service serves (and ping does not take a port). Try curl USER-SERVICE-CLUSTER-IP/ping or curl http://user-service-svc/ping from within your UI pod instead.
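If you want a reachability check from inside the UI pod that exercises the same transport layer as curl (a TCP connect) rather than ICMP, a small sketch (host and port follow the manifests above; run it inside the pod, e.g. via kubectl exec):

```python
import socket

def tcp_check(host: str, port: int, timeout: float = 3.0) -> bool:
    """Attempt a plain TCP connection, the transport-level step curl performs
    before speaking HTTP. ping instead sends ICMP echo requests, which an
    HTTP Service never answers."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# From the UI pod, against the Service name and Service port:
# tcp_check("user-service-svc", 80)
```

A True result means the Service's ClusterIP is routing to a ready pod; a False result with a quick refusal usually points at a selector/endpoints problem rather than DNS.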
I'm trying to forward to my API on /api/ and my UI on /.
My UI is all working fine, and the logs for the nginx ingress and the API (which I've tested with port-forwarding) all look fine. But when I curl or browse to IP/api/healthz (the health check for Hasura), I get 502 Bad Gateway.
Service for hasura:
apiVersion: v1
kind: Service
metadata:
  name: hasura-svc
  labels:
    app: hasura-app
spec:
  selector:
    app: hasura-app
  type: NodePort
  ports:
    - port: 8080
      targetPort: 8080
      protocol: TCP
Deployment for Hasura:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hasura-dep
  labels:
    hasuraService: custom
    app: hasura-app
spec:
  selector:
    matchLabels:
      app: hasura-app
  replicas: 1
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: hasura-app
    spec:
      containers:
        - name: hasura-app
          image: hasura/graphql-engine:v1.3.0
          imagePullPolicy: IfNotPresent
          command: ["graphql-engine"]
          args: ["serve", "--enable-console"]
          envFrom:
            - configMapRef:
                name: hasura-cnf
          ports:
            - containerPort: 8080
              protocol: TCP
          resources: {}
nginx ingress:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: nginx
  annotations:
    kubernetes.io/ingress.global-static-ip-name: "gcloud-ip"
spec:
  backend:
    serviceName: nginx
    servicePort: 80
  rules:
    - http:
        paths:
          - path: /api/*
            backend:
              serviceName: hasura-svc
              servicePort: 8080
          - path: /*
            backend:
              serviceName: client
              servicePort: 3333
I want to configure the ingress to work with my domain name.
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: app-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /$2
spec:
  tls:
    - hosts:
        - example.com
      secretName: app-tls
  rules:
    - host: example.com
      http:
        paths:
          - path: /my-api1(/|$)(.*)
            backend:
              serviceName: app1
              servicePort: 80
          - path: /my-api2
            backend:
              serviceName: app2
              servicePort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: my-api
spec:
  selector:
    app: my-api
  ports:
    - name: app1
      port: 3000
      targetPort: 80
    - name: app2
      port: 4000
      targetPort: 80
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: user-api
spec:
  selector:
    matchLabels:
      app: user-api
  template:
    metadata:
      labels:
        app: user-api
    spec:
      hostNetwork: true
      dnsPolicy: ClusterFirstWithHostNet
      containers:
        - name: app1
          image: XXX
          ports:
            - name: app1
              containerPort: 3000
        - name: app2
          image: XXX
          ports:
            - name: app2
              containerPort: 4000
I can reach the app1 service via serverIP:3000 (e.g. 172.16.111.211:3000/my-api1). But remotely it always returns a 503 status code (curl https://example.com/my-api1).
# kubectl describe ingress app-ingress
Name:             app-ingress
Namespace:        default
Address:          serverIP
Default backend:  default-http-backend:80 (<error: endpoints "default-http-backend" not found>)
TLS:
  app-tls terminates example.com
Rules:
  Host         Path  Backends
  ----         ----  --------
  example.com
               /my-api1(/|$)(.*)   app1:80 (<error: endpoints "app1" not found>)
               /my-api2            app2:80 (<error: endpoints "app2" not found>)
The first problem is that your Service name doesn't match: you created a Service named my-api, but the Ingress refers to app1 and app2, which don't exist.
The second problem is that the selector labels on your Deployment and Service don't match: the Deployment's pods are labeled app: user-api, but the Service selector says app: my-api.
If you need port 80 for both applications, you have to create two different Services and reference those in your Ingress.
The below should work for your requirement:
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: app-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /$2
spec:
  tls:
    - hosts:
        - example.com
      secretName: app-tls
  rules:
    - host: example.com
      http:
        paths:
          - path: /my-api1(/|$)(.*)
            backend:
              serviceName: app1
              servicePort: 80
          - path: /my-api2
            backend:
              serviceName: app2
              servicePort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: app1
spec:
  selector:
    app: user-api
  ports:
    - name: app1
      port: 3000
      targetPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: app2
spec:
  selector:
    app: user-api
  ports:
    - name: app2
      port: 4000
      targetPort: 80
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: user-api
spec:
  selector:
    matchLabels:
      app: user-api
  template:
    metadata:
      labels:
        app: user-api
    spec:
      hostNetwork: true
      dnsPolicy: ClusterFirstWithHostNet
      containers:
        - name: app1
          image: XXX
          ports:
            - name: app1
              containerPort: 3000
        - name: app2
          image: XXX
          ports:
            - name: app2
              containerPort: 4000
You made a mistake with port and targetPort. It should be:
apiVersion: v1
kind: Service
metadata:
  name: my-api
spec:
  selector:
    app: my-api
  ports:
    - name: app1
      port: 3000
      targetPort: 3000
    - name: app2
      port: 4000
      targetPort: 4000
---
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: app-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /$2
spec:
  tls:
    - hosts:
        - example.com
      secretName: app-tls
  rules:
    - host: example.com
      http:
        paths:
          - path: /my-api1(/|$)(.*)
            backend:
              serviceName: my-api
              servicePort: app1
          - path: /my-api2
            backend:
              serviceName: my-api
              servicePort: app2
port is the port the Service exposes inside the cluster; targetPort is the container port on the pod that traffic is forwarded to.