Is it possible to add IAP on top of the internal load balancer?
In my setup I have:
Published and configured the OAuth consent screen as per https://cloud.google.com/iap/docs/enabling-kubernetes-howto#enabling_iap
Certificates for my internal domain
A Kubernetes Service set up with the BackendConfig (as per https://cloud.google.com/iap/docs/enabling-kubernetes-howto#add-iap-to-backendconfig)
The application running in Kubernetes
A Kubernetes Ingress that creates the internal load balancer with the proper SSL certificates
On my local machine I have /etc/hosts pointing to the load balancer.
I am able to authenticate when accessing the app, but then I get the "You don't have access" page instead of my application.
In my GCP console I can't see the internal load balancer listed on the IAP page. I can only see external load balancers.
As per this documentation https://cloud.google.com/iap/docs/concepts-overview#your_responsibilities it looks like IAP is supported with HTTPS ILB.
Is ILB with IAP supported on GCP?
My k8s config
apiVersion: cloud.google.com/v1
kind: BackendConfig
metadata:
  name: config-default
  namespace: default
spec:
  iap:
    enabled: true
    oauthclientCredentials:
      secretName: my-secret
-------------------------------
apiVersion: v1
kind: Service
metadata:
  name: ilb-service
  annotations:
    cloud.google.com/backend-config: '{"default": "config-default"}'
    cloud.google.com/neg: '{"ingress": true}'
  labels:
    app: hello
spec:
  type: NodePort
  selector:
    app: hello
  ports:
  - port: 80
    targetPort: 8080
    protocol: TCP
    name: host1
  - port: 443
    targetPort: 8080
    protocol: TCP
    name: host2
--------------------------------
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ilb-demo-ingress
  namespace: default
  annotations:
    kubernetes.io/ingress.class: "gce-internal"
    kubernetes.io/ingress.allow-http: "false"
    ingress.gcp.kubernetes.io/pre-shared-cert: "titan-testing-ilb"
    kubernetes.io/ingress.regional-static-ip-name: "my-internal-address"
spec:
  defaultBackend:
    service:
      name: ilb-service
      port:
        number: 443
-------------------------------------------
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-app
spec:
  selector:
    matchLabels:
      app: hello
  replicas: 3
  template:
    metadata:
      labels:
        app: hello
    spec:
      containers:
      - name: hello
        image: "us-docker.pkg.dev/google-samples/containers/gke/hello-app:2.0"
        ports:
        - containerPort: 8080
          protocol: TCP
User Type:
I experienced the same issue at some point, and it turned out I had forgotten to set the OAuth consent screen user type to "external".
This applies if you have already managed to get past the OAuth login screen.
Related
I'm trying to create a GKE Ingress that points to two different backend services based on path. I've seen a few posts explaining this is only possible with an nginx Ingress because GKE Ingress doesn't support rewrite-target. However, this Google documentation, GKE Ingress - Multiple backend services, seems to imply otherwise. I've followed the steps in the docs but haven't had any success. Only the service that is available on the path prefix of / is returned. Any other path prefix, like /v2, returns a 404 Not found.
Details of my setup are below. Is there an obvious error here -- is the Google documentation incorrect and this is only possible using nginx ingress?
-- Ingress
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: app-ingress
  annotations:
    kubernetes.io/ingress.global-static-ip-name: app-static-ip
    networking.gke.io/managed-certificates: app-managed-cert
spec:
  rules:
  - http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: api-service
            port:
              number: 80
      - path: /v2
        pathType: Prefix
        backend:
          service:
            name: api-2-service
            port:
              number: 8080
-- Service 1
apiVersion: v1
kind: Service
metadata:
  name: api-service
  labels:
    app: api
spec:
  type: NodePort
  selector:
    app: api
  ports:
  - port: 80
    targetPort: 5000
-- Service 2
apiVersion: v1
kind: Service
metadata:
  name: api-2-service
  labels:
    app: api-2
spec:
  type: NodePort
  selector:
    app: api-2
  ports:
  - port: 8080
    targetPort: 5000
GCP Ingress supports multiple paths. This is also well described in Setting up HTTP(S) Load Balancing with Ingress. For my test I've used both Hello-world v1 and v2.
There are 3 possible issues.
The first possible issue is with the ports opened inside the container. You can check them using netstat:
$ kubectl exec -ti first-55bb869fb8-76nvq -c container -- bin/sh
/ # netstat -plnt
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name
tcp 0 0 :::8080 :::* LISTEN 1/hello-app
The issue might also be caused by the firewall configuration. Make sure you have the proper settings. (In general, on a new cluster I didn't need to add anything, but if you have additional, more specific firewall rules they might block the traffic.)
The third possible issue is a misconfiguration between port, containerPort and targetPort.
Below is my example:
1st deployment and its Service:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: first
  labels:
    app: api
spec:
  selector:
    matchLabels:
      app: api
  template:
    metadata:
      labels:
        app: api
    spec:
      containers:
      - name: container
        image: gcr.io/google-samples/hello-app:1.0
        ports:
        - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: api-service
  labels:
    app: api
spec:
  type: NodePort
  selector:
    app: api
  ports:
  - port: 5000
    targetPort: 8080
2nd deployment and its Service:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: second
  labels:
    app: api-2
spec:
  selector:
    matchLabels:
      app: api-2
  template:
    metadata:
      labels:
        app: api-2
    spec:
      containers:
      - name: container
        image: gcr.io/google-samples/hello-app:2.0
        ports:
        - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: api-2-service
  labels:
    app: api-2
spec:
  type: NodePort
  selector:
    app: api-2
  ports:
  - port: 6000
    targetPort: 8080
Ingress
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: app-ingress
spec:
  rules:
  - http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: api-service
            port:
              number: 5000
      - path: /v2
        pathType: Prefix
        backend:
          service:
            name: api-2-service
            port:
              number: 6000
Outputs:
$ curl 35.190.XX.249
Hello, world!
Version: 1.0.0
Hostname: first-55bb869fb8-76nvq
$ curl 35.190.XX.249/v2
Hello, world!
Version: 2.0.0
Hostname: second-d7d87c6d8-zv9jr
Please keep in mind that you can also use the NGINX Ingress Controller on GKE by adding a specific annotation:
kubernetes.io/ingress.class: "nginx"
The main reasons people use the NGINX ingress on GKE are the rewrite-target annotation and the possibility to use ClusterIP or NodePort as the Service type, whereas the GCE ingress only allows the NodePort Service type.
You can find additional information in GKE Ingress for HTTP(S) Load Balancing.
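For illustration, a minimal sketch of an NGINX ingress using the rewrite-target annotation, following the ingress-nginx rewrite example (the resource name is hypothetical; api-2-service is the Service from the question):
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: rewrite-demo                                # hypothetical name
  annotations:
    kubernetes.io/ingress.class: "nginx"
    # strip the /v2 prefix before the request reaches the backend
    nginx.ingress.kubernetes.io/rewrite-target: /$2
spec:
  rules:
  - http:
      paths:
      - path: /v2(/|$)(.*)
        pathType: ImplementationSpecific
        backend:
          service:
            name: api-2-service
            port:
              number: 8080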
I am attaching an image of my application flow. The Gateway and the other services are built with NestJS. Every API request comes in through the Gateway.
The Gateway pod and the API pods communicate using the TCP protocol.
After deployment, the Gateway is not able to discover any API pods.
I am also attaching the YAML files for both the Gateway and the Pods.
Please let me know what mistake I am making in the YAML files.
**APPLICATION DIAGRAM**
Gateway YAML
apiVersion: apps/v1
kind: Deployment
metadata:
  namespace: roushan
  name: gateway-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: roushan-app
  template:
    metadata:
      labels:
        app: roushan-app
    spec:
      containers:
      - name: gateway-container
        image: nest-api-gateway:v8
        ports:
        - containerPort: 1000
---
apiVersion: v1
kind: Service
metadata:
  namespace: roushan
  name: gateway-svc
spec:
  selector:
    app: roushan-app
  ports:
  - name: gateway-svc-container
    protocol: TCP
    port: 80
    targetPort: 1000
  type: LoadBalancer
Pod YAML
apiVersion: apps/v1
kind: Deployment
metadata:
  namespace: roushan
  name: pod1-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: roushan-app
  template:
    metadata:
      labels:
        app: roushan-app
    spec:
      containers:
      - name: pod1-container
        image: nest-api-pod1:v2
        ports:
        - containerPort: 4000
---
apiVersion: v1
kind: Service
metadata:
  namespace: roushan
  name: pod1-srv
spec:
  selector:
    app: roushan-app
  ports:
  - name: pod1-svc-container
    protocol: TCP
    port: 80
    targetPort: 4000
The gateway should be able to access the services by their DNS names, for example pod1-srv.roushan.svc.cluster.local. If this does not work, you may need to look at the Kubernetes DNS setup.
I have not used AKS; they may use a different cluster domain than svc.cluster.local.
YAML Points
Ideally, you should keep different selectors across the deployments.
You are using the same selectors for both deployments, the Gateway and the application deployment.
A Service forwards traffic to Pods based on selectors and labels, so this might redirect a request meant for service 2 to POD-1.
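For illustration, a minimal sketch of how the selectors could be separated (the label values roushan-gateway and roushan-pod1 are hypothetical; the corresponding Deployments' matchLabels and Pod template labels would have to use the same values):
apiVersion: v1
kind: Service
metadata:
  namespace: roushan
  name: gateway-svc
spec:
  selector:
    app: roushan-gateway   # hypothetical label used only by the gateway Deployment
  ports:
  - protocol: TCP
    port: 80
    targetPort: 1000
  type: LoadBalancer
---
apiVersion: v1
kind: Service
metadata:
  namespace: roushan
  name: pod1-srv
spec:
  selector:
    app: roushan-pod1      # hypothetical label used only by the pod1 Deployment
  ports:
  - protocol: TCP
    port: 80
    targetPort: 4000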
Networking
Your gateway Pods can connect to an internal service by just its service name, like pod1-srv, if they are in the same namespace.
If the gateway and the application are in different namespaces, you have to call each other like http://<servicename>.<namespace>.svc.cluster.local
I am totally new to Istio, and the pitch looks very exciting. However, I can't make it work, which probably means I don't use it properly.
My goal is to implement session affinity between 2 services, which is why I originally ended up using Istio. However, I do a very basic test, and it does not seem to work:
I have a Kubernetes demo app which has a front service, a stateful service, and a stateless service. From a browser, I access the front service, which dispatches the request to either the stateful or the stateless service, using the K8s service names as URLs: http://api-stateful or http://api-stateless.
I want to declare a VirtualService to intercept requests sent from the front service to the stateful service. I don't attach it to a gateway, as I understood a gateway sits at the external border of the K8s cluster.
I use Docker on Windows with Istio 1.6.
I copy my YAML file below. The basic test I want to do: reroute traffic for api-stateful to api-stateless, to validate that the VirtualService is taken into account. And it does not work. Do you see what is wrong? Is it a wrong usage of VirtualService? My Kiali console does not detect any problem in the setup.
####################################################################
######################### STATEFUL BACKEND #########################
# Deployment for pocbackend containers, listening on port 3000
apiVersion: apps/v1
kind: Deployment
metadata:
  name: stateful-deployment
spec:
  replicas: 3
  selector:
    matchLabels:
      app: stateful-backend
      tier: backend
  template:
    metadata:
      labels:
        app: stateful-backend
        tier: backend
    spec:
      containers:
      - name: pocbackend
        image: pocbackend:2.0
        ports:
        - name: http
          containerPort: 3000
---
# Service for Stateful containers, listening on port 3000
apiVersion: v1
kind: Service
metadata:
  name: api-stateful
spec:
  selector:
    app: stateful-backend
    tier: backend
  ports:
  - protocol: TCP
    port: 3002
    targetPort: http
---
#####################################################################
######################### STATELESS BACKEND #########################
# Deployment for pocbackend containers, listening on port 3000
apiVersion: apps/v1
kind: Deployment
metadata:
  name: stateless-backend
spec:
  replicas: 3
  selector:
    matchLabels:
      app: stateless-backend
      tier: backend
  template:
    metadata:
      labels:
        app: stateless-backend
        tier: backend
    spec:
      containers:
      - name: pocbackend
        image: pocbackend:2.0
        ports:
        - name: http
          containerPort: 3000
---
# Service for Stateless containers, listening on port 3000
apiVersion: v1
kind: Service
metadata:
  name: api-stateless
spec:
  selector:
    app: stateless-backend
    tier: backend
  ports:
  - protocol: TCP
    port: 3001
    targetPort: http
---
#############################################################
######################### FRONT END #########################
# deployment of the container pocfrontend listening to port 3500
apiVersion: apps/v1
kind: Deployment
metadata:
  name: front-deployment
spec:
  replicas: 2
  selector:
    matchLabels:
      app: frontend
      tier: frontend
  template:
    metadata:
      labels:
        app: frontend
        tier: frontend
    spec:
      containers:
      - name: pocfrontend
        image: pocfrontend:2.0
        ports:
        - name: http
          containerPort: 3500
---
# Service exposing frontend on node port 30000
apiVersion: v1
kind: Service
metadata:
  name: frontend-service
spec:
  type: NodePort
  selector:
    app: frontend
    tier: frontend
  ports:
  - protocol: TCP
    port: 3500
    targetPort: http
    nodePort: 30000
---
##############################################################
############ ISTIO PROXY FOR API-STATEFUL SERVICE ############
##############################################################
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: api-stateful-proxy
spec:
  hosts:
  - api-stateful
  http:
  - route:
    - destination:
        host: api-stateless
As mentioned in the comments, this can be fixed with a DestinationRule with a sticky-session configuration.
An example of that can be found in the Istio documentation:
LoadBalancerSettings
Load balancing policies to apply for a specific destination. See Envoy’s load balancing documentation for more details.
For example, the following rule uses a round robin load balancing policy for all traffic going to the ratings service.
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: bookinfo-ratings
spec:
  host: ratings.prod.svc.cluster.local
  trafficPolicy:
    loadBalancer:
      simple: ROUND_ROBIN
The following example sets up sticky sessions for the same ratings service, using a consistent-hash load balancer with the user cookie as the hash key.
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: bookinfo-ratings
spec:
  host: ratings.prod.svc.cluster.local
  trafficPolicy:
    loadBalancer:
      consistentHash:
        httpCookie:
          name: user
          ttl: 0s
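Applied to the setup in the question, a minimal sketch could look like the following (the resource name is hypothetical; api-stateful is the Service from the question):
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: api-stateful-sticky        # hypothetical name
spec:
  host: api-stateful
  trafficPolicy:
    loadBalancer:
      consistentHash:
        httpCookie:
          # requests carrying the same cookie value keep going to the same pod
          name: user
          ttl: 0s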
I'm running a Kubernetes cluster which consists of three nodes and works brilliantly, but it's time to make my web application secure, so I deployed an ingress controller (Traefik). However, I was unable to find instructions for setting up HTTPS on it. I know most of the things I will have to do, like setting up a "secret" (container with certs), etc., but I was wondering how to configure my ingress controller and all the files related to it so I would be able to use a secure connection.
I have already configured the ingress controller and created some frontends and backends. I have also configured the nginx server (it's actually the web application I'm running) to work on port 443.
My web application deployment
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
spec:
  strategy:
    type: Recreate
  selector:
    matchLabels:
      app: nginx
  replicas: 3 # tells deployment to run 3 pods matching the template
  template: # create pods using pod definition in this template
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: ilchub/my-nginx
        ports:
        - containerPort: 443
      tolerations:
      - key: "primary"
        operator: Equal
        value: "true"
        effect: "NoSchedule"
Traefik ingress controller deployment code
kind: Deployment
apiVersion: extensions/v1beta1
metadata:
  name: traefik-ingress
  namespace: kube-system
  labels:
    k8s-app: traefik-ingress-lb
spec:
  replicas: 1
  selector:
    matchLabels:
      k8s-app: traefik-ingress-lb
  template:
    metadata:
      labels:
        k8s-app: traefik-ingress-lb
        name: traefik-ingress-lb
    spec:
      serviceAccountName: traefik-ingress
      terminationGracePeriodSeconds: 60
      containers:
      - image: traefik
        name: traefik-ingress-lb
        ports:
        - name: http
          containerPort: 80
        - name: https
          containerPort: 443
        - name: admin
          containerPort: 8080
        args:
        - --api
        - --kubernetes
        - --logLevel=INFO
Ingress for traefik dashboard
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: traefik-web-ui
  namespace: kube-system
spec:
  rules:
  - host: cluster.aws.ctrlok.dev
    http:
      paths:
      - path: /
        backend:
          serviceName: traefik-web-ui
          servicePort: web
Config related to external exposure
kind: Service
apiVersion: v1
metadata:
  name: traefik-ingress-service
  namespace: kube-system
spec:
  selector:
    k8s-app: traefik-ingress-lb
  ports:
  - protocol: TCP
    port: 80
    nodePort: 30036
    name: web
  - protocol: TCP
    port: 443
    nodePort: 30035
    name: secure
  - protocol: TCP
    port: 8080
    nodePort: 30034
    name: admin
  type: NodePort
What I want to do is secure my application, which is already running. The final result has to be a web page served over HTTPS.
Actually, you have three ways to configure Traefik to use HTTPS to communicate with backend pods:
If the service port defined in the ingress spec is 443 (note that you can still use targetPort to use a different port on your pod).
If the service port defined in the ingress spec has a name that starts with https (such as https-api, https-web or just https).
If the ingress spec includes the annotation ingress.kubernetes.io/protocol: https.
If any of those configuration options exists, then the backend communication protocol is assumed to be TLS, and Traefik will connect via TLS automatically.
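For example, a minimal sketch of the second option, assuming a Service in front of the nginx Deployment above (the Service name is hypothetical):
apiVersion: v1
kind: Service
metadata:
  name: my-nginx-svc       # hypothetical name
spec:
  selector:
    app: nginx
  ports:
  - name: https            # a port name starting with "https" makes Traefik use TLS towards the backend
    port: 443
    targetPort: 443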
Additional authentication annotations can also be added to the Ingress object, like:
ingress.kubernetes.io/auth-tls-secret: secret
And of course, add a TLS certificate to the Ingress.
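A minimal sketch of what that could look like, assuming a TLS secret named nginx-tls and a hypothetical host (the Service name refers to the sketch above):
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: nginx-ingress                        # hypothetical name
  annotations:
    ingress.kubernetes.io/protocol: https    # talk TLS to the backend pods
spec:
  tls:
  - hosts:
    - myapp.example.com                      # hypothetical host
    secretName: nginx-tls                    # hypothetical secret containing tls.crt / tls.key
  rules:
  - host: myapp.example.com
    http:
      paths:
      - path: /
        backend:
          serviceName: my-nginx-svc          # hypothetical Service from the sketch above
          servicePort: https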
For learning purposes, I have two services in a cluster on Google Cloud:
An API service with the following k8s config:
deployment.yaml
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: myapp-api
  labels:
    app: myapp-api
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: myapp-api
    spec:
      containers:
      - image: gcr.io/prefab-basis-213412/myapp-api:0.0.1
        name: myapp-api
        ports:
        - containerPort: 3000
service.yaml
kind: Service
apiVersion: v1
metadata:
  name: myapp-api
spec:
  selector:
    app: myapp-api
  ports:
  - protocol: TCP
    port: 80
    targetPort: 3000
And a second service, called frontend, that's publicly exposed:
deployment.yaml
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: myappfront-deployment
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: myappfront
    spec:
      containers:
      - image: gcr.io/prefab-basis-213412/myappfront:0.0.11
        name: myappfront-deployment
        ports:
        - containerPort: 3000
service.yaml
apiVersion: v1
kind: Service
metadata:
  name: myappfront-service
spec:
  type: LoadBalancer
  selector:
    app: myappfront
  ports:
  - port: 80
    targetPort: 3000
The front service is basically a Node.js app that only does a REST call to the API service, like so: axios.get('http://myapp-api').
The issue is that the call is failing and it's unable to find the requested endpoint. Is there any additional config that I'm currently missing for the API service to be discovered?
P.S. Both services are running and I can connect to them by proxying from localhost.
Since you're able to hit the services when proxying, it sounds like you've already gone through most of the debugging steps for in-cluster issues (https://kubernetes.io/docs/tasks/debug-application-cluster/debug-service/). That suggests the problem might not be within the cluster. Something to watch out for with frontends is where the HTTP call takes place. It could be in the server with Node, but given you're seeing this issue I'd suggest that it's in the browser. (If you can see the call in the browser console then it is.) If the frontend's call is being made in the browser, then it doesn't have access to the Kubernetes DNS and the cluster-internal service name won't resolve. To handle this you could make the backend service a LoadBalancer and pass the external name into the frontend.
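As a minimal sketch of that approach, the question's myapp-api Service could be exposed with its own external IP, which the frontend would then call instead of the cluster-internal name (for example via an environment variable):
kind: Service
apiVersion: v1
metadata:
  name: myapp-api
spec:
  type: LoadBalancer       # gives the API an external IP the browser can reach
  selector:
    app: myapp-api
  ports:
  - protocol: TCP
    port: 80
    targetPort: 3000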