Using Google IAM for GKE service web access - kubernetes

I am hosting an application on GKE and would like to let users from my organization access this application from the web, logging in with their Google Account IAM credentials.
Is there a way to configure a service exposing the cluster's web endpoint such that, to access this service, the user simply needs to log in with their Google account?
For example, when testing a service I can easily do a web-preview in the cloud-shell and then access the web application in my browser.
Is there a way to configure this such that any users authorized in my organization can access the web interface of my application?
(Note, I asked the same question on DevOps but I feel like that site is not yet as active as it should be so I ask here as well)

Okay, I managed to make it work perfectly, but it took a few steps. I am including the manifest required to set up IAP using an Ingress. It requires a few things, which I listed in the manifest below. Hopefully this can help others, since I could not find a single source that had all of this put together. Essentially all you need to do is run kubectl apply -f secure-ingress.yaml to make everything work (as long as you have all the dependencies), and then you just need to configure your IAP as you like it.
secure-ingress.yaml
# Configure IAP security using ingress automatically
# requirements: kubernetes version at least 1.10.5-gke.3
# requirements: service must respond with 200 at / endpoint (the healthcheck)
# dependencies: need certificate secret my-secret-cert
# dependencies: need OAuth client secret my-secret-oath (with my.domain.com configured)
# dependencies: need external IP address my-external-ip
# dependencies: need domain my.domain.com to point to my-external-ip IP
# dependencies: need an app (deployment/statefulset) my-app
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: my-secure-ingress
  namespace: default
  annotations:
    kubernetes.io/ingress.class: "gce"
    kubernetes.io/ingress.allow-http: "false"
    kubernetes.io/ingress.global-static-ip-name: my-external-ip
spec:
  tls:
  - secretName: my-secret-cert
  backend:
    serviceName: my-service-be-web
    servicePort: 1234
---
kind: Service
apiVersion: v1
metadata:
  name: my-service-be-web
  namespace: default
  annotations:
    beta.cloud.google.com/backend-config: '{"default": "my-service-be-conf"}'
spec:
  type: NodePort
  selector:
    app: my-app
  ports:
  - protocol: TCP
    port: 1234
    targetPort: 1234
    name: my-port-web
---
apiVersion: cloud.google.com/v1beta1
kind: BackendConfig
metadata:
  name: my-service-be-conf
  namespace: default
spec:
  iap:
    enabled: true
    oauthclientCredentials:
      secretName: my-secret-oath
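For completeness, the two secret dependencies listed at the top of the manifest can be created ahead of time. This is a sketch assuming you already have a TLS certificate pair and an OAuth client ID/secret from the GCP console; the file names and the CLIENT_ID/CLIENT_SECRET values are placeholders to substitute:

```shell
# TLS certificate secret referenced by the Ingress (my-secret-cert);
# tls.crt and tls.key are placeholder file names:
kubectl create secret tls my-secret-cert \
    --cert=tls.crt --key=tls.key

# OAuth client secret referenced by the BackendConfig (my-secret-oath);
# IAP expects the keys client_id and client_secret:
kubectl create secret generic my-secret-oath \
    --from-literal=client_id=CLIENT_ID \
    --from-literal=client_secret=CLIENT_SECRET
```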

Related

How to use the Google Managed Certificate for GKE Kubernetes

I have HTTPS working for the frontend of flytime.io (Cloud Run).
Now I want HTTPS support for the backend (multi-cluster ingress, GKE Autopilot), intending to use the api.flytime.io domain, following this manual:
https://cloud.google.com/kubernetes-engine/docs/how-to/multi-cluster-ingress#https_support
But how should I configure the PATH_TO_KEYFILE and PATH_TO_CERTFILE from the manual (or are there other ways to do this)? If using a Google-managed certificate is not possible (why?), how do I generate a certificate for the host name api.flytime.io and obtain PATH_TO_KEYFILE and PATH_TO_CERTFILE?
If you are using GKE managed certificates, you don't use the secret method for setting up SSL in your Ingress. You have to create a ManagedCertificate object and then use the object's name in your Ingress in the networking.gke.io/managed-certificates annotation.
Here's an example. First, create the ManagedCertificate object.
apiVersion: networking.gke.io/v1
kind: ManagedCertificate
metadata:
  name: managed-cert
spec:
  domains:
  - DOMAIN_NAME1  # <===== must be a valid domain that you own
Now, reference this in your Ingress as follows:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: managed-cert-ingress
  annotations:
    kubernetes.io/ingress.global-static-ip-name: ADDRESS_NAME
    networking.gke.io/managed-certificates: managed-cert  # <=== HERE IS YOUR CERT
    kubernetes.io/ingress.class: "gce"
spec:
  defaultBackend:
    service:
      name: mc-service
      port:
        number: SERVICE_PORT
You can find more information on this docs page.
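If provisioning seems slow or stuck, you can watch the certificate's state from the cluster; this sketch assumes the ManagedCertificate name used above:

```shell
# Check provisioning progress of the managed certificate; the status
# moves from Provisioning to Active once DNS points at the Ingress IP
# and Google has issued the certificate.
kubectl describe managedcertificate managed-cert
```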

Traefik IngressRoute. How to direct traffic to only one pod?

I have an ingressroute configuration:
apiVersion: traefik.containo.us/v1alpha1
kind: IngressRoute
metadata:
  name: demo-rr-camunda-bpm-platform-app-ingress-route1
  namespace: bpm
spec:
  entryPoints:
  - bpm
  routes:
  - kind: Rule
    match: PathPrefix(`/bpm/demo-rr-camunda-bpm-platform-app`)
    services:
    - kind: Service
      name: demo-rr-camunda-bpm-platform-app-service1
      port: 5000
  tls:
    secretName: k8s-code-ru-tls
With this configuration, when two replicas are running, the login page is served from one pod while the authorization request lands on the other pod (and fails). Both the login page and the authorization URL need to be handled by the same pod.
If I'm not mistaken, the solution to this problem is stickiness: https://doc.traefik.io/traefik/routing/services/
If so, I cannot figure out how to apply this configuration to the existing IngressRoute described above:
## Dynamic configuration
http:
  services:
    my-service:
      loadBalancer:
        sticky:
          cookie: {}
You should use a Kubernetes Service, which will track your Deployment (with 2 replicas) using labels. The Service will do the job for you, and the IngressRoute will serve the two replicas.
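For the stickiness question itself: the file-provider snippet quoted from the docs has, if I'm not mistaken, a direct equivalent in Traefik's Kubernetes CRD provider, where sticky can be set on the service reference inside the IngressRoute. A sketch based on the question's own manifest (the cookie name bpm-sticky is an arbitrary choice):

```yaml
apiVersion: traefik.containo.us/v1alpha1
kind: IngressRoute
metadata:
  name: demo-rr-camunda-bpm-platform-app-ingress-route1
  namespace: bpm
spec:
  entryPoints:
  - bpm
  routes:
  - kind: Rule
    match: PathPrefix(`/bpm/demo-rr-camunda-bpm-platform-app`)
    services:
    - kind: Service
      name: demo-rr-camunda-bpm-platform-app-service1
      port: 5000
      sticky:              # pin each client to one pod via a cookie
        cookie:
          name: bpm-sticky # arbitrary cookie name
  tls:
    secretName: k8s-code-ru-tls
```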

Isolated apps in Kubernetes for end users

I have a traditional server app stack:
database
app (Ruby)
microservice (Node)
The app is available at https://example.com
My users want isolated personal apps (for high availability) with full database access via a connection string.
So we need per-user server app stacks:
- personal (isolated) databases
- personal (isolated) apps
- personal (isolated) microservices
Apps must be available at http://cloud.example.com/userX, where userX is the user's login.
I think each user should have their own namespace, so the personal database, application, and microservice belong to that namespace.
Currently I also have one Ingress (in namespace kube-public) for all users' apps:
# ? apiVersion: networking.k8s.io/v1beta1
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: mgrs
  namespace: kube-public
  annotations:
    kubernetes.io/ingress.class: "nginx"
    cert-manager.io/cluster-issuer: "letsencrypt-prod"
spec:
  tls:
  - hosts:
    - cloud.example.com
    secretName: cloud-tls
  rules:
  - host: cloud.example.com
    http:
      paths:
      - path: /user1
        backend:
          serviceName: user1-service
          servicePort: 80
      - path: /user2
        backend:
          serviceName: user2-service
          servicePort: 80
      ...
How is this possible with Kubernetes? Do I perhaps need a separate Ingress for each user?
Or would it be easier to use hostnames like userX.example.com instead of cloud.example.com/userX?
One approach is to use a single Nginx as a dynamic proxy to the services; for that you could add a ConfigMap that dynamically routes to the user's service.
If you use one namespace and put the user name in the service name, you should use this config:
apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-config-dns-file
data:
  nginx.conf: |
    server {
      listen 80;
      resolver kube-dns.kube-system.svc.cluster.local valid=5s;
      location ~ /(.*) {
        proxy_pass http://$1-service.default.svc.cluster.local;
      }
    }
If you use one namespace per user and put the user name in the service name you should use something like this config:
apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-config-dns-file
data:
  nginx.conf: |
    server {
      listen 80;
      resolver kube-dns.kube-system.svc.cluster.local valid=5s;
      location ~ /(.*) {
        proxy_pass http://$1-service.$1.svc.cluster.local;
      }
    }
Another possibility, aligned with this one, is to use an Nginx ingress controller and take advantage of Nginx's capabilities as an ingress controller, applying some configuration to achieve what you want.
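Concretely, with the ingress-nginx controller and one namespace per user, each user can get their own Ingress in their own namespace instead of the shared one in kube-public. A sketch (the names user1 and user1-service are hypothetical, matching the pattern in the question):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: user1-ingress
  namespace: user1
  annotations:
    # strip the /user1 prefix before the request reaches the app
    nginx.ingress.kubernetes.io/rewrite-target: /$2
spec:
  ingressClassName: nginx
  rules:
  - host: cloud.example.com
    http:
      paths:
      - path: /user1(/|$)(.*)
        pathType: ImplementationSpecific
        backend:
          service:
            name: user1-service
            port:
              number: 80
```

Since each Ingress lives in the user's namespace, it can reference that namespace's Service directly, and provisioning a new user is just applying one more manifest.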

What's the easiest way to add TLS to a Kubernetes service?

I have a simple web server exposed publicly on Kubernetes on GKE and a domain registered. I'm looking to add TLS to this so it's accessible via HTTPS. I've heard a lot about using Let's Encrypt and ended up attempting this: https://github.com/jetstack/cert-manager/blob/master/docs/tutorials/acme/quick-start/index.rst but found it totally overwhelming. Is there a simpler approach to using Let's Encrypt, given that my deployment is just a single service and pod?
The config I'm using is:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
  labels:
    app: web
spec:
  replicas: 1
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: web
        image: gcr.io/my-repo
        ports:
        - containerPort: 8080
        livenessProbe:
          httpGet:
            path: /healthz
            port: 8080
        readinessProbe:
          initialDelaySeconds: 10
          httpGet:
            path: /healthz
            port: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: web-balancer-service
spec:
  ports:
  - port: 8080
    protocol: TCP
    targetPort: 8080
  selector:
    app: web  # must match the pod labels of the Deployment above
  type: NodePort
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: my-ingress-app
spec:
  rules:
  - host: my.domain.com
    http:
      paths:
      - path: /*
        backend:
          serviceName: web-balancer-service
          servicePort: 8080
========================================
EDIT: Following #Utku Özdemir's suggestion I tried to codify those changes into YAML. I created the IP address with
gcloud compute addresses create example-ip-address --global
And the certificate and provisioning with: https://gist.github.com/nickponline/ab74d3d179e21474551b7596c6478eea
Everything provisions correctly, but when I inspect the ManagedCertificate with kubectl describe ManagedCertificates example-certificate it says:
Spec:
  Domains:
    app.domain.xyz
Status:
  Certificate Name:    xxxxxxxxxxxxxxxxxx
  Certificate Status:  Provisioning
  Domain Status:
    Domain:  app.domain
    Status:  FailedNotVisible
Events:  <none>
I've waited 24 hours so assume that this isn't going to change.
Since you use GKE's own ingress controller, creating an Ingress resource triggers the creation of a Load Balancer resource in Google Cloud Platform. Normally, SSL termination is the responsibility of the ingress controller, so that GCP load balancer does the SSL termination.
This means cert-manager will not work for your case, since the certificates live outside of your cluster, and the traffic is already SSL-terminated before it reaches your cluster.
Luckily, GCP has self-provisioned SSL (Let's Encrypt) support. To make use of it, you need to follow the steps below:
Go to the Load Balancing screen on GCP, switch to advanced view, and jump to the Certificates tab (or simply click here).
Create a new SSL certificate, with "Create Google-managed certificate" selected. In the domain field, write down the exact domain you want the SSL certificate for.
Go to the External IP Addresses screen, and reserve a new static IP address. Choose the type to be global (at the time of writing, the GCP ingress controller only supports global IP addresses).
Take the static IP that you reserved (in this example it is 34.95.84.106).
Go to your domain registrar, and add an A record for your domain (the one in the SSL certificate) pointing to the static IP you allocated. In this example, it would be my-app.example.com -> 34.95.84.106.
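The console steps above can also be sketched with gcloud, assuming the resource names my-ssl-certificate and my-static-ip and the example domain my-app.example.com:

```shell
# Google-managed SSL certificate (the domain is an example)
gcloud compute ssl-certificates create my-ssl-certificate \
    --domains=my-app.example.com --global

# Global static IP for the load balancer
gcloud compute addresses create my-static-ip --global

# Print the reserved address, for the DNS A record
gcloud compute addresses describe my-static-ip --global \
    --format='value(address)'
```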
Finally, you will need to edit your ingress to put 2 annotations, so it will hint the Google Cloud's Ingress Controller to use the static IP you reserved, and the certificate you created. See the ingress example below:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: my-ingress-app
  annotations:
    ingress.gcp.kubernetes.io/pre-shared-cert: my-ssl-certificate  # the name of the SSL certificate resource you created
    kubernetes.io/ingress.global-static-ip-name: my-static-ip  # the name of the static IP resource you created
    kubernetes.io/ingress.allow-http: "false"  # if you want to block plain HTTP
spec:
  rules:
  - host: my-app.example.com
    http:
      paths:
      - path: /*
        backend:
          serviceName: web-balancer-service
          servicePort: 8080
Apply it, and verify that the changes are reflected by going to the Load Balancers screen on GCP.
Important Notes:
If there already is a GCP Load Balancer that was created by an Ingress, the changes (annotations) you make on the Ingress will not be reflected in the existing load balancer. Therefore, delete your existing Ingress, make sure the existing load balancer disappears, and create the Ingress with the correct annotations, so the load balancer is configured correctly.
For the Let's Encrypt provisioning to work, your DNS record should be in place. It checks the owner of the domain using DNS before issuing the certificate. Also, the initial provisioning can take quite some time (up to half an hour).

Kubernetes / External access from pod in GKE

I am new to Kubernetes, and I created pods with the following YAML file:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp-act
  namespace: default
  labels:
    app: myapp
spec:
  replicas: 3
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
      - name: myapp
        image: myapp:latest
        envFrom:
        - configMapRef:
            name: map-myapp
The issue is that myapp is trying to query other apps which are located in my Google project (as GCE machines) but are not part of the GKE cluster, without success.
I.e. the issue is that I can't connect to an internal IP outside the cluster. I also tried to create a service, but it didn't fix the issue. All the information I found is about how to expose my cluster to the world, but this is the opposite direction.
What am I missing?
the issue is that I can't connect to the internal IP outside the
cluster.
What you are missing is called Ingress, I believe.
Ingress, added in Kubernetes v1.1, exposes HTTP and HTTPS routes from
outside the cluster to services within the cluster.
You can find more details and complete docs here.
Update: As you pointed out, Ingress is a beta feature, but you can successfully use it if you are OK with the limitations. Most likely you are; just go through the list. "Deployed on the master" means, in my understanding, that the ingress controller runs on the k8s master node, a fact that should not normally bother you. What should you define next?
1. First, you need to define a service which targets the pods in your deployment. It seems that you haven't done that yet, have you?
2. Then, in the next step, you create the Ingress, as shown in the docs, e.g.:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: test-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - http:
      paths:
      - path: /testpath
        backend:
          serviceName: your-service-name
          servicePort: 80
Here your-service-name is the name of the service that you have already defined in step 1).
After you have done all this, the backend service will be available outside of the cluster at a URL similar to: https://.service..com
In this case you should create a Service with an associated Endpoints object, like this:
kind: Endpoints
apiVersion: v1
metadata:
  name: mongo
subsets:
- addresses:
  - ip: 10.240.0.4
  ports:
  - port: 27017
---
kind: Service
apiVersion: v1
metadata:
  name: mongo
spec:
  type: ClusterIP
  ports:
  - port: 27017
    targetPort: 27017
Please refer to this GCP blog post, which describes in detail the Kubernetes best practices for mapping external services that live outside your cluster.
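As a variant on the Endpoints approach above: if the external dependency is reachable by a stable DNS name rather than a fixed IP, the ExternalName pattern from that same blog post is simpler (the hostname here is hypothetical):

```yaml
kind: Service
apiVersion: v1
metadata:
  name: mongo
spec:
  type: ExternalName
  # Pods resolving "mongo" get a CNAME to this external hostname;
  # no Endpoints object is needed in this case.
  externalName: mongo.internal.example.com
```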