Keycloak cluster production setup on Kubernetes - Google Kubernetes Engine (GKE)

I am trying to deploy Keycloak onto Kubernetes Engine in HA (cluster) mode.
I am deploying it behind an Ingress with TLS so it can be accessed externally.
The TLS setup was pretty straightforward, so that part is done.
I placed the manifest files here
https://github.com/vsomasvr/keycloak-gke/tree/master/keycloak
The issue is that the Keycloak pods do not form a cluster, so Keycloak does not function and authentication itself fails.
The manifests work well with a single replica (which is not a cluster, so that is not helpful, and I am not interested in sticky-session related config).
I think this is the crucial problem to be solved for a production Keycloak installation.
Any help is greatly appreciated.

There is a blogpost on this here.
The only things I needed to do were the following:
1) Create your own Docker image
FROM jboss/keycloak:latest
ADD cli/JDBC_PING.cli /opt/jboss/tools/cli/jgroups/discovery/
The JDBC_PING.cli can be found here
2) Update your Deployment with an extra env var:
- name: JGROUPS_DISCOVERY_PROTOCOL
  value: "JDBC_PING"
This did the job for me with 2 replicas on GKE.
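For context, here is a minimal sketch of how the relevant part of the Deployment might look with that env var and two replicas. The image name is a placeholder for wherever you push the custom image built above, and your existing DB-related env vars still apply:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: keycloak
spec:
  replicas: 2                  # two Keycloak pods forming the cluster
  selector:
    matchLabels:
      app: keycloak
  template:
    metadata:
      labels:
        app: keycloak
    spec:
      containers:
      - name: keycloak
        image: gcr.io/my-project/keycloak-jdbc-ping:latest   # placeholder for the custom image built above
        env:
        - name: JGROUPS_DISCOVERY_PROTOCOL
          value: "JDBC_PING"
        # keep your existing DB_VENDOR / DB_ADDR / credential env vars here
        ports:
        - containerPort: 8080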

Related

How to setup basic auth for Prometheus deployed on K8s cluster using yamls?

I was able to achieve this easily when Prometheus was deployed locally on a host using the tar file. But when it is deployed as a pod in a K8s cluster, I have tried almost everything on the internet with no luck.
Any kind of help would be really appreciated!
Thanks!
I'm not sure why the approach from the official documentation would work only in a VM and not in a container, but if it truly does not work, you can put a web server in front of the Prometheus web interface and set up authentication on it.
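One possible sketch, assuming the cluster already runs the NGINX ingress controller and that a Secret named prometheus-basic-auth (created from an htpasswd file, stored under the data key auth) exists; the hostname and Service name are placeholders:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: prometheus
  namespace: monitoring
  annotations:
    nginx.ingress.kubernetes.io/auth-type: basic
    nginx.ingress.kubernetes.io/auth-secret: prometheus-basic-auth   # Secret with an "auth" key in htpasswd format
    nginx.ingress.kubernetes.io/auth-realm: "Authentication Required"
spec:
  ingressClassName: nginx
  rules:
  - host: prometheus.example.com            # placeholder hostname
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: prometheus-server         # placeholder Service name
            port:
              number: 9090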

How to add external GCP loadbalancer to kubespray cluster?

I deployed a kubernetes cluster on Google Cloud using VMs and Kubespray.
Right now, I am looking to expose a simple Node app on an external IP using a LoadBalancer, but assigning my external IP from gcloud to the service does not work. It stays in a pending state when I query kubectl get services.
According to this, kubespray does not have any load balancer mechanism included/integrated by default. How should I proceed?
Let me start off by summarizing the problem we are trying to solve here.
The problem is that you have a self-hosted Kubernetes cluster and you want to be able to create a Service of type=LoadBalancer, and you want k8s to create an LB for you with an external IP in a fully automated way, just like it would if you used GKE (a Kubernetes-as-a-service solution).
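For reference, this is the kind of Service manifest we want to work end to end once the steps below are in place (the app name and ports are placeholders):
apiVersion: v1
kind: Service
metadata:
  name: my-node-app            # placeholder name
spec:
  type: LoadBalancer           # should get an external IP automatically once the cloud provider is wired up
  selector:
    app: my-node-app
  ports:
  - port: 80
    targetPort: 3000           # placeholder container port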
Additionally, I have to mention that I don't know much about kubespray, so I will only describe all the steps that need to be done to make it work and leave the rest to you. If you want to make changes in the kubespray code, that's on you.
I did all my tests with a kubeadm cluster, but it should not be very difficult to apply them to kubespray.
All that has to be done can be summarized in 4 steps:
tagging the instances
enabling cloud-provider functionality
IAM and service accounts
additional info
Tagging the instances
All worker node instances on GCP have to be tagged with a unique tag that is the name of the instance; these tags are later used to create firewall rules and target lists for the LB. So let's say that you have an instance called worker-0; you need to tag that instance with the tag worker-0.
Otherwise it will result in an error (that can be found in controller-manager logs):
Error syncing load balancer: failed to ensure load balancer: no node tags supplied and also failed to parse the given lists of hosts for tags. Abort creating firewall rule
Enabling cloud-provider functionality
K8s has to be informed that it is running in a cloud and which cloud provider it is, so that it knows how to talk to the API. Otherwise you will see controller manager logs informing you that it won't create an LB:
WARNING: no cloud provider provided, services of type LoadBalancer will fail
The Controller Manager is responsible for the creation of a LoadBalancer. It can be passed the flag --cloud-provider. You can manually add this flag to the controller manager pod manifest file; or, in your case since you are running kubespray, you can add this flag somewhere in the kubespray code (maybe it's already automated and just requires you to set some env var or similar, but you need to find that out yourself).
Here is what this file looks like with the flag:
apiVersion: v1
kind: Pod
metadata:
  labels:
    component: kube-controller-manager
    tier: control-plane
  name: kube-controller-manager
  namespace: kube-system
spec:
  containers:
  - command:
    - kube-controller-manager
    ...
    - --cloud-provider=gce # <----- HERE
As you can see, the value in our case is gce, which stands for Google Compute Engine. It informs k8s that it is running on GCE/GCP.
IAM and service accounts
Now that you have your provider enabled, and tags covered, I will talk about IAM and permissions.
For k8s to be able to create an LB in GCE, it needs to be allowed to do so. Every GCE instance has a default service account assigned. The Controller Manager uses the instance's service account, stored within the instance metadata, to access the GCP API.
For this to happen you need to set Access Scopes for GCE instance (master node; the one where controller manager is running) so it can use Cloud Engine API.
Access scopes -> Set access for each API -> compute engine=Read Write
To do this, the instance has to be stopped, so stop it now. It is better to set these scopes during instance creation so that you don't need to take this extra step later.
You also need to go to the IAM & Admin page in the GCP Console and add permissions so that the master instance's service account has the Kubernetes Engine Service Agent role assigned. This is a predefined role that has many more permissions than you probably need, but I have found that everything works with this role, so I decided to use it for demonstration purposes; you probably want to follow the least-privilege rule.
Additional info
There is one more thing I need to mention. It does not impact you but while testing I have found out an interesting thing.
Firstly, I created a one-node cluster (a single master node). Even though this is allowed from the k8s point of view, the controller manager would not allow me to create an LB and point it to the master node where my application was running. The conclusion is that one cannot use an LB with only a master node and has to create at least one worker node.
PS
I had to figure this out the hard way: by looking at logs, changing things, and looking at logs again to see if the issue was solved. I didn't find a single article/documentation page where all of this is documented in one place. If you manage to solve it for yourself, write up the answer for others. Thank you.

Does the Istio Envoy proxy sidecar have anything to do with the container filesystem?

Recently I was adding Istio to my Kubernetes cluster. When enabling Istio for one of the namespaces where a MongoDB StatefulSet was deployed, MongoDB failed to start up.
The error message was "keyfile permissions too open"
When I analyzed what was going on, I found that the keyfile comes from /etc/secrets-volume, which is mounted into the StatefulSet from a Kubernetes Secret.
The file permissions were 440 instead of 400. Because of this, MongoDB started to complain that the permissions were too open and the pod went into CrashLoopBackOff.
When I disable Istio injection in that namespace, MongoDB starts fine.
What's going on here? Does Istio have anything to do with the container filesystem, especially default permissions?
Istio sidecar injection is not meant for all kinds of containers, as mentioned in the Istio documentation guide. Such containers should be excluded from Istio sidecar injection.
In the case of databases that are deployed using StatefulSets, some of the containers might be temporary or used as operators, which can end up in a crash loop or other problematic states.
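As a hedged sketch, one way to exclude a particular workload from injection even in a labeled namespace is the sidecar.istio.io/inject annotation on the pod template; the names and image tag below are placeholders:
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: mongodb                          # placeholder name
spec:
  serviceName: mongodb
  replicas: 1
  selector:
    matchLabels:
      app: mongodb
  template:
    metadata:
      labels:
        app: mongodb
      annotations:
        sidecar.istio.io/inject: "false"   # opt this pod out of automatic sidecar injection
    spec:
      containers:
      - name: mongodb
        image: mongo:4.2                   # placeholder image/tag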
There is also an alternative approach: do not inject the databases at all and just add them as external services with ServiceEntry objects. There is an entire blog post in the Istio documentation on how to do that specifically with MongoDB. The guide is a little outdated, so be sure to refer to the current documentation page for ServiceEntry, which also has examples of using an external MongoDB.
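As a rough sketch of that alternative, a ServiceEntry for a MongoDB instance running outside the mesh might look like this; the host is a placeholder:
apiVersion: networking.istio.io/v1beta1
kind: ServiceEntry
metadata:
  name: external-mongodb
spec:
  hosts:
  - mongodb.example.com          # placeholder external hostname
  location: MESH_EXTERNAL        # the database lives outside the mesh
  ports:
  - number: 27017
    name: mongo
    protocol: MONGO
  resolution: DNS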
Hope it helps.

Deploy DB+Proxy+SSL with kubernetes

I have very little knowledge of how Kubernetes works and I'm trying to learn. I have some difficulty understanding how I can use Kubernetes to deploy my DB (CouchDB), the reverse proxy (nginx) and the SSL certificate (Let's Encrypt with certbot-auto).
I run CentOS 8 and have installed podman for the containers. I can install each one in a different container within the same pod and I can make them communicate properly.
What I don’t understand is how can I use kubernetes to properly deploy all of these containers and scale them in a cluster.
My questions are the following:
Where should I start to make Kubernetes work with these three components? Should I install the three containers first with their configuration? (The DB can be configured to handle clusters, but my understanding is that Kubernetes handles clustering, so I'm wondering if I have to configure the DB for the cluster and hence install two nodes.)
Should I install Let's Encrypt with certbot? I don't understand how Kubernetes can deploy new pods and have them work with Let's Encrypt automatically configured.
If anyone can give me the steps to get this done it would be really great...I just don’t really know where to start and the docs and tutorials are a bit confusing.
I think you need to deploy two applications, one for your DB and one for nginx; for your certificates, there are different methods to get Let's Encrypt working on Kubernetes.
For Let's Encrypt and nginx, these two articles could help you get some insight into what you need to do: Nginx & LetsEncrypt, and Let's Encrypt on Kubernetes.
For CouchDB, the article CouchDB on Kubernetes may help you; it mentions NFS as storage, but you can use your own.
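One widely used approach, not named in the answer above and offered here purely as a hedged sketch, is cert-manager, which automates Let's Encrypt certificates for Ingress resources. Assuming cert-manager is installed in the cluster, a ClusterIssuer might look like this; the email is a placeholder:
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-prod
spec:
  acme:
    server: https://acme-v02.api.letsencrypt.org/directory
    email: admin@example.com                 # placeholder; replace with your own email
    privateKeySecretRef:
      name: letsencrypt-prod-account-key     # Secret where the ACME account key is stored
    solvers:
    - http01:
        ingress:
          class: nginx                       # assumes an nginx ingress controller handles HTTP-01 challenges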

Rancher connect to kubernetes instead of start kubernetes

Rancher is designed (as best as I can tell) to own and run a Kubernetes cluster. Rancher does provide a configuration so that kubectl can interact with the Kubernetes cluster. Rancher seems like a nice tool. But as far as I can tell, there is no way to connect to an existing Kubernetes cluster. Is there any way to do this?
If you are looking for a service that can connect to an existing k8s cluster (or clusters), then try Containership. You can use kubectl and/or the Containership UI to manage your workloads, config maps, etc. on multiple clusters.
Hope this helps!
I got this answer on the rancher forums
There is not, most of the value we can add at the moment is around configuring, managing, and controlling access to the installation we setup.
https://forums.rancher.com/t/rancher-connect-to-kubernetes-instead-of-start-kubernetes/3209