I installed Keycloak using the bitnami/keycloak Helm chart (https://bitnami.com/stack/keycloak/helm).
As I'm also using the Prometheus Operator for monitoring, I enabled the metrics endpoint and the ServiceMonitor:
keycloak:
  ...
  metrics:
    enabled: true
    serviceMonitor:
      enabled: true
      namespace: monitoring
      additionalLabels:
        release: my-prom-operator-release
As I'm way more interested in actual realm metrics I installed the keycloak-metrics-spi provider (https://github.com/aerogear/keycloak-metrics-spi) by setting up an init container that downloads it to a shared volume.
keycloak:
  ...
  extraVolumeMounts:
    - name: providers
      mountPath: /opt/bitnami/keycloak/providers
  extraVolumes:
    - name: providers
      emptyDir: {}
  ...
  initContainers:
    - name: metrics-spi-provider
      image: SOME_IMAGE_WITH_WGET_INSTALLED
      imagePullPolicy: Always
      command:
        - sh
      args:
        - -c
        - |
          KEYCLOAK_METRICS_SPI_VERSION=2.5.2
          wget --no-check-certificate -O /providers/keycloak-metrics-spi-${KEYCLOAK_METRICS_SPI_VERSION}.jar \
            https://github.com/aerogear/keycloak-metrics-spi/releases/download/${KEYCLOAK_METRICS_SPI_VERSION}/keycloak-metrics-spi-${KEYCLOAK_METRICS_SPI_VERSION}.jar
          chmod +x /providers/keycloak-metrics-spi-${KEYCLOAK_METRICS_SPI_VERSION}.jar
          touch /providers/keycloak-metrics-spi-${KEYCLOAK_METRICS_SPI_VERSION}.jar.dodeploy
      volumeMounts:
        - name: providers
          mountPath: /providers
The provider enables metrics endpoints on the regular public-facing http port instead of the http-management port, which is not great for me. But I can block external access to them in my reverse proxy.
What I'm missing is some kind of auto-scraping of those endpoints. Right now I have created an additional template that creates a new ServiceMonitor for each element of a predefined list in my chart:
values.yaml
keycloak:
  ...
  metrics:
    extraServiceMonitors:
      - realmName: master
      - realmName: my-realm
servicemonitor-metrics-spi.yaml
{{- range $serviceMonitor := .Values.keycloak.metrics.extraServiceMonitors }}
---
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: {{ $.Release.Name }}-spi-{{ $serviceMonitor.realmName }}
  ...
spec:
  endpoints:
    - port: http
      path: /auth/realms/{{ $serviceMonitor.realmName }}/metrics
  ...
{{- end }}
Is there a better way of doing this, so that Prometheus can auto-detect all my realms and scrape their endpoints?
Thanks in advance!
As commented by @jan-garaj, there is no need to query all the endpoints. They all return the accumulated data of all realms, so it is enough to scrape the endpoint of a single realm (e.g. the master realm).
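For reference, a minimal static ServiceMonitor along those lines (a sketch only; the selector labels, namespaces and port name are assumptions and have to match your actual Keycloak release):

apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: keycloak-metrics-spi
  namespace: monitoring
  labels:
    release: my-prom-operator-release
spec:
  selector:
    matchLabels:
      app.kubernetes.io/name: keycloak   # label of the Keycloak Service (assumption)
  namespaceSelector:
    matchNames:
      - keycloak                         # namespace of the Keycloak Service (assumption)
  endpoints:
    - port: http
      path: /auth/realms/master/metrics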
Thanks a lot!
In case it helps someone: the Bitnami image, and therefore the Helm chart, already includes the metrics-spi provider. No further installation steps are needed, but metrics must be enabled in the values.
In my POD, I wanted to restrict ALL my containers to read-only file systems with
securityContext:
  readOnlyRootFilesystem: true
example (note: yaml reduced for brevity)
apiVersion: v1
kind: Pod
metadata:
  labels:
    run: server123
  name: server123
spec:
  securityContext:
    readOnlyRootFilesystem: true
  containers:
    - image: server1-image
      name: server1
    - image: server2-image
      name: server2
    - image: server3-image
      name: server3
this will result in:
error: error validating "server123.yaml": error validating data:
ValidationError(Pod.spec.securityContext): unknown field
"readOnlyRootFilesystem" in io.k8s.api.core.v1.PodSecurityContext; if
you choose to ignore these errors, turn validation off with
--validate=false
Instead I have to configure it as:
apiVersion: v1
kind: Pod
metadata:
  labels:
    run: server123
  name: server123
spec:
  containers:
    - image: server1-image
      name: server1
      securityContext:
        readOnlyRootFilesystem: true
    - image: server2-image
      name: server2
      securityContext:
        readOnlyRootFilesystem: true
    - image: server3-image
      name: server3
      securityContext:
        readOnlyRootFilesystem: true
Is there a way to set this security restriction ONCE for all containers?
If not, why not?
In Kubernetes you can configure a securityContext at the pod and/or container level; containers inherit the pod-level settings but can override them in their own securityContext.
The configuration options for pods and containers do not fully overlap, however; you can only set specific fields at each level:
Container level: https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.23/#securitycontext-v1-core
Pod level: https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.23/#podsecuritycontext-v1-core
It's not clearly documented what can be inherited and what cannot (and why!); you have to read through both lists and compare.
I would assume that a Pod's securityContext would allow, say, readOnlyRootFilesystem: true and various capabilities to be set once rather than replicated in each container's securityContext, but PodSecurityContext does not allow this.
That would be particularly useful when (re)configuring various workloads to adhere to PodSecurityPolicies.
I also wonder why a Pod's securityContext configuration is labelled as such, and not instead as podSecurityContext, which is what it actually represents.
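To illustrate the split, here is a minimal sketch (field placement taken from the two API references above): pod-level settings such as runAsNonRoot, runAsUser and fsGroup are inherited by every container, while readOnlyRootFilesystem and capabilities can only be set per container.

apiVersion: v1
kind: Pod
metadata:
  name: server123
spec:
  securityContext:               # PodSecurityContext: inherited by all containers
    runAsNonRoot: true
    runAsUser: 1000
    fsGroup: 2000
  containers:
    - image: server1-image
      name: server1
      securityContext:           # SecurityContext: container-only fields
        readOnlyRootFilesystem: true
        capabilities:
          drop: ["ALL"]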
This requirement calls for a policy implementation, which is possible using a PodSecurityPolicy. Please read here.
There is a dedicated restriction control specified: "Requiring the use of a read only root file system".
Note: PodSecurityPolicy is deprecated and will be removed in v1.25 in favour of a replacement (Pod Security admission), so please plan for it.
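As a sketch (the policy name is illustrative, and the other rules are left permissive on purpose), such a policy could look like this. Note that it only validates that every container sets readOnlyRootFilesystem: true; it does not inject the setting:

apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: readonly-rootfs
spec:
  readOnlyRootFilesystem: true   # reject pods whose containers do not set this
  privileged: false
  seLinux:
    rule: RunAsAny
  runAsUser:
    rule: RunAsAny
  supplementalGroups:
    rule: RunAsAny
  fsGroup:
    rule: RunAsAny
  volumes:
    - '*'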
I am really struggling with how my application, which is deployed in the dev namespace, can connect to a PostgreSQL database that I deployed independently with Helm into the database namespace. What I did so far is below.
The database and my app are deployed in different namespaces. I just copied the names PGHOST and PGPASSWORD from some examples, but I am not sure where I should use these names and whether they have to match something on the PostgreSQL side.
Should I take care of anything else to connect to the database, or is there anything here that is not best practice? Should I add a namespace to the JDBC URL?
Locally we connect to the database using the parameters below, but what should the connection look like after we deploy our application via Helm? We are using Sequelize as the client library.
const connectionString = `postgres://${global.config.database_username}:${global.config.database_password}@${global.config.database_host}:${global.config.database_port}/${global.config.database_name}`;
postgres values
## Specify PGDATABASE
##
DBName: db
After I deployed Postgres:
# of replicas: 3
service name: my-postgres-postgresql-helm
service port: 64000
database name: db
database user: admin
jdbc url: jdbc:postgresql://my-postgres-postgresql-helm:port
deployment.yaml
- name: PGHOST
  valueFrom:
    configMapKeyRef:
      name: {{ .Release.Name }}-configmap
      key: jdbc-url
- name: PGDATABASE
  value: {{ .Values.postgres.database name | quote }}
- name: PGPASSWORD
  value: "64000"
- name: POSTGRES_PASSWORD
  valueFrom:
    secretKeyRef:
      name: {{ template "my-mp.name" . }}
      key: POSTGRES_PASSWORD
configmaps.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: {{ .Release.Name }}-configmap
  labels:
    app.kubernetes.io/name: {{ include "my-mp.name" . }}
    app.kubernetes.io/instance: {{ .Release.Name }}
    app.kubernetes.io/managed-by: {{ .Release.Service }}
    helm.sh/chart: {{ include "my-mp.chart" . }}
data:
  jdbc-url: jdbc:postgresql://my-postgres-postgresql-helm..
values.yaml
postgres:
  service name: my-postgres-postgresql-helm
  service port: 64000
  database name: db
  database user: admin
Is the JDBC URL in your question (jdbc url: jdbc:postgresql://my-postgre) a typo? You have mentioned that the service name is my-postgres-postgresql-helm, so the JDBC URL should be something like jdbc:postgresql://my-postgres-postgresql-helm.database. Note the .database appended to the service name! Since your application pod is running in a different namespace, you should append the namespace name to the service name. Had they been in the same namespace, you wouldn't need it.
Now, if that doesn't fix it, this is how I would debug the issue if I were you:
Check if there are any NetworkPolicies that add restrictions at the namespace level, i.e. allowing traffic only between specific namespaces or even pods, which may prevent the traffic from your application pod from reaching your Postgres pod.
Make sure the Service for your Postgres pod is correct. That is, describing the Service should list the pod's IP under Endpoints. If not, check the Service's label selector and make sure it uses the same labels as the Postgres pod.
Exec into your application pod and check whether it is able to resolve the Service through nslookup using the service name, that is my-postgres-postgresql-helm.database.
If all these tests are positive and working (example commands sketched below), then it is most probably some other configuration issue. Let me know if this fixes your issue, and good luck.
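A rough sketch of those checks with kubectl (the pod name my-app and the namespaces dev and database are assumptions taken from the question):

# Service endpoints should list the Postgres pod IP
kubectl -n database describe service my-postgres-postgresql-helm

# Any NetworkPolicies restricting cross-namespace traffic?
kubectl -n database get networkpolicy
kubectl -n dev get networkpolicy

# DNS resolution from the application pod
kubectl -n dev exec -it my-app -- nslookup my-postgres-postgresql-helm.database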
If I understand correctly, you have the database and the app in different namespaces, and the point of namespaces is to isolate.
If you really need to access it, you can use the auto-generated DNS entry servicename.namespace.svc.cluster.local.
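Applied to the names from the question (service my-postgres-postgresql-helm in the database namespace, port 64000 and database db, all taken from the values above), that would be something like:

jdbc:postgresql://my-postgres-postgresql-helm.database.svc.cluster.local:64000/db
postgres://admin:<password>@my-postgres-postgresql-helm.database.svc.cluster.local:64000/db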
I'm trying to write a simple Ansible playbook that can execute an arbitrary command against a pod (container) running in a Kubernetes cluster.
I would like to utilise the kubectl connection plugin: https://docs.ansible.com/ansible/latest/plugins/connection/kubectl.html but I'm struggling to figure out how to actually do that.
Couple of questions:
Do I need to have an inventory for k8s defined first? Something like: https://docs.ansible.com/ansible/latest/plugins/inventory/k8s.html. My understanding is that I would define the kube config via the inventory, which would be used by the kubectl plugin to actually connect to the pods and perform a specific action.
If yes, is there any example of an arbitrary command executed via the kubectl plugin (and not via the shell plugin that invokes kubectl on some remote machine, which is not what I'm looking for)?
I'm assuming that, during the ansible-playbook invocation, I would point to k8s inventory.
Thanks.
I would like to utilise kubectl connection plugin: https://docs.ansible.com/ansible/latest/plugins/connection/kubectl.html but having struggle to figure out how to actually do that.
The fine manual describes how one uses connection plugins, and while it is possible to use it in tasks, that is unlikely to make any sense unless your inventory started with Pods.
The way I have seen that connection used is to start by identifying the Pods against which you might want to take action, and then run a playbook against a unique group for that purpose:
- hosts: all
  tasks:
    - set_fact:
        # this is *just an example for brevity*
        # in reality you would use `k8s:` or `kubectl get -o name pods -l my-selector=my-value` to get the pod names
        pod_names:
          - nginx-12345
          - nginx-3456
    - add_host:
        name: '{{ item }}'
        groups:
          - my-pods
      with_items: '{{ pod_names }}'

- hosts: my-pods
  connection: kubectl
  tasks:
    # and now you are off to the races
    - command: ps -ef
    # watch out if the Pod doesn't have a working python installed,
    # as you will have to use raw: instead
    # (and, of course, disable fact gathering with "gather_facts: no")
    - raw: ps -ef
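If you need to point the kubectl connection at a specific kubeconfig, namespace or container, the plugin exposes host variables for that; a sketch (the values are assumptions, see the plugin documentation linked in the question):

- add_host:
    name: '{{ item }}'
    groups:
      - my-pods
    ansible_connection: kubectl
    ansible_kubectl_kubeconfig: /path/to/kubeconfig
    ansible_kubectl_namespace: my-namespace
    ansible_kubectl_container: app   # only needed for multi-container pods
  with_items: '{{ pod_names }}'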
First install the Kubernetes collection:
ansible-galaxy collection install community.kubernetes
And here is a playbook; it will list all pods in a namespace and run a command in every pod.
---
- hosts: localhost
  vars_files:
    - vars/main.yaml
  collections:
    - community.kubernetes
  tasks:
    - name: Get the pods in the specific namespace
      k8s_info:
        kubeconfig: '{{ k8s_kubeconfig }}'
        kind: Pod
        namespace: test
      register: pod_list

    - name: Print pod names
      debug:
        msg: "pod_list: {{ pod_list | json_query('resources[*].status.podIP') }}"

    - set_fact:
        pod_names: "{{ pod_list | json_query('resources[*].metadata.name') }}"

    - k8s_exec:
        kubeconfig: '{{ k8s_kubeconfig }}'
        namespace: "{{ namespace }}"
        pod: "{{ item.metadata.name }}"
        command: apt update
      with_items: "{{ pod_list.resources }}"
      register: exec
      loop_control:
        label: "{{ item.metadata.name }}"
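The playbook expects a vars/main.yaml that provides the kubeconfig path and the namespace used by k8s_exec; a minimal sketch (the path is an assumption):

# vars/main.yaml
k8s_kubeconfig: ~/.kube/config
namespace: test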
Maybe you can use it like this:
- shell: |
    kubectl exec -i -n {{ namespace }} {{ pod_name }} -- bash -c 'clickhouse-client --query "INSERT INTO customer FORMAT CSV"
    --user=test --password=test < /mnt/azure/azure/test/test.tbl'
As per the latest documentation, you can use the k8s modules from the kubernetes.core collection.
The following are some examples:
- name: Create a k8s namespace
  kubernetes.core.k8s:
    name: testing
    api_version: v1
    kind: Namespace
    state: present

- name: Create a Service object from an inline definition
  kubernetes.core.k8s:
    state: present
    definition:
      apiVersion: v1
      kind: Service
      metadata:
        name: web
        namespace: testing
        labels:
          app: galaxy
          service: web
      spec:
        selector:
          app: galaxy
          service: web
        ports:
          - protocol: TCP
            targetPort: 8000
            name: port-8000-tcp
            port: 8000

- name: Remove an existing Service object
  kubernetes.core.k8s:
    state: absent
    api_version: v1
    kind: Service
    namespace: testing
    name: web
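These modules require the kubernetes.core collection and the Python Kubernetes client to be installed first, e.g.:

ansible-galaxy collection install kubernetes.core
pip install kubernetes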
I'm trying to set up GCR with Kubernetes
and I'm getting Error: ErrImagePull
Failed to pull image "eu.gcr.io/xxx/nodejs": rpc error: code = Unknown desc = Error response from daemon: pull access denied for eu.gcr.io/xxx/nodejs, repository does not exist or may require 'docker login'
This happens although I have set up the secret correctly in the service account and added imagePullSecrets in the deployment spec.
deployment.yml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  annotations:
    kompose.cmd: kompose convert
    kompose.version: 1.18.0 (06a2e56)
  creationTimestamp: null
  labels:
    io.kompose.service: nodejs
  name: nodejs
spec:
  replicas: 1
  strategy: {}
  template:
    metadata:
      creationTimestamp: null
      labels:
        io.kompose.service: nodejs
    spec:
      containers:
        - env:
            - name: MONGO_DB
              valueFrom:
                configMapKeyRef:
                  key: MONGO_DB
                  name: nodejs-env
            - name: MONGO_HOSTNAME
              value: db
            - name: MONGO_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: mongo-secret
                  key: MONGO_PASSWORD
            - name: MONGO_PORT
              valueFrom:
                configMapKeyRef:
                  key: MONGO_PORT
                  name: nodejs-env
            - name: MONGO_USERNAME
              valueFrom:
                secretKeyRef:
                  name: mongo-secret
                  key: MONGO_USERNAME
          image: "eu.gcr.io/xxx/nodejs"
          name: nodejs
          imagePullPolicy: Always
          ports:
            - containerPort: 8080
          resources: {}
      imagePullSecrets:
        - name: gcr-json-key
      initContainers:
        - name: init-db
          image: busybox
          command: ['sh', '-c', 'until nc -z db:27017; do echo waiting for db; sleep 2; done;']
      restartPolicy: Always
status: {}
I used this to add the secret, and it said it was created:
kubectl create secret docker-registry gcr-json-key --docker-server=eu.gcr.io --docker-username=_json_key --docker-password="$(cat mycreds.json)" --docker-email=mygcpemail@gmail.com
How can I debug this? Any ideas are welcome!
It looks like the issue is caused by a lack of permissions on the related service account
XXXXXXXXXXX-compute@XXXXXX.gserviceaccount.com, which is missing the Editor role.
Also, if we want to restrict the scope and assign permissions only to push and pull images from Google Kubernetes Engine, this account will need the appropriate storage permission (e.g. Storage Object Viewer for pulls), which can be assigned by following the instructions mentioned in this article [1].
Additionally, to set the read-write storage scope when creating a Google Kubernetes Engine cluster, use the --scopes option to include the "storage-rw" scope [2].
[1] https://cloud.google.com/container-registry/docs/access-control
[2] https://cloud.google.com/container-registry/docs/using-with-google-cloud-platform#google-kubernetes-engine
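As a sketch of [1] (project ID, project number and bucket are placeholders; eu.gcr.io images live in the eu.artifacts.<PROJECT-ID>.appspot.com bucket), granting pull access to the node service account could look like:

gsutil iam ch \
  serviceAccount:<PROJECT-NUMBER>-compute@developer.gserviceaccount.com:objectViewer \
  gs://eu.artifacts.<PROJECT-ID>.appspot.com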
If the VM instance for pushing or pulling images and the Container Registry storage bucket are in the same Google Cloud Platform project, the Compute Engine default service account is configured with appropriate permissions to push or pull images.
If the VM instance is in a different project or if the instance uses a different service account, you must configure access to the storage bucket used by the repository.
By default, a Compute Engine VM has the read-only access scope configured for storage buckets. To push private Docker images, your instance must have read-write storage access scope configured as described in Access scopes.
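For completeness, a sketch of setting that read-write scope at cluster creation time (cluster name and zone are placeholders):

gcloud container clusters create my-cluster \
  --zone europe-west1-b \
  --scopes=storage-rw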
Please see the Container Registry access-control documentation (linked above) for further reference.
The permission needed for pulling is summarized in the table below:
Action             Permissions                                  Role                          Role Title
Pull (Read Only)   storage.objects.get, storage.objects.list    roles/storage.objectViewer    Storage Object Viewer
Also, please share the error output if you run into trouble with any of these steps.
I'm using a script to run a Helm command which upgrades my k8s deployment.
Previously I used kubectl to deploy directly; since I've moved to Helm and started using charts, I see an error on the k8s pods after deploying:
MountVolume.SetUp failed for volume "secret" : invalid character '\r' in string literal
My script looks similar to:
value1="foo"
value2="bar"
helm upgrade deploymentName --debug --install --atomic --recreate-pods --reset-values --force --timeout 900 pathToChartDir --set value1 --set value2
The deployment.yaml is as following:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: deploymentName
spec:
  selector:
    matchLabels:
      run: deploymentName
  replicas: 2
  template:
    metadata:
      labels:
        run: deploymentName
        app: appName
    spec:
      containers:
        - name: deploymentName
          image: {{ .Values.image.acr.registry }}/{{ .Values.image.name }}:{{ .Values.image.tag }}
          volumeMounts:
            - name: secret
              mountPath: /secrets
              readOnly: true
          ports:
            - containerPort: 1234
          env:
            - name: DOTENV_CONFIG_PATH
              value: "/secrets/env"
      volumes:
        - name: secret
          flexVolume:
            driver: "azure/kv"
            secretRef:
              name: "kvcreds"
            options:
              usepodidentity: "false"
              tenantid: {{ .Values.tenantid }}
              subscriptionid: {{ .Values.subsid }}
              resourcegroup: {{ .Values.rg }}
              keyvaultname: {{ .Values.kvname }}
              keyvaultobjecttype: secret
              keyvaultobjectname: {{ .Values.objectname }}
As can be seen, the error relates to the secret volume and its values.
I've triple-checked that there is no line break or anything like that in the values.
I've run helm lint - no errors found.
I've run helm template - nothing strange or missing in output.
Update:
I've copied the output of helm template and put it in a deploy.yaml file.
I then used kubectl apply -f deploy.yaml to deploy the service manually, and... it works.
That makes me think it's actually some kind of bug in Helm. Does that make sense?
Update 2:
I've also tried replacing the azure/kv volume with an emptyDir volume, and I was able to deploy using Helm. It looks like a specific issue between Helm and the azure/kv volume?
Any ideas for a workaround?
To be completely correct, I have to say that the actual details of your \r problem might be different from mine.
I found the issue in my case by looking in the kv log of the AKS node (/var/log/kv-driver.log). In my case, the error was:
Original Error: autorest/azure: Service returned an error. Status=403 Code="Forbidden" Message="Access denied. Caller was not found on any access policy.\r\n
You can learn how to SSH into the node on this page:
https://learn.microsoft.com/en-us/azure/aks/ssh
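Once on the node, a quick way to look for the underlying error in that log (the path matches the one above; the grep pattern is just an example):

sudo tail -n 100 /var/log/kv-driver.log
sudo grep -i "error" /var/log/kv-driver.log | tail -n 20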
If you want to follow the solution, I opened an issue:
https://github.com/Azure/kubernetes-keyvault-flexvol/issues/121