Setting up realms in Keycloak during kubernetes helm install - keycloak

I'm trying to set up Keycloak as a Helm chart requirement in order to run some integration tests. I can bring it up and run it, but I can't figure out how to set up the realm and client I need. I've switched over to the 1.0.0 stable release that came out today:
https://github.com/kubernetes/charts/tree/master/stable/keycloak
I wanted to use the keycloak.preStartScript hook defined in the chart and run the /opt/jboss/keycloak/bin/kcadm.sh admin script from it, but by "pre start" they apparently mean before the server is brought up, so kcadm.sh can't authenticate. If I leave out keycloak.preStartScript I can shell into the Keycloak container and run the same kcadm.sh commands once the server is up and running, but as part of the pre-start script they fail.
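For example, running the commands by hand like this works once the pod is ready (the pod name here is just a placeholder for whatever the release creates):
kubectl exec -it my-release-keycloak-0 -- /bin/bash   # pod name is a placeholder
# inside the container, the same commands from the pre-start script succeed:
/opt/jboss/keycloak/bin/kcadm.sh config credentials --server http://localhost:8080/auth --realm master --user admin --password 'test'
/opt/jboss/keycloak/bin/kcadm.sh create realms -s realm=foo -s enabled=true -o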
Here's my requirements.yaml for my chart:
dependencies:
  - name: keycloak
    repository: https://kubernetes-charts.storage.googleapis.com/
    version: 1.0.0
Here's my values.yaml file for my chart:
keycloak:
  keycloak:
    persistence:
      dbVendor: H2
      deployPostgres: false
    username: 'admin'
    password: 'test'
    preStartScript: |
      /opt/jboss/keycloak/bin/kcadm.sh config credentials --server http://localhost:8080/auth --realm master --user admin --password 'test'
      /opt/jboss/keycloak/bin/kcadm.sh create realms -s realm=foo -s enabled=true -o
      CID=$(/opt/jboss/keycloak/bin/kcadm.sh create clients -r foo -s clientId=foo -s 'redirectUris=["http://localhost:8080/*"]' -i)
      /opt/jboss/keycloak/bin/kcadm.sh get clients/$CID/installation/providers/keycloak-oidc-keycloak-json
  persistence:
    dbVendor: H2
    deployPostgres: false
Also, a side annoyance is that I need to define the persistence settings in both places, or it either fails or brings up PostgreSQL in addition to Keycloak.

I tried this too and hit the same problem, so I have raised an issue. I prefer to use -Dimport with a realm .json file, but your points suggest a postStartScript option would make sense, so I've included both in the PR on that issue.
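For reference, the import approach looks roughly like this (a sketch, assuming the legacy WildFly-based Keycloak distribution and that you mount a realm export such as /realm/foo-realm.json into the container yourself):
# start the server with a realm import (the path is a placeholder you provide via a volume/ConfigMap):
/opt/jboss/keycloak/bin/standalone.sh -Dkeycloak.import=/realm/foo-realm.json
# the jboss/keycloak image also accepts the same path through the KEYCLOAK_IMPORT environment variable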

The Keycloak chart has since been updated. Have a look at these PRs:
https://github.com/kubernetes/charts/pull/5887
https://github.com/kubernetes/charts/pull/5950
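If the updated chart exposes a post-start hook as discussed above, the kcadm.sh commands from the question would move there largely unchanged. A sketch (the postStartScript key name follows the discussion above and is an assumption, so check the chart's values):
keycloak:
  keycloak:
    # postStartScript is assumed to be the hook name added by the PRs above
    postStartScript: |
      /opt/jboss/keycloak/bin/kcadm.sh config credentials --server http://localhost:8080/auth --realm master --user admin --password 'test'
      /opt/jboss/keycloak/bin/kcadm.sh create realms -s realm=foo -s enabled=true -o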

Related

Add Trino dataset to Apache Superset

I currently have Trino deployed in my Kubernetes cluster using the official Trino (trinodb) Helm chart, and I deployed Apache Superset the same way.
With Trino port-forwarded to 8080 and Superset to 8088, I can access both UIs from localhost, and I can also query Trino from the command-line client using:
./trino --server http://localhost:8080
I don't have any authentication set up.
MySQL is set up correctly as a Trino catalog.
The problem comes when I try to add Trino as a dataset source for Superset using either of the following SQLAlchemy URIs:
trino://trino@localhost:8080/mysql
trino://localhost:8080/mysql
When I test the connection from Superset UI, I get the following error:
ERROR: Could not load database driver: TrinoEngineSpec
Please advise how I could solve this issue.
You should install sqlalchemy-trino to make the trino driver available.
Add these lines to your values.yaml file:
additionalRequirements:
  - sqlalchemy-trino

bootstrapScript: |
  #!/bin/bash
  pip install sqlalchemy-trino &&\
  if [ ! -f ~/bootstrap ]; then echo "Running Superset with uid {{ .Values.runAsUser }}" > ~/bootstrap; fi
If you want more details about the problem, see this GitHub issue.
I added two options that do the same thing because in some chart versions additionalRequirements doesn't work, and you may need the bootstrapScript option to install the driver instead.
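After updating values.yaml, roll the change out and then point Superset at Trino with the usual URI form (the release and chart names below are assumptions; adjust them to your install):
# re-deploy Superset with the updated values (release/chart names are assumptions):
helm upgrade superset superset/superset -f values.yaml
# then, in the Superset UI, the SQLAlchemy URI for a Trino catalog has the form:
#   trino://<user>@<trino-host>:8080/mysql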

Creating MongoDB through Helm with Gitlab fails

I followed the instructions here to create a MongoDB instance using Helm. This requires some adaptation as it's being created through GitLab with gitlab-ci.
Unfortunately, the part about values.yaml gets skimmed over, and I haven't found complete examples for MongoDB through Helm. (Many of the examples also seemed deprecated.)
Being unsure how to address this, I'm using this as a values.yaml file:
global:
  mongodb:
    DBName: 'example'

mongodbUsername: "user"
mongodbPassword: "password"
mongodbDatabase: "database"
mongodbrootPassword: "password"
auth.rootPassword: "password"
The following error is returned:
'auth.rootPassword' must not be empty, please add '--set auth.rootPassword=$MONGODB_ROOT_PASSWORD' to the command. To get the current value:
export MONGODB_ROOT_PASSWORD=$(kubectl get secret --namespace angular-23641052-review-86-sh-mousur review-86-sh-mousur-mongodb -o jsonpath="{.data.mongodb-root-password}" | base64 --decode)
Given that I'm only using Helm as called by gitlab-ci, I am unsure how to pass --set or otherwise set the root password.
From what I could tell, setting the variable under env in values.yaml had solved the problem:
env:
  - name: "auth.rootPassword"
    value: "password"
The problem went away, but it has since returned.
GitLab is moving towards using .gitlab/auto-deploy-values.yaml.
I created a local version of autodeploy.sh and added auth.rootPassword="password" to auto-deploy-values.yaml, and that seems to work; cf. GitLab's documentation.
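For reference, the values file ended up looking roughly like this (a sketch; the nesting assumes the chart reads auth.rootPassword, as the error message suggests):
# .gitlab/auto-deploy-values.yaml (sketch; nesting assumes the chart reads auth.rootPassword)
auth:
  rootPassword: "password"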

Helm Postgres password authentication failed

I installed the Bitnami PostgreSQL Helm chart, using the example shown in the README:
helm install my-db \
--namespace dar \
--set postgresqlPassword=secretpassword,postgresqlDatabase=my-database \
bitnami/postgresql
Then, following the instructions in the blurb printed after a successful installation, I forward port 5432 and try to connect:
PGPASSWORD="secretpassword" psql --host 127.0.0.1 -U postgres -d my-database -p 5432
But I get the following error:
psql: error: could not connect to server: FATAL: password authentication failed for user "postgres"
How can this be? Is the Helm chart buggy?
Buried deep in the stable/postgresql issue tracker is the source of this very-hard-to-debug problem.
When you run helm uninstall ... it errs on the side of caution and doesn't delete the storage associated with the database you got when you first ran helm install ....
This means that once you've installed Postgres via Helm, the credentials stay whatever they were on the first install, regardless of what the post-installation blurb tells you.
To fix this, you have to manually remove the persistent volume claim (PVC) which will free up the database storage.
kubectl delete pvc data-my-db-postgresql-0
(Or whatever the PVC associated with your initial Helm install was named.)
Now a subsequent helm install ... will create a brand-new PVC and login can proceed as expected.
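Put together, the recovery sequence looks roughly like this (release, namespace, and PVC names are the ones from this question; substitute your own):
helm uninstall my-db --namespace dar
kubectl delete pvc --namespace dar data-my-db-postgresql-0   # frees the old database storage
helm install my-db \
  --namespace dar \
  --set postgresqlPassword=secretpassword,postgresqlDatabase=my-database \
  bitnami/postgresql
# after port-forwarding 5432 again:
PGPASSWORD="secretpassword" psql --host 127.0.0.1 -U postgres -d my-database -p 5432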

Executing kubectl on AWS CodeBuild against an Amazon EKS cluster does not work because of an authentication problem

kubectl version on CodeBuild prints an error:
[Container] 2019/08/26 04:07:32 Running command kubectl version
Client Version: version.Info{Major:"1", Minor:"15", GitVersion:"v1.15.0", GitCommit:"e8462b5b5dc2584fdcd18e6bcfe9f1e4d970a529", GitTreeState:"clean", BuildDate:"2019-06-19T16:40:16Z", GoVersion:"go1.12.5", Compiler:"gc", Platform:"linux/amd64"}
error: You must be logged in to the server (the server has asked for the client to provide credentials)
error: You must be logged in to the server (the server has asked for the client to provide credentials)
I'm using an Amazon EKS cluster.
It seems some authentication setup is missing?
What I did:
Set up a CodeBuild project (a new service role, codebuild-hoge-service-role, is created).
Added the eks:DescribeCluster permission to the role as an inline policy, because aws eks update-kubeconfig requires it.
Edited configmap/aws-auth to bind the role to RBAC, using kubectl edit -n kube-system configmap/aws-auth on my local machine and adding a new entry to mapRoles like:
mapRoles: |
  - rolearn: .....
  - rolearn: arn:aws:iam::999999999999:role/service-role/codebuild-hoge-service-role
    username: codebuild
    groups:
      - system:masters
That's all.
Not enough? Is there anything I missed?
I also tried another approach to debug this, and it worked:
Create an IAM user and an IAM role the user can switch to (assume).
Edit configmap/aws-auth and add config for the role (the same as in the failing setup).
Switch to the role locally and execute kubectl version. It worked!
buildspec.yml
version: 0.2
phases:
  install:
    runtime-versions:
      docker: 18
    commands:
      - curl -LO https://storage.googleapis.com/kubernetes-release/release/v1.15.0/bin/linux/amd64/kubectl
      - chmod +x ./kubectl
      - mv -f ./kubectl /usr/local/bin/kubectl
  pre_build:
    commands:
      - aws eks update-kubeconfig --name mycluster
      - kubectl version
  build:
    commands:
      - kubectl get svc -A
I had the same issue. I just had to create another role with the same trust relationship and policy as the original one, and that worked.
The only thing I did differently was not include the /service-role/ path, so the ARN looked like arn:aws:iam::123456789012:role/another-codebuild-role.
I was facing the same issue today. The answers provided here do work, but in essence all you need to do is remove the /service-role segment from the role ARN you put in the aws-auth ConfigMap. There is actually no need to create a separate role (in the IAM console you can keep the role with /service-role in its ARN; drop it only in the aws-auth ConfigMap).
Please see the following article for details: AWS Knowledge Center Article
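In other words, the mapRoles entry ends up looking something like this (account ID and role name taken from the question):
mapRoles: |
  # /service-role/ removed from the ARN, per the answers above
  - rolearn: arn:aws:iam::999999999999:role/codebuild-hoge-service-role
    username: codebuild
    groups:
      - system:masters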

How do I change Spinnaker configs after an installation with helm?

I'm new to using Spinnaker and Halyard. I'm following this guide by Google.
When installing Spinnaker, they use Helm and pass in a spinnaker-config.yaml file, like this:
./helm install -n cd stable/spinnaker -f spinnaker-config.yaml --timeout 600 \
--version 1.1.6 --wait
spinnaker-config.yaml:
export SA_JSON=$(cat spinnaker-sa.json)
export PROJECT=$(gcloud info --format='value(config.project)')
export BUCKET=$PROJECT-spinnaker-config
cat > spinnaker-config.yaml <<EOF
gcs:
  enabled: true
  bucket: $BUCKET
  project: $PROJECT
  jsonKey: '$SA_JSON'

dockerRegistries:
- name: gcr
  address: https://gcr.io
  username: _json_key
  password: '$SA_JSON'
  email: 1234@5678.com

# Disable minio as the default storage backend
minio:
  enabled: false

# Configure Spinnaker to enable GCP services
halyard:
  spinnakerVersion: 1.10.2
  image:
    tag: 1.12.0
  additionalScripts:
    create: true
    data:
      enable_gcs_artifacts.sh: |-
        \$HAL_COMMAND config artifact gcs account add gcs-$PROJECT --json-path /opt/gcs/key.json
        \$HAL_COMMAND config artifact gcs enable
      enable_pubsub_triggers.sh: |-
        \$HAL_COMMAND config pubsub google enable
        \$HAL_COMMAND config pubsub google subscription add gcr-triggers \
          --subscription-name gcr-triggers \
          --json-path /opt/gcs/key.json \
          --project [project_guid] \
          --message-format GCR
EOF
I need to add another pubsub subscription with a different name than gcr-triggers, and I noticed that anything I try to add in a pipeline won't persist. I suspect this is because it needs to be added with hal, like so:
(Note: I've already created and verified the gcloud subscription and the add-iam-policy-binding.)
hal config pubsub google subscription add [new_trigger] \
--subscription-name [new_trigger] \
--json-path /opt/gcs/key.json \
--project $PROJECT \
--message-format GCR
I suspect installing Spinnaker this way is somewhat unconventional (correct me if I'm wrong). I've never run a hal binary from the machine where I run kubectl, and that wasn't necessary in the guide. Spinnaker's architecture has a bunch of pods I can see; I've poked around in them and didn't find hal.
My question is: with this guide, how am I supposed to hal config new things? What's the normal way this is done?
Helm is a package manager, similar to apt in some Linux distros.
Because this is a microservice architecture running in Kubernetes, you must access the Halyard pod (actually it should be a StatefulSet).
Get Halyard Pod
export HALYARD=$(kubectl -n spinnaker get pod -l app=halyard -oname | cut -d'/' -f 2)
Access the Halyard pod in the spinnaker namespace: kubectl -n spinnaker exec -it ${HALYARD} /bin/bash
Test access by running hal config; you should get the full Spinnaker config.
After you make the changes you need, don't forget to run hal deploy apply.
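Putting it together for the pubsub change from the question (a sketch; the namespace and label selector are the ones used above, and [new_trigger] / $PROJECT come from the question):
export HALYARD=$(kubectl -n spinnaker get pod -l app=halyard -oname | cut -d'/' -f 2)
kubectl -n spinnaker exec -it ${HALYARD} -- /bin/bash

# inside the pod:
hal config pubsub google subscription add [new_trigger] \
  --subscription-name [new_trigger] \
  --json-path /opt/gcs/key.json \
  --project $PROJECT \
  --message-format GCR
hal deploy apply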