How to set a custom theme for the master realm in Keycloak?

I have bundled a custom theme as part of a Keycloak Docker image. When the Keycloak cluster starts, the custom theme needs to be set in the master realm.
How can I configure the master realm to use the custom theme?

From the Admin Console, after you have deployed everything correctly, you just need to go to:
the master realm;
Realm Settings;
the Themes tab;
and then select your custom theme for each theme type (e.g., Login Theme) that you want to change.
This can also be done using the Keycloak Admin CLI:
./kcadm.sh update realms/master -s "loginTheme=<YOUR_CUSTOM_THEME>"
To use that script you have to authenticate yourself first:
./kcadm.sh config credentials --server <KEYCLOAK_HOST>/auth --realm master --user admin --password <ADMIN_PASSWORD>
You can add these commands to your Docker configuration. Another option is to override the folder that contains the base theme with your custom theme, so that the default theme becomes your custom theme. You can keep a copy of the base theme under a different name, so that you can still select it explicitly if needed.
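Putting the two CLI steps together, a small post-start script could be baked into the image. This is a sketch: the script filename, the theme name my-custom-theme, and the $ADMIN_PASSWORD variable are placeholders, not anything the answer prescribes.

```shell
# Sketch: write a post-start script that authenticates with kcadm.sh and then
# sets the master realm's themes. Theme name and $ADMIN_PASSWORD are assumptions.
cat > set-master-theme.sh <<'EOF'
#!/bin/sh
set -e
KCADM=/opt/jboss/keycloak/bin/kcadm.sh
# authenticate against the local server first
"$KCADM" config credentials --server http://localhost:8080/auth \
  --realm master --user admin --password "$ADMIN_PASSWORD"
# set the custom theme for the theme types you care about
"$KCADM" update realms/master \
  -s "loginTheme=my-custom-theme" \
  -s "accountTheme=my-custom-theme" \
  -s "adminConsoleTheme=my-custom-theme"
EOF
chmod +x set-master-theme.sh
```

The script has to run after the server is up, since kcadm.sh needs a live endpoint to authenticate against.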

With the official Keycloak images from Docker Hub you can pass an environment variable KEYCLOAK_DEFAULT_THEME to use a custom theme, provided it resides in /opt/jboss/keycloak/themes/ (note that bind-mount paths must be absolute, hence $(pwd)):
docker run -d -p 8080:8080 \
  -v "$(pwd)/my-realm.json:/tmp/my-realm.json" \
  -v "$(pwd)/my-awesome-theme:/opt/jboss/keycloak/themes/my-awesome-theme" \
  -e KEYCLOAK_DEFAULT_THEME=my-awesome-theme \
  -e KEYCLOAK_USER=admin \
  -e KEYCLOAK_PASSWORD=p#ssw0rd \
  -e KEYCLOAK_IMPORT=/tmp/my-realm.json \
  jboss/keycloak

Related

Nexus return 401 Unauthorized, after build image from Dockerfile

I'm new to Docker; I tried to google this issue, but found nothing.
I have to create a Nexus image from sonatype/nexus3 and change the password in the admin.password file after creating the image.
This is my Dockerfile:
FROM sonatype/nexus3
WORKDIR /nexus-data
RUN ["/bin/bash", "-c", "echo root >> admin.password"]
and when I check the file admin.password (docker exec <container> cat admin.password) I get this result:
root
And authorization works if I run a container from the sonatype/nexus3 image from Docker Hub (with the default UUID password).
What should I do?
I'm thinking that maybe I overwrote the admin profile or deleted it somehow?
The way it works is that the sonatype/nexus3 image contains an already installed version, and the random password has been written to admin.password. But that file is just a log of the generated password, not the source used to configure Nexus, so overwriting it does not change the actual credentials.
What you want to do has already been answered here: How to set admin user/pwd when launching Nexus docker image
Here is a detailed walkthrough to change the admin password from the CLI after starting a fresh nexus3 docker container. You can easily script that once you understand how it works.
Important note to clear up a possible misunderstanding: you don't build a nexus3 image containing predefined data like your admin password. You start a fresh image, which initializes fresh data when using an empty nexus-data volume, including a random admin password, and you use that password to change it to your own value.
Start a docker container from the official image. Note: this is a minimal, throwaway (i.e. --rm) setup just for the example. Read the documentation to secure your data.
docker run -d --rm --name testnexus -p 8081:8081 sonatype/nexus3:latest
Wait a bit for Nexus to start (you can check the logs with docker logs testnexus), then read the generated password into a variable:
CURRENT_PASSWORD=$(docker exec testnexus cat /nexus-data/admin.password)
Set the desired admin password in another variable:
NEW_PASSWORD=v3rys3cur3
Use the Nexus API to change the admin password:
curl -X PUT \
-u "admin:${CURRENT_PASSWORD}" \
-d "${NEW_PASSWORD}" \
-H 'accept: application/json' \
-H 'Content-Type: text/plain' \
http://localhost:8081/service/rest/v1/security/users/admin/change-password
Access the Nexus GUI in your browser at http://localhost:8081, log in with your newly changed password, enjoy.
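The steps above can be collected into one script. The filename and the readiness loop are my own additions (adjust the wait to your hardware); everything else mirrors the walkthrough:

```shell
# Sketch: the full password-change walkthrough as a single script.
# The filename and the until-loop are assumptions, not part of the answer above.
cat > change-nexus-password.sh <<'EOF'
#!/bin/sh
set -e
NEW_PASSWORD="${1:?usage: change-nexus-password.sh <new-password>}"
docker run -d --rm --name testnexus -p 8081:8081 sonatype/nexus3:latest
# wait until the generated admin.password file exists inside the container
until docker exec testnexus test -f /nexus-data/admin.password 2>/dev/null; do
  sleep 5
done
CURRENT_PASSWORD=$(docker exec testnexus cat /nexus-data/admin.password)
curl -sf -X PUT \
  -u "admin:${CURRENT_PASSWORD}" \
  -d "${NEW_PASSWORD}" \
  -H 'accept: application/json' \
  -H 'Content-Type: text/plain' \
  http://localhost:8081/service/rest/v1/security/users/admin/change-password
EOF
chmod +x change-nexus-password.sh
```

Run it as ./change-nexus-password.sh v3rys3cur3 once Docker is available.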

How to push docker image to ghcr.io organization

I am trying to push images I have built locally to the GitHub Container Registry, aka Packages.
I have authenticated to GitHub using a PAT and authorized access to the organization. Let's name this organization EXAMPLEORG.
I used the following command:
export CR_PAT=ghp_example_pat ; echo $CR_PAT | sudo docker login ghcr.io -u exampleuser --password-stdin
After that, I used the following command to push the image to ghcr.io:
docker push ghcr.io/exampleorg/exampleapp:v0.5
Unfortunately, I am getting this message after trying to upload image layers:
unauthorized: unauthenticated: User cannot be authenticated with the token provided.
Does somebody know what I am missing here?
Followed this guide:
https://docs.github.com/en/packages/working-with-a-github-packages-registry/working-with-the-container-registry#authenticating-to-the-container-registry
Is there something more I need to do in order to manually push an image to the organization's packages (I'm not interested in doing it from a workflow at the moment)?
Apparently, it was due to the wrong content of the ~/.docker/config.json file. The first command happened to fail while writing the file, so I used sudo to circumvent this. That did work, but the new file was then written to /root/.docker/config.json, which is not the desired outcome: docker commands run without sudo afterward will not read the config file from root's home.
The solution is not to use sudo; instead, delete ~/.docker/config.json and then execute:
export CR_PAT=ghp_example_pat ; echo $CR_PAT | docker login ghcr.io -u exampleuser --password-stdin
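To confirm the credentials landed in your own config rather than root's, you can inspect the file directly. This assumes the default config location (Docker also honors the DOCKER_CONFIG variable):

```shell
# Check whether a ghcr.io credential entry exists in the current user's
# Docker config (default location; DOCKER_CONFIG overrides it if set).
CONFIG="${DOCKER_CONFIG:-$HOME/.docker}/config.json"
if [ -f "$CONFIG" ] && grep -q 'ghcr.io' "$CONFIG"; then
  echo "ghcr.io login stored in $CONFIG"
else
  echo "no ghcr.io entry in $CONFIG yet"
fi
```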

Can't create user for ceph dashboard

I'm trying to create a user for the Ceph dashboard with the admin role. The version is Nautilus 14.2.19, deployed with a manual installation.
I've installed the dashboard module, installed all dependencies, and enabled it. My dashboard is reachable from the monitor IP on the default port 8443.
When I run the command:
ceph dashboard ac-user-create <user> <pw> administrator
I get the following error:
Please specify the file containing the password/secret with "-i" option.
After digging for information about this, it sounded like there must be a file in bcrypt format. Is there a default file created for this? Or, if I need to create one, how can I do it?
Never mind, it seems you just need to create a text file and write your password in it.
When you run the command like this:
ceph dashboard ac-user-create <user> -i /file/location administrator
It creates the user and stores the password in hashed form.
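In other words, something like this. The user name and password are placeholders, and the ceph call itself is commented out because it only works on a node with a running cluster:

```shell
# Create a throwaway file holding the plain-text password; ceph hashes it
# itself when creating the dashboard user.
PASSWORD_FILE=$(mktemp)
printf '%s' 'v3rys3cur3' > "$PASSWORD_FILE"   # no trailing newline
# On a monitor node with a running cluster (user name is hypothetical):
# ceph dashboard ac-user-create dashadmin -i "$PASSWORD_FILE" administrator
# Remove the file afterwards: rm -f "$PASSWORD_FILE"
```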

Adding a Second Service with AWS Copilot

I'm very familiar with doing all of this (quite tedious) stuff manually in ECS.
I'm experimenting with Copilot, which is really working: I have one service up really easily, but my solution has multiple services/containers.
How do I now add a second service/container to my cluster?
Short answer: change to your second service's code directory and run copilot init again! If you need to specify a different dockerfile, you can use the --dockerfile flag. If you need to use an existing image, you can use --image with the name of an existing container registry.
Long answer:
Copilot stores metadata in SSM Parameter Store in the account which was used to run copilot app init or copilot init, so as long as you don't change the AWS credentials you're using when you run Copilot, everything should just work when you run copilot init in a new repository.
Some other use cases:
If it's an existing image like redis or postgres and you don't need to customize anything about the actual image or expose it, you can run
copilot init -t Backend\ Service --image redis --port 6379 --name redis
If your service lives in a separate code repository and needs to access the internet, you can cd into that directory and run
copilot init --app $YOUR_APP_NAME --type Load\ Balanced\ Web\ Service --dockerfile ./Dockerfile --port 1234 --name $YOUR_SERVICE_NAME --deploy
So all you need to do is run copilot init --app $YOUR_APP_NAME with the same AWS credentials in a new directory, and you'll be able to set up and deploy your second service.
Copilot also allows you to set up persistent storage associated with a given service by using the copilot storage init command. This specifies a new DynamoDB table or S3 bucket, which will be created when you run copilot svc deploy. It will create one storage addon per environment you deploy the service to, so as not to mix test and production data.
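For example, attaching an S3 bucket to an existing service might look like this. The service and bucket names are made up, and the command-v guard is only there so the snippet is safe to paste on a machine without the CLI installed:

```shell
# Hypothetical storage addon for a service named "api": -n is the resource
# name, -t the storage type (S3 or DynamoDB), -w the owning workload.
if command -v copilot >/dev/null 2>&1; then
  copilot storage init -n media-bucket -t S3 -w api
  # the addon is created on the next deploy of that service:
  copilot svc deploy --name api --env test
else
  echo "copilot CLI not installed; commands shown for reference"
fi
```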

Setting up realms in Keycloak during kubernetes helm install

I'm trying to get Keycloak set up as a Helm chart requirement to run some integration tests. I can bring it up and run it, but I can't figure out how to set up the realm and client I need. I've switched over to the 1.0.0 stable release that came out today:
https://github.com/kubernetes/charts/tree/master/stable/keycloak
I wanted to use the keycloak.preStartScript defined in the chart and use the /opt/jboss/keycloak/bin/kcadm.sh admin script to do this, but apparently by "pre start" they mean before the server is brought up, so kcadm.sh can't authenticate. If I leave out the keycloak.preStartScript I can shell into the keycloak container and run the kcadm.sh scripts I want to use after it's up and running, but they fail as part of the pre start script.
Here's my requirements.yaml for my chart:
dependencies:
  - name: keycloak
    repository: https://kubernetes-charts.storage.googleapis.com/
    version: 1.0.0
Here's my values.yaml file for my chart:
keycloak:
  keycloak:
    persistence:
      dbVendor: H2
      deployPostgres: false
    username: 'admin'
    password: 'test'
    preStartScript: |
      /opt/jboss/keycloak/bin/kcadm.sh config credentials --server http://localhost:8080/auth --realm master --user admin --password 'test'
      /opt/jboss/keycloak/bin/kcadm.sh create realms -s realm=foo -s enabled=true -o
      CID=$(/opt/jboss/keycloak/bin/kcadm.sh create clients -r foo -s clientId=foo -s 'redirectUris=["http://localhost:8080/*"]' -i)
      /opt/jboss/keycloak/bin/kcadm.sh get clients/$CID/installation/providers/keycloak-oidc-keycloak-json
  persistence:
    dbVendor: H2
    deployPostgres: false
A side annoyance is that I need to define the persistence settings in both places, or it either fails or brings up PostgreSQL in addition to Keycloak.
I tried this too and hit the same problem, so I have raised an issue. I prefer to use -Dimport with a realm .json file, but your points suggest a postStartScript option would make sense, so I've included both in the PR on that issue.
The Keycloak chart has been updated. Have a look at these PRs:
https://github.com/kubernetes/charts/pull/5887
https://github.com/kubernetes/charts/pull/5950
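With those PRs merged, the realm-setup commands from the question can move to a post-start hook along these lines. Treat this as a sketch against the updated chart, not verified syntax; check the PRs for the exact key name:

```yaml
keycloak:
  keycloak:
    # assumed post-start hook per the PRs above; runs once the server is up,
    # so kcadm.sh can authenticate
    postStartScript: |
      /opt/jboss/keycloak/bin/kcadm.sh config credentials --server http://localhost:8080/auth --realm master --user admin --password 'test'
      /opt/jboss/keycloak/bin/kcadm.sh create realms -s realm=foo -s enabled=true
```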