Keycloak exports only a single realm (the master)

I am trying to use the Keycloak export command:
standalone.sh -Dkeycloak.migration.action=export -Dkeycloak.migration.provider=dir -Dkeycloak.migration.dir=/opt/keycloak-export -Dkeycloak.migration.usersExportStrategy=DIFFERENT_FILES -Dkeycloak.migration.usersPerFile=100
As you can see, -Dkeycloak.migration.realmName is not specified, so the expectation is that all realms in Keycloak will be exported.
Interesting log lines:
10:45:49,873 INFO [org.keycloak.exportimport.dir.DirExportProvider] (ServerService Thread Pool -- 60) Exporting into directory /opt/keycloak-export
10:45:56,004 INFO [org.keycloak.exportimport.dir.DirExportProvider] (ServerService Thread Pool -- 60) Realm 'master' - data exported
10:45:56,060 INFO [org.keycloak.services] (ServerService Thread Pool -- 60) KC-SERVICES0035: Export finished successfully
Keycloak is running in a Kubernetes cluster, deployed with the Bitnami Helm chart.
I get a shell in the Keycloak pod using kubectl exec -it keycloak-pod -- bash

I had the same issue.
Add -c=standalone-ha.xml to your export command. Bitnami uses this config file for the database resource (an example of the combined command is below).
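For example, combined with the command from the question, the export invocation would look something like this (the export directory and per-file settings are the ones used above):
standalone.sh -c=standalone-ha.xml -Dkeycloak.migration.action=export -Dkeycloak.migration.provider=dir -Dkeycloak.migration.dir=/opt/keycloak-export -Dkeycloak.migration.usersExportStrategy=DIFFERENT_FILES -Dkeycloak.migration.usersPerFile=100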
Note: the Bitnami Keycloak Helm chart uses the Bitnami Keycloak Docker image.
Bitnami's standalone.sh command:
https://github.com/bitnami/bitnami-docker-keycloak/blob/542a52b98e4f29626140a696062755c8093a9014/16/debian-10/rootfs/opt/bitnami/scripts/keycloak/run.sh#L21
config_file:
https://github.com/bitnami/bitnami-docker-keycloak/blob/542a52b98e4f29626140a696062755c8093a9014/16/debian-10/rootfs/opt/bitnami/scripts/keycloak-env.sh#L106

Related

Kubernetes Control Plane - All kubectl commands fail with 403 Forbidden

OS: Red Hat 7.9
Docker and Kubernetes (kubectl, kubelet, kubeadm) installed as per the documentation.
The Kubernetes cluster was initialized using
sudo kubeadm init
After this, 'docker ps' shows all the services up, but every kubectl command except 'kubectl config view' fails with the error
'Unable to connect to the server: Forbidden'
The issue was with a corporate proxy. I had to set 'no_proxy' as an environment variable and also in the Docker daemon's proxy configuration, and that resolved the issue; a sketch is below.
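A minimal sketch of what that can look like, assuming the API server sits at 10.0.0.10 and the Docker daemon gets its proxy settings from a systemd drop-in (all addresses and the proxy host are placeholders):
# shell environment, e.g. in /etc/profile.d/proxy.sh
export no_proxy=localhost,127.0.0.1,10.0.0.10,10.96.0.0/12
export NO_PROXY=$no_proxy
# /etc/systemd/system/docker.service.d/http-proxy.conf
[Service]
Environment="HTTP_PROXY=http://proxy.corp.example:3128"
Environment="HTTPS_PROXY=http://proxy.corp.example:3128"
Environment="NO_PROXY=localhost,127.0.0.1,10.0.0.10,10.96.0.0/12"
# reload and restart Docker afterwards
sudo systemctl daemon-reload
sudo systemctl restart docker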

Storing Docker PostgreSQL data to an Azure Storage Account

Could you tell me how I can store PostgreSQL database data in an Azure Storage account? PostgreSQL is deployed to an Azure Container Instance, and when I restart the Azure Container Instance all the data disappears.
Dockerfile
FROM timescale/timescaledb:latest-pg12
ENV POSTGRES_USER=admin \
    POSTGRES_DB=dev-timescaledb \
    POSTGRES_PASSWORD=password \
    PGDATA=/var/lib/postgresql/data/pgdata
CMD ["postgres", "-c", "max_connections=500"]
Command for creating a Container Instance and mounting a Storage Account:
az container create --resource-group test-env --name test-env \
  --image test-env.azurecr.io/timescale:latest \
  --registry-username test-env --registry-password "registry-password" \
  --dns-name-label test-env --ports 5432 --cpu 2 --memory 5 \
  --azure-file-volume-account-name testenv \
  --azure-file-volume-account-key 'account-key' \
  --azure-file-volume-share-name 'postgres-data' \
  --azure-file-volume-mount-path '/var/lib/postgresql/data'
But I got this error:
data directory "/var/lib/postgresql/data/pgdata" has wrong ownership
The server must be started by the user that owns the data directory.
This is caused by a known issue: you cannot change the ownership of the mount point when you mount an Azure File Share into a Container Instance, and there is currently no way around it. You can find the same issue described on SO. I recommend using AKS with a disk volume instead; that solves the data-persistence problem for Postgres (a sketch follows below).
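As an illustration of the AKS route (the claim name and storage class are assumptions, not taken from the answer), a disk-backed PersistentVolumeClaim could be used roughly like this:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: timescaledb-data         # assumed name
spec:
  accessModes: ["ReadWriteOnce"]
  storageClassName: managed-csi  # AKS disk-backed storage class (assumption)
  resources:
    requests:
      storage: 20Gi
The pod spec would then mount this claim at /var/lib/postgresql/data via a persistentVolumeClaim volume; because an Azure disk is a block device with its own filesystem, Postgres can take ownership of its data directory.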

Deploying sentry on AWS using fargate

I am trying to deploy Sentry on AWS Fargate using Terraform. I am also launching a Redis cluster and a Postgres DB in RDS. I can launch the stack, but as this is a new database I need to run the upgrade using an exec command (example command):
docker-compose exec sentry sentry upgrade
and then restart the Sentry service to proceed. How can I achieve this using Fargate?
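One possible approach, offered only as a sketch and not taken from the question, is ECS Exec, which can run a one-off command inside a running Fargate task once execute-command is enabled on the service (cluster, service, task and container names below are placeholders):
# enable ECS Exec on the service (the task role must also allow the SSM messaging actions)
aws ecs update-service --cluster sentry-cluster --service sentry --enable-execute-command --force-new-deployment
# run the upgrade inside the running sentry container
aws ecs execute-command --cluster sentry-cluster --task <task-id> --container sentry --interactive --command "sentry upgrade"
Another option is a separate one-off task definition that runs sentry upgrade and exits before the main service starts, but both are design choices rather than something Fargate prescribes.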

How can KeyCloak on Kubernetes connect to PostgreSQL?

I am trying to run KeyCloak on Kubernetes using PostgreSQL as a database.
The files I am referring to are on the peterzandbergen/keycloak-kubernetes project on GitHub.
I used kompose to generate the yaml files as a starting point, using the files that jboss published.
PostgreSQL is started first using:
./start-postgres.sh
Then I try to start KeyCloak:
kubectl create -f keycloak-deployment.yaml
The KeyCloak pod stops because it cannot connect to the database with the error:
10:00:40,652 SEVERE [org.postgresql.Driver] (ServerService Thread Pool -- 58) Error in url: jdbc:postgresql://172.17.0.4:tcp://10.101.187.192:5432/keycloak
The full log can be found on github. This is also the place to look at the yaml files that I use to create the deployment and the services.
After some experimenting I found out that using the name postgres in the keycloak-deployment.yaml file
- env:
- name: DB_ADDR
value: postgres
messes things up and results in a strange expansion: for a service named postgres, Kubernetes injects Docker-link-style environment variables such as POSTGRES_PORT=tcp://10.101.187.192:5432, which the Keycloak image then picks up as the database port, producing the malformed JDBC URL seen in the log. After replacing this part of the yaml file with:
- env:
- name: DB_ADDR
value: postgres-keycloak
Keycloak starts fine. This also requires changing the postgres-service.yaml file so the service name matches (see the snippet below). The new versions of the files are on GitHub.
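For illustration, the renamed service in postgres-service.yaml would look something like this (the port and selector are assumptions; the rename is the point):
apiVersion: v1
kind: Service
metadata:
  name: postgres-keycloak   # renamed so Kubernetes no longer injects POSTGRES_* link variables
spec:
  ports:
  - port: 5432
  selector:
    app: postgres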

Openshift online zookeeper from dockerfile pod "Crash loop back off"

I want to deploy an application on OpenShift Origin Online (Next Gen). There will be at least 4 pods communicating via services.
In the first pod I have to run ZooKeeper, so I created a pod that runs ZooKeeper from my Docker image, but the pod's status is CrashLoopBackOff.
I created a new project:
oc new-project my-project
I created a new app to deploy my ZooKeeper Docker image:
oc new-app mciz/zookeeper-docker-infispector --name zookeeper
And the output message was:
--> Found Docker image 51220f2 (11 minutes old) from Docker Hub for "mciz/zookeeper-docker-infispector"
* An image stream will be created as "zookeeper:latest" that will track this image
* This image will be deployed in deployment config "zookeeper"
* Ports 2181/tcp, 2888/tcp, 3888/tcp will be load balanced by service "zookeeper"
* Other containers can access this service through the hostname "zookeeper"
* This image declares volumes and will default to use non-persistent, host-local storage.
You can add persistent volumes later by running 'volume dc/zookeeper --add ...'
* WARNING: Image "mciz/zookeeper-docker-infispector" runs as the 'root' user which may not be permitted by your cluster administrator
--> Creating resources with label app=zookeeper ...
imagestream "zookeeper" created
deploymentconfig "zookeeper" created
service "zookeeper" created
--> Success
Run 'oc status' to view your app.
Then I listed the pods:
oc get pods
with output:
NAME READY STATUS RESTART AGE
zookeeper-1-mrgn1 0/1 CrashLoopBackOff 5 5m
Then I fetched the logs:
oc logs -p zookeeper-1-mrgn1
with output:
JMX enabled by default
Using config: /opt/zookeeper/bin/../conf/zoo.cfg
grep: /opt/zookeeper/bin/../conf/zoo.cfg: No such file or directory
mkdir: can't create directory '': No such file or directory
log4j:WARN No appenders could be found for logger (org.apache.zookeeper.server.quorum.QuorumPeerConfig).
log4j:WARN Please initialize the log4j system properly.
log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig for more info.
Invalid config, exiting abnormally
My dockerfile:
FROM openjdk:8-jre-alpine
MAINTAINER mciz
ARG MIRROR=http://apache.mirrors.pair.com
ARG VERSION=3.4.6
LABEL name="zookeeper" version=$VERSION
RUN apk add --no-cache wget bash \
&& mkdir /opt \
&& wget -q -O - $MIRROR/zookeeper/zookeeper-$VERSION/zookeeper-$VERSION.tar.gz | tar -xzf - -C /opt \
&& mv /opt/zookeeper-$VERSION /opt/zookeeper \
&& cp /opt/zookeeper/conf/zoo_sample.cfg /opt/zookeeper/conf/zoo.cfg
EXPOSE 2181 2888 3888
WORKDIR /opt/zookeeper
VOLUME ["/opt/zookeeper/conf"]
ENTRYPOINT ["/opt/zookeeper/bin/zkServer.sh"]
CMD ["start-foreground"]
There is a warning in the new-app command output:
WARNING: Image "mciz/zookeeper-docker-infispector" runs as the 'root' user which may not be permitted by your cluster administrator
You should fix the Docker image to not run as root (or tell OpenShift to allow containers in this project to run as root).
There is a specific example of a ZooKeeper image and template that works in OpenShift:
https://github.com/openshift/origin/tree/master/examples/zookeeper
Notice the Dockerfile changes to run the container as a non-root user.
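Purely as an illustration (not taken from the linked example), the Dockerfile above could be adapted to run as a non-root user along these lines, by creating a dedicated account and handing it the ZooKeeper directories:
# appended after the existing RUN instruction that installs ZooKeeper
RUN addgroup -S zookeeper && adduser -S -G zookeeper zookeeper \
 && chown -R zookeeper:zookeeper /opt/zookeeper
USER zookeeper
Keep in mind that OpenShift Online typically runs containers under an arbitrary non-root UID in the root group, so making /opt/zookeeper group-writable (chgrp -R 0 and chmod -R g+rwX) is often needed as well.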