Keycloak 20.0.2: Error when exporting realm config for keycloak within a docker container

Very similarly to "Error when importing realm config for keycloak within a docker container", I'm running Keycloak in docker-compose, using the image quay.io/keycloak/keycloak:20.0.2 and PostgreSQL.
I'd like to export all of Keycloak's data.
The following command:
docker run `
-it `
--rm `
-v ${PWD}/keycloak-data:/export `
-e LOG_LEVEL=INFO `
-e KC_DB_URL_HOST=<containerName> `
-e KC_DB_URL_PORT=5432 `
-e KC_DB_URL_DATABASE=<dbName> `
-e KC_DB_USERNAME=<userName> `
-e KC_DB_PASSWORD=<password> `
--network <network> `
quay.io/keycloak/keycloak:20.0.2 `
export --realm <realmName> --dir /export
seems to correctly connect to the db, but I keep getting the following error:
ERROR [org.keycloak.quarkus.runtime.cli.ExecutionExceptionHandler] (main) ERROR: Failed to start server in (import_export) mode
The error occurs both while the Keycloak server is running (started with docker-compose up) and when it is stopped and removed (PostgreSQL, of course, keeps running!).
How can the Keycloak data be exported?

The error reported in the question,
ERROR [org.keycloak.quarkus.runtime.cli.ExecutionExceptionHandler]
(main) ERROR: Failed to start server in (import_export) mode
seems not to be relevant to the export itself.
Configuring a bind-mounted path for the keycloak container in docker-compose.yaml and then running the export command inside that container should do the job:
volumes:
- ./myLocalPath:/export
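For context, a minimal sketch of how that fits into the keycloak service in docker-compose.yaml (only the image and the volume line come from the question; the service name and the rest of the service definition are assumptions):
services:
  keycloak:
    image: quay.io/keycloak/keycloak:20.0.2
    # ... existing environment, ports, depends_on, etc. ...
    volumes:
      - ./myLocalPath:/export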
Then perform the export, using the original container:
docker exec `
-it `
-e LOG_LEVEL=INFO `
-e KC_DB_URL_HOST=<containerName> `
-e KC_DB_URL_PORT=5432 `
-e KC_DB_URL_DATABASE=<dbName> `
-e KC_DB_USERNAME=<userName> `
-e KC_DB_PASSWORD=<password> `
<containerName> `
/opt/keycloak/bin/kc.sh export --realm <realmName> --dir /export
The exported data will then be available in the local folder ./myLocalPath.
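If a single JSON file per realm is preferred, kc.sh export also accepts --file instead of --dir; inside the container that would look roughly like this (the file name is arbitrary):
/opt/keycloak/bin/kc.sh export --realm <realmName> --file /export/<realmName>.json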

Related

Azure DevOps Container Pipeline job is trying to redundantly give user:1000 sudo privileges

I have a docker image, already made, that another pipeline uses for build jobs. That image already has a user:1000 with passwordless sudo permissions and a home directory. This was done to make manual use of the container more useful... there are applications in the image that prefer to run under a non-root user.
The pipeline using this image finds the existing user (great!) but then tries to give the user sudo permissions that it already has and this breaks the flow...
--<yaml pipeline code>--
container:
  image: acr.url/foo/bar:v1
  endpoint: <svc-connection>
--<pipeline run>--
...
/usr/bin/docker network create --label dc4b27 vsts_network_6b3e...
/usr/bin/docker inspect --format="{{index .Config.Labels \"com.azure.dev.pipelines.agent.handler.node.path\"}}" ***/foo/bar:v1
/usr/bin/docker create --name 9479... --label dc4b27 --network vsts_network_6b3ee... -v "/var/run/docker.sock":"/var/run/docker.sock" -v "/opt/azagent/_work/9":"/__w/9" -v "/opt/azagent/_work/_temp":"/__w/_temp" -v "/opt/azagent/_work/_tasks":"/__w/_tasks" -v "/opt/azagent/_work/_tool":"/__t" -v "/opt/azagent/externals":"/__a/externals":ro -v "/opt/azagent/_work/.taskkey":"/__w/.taskkey" ***/foo/bar:v1 "/__a/externals/node/bin/node" -e "setInterval(function(){}, 24 * 60 * 60 * 1000);"
9056...
/usr/bin/docker start 9056...
9056...
/usr/bin/docker ps --all --filter id=9056... --filter status=running --no-trunc --format "{{.ID}} {{.Status}}"
9056... Up Less than a second
/usr/bin/docker exec 9056... sh -c "command -v bash"
/bin/bash
whoami
devops
id -u devops
1000
Try to create a user with UID '1000' inside the container.
/usr/bin/docker exec 9056... bash -c "getent passwd 1000 | cut -d: -f1 "
/usr/bin/docker exec 9056... id -u viv
1000
Grant user 'viv' SUDO privilege and allow it run any command without authentication.
/usr/bin/docker exec 9056... groupadd azure_pipelines_sudo
groupadd: Permission denied.
groupadd: cannot lock /etc/group; try again later.
##[error]Docker exec fail with exit code 10
Finishing: Initialize containers
I am OK working with user:1000 in the container, as the Azure agent runs on the host VM under user:1000 ('devops'), so the IDs match inside and outside of the container, which gets around a shortcoming of the Docker volume mount system.
The question is: Is there a pipeline yaml method or control parameter to tell the run not to try and setup sudo permissions on the discovered user account (uid:1000) in the container?
I am getting around this issue right now by adding options: --user 0 to the container: section in the YAML, but I would prefer not to do that...
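For reference, that workaround looks roughly like this in the pipeline YAML (same image and service connection placeholders as above):
container:
  image: acr.url/foo/bar:v1
  endpoint: <svc-connection>
  options: --user 0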
Thx.

GCP command did not create ~/.kube/config

In my GitLab pipeline script, I'm executing the command below to create ~/.kube/config.
- terraform init
- NAME=`echo 'var.name' | terraform console -var-file terraform.tfvars | sed -e 's/^"//' -e 's/"$//' `
- REGION=`echo 'var.region' | terraform console -var-file terraform.tfvars | sed -e 's/^"//' -e 's/"$//' `
- PROJECT=`echo 'var.project' | terraform console -var-file terraform.tfvars | sed -e 's/^"//' -e 's/"$//' `
- gcloud container clusters get-credentials $NAME --zone $REGION
Output
Fetching cluster endpoint and auth data. kubeconfig entry generated
for test-sb-cluster.
But the config file is not created in the $HOME directory, and the pipeline fails with the error below:
Error: could not open kubeconfig "~/.kube/config": stat
/root/.kube/config: no such file or directory
If you are logged in as root, ~/ refers to the /root directory, not to /home/<user>.
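A quick way to confirm where the kubeconfig actually ended up, and to point later steps at it explicitly, is something like the following in the pipeline script (KUBECONFIG is honored by both gcloud and kubectl; the path is an assumption based on the error message):
- echo "HOME is $HOME"
- ls -la "$HOME/.kube/" || true
- export KUBECONFIG="$HOME/.kube/config"
- gcloud container clusters get-credentials $NAME --zone $REGION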

How to run the Schema Registry container for a SASL PLAIN Kafka cluster

I want to run the cp-schema-registry image on AWS ECS, so I am trying to get it to run on docker locally. I have a command like this:
docker run -e SCHEMA_REGISTRY_HOST_NAME=schema-registry \
-e SCHEMA_REGISTRY_KAFKASTORE_BOOTSTRAP_SERVERS="1.kafka.address:9092,2.kafka.address:9092,3.kafka.address:9092" \
-e SCHEMA_REGISTRY_KAFKASTORE_SECURITY_PROTOCOL=SASL_PLAINTEXT \
confluentinc/cp-schema-registry:5.5.3
(I have replaced the kafka urls).
My consumers/producers connect to the cluster with params:
["sasl.mechanism"] = "PLAIN"
["sasl.username"] = <username>
["sasl.password"] = <password>
Docs seem to indicate there is a file I can create with these parameters, but I don't know how to pass this into the docker run command. Can this be done?
Thanks to OneCricketeer for the help above with the SCHEMA_REGISTRY_KAFKASTORE_SASL_JAAS_CONFIG var. The command ended up like this (I added port 8081:8081 so I could test with curl):
docker run -p 8081:8081 -e SCHEMA_REGISTRY_HOST_NAME=schema-registry \
-e SCHEMA_REGISTRY_KAFKASTORE_BOOTSTRAP_SERVERS="1.kafka.broker:9092,2.kafka.broker:9092,3.kafka.broker:9092" \
-e SCHEMA_REGISTRY_KAFKASTORE_SECURITY_PROTOCOL=SASL_SSL \
-e SCHEMA_REGISTRY_KAFKASTORE_SASL_MECHANISM=PLAIN \
-e SCHEMA_REGISTRY_KAFKASTORE_SASL_JAAS_CONFIG='org.apache.kafka.common.security.plain.PlainLoginModule required username="user" password="pass";' confluentinc/cp-schema-registry:5.5.3
Then test with curl localhost:8081/subjects and get [] as a response.
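As a further smoke test, registering a trivial schema should also work (the subject name test-value here is arbitrary):
curl -X POST -H "Content-Type: application/vnd.schemaregistry.v1+json" \
  --data '{"schema": "{\"type\": \"string\"}"}' \
  http://localhost:8081/subjects/test-value/versions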

postgres on docker fails with `chown: cannot dereference ... : No such file or directory`

I am trying to deploy postgres on docker and it exits immediately after I run it. On checking the logs I get the following error:
chown: cannot dereference '/var/lib/postgresql/data/venv/bin/python3': No such file or directory
The command I am running is below:
sudo docker run -p 5432:5432 -e POSTGRES_USER=superset -e POSTGRES_PASSWORD=mypostgrespassword -e POSTGRES_DB=superset --volume $PWD:/var/lib/postgresql/data -d postgres
How can I fix that?
The problem here is that the directory you are mounting is not empty.
Create an empty directory, for example /opt/pgdata, and then mount that one:
docker run -p 5432:5432 -e POSTGRES_USER=superset -e POSTGRES_PASSWORD=mypostgrespassword -e POSTGRES_DB=superset -v /opt/pgdata:/var/lib/postgresql/data -d postgres
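Alternatively, a named volume avoids the problem, since a freshly created volume starts out empty (the volume name pgdata is just an example):
docker volume create pgdata
docker run -p 5432:5432 -e POSTGRES_USER=superset -e POSTGRES_PASSWORD=mypostgrespassword -e POSTGRES_DB=superset -v pgdata:/var/lib/postgresql/data -d postgres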

How to copy docker volume from one machine to another?

I have created a docker volume for postgres on my local machine.
docker volume create postgres-data
Then I used this volume to run a container:
docker run -it -v postgres-data:/var/lib/postgresql/9.6/main postgres
After that I did some database operations, which got stored automatically in postgres-data. Now I want to copy that volume from my local machine to another remote machine. How can I do that?
Note: the database size is very large.
If the second machine has SSH enabled, you can use an Alpine container on the first machine to mount the volume, bundle it up, and send it to the second machine.
That would look like this:
docker run --rm -v <SOURCE_DATA_VOLUME_NAME>:/from alpine ash -c \
"cd /from ; tar -cf - . " | \
ssh <TARGET_HOST> \
'docker run --rm -i -v <TARGET_DATA_VOLUME_NAME>:/to alpine ash -c "cd /to ; tar -xpvf - "'
You will need to change the following (a filled-in example follows the list):
SOURCE_DATA_VOLUME_NAME
TARGET_HOST
TARGET_DATA_VOLUME_NAME
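For example, filled in with the postgres-data volume from the question (the SSH user, host, and remote volume name are placeholders):
docker run --rm -v postgres-data:/from alpine ash -c \
"cd /from ; tar -cf - . " | \
ssh user@remote-host \
'docker run --rm -i -v postgres-data:/to alpine ash -c "cd /to ; tar -xpvf - "'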
Or, you could try using this helper script https://github.com/gdiepen/docker-convenience-scripts
Hope this helps.
I had the exact same problem, but in my case both volumes were in separate VPCs and I couldn't expose SSH to the outside world. I ended up creating dvsync, which uses ngrok to create a tunnel between them and then uses rsync over SSH to copy the data. In your case you could start the dvsync-server on your machine:
docker run --rm -e NGROK_AUTHTOKEN="$NGROK_AUTHTOKEN" \
--mount source=postgres-data,target=/data,readonly \
quay.io/suda/dvsync-server
and then start the dvsync-client on the target machine:
docker run -e DVSYNC_TOKEN="$DVSYNC_TOKEN" \
--mount source=MY_TARGET_VOLUME,target=/data \
quay.io/suda/dvsync-client
The NGROK_AUTHTOKEN can be found in the ngrok dashboard, and the DVSYNC_TOKEN is shown by the dvsync-server on its stdout.
Once the synchronization is done, the dvsync-client container will stop.
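Afterwards you can sanity-check the copied data on the target machine with a throwaway container, e.g.:
docker run --rm -v MY_TARGET_VOLUME:/data alpine ls -la /data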