"You need local access to create the initial admin user" error while keycloak startup in docker - postgresql

While starting the Keycloak server in Docker, I am getting this error: "You need local access to create the initial admin user". But when I run it locally, it works fine.
Another thing: if I want to use a Postgres DB instead of the embedded H2 DB, should I create the tables to store users, clients, scopes, etc.? If yes, how can I get the DB structure for all the tables?

You can let the container create the admin user by providing the environment variables KEYCLOAK_USER and KEYCLOAK_PASSWORD:
docker run -e KEYCLOAK_USER=<USERNAME> -e KEYCLOAK_PASSWORD=<PASSWORD> jboss/keycloak
Or add the account to an existing container (a service or container restart is required afterwards) with:
docker exec <CONTAINER> /opt/jboss/keycloak/bin/add-user-keycloak.sh -u <USERNAME> -p <PASSWORD>
Then either restart the container:
docker restart <container>
Or reload the service in place (per @Madeo's answer):
docker exec -it <container> /opt/jboss/keycloak/bin/jboss-cli.sh --connect --command=:reload
The above commands come from the Keycloak Docker image page on Docker Hub.
Regarding your database question, you don't have to provide the tables by hand.
You can refer to chapter 6 (§6.4, §6.5) of the Keycloak documentation for the details of how to configure a PostgreSQL DB.
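For example, with the jboss/keycloak image from that same Docker Hub page, pointing Keycloak at an existing (empty) PostgreSQL database is just a matter of environment variables, and the tables get created on first boot. A minimal sketch, where the host, database name and credentials are placeholders you would replace:
docker run -p 8080:8080 \
  -e DB_VENDOR=postgres -e DB_ADDR=<POSTGRES_HOST> -e DB_DATABASE=keycloak \
  -e DB_USER=<DB_USER> -e DB_PASSWORD=<DB_PASSWORD> \
  -e KEYCLOAK_USER=<USERNAME> -e KEYCLOAK_PASSWORD=<PASSWORD> \
  jboss/keycloak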

Open a bash console in the container
cd /keycloak/bin
bash ./add-user-keycloak.sh -u admin
Enter the desired password
Restart the container
Go to the following URL to log in:
http://dockerIP:8080/auth/admin/
(A consolidated version of these steps is sketched below.)
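For reference, the same steps can be run in one go from the host; a sketch assuming the container is named keycloak and the bin directory is where the answer above says it is:
docker exec -it keycloak bash -c "cd /keycloak/bin && bash ./add-user-keycloak.sh -u admin"
docker restart keycloak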

For Keycloak 17, you can use lynx locally to create the admin user:
lynx localhost:8080
Then just press Tab to navigate between the fields and press Enter on the Create button:
Keycloak
Welcome to Keycloak
[user.png] Administration Console
Please create an initial admin user to get started.
Username ____________________
Password ____________________
Password confirmation ____________________
(BUTTON) Create
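If lynx is not available where Keycloak runs, installing it is usually the only extra step; the key point is that the request has to come from the same machine, which is exactly what the "local access" error is complaining about. A sketch for a Debian/Ubuntu host (the package manager commands are illustrative):
apt-get update && apt-get install -y lynx
lynx http://localhost:8080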

None of the tips above worked for me. In the end I used the environment variables:
KEYCLOAK_ADMIN: admin
KEYCLOAK_ADMIN_PASSWORD: admin
The full docker-compose.yml:
version: '3'

volumes:
  postgres_data:
    driver: local

services:
  postgres:
    image: postgres
    volumes:
      - postgres_data:/var/lib/postgresql/data
    environment:
      POSTGRES_DB: keycloak
      POSTGRES_USER: keycloak
      POSTGRES_PASSWORD: password

  keycloak:
    image: quay.io/keycloak/keycloak:17.0.1
    environment:
      DB_VENDOR: POSTGRES
      DB_ADDR: postgres
      DB_DATABASE: keycloak
      DB_USER: keycloak
      DB_SCHEMA: public
      DB_PASSWORD: password
      KEYCLOAK_USER: admin
      KEYCLOAK_PASSWORD: admin
      KEYCLOAK_ADMIN: admin
      KEYCLOAK_ADMIN_PASSWORD: admin
    entrypoint: ["/opt/keycloak/bin/kc.sh", "start-dev"]
    ports:
      - 8080:8080
    depends_on:
      - postgres
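Note that, as far as I can tell, the DB_* and KEYCLOAK_USER/KEYCLOAK_PASSWORD variables above belong to the legacy WildFly-based image and are ignored by quay.io/keycloak/keycloak:17; only KEYCLOAK_ADMIN/KEYCLOAK_ADMIN_PASSWORD are picked up. The Quarkus distribution configures the database with KC_* options instead; a rough single-command equivalent (values mirror the compose file above):
docker run -p 8080:8080 \
  -e KC_DB=postgres \
  -e KC_DB_URL=jdbc:postgresql://postgres:5432/keycloak \
  -e KC_DB_USERNAME=keycloak -e KC_DB_PASSWORD=password \
  -e KEYCLOAK_ADMIN=admin -e KEYCLOAK_ADMIN_PASSWORD=admin \
  quay.io/keycloak/keycloak:17.0.1 start-dev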

The docker answer above is incomplete on its own and won't work as-is.
If you add the user via the docker container, you must also reload the JBoss server:
docker exec -it keycloak-container /opt/jboss/keycloak/bin/add-user-keycloak.sh -u admin -p admin
and then:
docker exec -it keycloak-container /opt/jboss/keycloak/bin/jboss-cli.sh --connect --command=:reload

This worked for me:
cd /opt/keycloak/bin
sudo ./add-user-keycloak.sh -u admin -p yourpass

Open the keycloak.conf file in the Keycloak conf folder (in my case keycloak-18.0.0/conf) and set:
db-username=postgres
db-password=password
db-url=jdbc:postgresql://yourhostname:5432/keycloak-db-name
When you start the Keycloak service, its tables are created automatically in the Postgres database.
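In case the tables do not get created, you may also need to select the database vendor in the same file, e.g. with a line such as
db=postgres
above the connection settings.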

Using the Operator https://www.keycloak.org/guides#operator, I had the same issue.
The username and password provided by this step
kubectl get secret example-kc-initial-admin -o jsonpath='{.data.username}' | base64 --decode
kubectl get secret example-kc-initial-admin -o jsonpath='{.data.password}' | base64 --decode
https://www.keycloak.org/operator/basic-deployment#_accessing_the_keycloak_deployment
did not work.
What apparently solved it for me was deleting all Keycloak CRs, deployments, services, etc. and starting the tutorial from the beginning. Then, I omitted this optional step:
We suggest you first store the Database credentials in a separate Secret; you can do it for example by running:
kubectl create secret generic keycloak-db-secret \
--from-literal=username=[your_database_username] \
--from-literal=password=[your_database_password]
(with a made-up Postgres username and password filling in the brackets)
I am not sure how the database Secret relates to the admin-user Secret, but now the username and password in example-kc-initial-admin work. Perhaps Postgres was inaccessible to Keycloak; this was not indicated in the Keycloak logs.
I don't believe starting fresh was the solution on its own, because I had already tried that. Omitting keycloak-db-secret seems to have been the important part. I still need to understand where the DB secret actually gets set now; it may be insecure.
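A hedged way to narrow such a problem down is to decode what ended up in the database Secret and to look at the Keycloak pod logs for connection errors. The StatefulSet name below assumes the example-kc CR from the guide and may differ in your cluster:
kubectl get secret keycloak-db-secret -o jsonpath='{.data.username}' | base64 --decode
kubectl get secret keycloak-db-secret -o jsonpath='{.data.password}' | base64 --decode
kubectl logs statefulset/example-kc --tail=100   # name assumes the example-kc CR from the basic-deployment guide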

Related

Why isn't Docker Compose honoring my POSTGRES_USER environment variable?

I know lots of questions sound like this, and they all have the same answer: delete your volumes to force it to reinitialize.
The problem is, I'm being careful to delete my volumes, but it's consistently spinning up the container incorrectly every time.
My docker-compose.yml
version: "3.1"
services:
db:
environment:
- POSTGRES_DB=mydb
- POSTGRES_PASSWORD=changeme
- POSTGRES_USER=myuser
image: postgres
My process:
$ docker volume ls
DRIVER VOLUME NAME
$ docker-compose up -v # or docker-compose up --force-recreate
yet it always creates the "postgres" user instead of myuser. The startup output says the data directory "will be owned by user 'postgres'", and I can only docker exec as postgres, not myuser.
The instructions seem very straightforward. Am I missing something, or is this a bug?
What happens when you use the compose file above?
I can only docker exec as postgres, not myuser
The environment variable POSTGRES_USER controls the database user, not the linux user. Take a look at the chapter Arbitrary --user Notes in the documentation to learn how to change the linux user.
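A quick way to see the difference, assuming the compose file from the question (the container name is a placeholder):
# POSTGRES_USER created a database role, so this works:
docker exec -it <db-container> psql -U myuser -d mydb -c '\du'
# whereas the linux account you land in with docker exec is unrelated to it:
docker exec -it <db-container> whoami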

Can't create initial admin user in keycloak

It shows the error "We are sorry, an internal error occurred" after I enter the username, password, and password confirmation. How can I create the initial admin user?
If you are running Keycloak in docker container then you can define admin name and password during startup:
docker run --name keycloak -p 8080:8080 -e KEYCLOAK_USER=admin -e KEYCLOAK_PASSWORD=admin jboss/keycloak
Otherwise, you can add the user as follows (this is actually what the docker container does behind the scenes); note that if the server is already running, you will need to restart or reload it afterwards:
/opt/jboss/keycloak/bin/add-user-keycloak.sh --user "$KEYCLOAK_USER" --password "$KEYCLOAK_PASSWORD"
The admin console is available at:
http://localhost:8080/admin

Postgres Docker image is not creating database with custom name

The documentation of the postgres Docker image says the following about the env var POSTGRES_DB:
This optional environment variable can be used to define a different name for the default database that is created when the image is first started. If it is not specified, then the value of POSTGRES_USER will be used.
I have found that this is not true at all. For example, with this config:
version: '3.7'

services:
  db:
    image: postgres:11.3-alpine
    restart: always
    container_name: store
    volumes:
      - postgres_data:/var/lib/postgresql/data/
    ports:
      - 5432:5432
    environment:
      - POSTGRES_USER=custom
      - POSTGRES_DB=customname
      - POSTGRES_PASSWORD_FILE=/run/secrets/db_password
    secrets:
      - db_password

volumes:
  postgres_data:

secrets:
  db_password:
    file: config/.secrets.db_password
The default database is called postgres, and not customname as I have specified:
$ docker exec -it store psql -U custom customname
psql: FATAL: database customname does not exist
$ docker exec -it store psql -U custom postgres
psql (11.3)
Type help for help.
postgres=# ^D
Am I missing something obvious?
Providing the environment variables, as you did, SHOULD create the customname database when the container is initialized. There is no need to create the username and database via /docker-entrypoint-initdb.d/ init scripts.
I would make sure there isn't a leftover postgres_data volume hanging around. If you previously started the container without specifying the environment variables, the volume was created for the default postgres database; the next time you start the container (with POSTGRES_DB specified), the database-creation step is skipped.
Just to make sure, remove any previously created volume (the name should be something like *_postgres_data):
docker volume ls
docker volume rm <volume_name>
See User and DB were not created from environment variable arguments as well. Hope that helps
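If the volume was created by docker-compose, an equivalent shortcut is to let compose remove its own named volumes before bringing the stack back up (a sketch, run from the project directory):
docker-compose down -v
docker-compose up -d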
You need to create the database first.
If you want to do that automatically for new data directories, then the official Docker Postgres image has an option to do so by placing Initialization Scripts with the extension .sql in the /docker-entrypoint-initdb.d/ directory.
For example, create a file with contents like:
CREATE USER custom_user;
CREATE DATABASE custom_db;
GRANT ALL PRIVILEGES ON DATABASE custom_db TO custom_user;
And save it to /docker-entrypoint-initdb.d/create-db.sql in the container, e.g. with COPY in the Dockerfile. Scripts with the .sql extension inside that directory will only run if the data directory is empty, and multiple files run in alphabetical order of their file names.
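If you would rather not build a custom image, mounting the script from the host works the same way; a minimal sketch (the host path and password are illustrative):
docker run -d --name store \
  -e POSTGRES_USER=custom -e POSTGRES_DB=customname -e POSTGRES_PASSWORD=secret \
  -v "$PWD/initdb/create-db.sql":/docker-entrypoint-initdb.d/create-db.sql:ro \
  postgres:11.3-alpine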
If you want to set it up manually, you can also do that with the createdb utility
createdb [connection-option...] [option...] [dbname [description]]
Or by connecting to the postgres database and using the CREATE DATABASE ... command, e.g.
docker exec -it store psql -U postgres -c 'CREATE DATABASE customname;'
If you connect interactively as in your question, you can do the following:
$ docker exec -it store psql -U postgres
psql (11.3)
Type help for help.
postgres=# CREATE DATABASE customname;
CREATE DATABASE
postgres=# \c customname
The last command will connect you to the customname database.
If you've changed the username/password since the very first run, try deleting the previously created volume:
docker volume rm <volume-name>
Then run the compose file again

Why is postgres container ignoring /docker-entrypoint-initdb.d/* in Gitlab CI

Gitlab CI keeps ignoring the sql-files in /docker-entrypoint-initdb.d/* in this project.
here is docker-compose.yml:
version: '3.6'

services:
  testdb:
    image: postgres:11
    container_name: lbsn-testdb
    restart: always
    ports:
      - "65432:5432"
    volumes:
      - ./testdb/init:/docker-entrypoint-initdb.d
here is .gitlab-ci.yml:
stages:
  - deploy

deploy:
  stage: deploy
  image: debian:stable-slim
  script:
    - bash ./deploy.sh
The deployment script basically uses rsync to deploy the content of the repository to the server via SSH:
rsync -rav --chmod=Du+rwx,Dgo-rwx,u+rw,go-rw -e "ssh -l gitlab-ci" --exclude=".git" --delete ./ "gitlab-ci@$DEPLOY_SERVER:test/"
and then ssh's into the server to stop and restart the container:
ssh "gitlab-ci#$DEPLOY_SERVER" "cd test && docker-compose down && docker-compose up --build --detach"
This all goes well, but when the container starts up, it is supposed to run all the files that are in /docker-entrypoint-initdb.d/* as we can see here.
But instead, when doing docker logs -f lbsn-testdb on the server, I can see it stating
/usr/local/bin/docker-entrypoint.sh: ignoring /docker-entrypoint-initdb.d/*
and I have no clue why that happens. When I run this container locally, or even when I ssh to that server, clone the repo, and bring up the containers manually, it all goes well and the sql-files are parsed. Just not when Gitlab CI does it.
Any ideas on why that is?
This turned out to be easier than I expected, and in the end it had nothing to do with Gitlab CI but with file permissions.
I passed --chmod=Du+rwx,Dgo-rwx,u+rw,go-rw to rsync, which looked really secure because only the user can do anything. I confess I probably copy-pasted it from somewhere on the internet. But the files then get mounted into the Docker container, and in there they keep those permissions as well:
-rw------- 1 1005 1004 314 May 8 15:48 100-create-database.sql
On the host my gitlab-ci user owns those files; inside the container they show up as owned by some user with UID 1005 as well, and no permissions are granted to any other user.
Inside the container, however, the user doing the work is postgres, and it can't read those files. Instead of complaining about that, it just ignores them. That might be something to create an issue about…
Now that I pass --chmod=D755,F644, it looks like this:
-rw-r--r-- 1 1005 1004 314 May 8 15:48 100-create-database.sql
and the docker logs say
/usr/local/bin/docker-entrypoint.sh: running /docker-entrypoint-initdb.d/100-create-database.sql
So simple that I just didn't think of it in the first place :-/
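For reference, the corrected rsync call in the deployment script would then look roughly like this (same flags as in the question, only the --chmod value changed):
rsync -rav --chmod=D755,F644 -e "ssh -l gitlab-ci" --exclude=".git" --delete ./ "gitlab-ci@$DEPLOY_SERVER:test/"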
If you have already run the postgres service before, the init files will be ignored when you restart it, so try using --build to rebuild the image:
docker-compose up --build -d
and before you run it again:
Check the existing volumes with
docker volume ls
Then remove the one that you are using for your pg service with
docker volume rm {volume_name}
Make sure the volume is not in use by a container; if it is, remove the container as well.
I found this topic while investigating a similar problem with a PostgreSQL installation using docker-compose.
The solution is basically the same. For the provided configuration:
version: '3.6'

services:
  testdb:
    image: postgres:11
    container_name: lbsn-testdb
    restart: always
    ports:
      - "65432:5432"
    volumes:
      - ./testdb/init:/docker-entrypoint-initdb.d
Your deployment script should set 0755 permissions on your postgres container volume, e.g. chmod -R 0755 ./testdb in this case. It is important to make all subdirectories visible, so the -R option is required.
The official Postgres image runs under an internal postgres user with UID 70. Your application user on the host most likely has a different UID, like 1000 or similar, which is why the postgres init script skips the installation steps with a permissions error. This issue has existed for several years and is still present in the latest PostgreSQL version (currently 12.1).
Be aware of the security implications of init files that are readable by everyone on the system. It is better to pass secrets into the init script via shell environment variables.
Here is a docker-compose example:
postgres:
  image: postgres:12.1-alpine
  container_name: app-postgres
  environment:
    - POSTGRES_USER
    - POSTGRES_PASSWORD
    - APP_POSTGRES_DB
    - APP_POSTGRES_SCHEMA
    - APP_POSTGRES_USER
    - APP_POSTGRES_PASSWORD
  ports:
    - '5432:5432'
  volumes:
    - $HOME/app/conf/postgres:/docker-entrypoint-initdb.d
    - $HOME/data/postgres:/var/lib/postgresql/data
A corresponding create-users.sh script for creating the users might look like this:
#!/bin/bash
set -o nounset
set -o errexit
set -o pipefail
POSTGRES_USER="${POSTGRES_USER:-postgres}"
POSTGRES_PASSWORD="${POSTGRES_PASSWORD}"
APP_POSTGRES_DB="${APP_POSTGRES_DB:-app}"
APP_POSTGRES_SCHEMA="${APP_POSTGRES_SCHEMA:-app}"
APP_POSTGRES_USER="${APP_POSTGRES_USER:-appuser}"
APP_POSTGRES_PASSWORD="${APP_POSTGRES_PASSWORD:-app}"
DATABASE="${APP_POSTGRES_DB}"
# Create single database.
psql --variable ON_ERROR_STOP=1 --username "${POSTGRES_USER}" --command "CREATE DATABASE ${DATABASE}"
# Create app user.
psql --variable ON_ERROR_STOP=1 --username "${POSTGRES_USER}" --command "CREATE USER ${APP_POSTGRES_USER} SUPERUSER PASSWORD '${APP_POSTGRES_PASSWORD}'"
psql --variable ON_ERROR_STOP=1 --username "${POSTGRES_USER}" --command "GRANT ALL PRIVILEGES ON DATABASE ${DATABASE} TO ${APP_POSTGRES_USER}"
psql --variable ON_ERROR_STOP=1 --username "${POSTGRES_USER}" --dbname "${DATABASE}" --command "CREATE SCHEMA ${APP_POSTGRES_SCHEMA} AUTHORIZATION ${APP_POSTGRES_USER}"
psql --variable ON_ERROR_STOP=1 --username "${POSTGRES_USER}" --command "ALTER USER ${APP_POSTGRES_USER} SET search_path = ${APP_POSTGRES_SCHEMA},public"
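As a usage sketch, the script above belongs in the directory mounted to /docker-entrypoint-initdb.d and must be readable by the container's postgres user (paths mirror the compose example; adjust to your layout):
chmod 0755 "$HOME/app/conf/postgres/create-users.sh"
docker-compose up -d postgres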

Mongodb docker container with client access control

I want to create a docker container with a mongodb configured with client access control (user authentication, see this).
I have successfully configured a docker container with mongo using this image. But it doesn't use mongo access control.
The problem is that to enable access control I have to run mongodb with a specific command line (--auth) but only after creating the first admin user.
With a standard mongodb installation I normally perform these steps:
run mongod without --auth
connect to mongo and add the admin user
restart mongo with --auth
How am I supposed to do it with Docker, given that the mongo image always starts without --auth? Should I create a new image? Or maybe modify the entrypoint?
Probably I'm missing something, I'm new to docker...
OK, I have found a solution. Basically MongoDB has a feature that allows you to set up access control (--auth) while still permitting localhost connections.
See the mongo localhost exception.
So this is my final script:
# Create a container from the mongo image,
# run is as a daemon (-d), expose the port 27017 (-p),
# set it to auto start (--restart)
# and with mongo authentication (--auth)
# Image used is https://hub.docker.com/_/mongo/
docker pull mongo
docker run --name YOURCONTAINERNAME --restart=always -d -p 27017:27017 mongo mongod --auth
# Using the mongo "localhost exception" add a root user
# bash into the container
sudo docker exec -i -t YOURCONTAINERNAME bash
# connect to local mongo
mongo
# create the first admin user
use admin
db.createUser({user:"foouser",pwd:"foopwd",roles:[{role:"root",db:"admin"}]})
# exit the mongo shell
exit
# exit the container
exit
# now you can connect with the admin user (from any mongo client >=3 )
# remember to use --authenticationDatabase "admin"
mongo -u "foouser" -p "foopwd" YOURHOSTIP --authenticationDatabase "admin"
If you are able to use other existing images, there is a well-maintained MongoDB image with authentication enabled by default that is easy to plug in, called tutum-docker-mongodb.
It also uses environment variables which you can use in your app.
I included it in my tutum.yml (or docker-compose.yml) like so:
mongo:
  image: 'tutum/mongodb:latest'
  environment:
    - MONGODB_PASS=<your-password-here>
  ports:
    - '27017:27017'
    - '28017:28017'
Finally I linked the web service using:
web:
  image: 'my-image'
  links:
    - 'mongo:mongo'
  ports:
    - '80:3000'
  restart: always
Hope it helps!