Docker Compose: MariaDB got an error with init.sql / entrypoint

I have the following service in my docker-compose file:
database:
  image: mariadb
  container_name: database
  environment:
    MYSQL_ROOT_PASSWORD: 7ctDGg5YUwkCPkCW
  entrypoint:
    sh -c "echo 'CREATE DATABASE IF NOT EXISTS users;
    CREATE DATABASE IF NOT EXISTS data;
    CREATE DATABASE IF NOT EXISTS wordpress;
    CREATE USER IF NOT EXISTS 'keycloak'@'localhost' IDENTIFIED BY 'jk2zKvGkJXBsrNMV';
    GRANT ALL PRIVILEGES ON 'users'.* TO 'keycloak'@'localhost';
    CREATE USER IF NOT EXISTS 'wordpress'@'localhost' IDENTIFIED BY 'QKJFUfZbv7jMB5ba';
    GRANT ALL PRIVILEGES ON 'wordpress'.* TO 'wordpress'@'localhost';' > /docker-entrypoint-initdb.d/init.sql;
    /usr/local/bin/docker-entrypoint.sh --character-set-server=utf8mb4 --collation-server=utf8mb4_unicode_ci
    "
  ports:
    - 3306:3306
  volumes:
    - database_data:/var/lib/mysql
  networks:
    - backend-network
The mariadb service starts up without problems, but when it gets to the point where it tries to run the init.sql, I get the following error message:
database | 2020-06-13 18:20:03+00:00 [Note] [Entrypoint]: /usr/local/bin/docker-entrypoint.sh: running /docker-entrypoint-initdb.d/init.sql
database | ERROR 1064 (42000) at line 1: You have an error in your SQL syntax; check the manual that corresponds to your MariaDB server version for the right syntax to use near 'jk2zKvGkJXBsrNMV' at line 1
database exited with code 1
Hopefully someone out there can tell me what's wrong.
So far,
Daniel

Take the content of your echo statement and move it to a separate file; for example, init.sql in the current directory. You can bind-mount that specific file into the /docker-entrypoint-initdb.d directory:
database:
  image: mariadb
  environment:
    MYSQL_ROOT_PASSWORD: 7ctDGg5YUwkCPkCW
  ports:
    - 3306:3306
  volumes:
    - database_data:/var/lib/mysql
    - ./init.sql:/docker-entrypoint-initdb.d/init.sql
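For reference, the separate init.sql then contains plain SQL with no shell quoting involved. A sketch based on the statements in the question (the @'localhost' account syntax and the unquoted database identifiers in the GRANTs are what the SQL presumably intends):

CREATE DATABASE IF NOT EXISTS users;
CREATE DATABASE IF NOT EXISTS data;
CREATE DATABASE IF NOT EXISTS wordpress;
CREATE USER IF NOT EXISTS 'keycloak'@'localhost' IDENTIFIED BY 'jk2zKvGkJXBsrNMV';
GRANT ALL PRIVILEGES ON users.* TO 'keycloak'@'localhost';
CREATE USER IF NOT EXISTS 'wordpress'@'localhost' IDENTIFIED BY 'QKJFUfZbv7jMB5ba';
GRANT ALL PRIVILEGES ON wordpress.* TO 'wordpress'@'localhost';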
What's actually happening with the command you're showing is that you're using single quotes as both a shell quoting delimiter and an SQL quoting delimiter.
echo '...IDENTIFIED BY 'something';'
In this case the single quotes around the password end the shell quoting and just get lost:
# same as:
echo ...IDENTIFIED\ BY\ something\;
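You can see the quote loss directly in any POSIX shell (here secret stands in for the password):

$ echo 'CREATE USER u IDENTIFIED BY 'secret';'
CREATE USER u IDENTIFIED BY secret;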
If you really needed to do it this way, you could use the list form of command to spell out the words individually (without needing to quote them separately), and then use YAML block scalar syntax so the command as a whole doesn't need quoting; at that point you can use double quotes around the echo argument:
command:
  - sh
  - -c
  - >-
    echo "... IDENTIFIED BY 'password'; ..."...
But it will be more understandable and less fragile to just split this out into a separate file. Avoid writing complex scripts inline in YAML files.

Related

Docker Compose - Container Bash Forking

I am trying to run netbox based on their standard guide on Docker Hub, with the slight difference that I need our existing postgres dump to be restored when the postgres container starts.
I have tried a few approaches, like defining a command option in the docker-compose file (and a few more combinations):
sleep 2 && psql -U netbox -f netbox.sql
The sleep is required to prevent the psql command from running before the postgres service has started.
I also tried defining a bash script that does the database restore, but all these approaches cause the container to exit after that command/script is run.
My last resort was to utilize bash forking and this is what the postgres snippet of docker-compose looks like:
postgres:
  image: postgres:13-alpine
  env_file: env/postgres.env
  command:
    - sh
    - -c
    - (sleep 3 && cd /home && psql -U netbox -f netbox.sql) & su -c postgres postgres
  volumes:
    - ./my_db:/home/
    - netbox-postgres-data:/var/lib/postgresql/data
Sadly this results in:
postgres: could not access the server configuration file
"/var/lib/postgresql/data/postgresql.conf": No such file or directory
If I omit the command section of docker-compose, the container starts up fine and I can navigate and ls the directory in the error message, but that is not what I really need, because this container will go on to be part of a much larger jungle of an ecosystem with little to no control over it afterwards.
Could it be my bash forking, or does the problem lie somewhere else?
Thanks in advance
I was able to find a solution by going through the thread that David Maze shared in the comments.
In my case, placing the *.sql file inside /docker-entrypoint-initdb.d did not work, but when I wrote a bash script and placed it in the /docker-entrypoint-initdb.d directory, it got triggered.
The bash script was a very simple one: it would cd to the directory containing the sql dump and then restore it by running psql:
psql -U netbox -f netbox.sql
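A minimal sketch of such a script, assuming the dump is mounted at /home as in the compose file above (the file name restore.sh is a placeholder; any *.sh file in /docker-entrypoint-initdb.d is executed during initialization):

#!/bin/bash
# restore.sh, placed in /docker-entrypoint-initdb.d/
set -e
cd /home                       # directory where the dump was bind-mounted
psql -U netbox -f netbox.sql   # restore the dump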

Building a Dockerfile works, but referencing it from a compose file and running it fails. Not sure if it is a Dockerfile file path error?

File hierarchy:
|- docker-compose.yml
|- database/
|  |- Dockerfile
|  |- db_schema.sql
I have a simple Dockerfile which will build and run.
FROM postgres:10
ENV POSTGRES_USER=foo
ENV POSTGRES_PASSWORD=password
ENV POSTGRES_DB=foo
COPY ./db_schema.sql /docker-entrypoint-initdb.d/
EXPOSE 5432
There is a db_schema.sql file in the same directory, which gets copied.
This works as expected.
Now I wanted to expand upon this with a docker-compose file:
version: '2.0'
services:
  database:
    build: ./database/.
    ports:
      - "5432:5432"
It seemed simple enough.
The issue now is that when I run docker-compose up, it returns:
database_1 | Error: Database is uninitialized and superuser password is not specified.
database_1 | You must specify POSTGRES_PASSWORD to a non-empty value for the
database_1 | superuser. For example, "-e POSTGRES_PASSWORD=password" on "docker run".
database_1 |
database_1 | You may also use "POSTGRES_HOST_AUTH_METHOD=trust" to allow all
database_1 | connections without a password. This is *not* recommended.
database_1 |
database_1 | See PostgreSQL documentation about "trust":
database_1 | https://www.postgresql.org/docs/current/auth-trust.html
backend_database_1 exited with code 1
But they were initialized in the Dockerfile. Do ENV variables need to be set in the docker-compose file and not in the Dockerfile itself?
Looking at https://docs.docker.com/compose/environment-variables/,
it says that docker-compose takes precedence over the Dockerfile, but that if variables aren't defined there, the Dockerfile's ENV values are used. Yet this error still appears, which is throwing me off.
When I tried to reproduce this with the exact files provided in the question, the error didn't occur. The real cause was an old image that had been built incorrectly.
Edit, after the problem was reproduced with a faulty image:
To force the rebuild of the image, the following command can be used:
docker-compose up --build
The environment variables can also be provided through the compose file:
environment:
  - POSTGRES_USER=foo
  - POSTGRES_PASSWORD=password
  - POSTGRES_DB=foo
Deleting the image can also solve the problem.
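For example (the image name backend_database is an assumption based on the backend_database_1 container name in the log output, since docker-compose names images <project>_<service>):

docker-compose down                # stop and remove the containers
docker image rm backend_database   # delete the stale image
docker-compose up --build         # rebuild and start fresh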

Postgres docker: add additional database after initialization

I'm using Postgres within a docker-compose environment to host databases for multiple containers. I basically want to add a database per application directly from docker-compose without the need of manually creating the databases and users. For this, I'm using the init script feature of the Postgres docker image and copy the following bash script by mounting a volume:
db:
  image: postgres:9.6
  container_name: db
  restart: always
  volumes:
    - postgres-data:/var/lib/postgresql/data
    - /opt/docker/pgsql-entrypoint:/docker-entrypoint-initdb.d
  environment:
    - POSTGRES_PASSWORD={{ vault_pgsql_root_password }}
    - POSTGRES_MULTIPLE_DATABASES=confluence-{{ confluence_pgsql_password }},keycloak-{{ keycloak_pgsql_password }},gitlab-{{ gitlab_pgsql_password }},jira-{{ jira_pgsql_password }}
Basically the POSTGRES_MULTIPLE_DATABASES environment variable contains all the databases and users that should be created. The script is as follows:
#!/bin/bash
set -e
set -u

function create_user_and_database() {
    local database=$1
    local password=$2
    echo " Creating user and database '$database'"
    psql -v ON_ERROR_STOP=1 --username "$POSTGRES_USER" <<-EOSQL
        CREATE USER $database WITH PASSWORD '$password';
        CREATE DATABASE $database;
        GRANT ALL PRIVILEGES ON DATABASE $database TO $database;
EOSQL
}

if [ -n "$POSTGRES_MULTIPLE_DATABASES" ]; then
    echo "Multiple database creation requested: $POSTGRES_MULTIPLE_DATABASES"
    for entry in $(echo $POSTGRES_MULTIPLE_DATABASES | tr ',' ' '); do
        db=$(echo $entry | cut -f1 -d-)
        pw=$(echo $entry | cut -f2 -d-)
        create_user_and_database $db $pw
    done
    echo "Multiple databases created"
fi
My problem is: at a certain point (now ;)) I may want to add an additional service. Just adding an additional pair to the environment variable does not work, as the Postgres image skips the init step if data already exists. Is there a way to still achieve this behaviour?
Edit: I should have specified that I want to do it automatically from the compose file, by just changing the environment variable. It's clear that it can be done manually, of course.
You can always connect to the server with a database client and do it manually. If you don't want to expose the database port to the host then you can run the postgres client from the terminal in the postgres container. To open the terminal in the container:
> docker exec -it <container_name> /bin/sh
Then switch to the postgres user and start the client:
# su postgres
postgres@1778e9755f65:/$ psql
Once inside, just create a database: https://www.postgresql.org/docs/9.0/sql-createdatabase.html
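For example, mirroring what the init script above does (myapp and secret are placeholder values):

CREATE USER myapp WITH PASSWORD 'secret';
CREATE DATABASE myapp;
GRANT ALL PRIVILEGES ON DATABASE myapp TO myapp;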
I ended up not using docker-compose but using Ansible to deploy the database container directly, and also to make sure the appropriate users, databases, and permissions are present. I could not find a meaningful way to do that with startup and environment variables alone.
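For the record, a sketch of what that can look like with Ansible's postgresql_user and postgresql_db modules, reusing the confluence example and variables from the question (the login parameters assume the container's port is reachable on localhost, and the modules need psycopg2 available where they run):

- name: Ensure the confluence user exists
  postgresql_user:
    login_host: localhost
    login_user: postgres
    login_password: "{{ vault_pgsql_root_password }}"
    name: confluence
    password: "{{ confluence_pgsql_password }}"

- name: Ensure the confluence database exists and is owned by that user
  postgresql_db:
    login_host: localhost
    login_user: postgres
    login_password: "{{ vault_pgsql_root_password }}"
    name: confluence
    owner: confluence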

.sql file in Docker Postgres Container not automatically executed

I have a .sql file created from a pg_dump command that I would like to have automatically inserted into my PostgreSQL instance in my Docker container. I mount my .sql file into the docker-entrypoint-initdb.d directory using a volume. My Docker YML file looks like the following:
postgres:
  environment:
    - POSTGRES_USER=user
    - POSTGRES_PASSWORD=pw
    - POSTGRES_DB=db
  volumes:
    - ./out.sql:/docker-entrypoint-initdb.d/out.sql
  image: "postgres:9.6"
Since the sql file has been mounted into docker-entrypoint-initdb.d I expect it to automatically run and insert my data when I run docker-compose -f myyml.yml up -d. It does not execute. However, when I run bash in my container I do see my .sql file. Also, I can manually run the file with a command similar to psql -U user -a -f out.sql. Why is it not automatically running when I run docker-compose?
So after a lot of trial and error, I realized that I needed to remove all images and containers and restart. I ran docker-compose -f myyml.yml down, reran, and it seemed to work.
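Spelled out, the sequence is roughly (the file name is taken from the question):

docker-compose -f myyml.yml down   # remove the containers so initialization runs again
docker-compose -f myyml.yml up -d  # recreate; the init scripts run on the empty data directory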
I think you can add a command in docker-compose, as follows, using psql -U user -a -f out.sql:
postgres:
  environment:
    - POSTGRES_USER=user
    - POSTGRES_PASSWORD=pw
    - POSTGRES_DB=db
  volumes:
    - ./out.sql:/docker-entrypoint-initdb.d/out.sql
  image: "postgres:9.6"
  command: bash -c "psql -U user -a -f out.sql"
Note: if the command does not work with bash -c, remove it.

Run a PostgreSQL script using Ansible

I am looking for a way to run a Postgres script using Ansible. While I found a reasonably good example here, I need to:
Run the script as user postgres
I don't necessarily need to keep a copy of the script on the server, so if I do need a copy, it will only be for temporary use.
Can anyone tell me if this is possible and, if so, give an example of running it? Here is what I tried so far using Ansible; it just hung at these points:
- name: Testing DB to make sure it is available
  command: psql -U bob image
  register: b
- debug: b
- name: Verifying Tables exist in Image
  shell: \d image
  register: c
- debug: c
- name: Exiting Image DB
  shell: \q
  register: d
- debug: d
- name: Going to Agent DB
  command: psql -U bob agent
  register: e
- debug: e
This always hangs at the first part of it when logging into the image DB.
Why it doesn't work
This:
- name: Testing DB to make sure it is available
  command: psql -U bob image
  register: b
- debug: b
- name: Verifying Tables exist in Image
  shell: \d image
  register: c
- debug: c
doesn't do what you think it does.
The first command runs psql -U bob image. This starts a psql session. psql waits for input from stdin; Ansible will never send any, since it is simply waiting for the command you specified to exit so that it can check the exit code.
So Ansible waits for psql to exit, and psql waits for Ansible to send some input.
Each task in Ansible is independent. The shell or command modules do not change the shell that subsequent commands run in. You simply can't do this the way you expect.
Even if psql exited after the first task (or went to the background), you'd just get an error from the second task like:
bash: d: command not found
So the way you're trying to do this just isn't going to work.
How to do it
You need to run each task as a separate psql command, with a command string:
- name: Testing DB to make sure it is available
  command: psql -U bob image -c 'SELECT 1;'

- name: Verifying Tables exist in Image
  command: psql -U bob image -c '\d image'
... or with standard input, except that Ansible doesn't seem to support supplying a variable as stdin to a command.
... or with a (possibly templated) SQL script:
- name: Template sql script
  template: src="my.sql.j2" dest="{{sometemplocation}}/my.sql"

- name: Execute sql script
  shell: "psql -f {{sometemplocation}}/my.sql"

- name: Delete sql script
  file: path="{{sometemplocation}}/my.sql" state=absent
Alternately you can use Ansible's built-in support for querying PostgreSQL to do it, but in that case you cannot use the psql client's backslash commands like \d, you'd have to use only SQL. Query information_schema for table info, etc.
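For instance, a sketch with the postgresql_query module (shipped in the community.postgresql collection in newer Ansible releases; the db and user names follow the example above):

- name: List the tables in the image database
  postgresql_query:
    db: image
    login_user: bob
    query: SELECT table_name FROM information_schema.tables WHERE table_schema = 'public'
  register: image_tables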
Here's how some of my code looks
Here's an example from an automation module I wrote that does a lot with PostgreSQL.
Really, I should just suck it up and write a psql Ansible task that runs commands via psql, rather than using shell, which is awful and clumsy. For now, though, it works. I use connection strings that are assigned from variables or generated using set_fact to reduce the mess a bit and make connections more flexible.
- name: Wait for the target node to be ready to be joined
  shell: "{{postgres_install_dir}}/bin/psql '{{bdr_join_target_dsn}}' -qAtw -c 'SELECT bdr.bdr_node_join_wait_for_ready();'"

- name: Template pre-BDR-join SQL script
  template: src="{{bdr_pre_join_sql_template}}" dest="{{postgres_install_dir}}/bdr_pre_join_{{inventory_hostname}}.sql"

- name: Execute pre-BDR-join SQL script
  shell: "{{postgres_install_dir}}/bin/psql '{{bdr_node_dsn}}' -qAtw -f {{postgres_install_dir}}/bdr_pre_join_{{inventory_hostname}}.sql"

- name: Delete pre-BDR-join SQL script
  file: path="{{postgres_install_dir}}/bdr_pre_join_{{inventory_hostname}}.sql" state=absent

- name: bdr_group_join
  shell: "{{postgres_install_dir}}/bin/psql '{{bdr_node_dsn}}' -qAtw -c \"SELECT bdr.bdr_group_join(local_node_name := '{{inventory_hostname}}', node_external_dsn := '{{bdr_node_dsn}}', join_using_dsn := '{{bdr_join_target_dsn}}');\""

- name: Template post-BDR-join SQL script
  template: src="{{bdr_post_join_sql_template}}" dest="{{postgres_install_dir}}/bdr_post_join_{{inventory_hostname}}.sql"

- name: Execute post-BDR-join SQL script
  shell: "{{postgres_install_dir}}/bin/psql '{{bdr_node_dsn}}' -qAtw -f {{postgres_install_dir}}/bdr_post_join_{{inventory_hostname}}.sql"

- name: Delete post-BDR-join SQL script
  file: path="{{postgres_install_dir}}/bdr_post_join_{{inventory_hostname}}.sql" state=absent
The answer that Craig gives is good, but fails to solve the problem of running the commands as a specific user. That can be done with my additions to his code:
- name: Testing DB to make sure it is available
  become: true
  become_user: postgres
  command: psql -U bob image -c 'SELECT 1;'

- name: Verifying Tables exist in Image
  become: true
  become_user: postgres
  command: psql -U bob image -c '\d image'
Note the "become" and "become_user" parameters. These will tell Ansible to change to the correct user before running the commands.
IMPORTANT: Ansible Version 1.9 and earlier use sudo: yes and sudo_user: postgres instead of become: true and become_user: postgres
Building on the excellent responses above, you can also specify environment variables in your Ansible task as shown below. Note that this assumes you have set up a .pgpass file with the password for the target db.
- name: Execute some sql via psql
  command: psql -f /path/to/your/sql
  environment:
    PGUSER: "{{ db_user }}"
    PGDATABASE: "{{ db_name }}"
    PGHOST: "{{ db_host }}"
    PGPASSFILE: "{{ pgpass_filepath }}"