I have a Dockerfile with the code below:
FROM microsoft/mssql-server-windows-express
COPY ./create-db.sql .
ENV ACCEPT_EULA=Y
ENV sa_password=##$wo0RD!
CMD sqlcmd -i create-db.sql
and I can create the image, but when I run a container from it I don't see the created database on the SQL Server, because the script is executed before SQL Server has started.
Can I make the script execute after the SQL Server service has started?
RUN gets used to build the layers in an image. CMD is the command that is run when you launch an instance (a "container") of the built image.
Also, if your script depends on those environment variables and you are on an older version of Docker, it might fail because the variables are not defined the way you expect: in older versions of Docker, the Dockerfile ENV instruction uses a space instead of "=".
Your Dockerfile should probably be:
FROM microsoft/mssql-server-windows-express
COPY ./create-db.sql .
ENV ACCEPT_EULA Y
ENV SA_PASSWORD ##$wo0RD!
RUN sqlcmd -i create-db.sql
This will create an image containing the database with your password inside it.
(If the SQL file somehow uses the environment variables, this wouldn't make sense, as you might as well update the SQL file before you copy it over.) If you want to be able to override the password between the docker build and docker run steps, e.g. by using docker run --env sa_password=##$wo0RD! ..., you will need to change the last line to:
CMD sqlcmd -i create-db.sql && .\start -sa_password $env:SA_PASSWORD \
-ACCEPT_EULA $env:ACCEPT_EULA -attach_dbs \"$env:attach_dbs\" -Verbose
Which is a modified version of the CMD line that is inherited from the upstream image.
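With that in place you could, for example, build once and override the password per container (a sketch; the my-mssql tag is an assumption, and the quoting assumes a POSIX shell):
docker build -t my-mssql .
docker run -d -e sa_password='##$wo0RD!' -e ACCEPT_EULA=Y my-mssql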
You can follow this link https://github.com/microsoft/mssql-docker/issues/11.
Credits to Robin Moffatt.
Change your docker-compose.yml file to contain the following (note this uses the Linux SQL Server image, since the startup script below relies on bash):
mssql:
  image: microsoft/mssql-server-linux
  environment:
    # "$" has to be escaped as "$$" inside docker-compose.yml
    - SA_PASSWORD=##$$wo0RD!
    - ACCEPT_EULA=Y
  volumes:
    # directory with sql scripts on the host mapped to /scripts/
    # - ./data/mssql:/scripts/
    - ./create-db.sql:/scripts/create-db.sql
  command:
    - /bin/bash
    - -c
    - |
      # Launch MSSQL and send to background
      /opt/mssql/bin/sqlservr &
      # Wait 30 seconds for it to be available
      # (lame, I know, but there's no nc available to start prodding network ports)
      sleep 30
      # Run every script in /scripts
      # TODO set a flag so that this is only done once on creation,
      # and not every time the container runs
      for foo in /scripts/*.sql
      do /opt/mssql-tools/bin/sqlcmd -U sa -P $$SA_PASSWORD -l 30 -e -i $$foo
      done
      # So that the container doesn't shut down, sleep this thread
      sleep infinity
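One way to handle the TODO above (a sketch, assuming the data directory /var/opt/mssql is persisted as a volume so the marker file survives restarts; shown as a plain shell snippet, so inside docker-compose.yml the $ would again need escaping as $$):
# run the init scripts only once, using a marker file on the persisted data directory
if [ ! -f /var/opt/mssql/.initialized ]; then
  for foo in /scripts/*.sql
  do /opt/mssql-tools/bin/sqlcmd -U sa -P $SA_PASSWORD -l 30 -e -i $foo
  done
  touch /var/opt/mssql/.initialized
fi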
Related
I am trying to run netbox based on their standard guide on Docker Hub, with the slight difference that I need our existing postgres dump to be restored when the postgres container starts.
I have tried a few approaches like defining a command option in docker-compose file like (and a few more combinations):
sleep 2 && psql -U netbox -f netbox.sql
The sleep is required to prevent the psql command from running before the postgres service has started.
I also tried defining a bash script that does the database restore, but all these approaches cause the container to exit after the command/script is run.
My last resort was to utilize bash forking and this is what the postgres snippet of docker-compose looks like:
postgres:
  image: postgres:13-alpine
  env_file: env/postgres.env
  command:
    - sh
    - -c
    - (sleep 3 && cd /home && psql -U netbox -f netbox.sql) & su -c postgres postgres
  volumes:
    - ./my_db:/home/
    - netbox-postgres-data:/var/lib/postgresql/data
Sadly this results in:
postgres: could not access the server configuration file
"/var/lib/postgresql/data/postgresql.conf": No such file or directory
If I omit the command section of the docker-compose file, the container starts up fine and I can navigate to and ls the directory in the error message, but that is not what I really need, because this container will go on to be part of a much larger jungle of an ecosystem with little to no control over it afterwards.
Could it be my bash forking, or does the problem lie somewhere else?
Thanks in advance
I was able to find a solution by going through the thread that David Maze shared in the comments.
In my case, placing the *.sql file inside /docker-entrypoint-initdb.d did not work, but a bash script placed in the /docker-entrypoint-initdb.d directory did get triggered.
The bash script was a very simple one: it would cd to the directory containing the SQL dump and then restore it by running psql:
psql -U netbox -f netbox.sql
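For reference, a minimal sketch of such a script (the restore-dump.sh name and the /home mount path are assumptions based on the compose file above):
#!/bin/bash
# restore-dump.sh - placed in /docker-entrypoint-initdb.d/ so the official
# postgres entrypoint runs it when the data directory is first initialized
cd /home
psql -U netbox -f netbox.sql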
I created an Azure Container Instance and ran PostgreSQL in it, and mounted an Azure storage account into the container instance. How can I start backup jobs, possibly via a scheduler?
When I run the command
az container exec --resource-group Vitalii-demo --name vitalii-demo --exec-command "pg_dumpall -c -U postgrace > dump.sql"
I get an error:
error: code = 2 desc = oci runtime error: exec failed: container_linux.go:247: starting container process caused "exec: \"pg_dumpall -c -U postgrace > dump.sql\": executable file not found in $PATH"
I read that
Azure Container Instances currently supports launching a single process with az container exec, and you cannot pass command arguments. For example, you cannot chain commands like in sh -c "echo FOO && echo BAR", or execute echo FOO.
Perhaps there is a way to run this as a task? Thanks.
Unfortunately - and as you already mentioned - it's not possible to run any commands with arguments like echo FOO or chain multiple commands together with &&.
https://learn.microsoft.com/en-us/azure/container-instances/container-instances-exec#run-a-command-with-azure-cli
You should be able to run an interactive shell by using --exec-command /bin/bash.
But this will not help if you want to schedule the backups programmatically.
pg_dumpall can also be configured by environment variables:
https://www.postgresql.org/docs/9.3/libpq-envars.html
You could launch your backup-container with the correct environment variables in order to connect your database service:
PGHOST
PGPORT
PGUSER
PGPASSWORD
With these variables set, a simple pg_dumpall should do exactly what you want.
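A sketch with placeholder values (host, user and password are assumptions to adapt to your setup):
# the connection is described entirely by the standard libpq environment variables,
# so the command itself needs no arguments
export PGHOST=my-postgres-host
export PGPORT=5432
export PGUSER=postgres
export PGPASSWORD=my-secret-password
pg_dumpall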
Hope that helps.
UPDATE:
Yikes, even when configuring the connection via environment variables you won't be able to specify the desired output file... Sorry.
You could create your own Docker image with a pre-configured script for dumping your PostgreSQL database.
Doing it that way, you can configure the output file in your script and then simply execute the script with --exec-command dump_my_db.sh.
Keep in mind that your script has to be located somewhere in the default $PATH - e.g. /usr/local/bin.
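A minimal sketch of such an image (the base image, the dump_my_db.sh name and the output path are assumptions):
FROM postgres:13
COPY dump_my_db.sh /usr/local/bin/dump_my_db.sh
RUN chmod +x /usr/local/bin/dump_my_db.sh
where dump_my_db.sh could be as simple as:
#!/bin/bash
# connection details come from PGHOST/PGPORT/PGUSER/PGPASSWORD;
# the output file lives on the mounted storage account
pg_dumpall -c > /mnt/backup/dump.sql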
I've got a custom image based on the official postgres image, and I want to extend the parent image's entrypoint so that it creates new users and databases if they don't exist yet, every time a container starts up. Is that possible? That is, my image would execute all the commands from the standard entrypoint plus my own shell script.
I know about putting my own scripts into the /docker-entrypoint-initdb.d directory, but it seems that they get executed only the first time the volume is created.
What you need to do is something like the following:
setup_user.sh
#!/bin/bash
# wait for the database to come up, then run the user setup commands
sleep 10
echo "execute commands to setup user"
setup.sh
#!/bin/bash
# run the setup script in the background, then hand over to the original entrypoint
sh setup_user.sh &
./docker-entrypoint.sh postgres
And your image should set the ENTRYPOINT as
ENTRYPOINT ["/setup.sh"]
You need to start your setup script in the background and let the original entrypoint script do its work to start the database.
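A sketch of how this could be wired into an image (the base tag, the script locations and the working directory are assumptions; the official postgres image has kept a /docker-entrypoint.sh symlink at the root for backwards compatibility, which is what ./docker-entrypoint.sh resolves to):
FROM postgres:13
COPY setup_user.sh setup.sh /
RUN chmod +x /setup_user.sh /setup.sh
ENTRYPOINT ["/setup.sh"]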
In addition to the accepted answer Docker - extend the parent's ENTRYPOINT: instead of sleeping for a specific time, you may want to consider executing your script similar to how the docker-entrypoint.sh of the postgres Docker image does it (to init the DB, it starts the server, executes the initialization commands, and shuts it down again). Thus:
setup_user.sh
su - "$YOUR_PG_USER" -c '/usr/local/bin/pg_ctl -D /var/lib/postgresql/data -o "-c listen_addresses='localhost'" -w start'
psql -U "$YOUR_PG_USER" "$YOUR_PG_DATABASE" < "$YOUR_SQL_COMMANDS"
su - "$YOUR_PG_USER" -c '/usr/local/bin/pg_ctl -D /var/lib/postgresql/data -m fast -w stop'
setup.sh
./setup_user.sh && ./docker-entrypoint.sh postgres
Trying to make a Dockerfile for the postgres DB needed by my app.
Dockerfile
FROM postgres:9.4
RUN mkdir /sql
COPY src/main/resources/sql_scripts/* /sql/
RUN psql -f /sql/create_user.sql
RUN psql -U user -W 123 -f create_db.sql
RUN psql -U user -W 123 -d school_ats -f create_tables.sql
run
docker build .
result:
Sending build context to Docker daemon 3.367 MB
Step 1 : FROM postgres:9.4
---> 6196bca94565
Step 2 : RUN mkdir /sql
---> Using cache
---> 6f57c1e759b7
Step 3 : COPY src/main/resources/sql_scripts/* /sql/
---> Using cache
---> 3b496bfb28cd
Step 4 : RUN psql -a -f /sql/create_user.sql
---> Running in 33b2230a12fa
psql: could not connect to server: No such file or directory
Is the server running locally and accepting
connections on Unix domain socket "/var/run/postgresql/.s.PGSQL.5432"?
The command '/bin/sh -c psql -a -f /sql/create_user.sql' returned a non-zero code: 2
How can I specify db in docker for my project?
When building your Docker image, postgres is not running. The database is started when the container starts, so any SQL files can only be executed after that. The easiest solution is to put your SQL files into a special directory:
FROM postgres:9.4
COPY *.sql /docker-entrypoint-initdb.d/
On startup, the entrypoint script will execute all files from this directory. You can read about this in the docs at https://hub.docker.com/_/postgres/ in the section "How to extend this image".
Also, if you need a different user, you should set the environment variables POSTGRES_USER and POSTGRES_PASSWORD. That is easier than using custom scripts for creating the user.
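For example, a sketch combining both suggestions (user, password and database names taken from the question; the script path is an assumption, and create_user.sql/create_db.sql may become unnecessary since the entrypoint creates the user and database itself):
FROM postgres:9.4
ENV POSTGRES_USER user
ENV POSTGRES_PASSWORD 123
ENV POSTGRES_DB school_ats
# executed automatically by the entrypoint when the data directory is first initialized
COPY src/main/resources/sql_scripts/*.sql /docker-entrypoint-initdb.d/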
As the comment above says, during the image build you don't get a running instance of Postgres.
You could take a slightly different approach. Instead of trying to execute the SQL scripts yourself, you could copy them to the /docker-entrypoint-initdb.d/ directory. They will be executed when the container starts up.
Have a look at how the postgres:9.4 image is built:
Dockerfile
docker-entrypoint.sh
Also, in your Dockerfile, use environment variables to set the database details:
POSTGRES_DB
POSTGRES_USER
POSTGRES_PASSWORD
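Alternatively, the same variables can be passed at run time instead of being baked into the image, for example (a sketch; the my-postgres-image tag is an assumption, the values come from the question):
docker run -d \
  -e POSTGRES_USER=user \
  -e POSTGRES_PASSWORD=123 \
  -e POSTGRES_DB=school_ats \
  my-postgres-image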
I'm trying to pass a host variable to a Dockerfile when running docker-compose build
I would like to run
RUN usermod -u $USERID www-data
in an apache-php7 Dockerfile. $USERID being the ID of the current host user.
I would have thought that the following might work:
command line
EXPORT USERID=$(id -u); docker-compose build
docker-compose.yml
...
environment:
- USERID=$USERID
Dockerfile
ENV USERID
RUN usermod -u $USERID www-data
But no luck yet.
For Docker in general, it is usually not possible to use host environment variables during the build phase; this is because it is desirable that if you run docker build and I run docker build with the same Dockerfile (or Docker Hub runs docker build with the same Dockerfile), we end up with the same image, regardless of our local environment.
While passing in variables at runtime is easy with the docker command line (using -e <var>=<value>), it's a little trickier with docker-compose, because that tool is designed to create self-contained environments.
A simple solution would be to drop the host uid into an environment file before starting the container. That is, assuming you have:
version: "2"
services:
shell:
image: alpine
env_file: docker-compose.env
command: >
env
You can then:
echo HOST_UID=$UID > docker-compose.env; docker-compose up
And the HOST_UID environment variable will be available to your
container:
Recreating vartest_shell_1
Attaching to vartest_shell_1
shell_1 | HOSTNAME=17423d169a25
shell_1 | HOST_UID=1000
shell_1 | HOME=/root
vartest_shell_1 exited with code 0
You would then have something like an ENTRYPOINT script that would set up the container environment (creating users, modifying file ownership) to operate correctly with the given UID.
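A minimal sketch of such an entrypoint for the apache-php7 image in the question (the script name and the /var/www path are assumptions):
#!/bin/sh
# docker-entrypoint.sh - align www-data's UID with the host user passed in
# via HOST_UID, then hand control over to the original command
if [ -n "$HOST_UID" ]; then
    usermod -u "$HOST_UID" www-data
    chown -R www-data:www-data /var/www
fi
exec "$@"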