I'm using the cookiecutter-django project template, which includes nice docker-compose integration. However, when I run a manage.py command that creates a file via docker-compose, e.g.:
sudo docker-compose -f dev.yml run django python manage.py dumpdata --output ./myapp/fixtures/data.json
the file is owned by root:root, so I do not initially have write permissions on it on my host filesystem. I'm still learning about docker-compose and am struggling to find the tidiest way to chown any created files back to my local user.
I did try setting the user: flag on my django service in dev.yml, like so:
django:
  user: $USER
but this didn't take.
I also read about using
django:
  user: ${UID}
but this also fails, with a warning:
WARNING: The UID variable is not set. Defaulting to a blank string.
despite echo $UID returning the correct value. So for some reason, env variables are not being passed through properly, though I don't know how to go about debugging that. Possibly related: https://github.com/docker/compose/issues/2613
Edit: more updates.
Running docker-compose -f dev.yml run django env shows that $UID isn't present. I tried assigning it in entrypoint.sh, but as that file is run by root, $UID will be 0 at runtime. I'm out of ideas now as to how to pass the user ID in.
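One detail worth noting here: bash sets $UID but does not export it, so child processes such as docker-compose never see it. A minimal sketch of a workaround, assuming bash (the second variant relies on docker-compose reading a .env file in the project directory for substitutions):

# bash defines UID but does not export it; exporting it lets
# docker-compose substitute ${UID} in dev.yml
export UID
docker-compose -f dev.yml run django env | grep UID

# alternatively, drop it into a .env file that docker-compose picks up
echo "UID=$(id -u)" > .env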
I have a scenario where a certain data set comes from a CSV, and I need to allow a non-dev to hit pgAdmin and update this data set. I want to be able to put this CSV in a mapped folder from the host system and then use the pgAdmin GUI to run a COPY command. So far pgAdmin is telling me:
ERROR: could not open file "/var/lib/pgadmin/data-files/some_data.csv" for reading: No such file or directory
Here are my steps so far, along with a sanity check inspect:
docker volume create --name=data-files
docker run -e PGADMIN_DEFAULT_EMAIL="pgadmin@example.com" -e PGADMIN_DEFAULT_PASSWORD=some_pass -v data-files:/var/lib/pgadmin/data-files -d -p 5050:80 --name pgadmin dpage/pgadmin4
docker volume inspect data-files --format '{{.Mountpoint}}'
/app/docker/volumes/data-files/_data
docker cp ./updated-data.csv pgadmin:/var/lib/pgadmin/data-files
And now I'd think that pgAdmin could see updated-data.csv, so I try COPY, which I know works locally on my dev system where pgAdmin runs on bare metal:
COPY foo.bar(
...
)
FROM '/var/lib/pgadmin/data-files/updated-data.csv'
DELIMITER ','
CSV HEADER
ENCODING 'windows-1252';
Is there any glaring mistake here? When I do docker cp, there's no feedback on stdout: no error, no mention of success, no hash or anything.
It looks like you assumed the file should be inside the pgAdmin container. However, COPY runs on the PostgreSQL server, so the file must be inside the Postgres container for the query to find it. I suggest you copy the file to the Postgres container:
docker cp <path_from_your_local>/file.csv <postgres_container_name>:/file.csv
Then you can run the COPY from the pgAdmin query tool without problems!
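For example, a minimal end-to-end sketch (the container name postgres, the database name mydb, and the /tmp target path are assumptions; adjust them to your setup):

# copy the CSV into the Postgres container, not the pgAdmin one
docker cp ./updated-data.csv postgres:/tmp/updated-data.csv
# COPY runs server-side, so the path is resolved inside the Postgres container
docker exec -it postgres psql -U postgres -d mydb \
  -c "COPY foo.bar FROM '/tmp/updated-data.csv' DELIMITER ',' CSV HEADER ENCODING 'windows-1252';"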
I hope this helps others who come here.
I created an Azure Container Instance and ran PostgreSQL in it, with an Azure storage account mounted. How can I set up backups, possibly on a scheduler?
When I run the command
az container exec --resource-group Vitalii-demo --name vitalii-demo --exec-command "pg_dumpall -c -U postgrace > dump.sql"
I get an error:
error: code = 2 desc = oci runtime error: exec failed: container_linux.go:247: starting container process caused "exec: \"pg_dumpall -c -U postgrace > dump.sql\": executable file not found in $PATH"
I read that
Azure Container Instances currently supports launching a single process with az container exec, and you cannot pass command arguments. For example, you cannot chain commands like in sh -c "echo FOO && echo BAR", or execute echo FOO.
Perhaps there is a way to run it as a scheduled task? Thanks.
Unfortunately - and as you already mentioned - it's not possible to run any commands with arguments like echo FOO or chain multiple commands together with &&.
https://learn.microsoft.com/en-us/azure/container-instances/container-instances-exec#run-a-command-with-azure-cli
You should be able to run an interactive shell by using --exec-command /bin/bash.
But this will not help if you want to schedule the backups programmatically.
pg_dumpall can also be configured by environment variables:
https://www.postgresql.org/docs/9.3/libpq-envars.html
You could launch your backup container with the correct environment variables in order to connect to your database service:
PGHOST
PGPORT
PGUSER
PGPASSWORD
With these variables set, a simple pg_dumpall should do exactly what you want.
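For example (reusing the resource names from the question, and assuming the PG* variables were set on the container when it was created):

# with the connection configured via environment variables, the exec
# command is a single executable with no arguments
az container exec --resource-group Vitalii-demo --name vitalii-demo --exec-command "pg_dumpall"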
Hope that helps.
UPDATE:
Yikes, even when configuring the connection via environment variables, you won't be able to state the desired output file... Sorry.
You could create your own Docker image with a pre-configured script for dumping your PostgreSQL database.
Doing it that way, you can configure the output file in your script and then simply execute the script with --exec-command dump_my_db.sh.
Keep in mind that your script has to be located somewhere in the default $PATH - e.g. /usr/local/bin.
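A minimal sketch of such a script (the name dump_my_db.sh is the one used above; the output path /mnt/backup is an assumed mount point for your storage account):

#!/bin/bash
# /usr/local/bin/dump_my_db.sh
# connection details come from PGHOST/PGPORT/PGUSER/PGPASSWORD on the container
set -euo pipefail
pg_dumpall -c > "/mnt/backup/dump_$(date +%Y%m%d%H%M%S).sql"

Remember to chmod +x the script and COPY it into your image when building.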
I have a simple Dockerfile:
FROM postgres:latest
ENV POSTGRES_PASSWORD password
ENV POSTGRES_USER postgres
ENV POSTGRES_DB evesde
COPY init.sh /docker-entrypoint-initdb.d/
and my init file is chmod'd to 777:
#!/bin/bash
psql -U "postgres" -d "evesde" -e "create role yaml with login encrypted password 'password';"
When running a container, it will say:
psql: warning: extra command-line argument "create role yaml with login encrypted password 'password';" ignored
I'm not sure why this is happening; when trying it in an interactive terminal, the command seemingly worked. I don't see any additional information and wasn't sure what was going wrong.
The postgres docker page is: https://hub.docker.com/_/postgres
Looking at it deeper, I noticed that running the command itself fails in an interactive terminal with the same error, but if I first enter psql with psql -U "postgres" -d "evesde" and then run the command there, it works.
I think it fails when the command is passed in through exec; likely related to the ' quoting.
You want -c instead of -e.
-e turns on "echo queries"
-c runs the command and exits
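Applied to the init.sh from the question, that is the same script with only the flag changed:

#!/bin/bash
# -c executes the given command string and exits; -e only echoes queries
psql -U "postgres" -d "evesde" -c "create role yaml with login encrypted password 'password';"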
Have you considered putting just the create role command in a file called create_role.sql and copying that into /docker-entrypoint-initdb.d/?
Based on testing, it looks like an equivalent but simpler solution is to put the SQL command as one line in a file, 00_roles.sql, and copy that into the container instead of the init.sh script.
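A sketch of that variant (file name as above; the statement is the one from the question): 00_roles.sql contains the single line

create role yaml with login encrypted password 'password';

and the Dockerfile copies it instead of the script:

COPY 00_roles.sql /docker-entrypoint-initdb.d/

The postgres entrypoint runs any *.sql files in that directory against $POSTGRES_DB automatically, so no psql invocation is needed.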
I am using 4D for the front-end and PostgreSQL for the back-end, so I have the requirement to take database backups from the front-end.
Here is what I have done so far for taking backups in 4D:
C_LONGINT(i_pg_connection)
i_pg_connection:=PgSQL Connect ("localhost";"admin";"admin";"test_db")
LAUNCH EXTERNAL PROCESS("C:\\Program Files\\PostgreSQL\\9.5\\bin\\pg_dump.exe -h localhost -p 5432 -U admin -F c -b -v -f C:\\Users\\Admin_user\\Desktop\\backup_test\\db_backup.backup test_db")
PgSQL Close (i_pg_connection)
But it's not taking the backup.
The backup command is OK, because it works perfectly when fired from the command prompt.
What's wrong in my code?
Thanks in advance.
Unneeded commands in your code
If you are using LAUNCH EXTERNAL PROCESS to do the backup then you do not need the PgSQL CONNECT and PgSQL CLOSE.
These plug-in commands do not execute in the same context as LAUNCH EXTERNAL PROCESS so they are unneeded in this situation.
Make sure you have write access
If the 4D Database is running as a Service, or more specifically as a user that does not have write access to C:\Users\Admin_user\..., then it could be failing due to a permissions issue.
Make sure that you are writing to a location that you have write access to, and also be sure to check the $out and $err parameters to see what the Standard Output and Error Streams are.
You need to specify a password for pg_dump
Another problem is that you are not specifying the password.
You could either use the PGPASSWORD environment variable or use a pgpass.conf file in the user's profile directory.
Regarding the PGPASSWORD environment variable, the documentation has the following warning:
Use of this environment variable is not recommended for security reasons, as some operating systems allow non-root users to see process environment variables via ps; instead consider using the ~/.pgpass file
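For reference, each line of the pgpass file has the form host:port:database:user:password (the values below are placeholders matching the question's command line):

# %APPDATA%\postgresql\pgpass.conf on Windows (~/.pgpass on Unix)
localhost:5432:test_db:admin:your_postgresql_password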
Example using pgpass.conf
The following example assumes you have a pgpass.conf file in place:
C_TEXT($c;$in;$out;$err)
$c:="C:\\Program Files\\PostgreSQL\\9.5\\bin\\pg_dump.exe -h localhost -p 5432 -U admin -F"
$c:=$c+" c -b -v -f C:\\Users\\Admin_user\\Desktop\\backup_test\\db_backup.backup test_db"
LAUNCH EXTERNAL PROCESS($c;$in;$out;$err)
TRACE
Example using PGPASSWORD environment variable
The following example sets the PGPASSWORD environment variable before the call to pg_dump and then clears the variable after the call:
C_TEXT($c;$in;$out;$err)
SET ENVIRONMENT VARIABLE ( "PGPASSWORD" ; "your postgreSQL password" )
$c:="C:\\Program Files\\PostgreSQL\\9.5\\bin\\pg_dump.exe -h localhost -p 5432 -U admin -F"
$c:=$c+" c -b -v -f C:\\Users\\Admin_user\\Desktop\\backup_test\\db_backup.backup test_db"
LAUNCH EXTERNAL PROCESS($c;$in;$out;$err)
SET ENVIRONMENT VARIABLE ( "PGPASSWORD" ; "" ) // clear password for security
TRACE
Debugging
Make sure to use the debugger to check the $out and $err to see what the underlying issue is.
I'm trying to pass a host variable to a Dockerfile when running docker-compose build
I would like to run
RUN usermod -u $USERID www-data
in an apache-php7 Dockerfile. $USERID being the ID of the current host user.
I would have thought that the following might work:
commandline
export USERID=$(id -u); docker-compose build
docker-compose.yml
...
environment:
  - USERID=$USERID
Dockerfile
ENV USERID
RUN usermod -u $USERID www-data
But no luck yet.
For Docker in general, it is not possible to use host environment variables during the build phase. This is by design: if you run docker build and I run docker build with the same Dockerfile (or Docker Hub runs docker build with the same Dockerfile), we should end up with the same image, regardless of our local environment.
While passing in variables at runtime is easy with the docker command line (using -e <var>=<value>), it's a little trickier with docker-compose, because that tool is designed to create self-contained environments.
A simple solution would be to drop the host UID into an environment file before starting the container. That is, assuming you have:
version: "2"
services:
shell:
image: alpine
env_file: docker-compose.env
command: >
env
You can then:
echo HOST_UID=$UID > docker-compose.env; docker-compose up
And the HOST_UID environment variable will be available to your container:
Recreating vartest_shell_1
Attaching to vartest_shell_1
shell_1 | HOSTNAME=17423d169a25
shell_1 | HOST_UID=1000
shell_1 | HOME=/root
vartest_shell_1 exited with code 0
You would then have something like an ENTRYPOINT script that would set up the container environment (creating users, modifying file ownership) to operate correctly with the given UID.
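A minimal sketch of such an entrypoint (assuming a Debian-based image where usermod is available; the www-data user comes from the question above, the rest is illustrative):

#!/bin/sh
# entrypoint.sh: remap the service user to the host UID passed in at runtime
# HOST_UID is provided via docker-compose.env as shown above
if [ -n "${HOST_UID:-}" ]; then
    usermod -u "$HOST_UID" www-data        # give www-data the host user's UID
    chown -R www-data:www-data /var/www    # fix ownership of existing files
fi
exec "$@"                                  # hand off to the container command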