Custom DDEV pull provider to update local database and user-generated files - TYPO3

I'm trying to create a custom DDEV provider to import the current database and also user-generated files from the web server.
I want to use it with TYPO3 projects, where I develop the EXT locally with DDEV (because it's awesome :) ), and I want to update my local database and also the "fileadmin" files with the help of the ddev pull function.
I've read the docs (Introduction to Hosting Provider Integration), tested the bash commands locally within the DDEV container (ddev ssh), and I'm able to connect to the remote web server, make a database dump, and transfer it to the local DDEV container.
So I added the bash commands to my custom provider .yaml file in the /provider/ folder.
Here is the current file:
environment_variables:
  DB_NAME: db_name
  DB_USER: db_user
  DB_PASSWORD: password
  HOST_IP: 11.11.11.11
  SSH_USERNAME: username
  SSH_PASSWORD: password
  SSH_PORT: 22
db_pull_command:
  command: |
    # Create the .downloads folder if it doesn't exist
    mkdir -p /var/www/html/.ddev/.downloads
    # Execute the mysqldump on the remote web server via SSH
    ssh -p ${SSH_PORT} ${SSH_USERNAME}@${HOST_IP} 'mysqldump -h 127.0.0.1 -u ${DB_USER} -p ${DB_PASSWORD} ${DB_NAME} > /tmp/${DB_NAME}.sql.gz'
    # Download the sql file to the ddev folder
    scp -P ${SSH_PORT} ${SSH_USERNAME}@${HOST_IP}:/tmp/${DB_NAME}.sql.gz /var/www/html/.ddev/.downloads/db.sql.gz
If I execute the pull with ddev pull my-provider, I get the following error:
Downloading database...
bash: 03: command not found
Pull failed: Failed to exec mkdir -p /var/www/html/.ddev/.downloads
I assumed that the commands are executed just as if I ran them within the DDEV container (with ddev ssh). What am I missing?
My Environment:
TYPO3 v10.4.20
Windows 10 (WSL)
Docker Desktop 3.5.2
DDEV-Local version v1.17.7
architecture amd64
db drud/ddev-dbserver-mariadb-10.3:v1.17.7
dba phpmyadmin:5
ddev-ssh-agent drud/ddev-ssh-agent:v1.17.0
docker 20.10.7
docker-compose 1.29.2
The web server is running on Plesk.
Note: I only tried to implement the db pull command so far.
UPDATE 09.11.21:
I've gotten far enough that I'm able to update the database and also download the files. However, I'm only able to do it if I hardcode the values. Every time I try to set up the environment_variables:, I get the following error when I run ddev pull myProvider:
Downloading database...
bash: 03: command not found
Here is my current .yaml file with the environment_variables:, which currently doesn't work. I've tested all the commands within ddev ssh, and they work if I call them manually.
environment_variables:
  DB_NAME: db_name
  DB_USER: db_user
  DB_PASSWORD: 'Password$'
  HOST_IP: 10.10.10.10
  SSH_USERNAME: username
  SSH_PORT: 21
auth_command:
  command: |
    ssh-add -l >/dev/null || ( echo "Please 'ddev auth ssh' before running this command." && exit 1 )
db_pull_command:
  command: |
    mkdir -p /var/www/html/.ddev/.downloads
    ssh -p ${SSH_PORT} ${SSH_USERNAME}@${HOST_IP} "mysqldump -h 127.0.0.1 -u ${DB_USER} -p'${DB_PASSWORD}' ${DB_NAME} > /tmp/${DB_NAME}.sql"
    scp -P ${SSH_PORT} ${SSH_USERNAME}@${HOST_IP}:/tmp/${DB_NAME}.sql /var/www/html/.ddev/.downloads/db.sql
    gzip -f /var/www/html/.ddev/.downloads/db.sql
files_pull_command:
  command: |
    scp -P ${SSH_PORT} -r ${SSH_USERNAME}@${HOST_IP}:/path/to/public/fileadmin/user_upload /var/www/html/.ddev/.downloads/files
Am I declaring the variables the wrong way? Or what is it that I'm missing?
For anyone who has trouble connecting via ssh without the password prompt, you can run the following commands:
ssh-keygen -t rsa
ssh-copy-id -i ~/.ssh/id_rsa.pub -p 22 username@host
Afterward you should be able to connect without a password prompt. Try the following: ssh -p 22 username@host
Before you try to ddev pull, you have to execute ddev auth ssh.
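With the key in place, the pull then boils down to these two commands (both shown above):

ddev auth ssh
ddev pull myProvider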

Thanks to @rfay for pointing me in the right direction.
The problem was that my password contained a special character (not a $ though) which needed to be escaped.
After escaping it correctly, like so:
environment_variables:
  DB_PASSWORD: 'Password\&\'
the ddev pull works.
I hope my .yaml file helps someone else who needs to pull from a web server.

Related

OWASP/ZAP dangling when trying to scan

I am trying out OWASP/ZAP to see if it is something we can use for our project, but I cannot make it work, and I don't know what I am doing wrong; the documentation really does not help. What I am trying to do is run a scan on my API, which runs in a docker container locally on my Windows machine, so I run the command:
docker run -v $(pwd):/zap/wrk/:rw -t owasp/zap2docker-stable zap-baseline.py -t http://172.21.0.2:8080/swagger.json -g gen.conf -r testreport.html
The IP 172.21.0.2 is the IP address of my API container; I even tried with localhost and 127.0.0.1, but it just hangs at the following log message:
_XSERVTransmkdir: ERROR: euid != 0,directory /tmp/.X11-unix will not be created.
Feb 14, 2019 1:43:31 PM java.util.prefs.FileSystemPreferences$1 run
INFO: Created user preferences directory.
Nothing happens, and my ZAP docker container is in an unhealthy state; after some time it just crashes and ends up with a bunch of NullPointerExceptions. Does ZAP docker only work on Linux, or is there something specific I need to do when running it on a Windows machine? I don't get why this is not working even when I am specifically following the guideline at https://github.com/zaproxy/zaproxy/wiki/Docker
Edit 1
My latest try, where I target my host IP address directly and the port that I am exposing my API on, gives me the following error:
_XSERVTransmkdir: ERROR: euid != 0,directory /tmp/.X11-unix will not be created.
Feb 14, 2019 2:12:07 PM java.util.prefs.FileSystemPreferences$1 run
INFO: Created user preferences directory.
Total of 3 URLs
ERROR Permission denied
2019-02-14 14:12:57,116 I/O error(13): Permission denied
Traceback (most recent call last):
  File "/zap/zap-baseline.py", line 347, in main
    with open(base_dir + generate, 'w') as f:
IOError: [Errno 13] Permission denied: '/zap/wrk/gen.conf'
Found Java version 1.8.0_151
Available memory: 3928 MB
Setting jvm heap size: -Xmx982m
213 [main] INFO org.zaproxy.zap.DaemonBootstrap
When you run docker with: docker run -v $(pwd):/zap/wrk/:rw ...
you are mapping the /zap/wrk/ directory in the docker image to the current working directory (cwd) of the machine on which you are running docker.
I think the problem is that your current user doesn't have write access to the cwd.
Try the command below; hopefully it resolves the issue.
$ docker run --user $(id -u):$(id -g) -v $(pwd):/zap/wrk/:rw --rm -t owasp/zap2docker-stable zap-baseline.py -t https://your_url -g gen.conf -r testreport.html
The key error here is:
IOError: [Errno 13] Permission denied: '/zap/wrk/gen.conf'
This means that the script cannot write to the gen.conf file that you have mounted on /zap/wrk.
Do you have write access to the cwd when it's not mounted?
The reason for that is that if you use the -r parameter, ZAP will attempt to generate the report.html file at /zap/wrk/. In order to make this work, we have to mount a directory to /zap/wrk.
But when you do so, it is important that the ZAP container is able to perform write operations on the mounted directory.
So, below is a working solution using GitLab CI YAML. I started with the approach of using image: owasp/zap2docker-stable, but then had to switch to vanilla docker commands to execute it.
test_site:
  stage: test
  image: docker:latest
  script:
    # The folder zap-reports created locally will be mounted into the owasp/zap2docker container;
    # on execution it will generate the reports in this folder. The current user is passed so reports can be generated.
    - mkdir zap-reports
    - cd zap-reports
    - docker pull owasp/zap2docker-stable:latest || echo
    - docker run --name zap-container --rm -v $(pwd):/zap/wrk -u $(id -u ${USER}):$(id -g ${USER}) owasp/zap2docker-stable zap-baseline.py -t "https://example.com" -r report.html
  artifacts:
    when: always
    paths:
      - zap-reports
  allow_failure: true
So the trick in the above code is:
- Mount the local directory zap-reports to /zap/wrk, as in $(pwd):/zap/wrk.
- Pass the current user and group on the host machine to the docker container, so the process runs as the same user/group. This allows it to write the reports to the directory mounted from the local host. This is done by -u $(id -u ${USER}):$(id -g ${USER}).
Below is the working code with image: owasp/zap2docker-stable
test_site:
  variables:
    GIT_STRATEGY: none
  stage: test
  image:
    name: owasp/zap2docker-stable:latest
  before_script:
    - mkdir -p /zap/wrk
  script:
    - zap-baseline.py -t "https://example.com" -g gen.conf -I -r testreport.html
    - cp /zap/wrk/testreport.html testreport.html
  artifacts:
    when: always
    paths:
      - zap.out
      - testreport.html

Configure Dockerfile with Postgres

Trying to make a Dockerfile for the Postgres DB needed for my app.
Dockerfile
FROM postgres:9.4
RUN mkdir /sql
COPY src/main/resources/sql_scripts/* /sql/
RUN psql -f /sql/create_user.sql
RUN psql -U user -W 123 -f create_db.sql
RUN psql -U user -W 123 -d school_ats -f create_tables.sql
run
docker build .
result:
Sending build context to Docker daemon 3.367 MB
Step 1 : FROM postgres:9.4
---> 6196bca94565
Step 2 : RUN mkdir /sql
---> Using cache
---> 6f57c1e759b7
Step 3 : COPY src/main/resources/sql_scripts/* /sql/
---> Using cache
---> 3b496bfb28cd
Step 4 : RUN psql -a -f /sql/create_user.sql
---> Running in 33b2230a12fa
psql: could not connect to server: No such file or directory
Is the server running locally and accepting
connections on Unix domain socket "/var/run/postgresql/.s.PGSQL.5432"?
The command '/bin/sh -c psql -a -f /sql/create_user.sql' returned a non-zero code: 2
How can I specify the db in docker for my project?
When building your docker image, Postgres is not running. The database is started when the container starts, and any SQL files can be executed after that. The easiest solution is to put your SQL files into a special directory:
FROM postgres:9.4
COPY *.sql /docker-entrypoint-initdb.d/
On startup, the entrypoint script will execute all files from this directory. You can read about this in the docs at https://hub.docker.com/_/postgres/ in the section "How to extend this image".
Also, if you need a different user, you should set the environment variables POSTGRES_USER and POSTGRES_PASSWORD. It's easier than using custom scripts for creating the user.
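For example, with the stock image you could create the user and database without any custom SQL at all (a sketch using the values from the question; check that your image version supports POSTGRES_DB):

docker run -e POSTGRES_USER=user -e POSTGRES_PASSWORD=123 -e POSTGRES_DB=school_ats postgres:9.4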
As the comment above says, during the image build you don't get a running instance of Postgres.
You could take a slightly different approach. Instead of trying to execute SQL scripts yourself, you could copy them to the /docker-entrypoint-initdb.d/ directory. They will be executed when the container starts up.
Have a look at how the postgres:9.4 image is built:
Dockerfile
docker-entrypoint.sh
Also, in your Dockerfile, use these variables to set the database details:
POSTGRES_DB
POSTGRES_USER
POSTGRES_PASSWORD
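Putting both suggestions together, a minimal Dockerfile might look like this (a sketch; the values are placeholders taken from the question):

FROM postgres:9.4
ENV POSTGRES_USER=user \
    POSTGRES_PASSWORD=123 \
    POSTGRES_DB=school_ats
COPY src/main/resources/sql_scripts/*.sql /docker-entrypoint-initdb.d/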

Getting User name + password to docker container

I've really been struggling over the past few days trying to set up some docker containers and shell scripts to create an environment for my application to run in.
The long and short of it is that I have a web server which requires a database to operate. My aim is to have end users unzip the content onto their docker machine, run a build script (which just builds the relevant docker images), then run a OneTime.sh script (which creates the volumes and databases necessary). During this script, they are prompted for the user name and password they would like for the superuser of the database.
The problem I'm having is getting those values to the docker image. Here is my script:
# Create the volumes for the data backend database.
docker volume create --name psql-data-etc
docker volume create --name psql-data-log
docker volume create --name psql-data-lib
# Create data store database
echo -e "\n${TITLE}[Data Store Database]${NC}"
docker run -v psql-data-etc:/etc/postgresql -v psql-data-log:/var/log/postgresql -v psql-data-lib:/var/lib/postgresql -p 9001:5432 -P --name psql-data-onetime postgres-setup
# Close containers
docker stop psql-data-onetime
docker rm psql-data-onetime
docker stop psql-transactions-onetime
docker rm psql-transactions-onetime
And here is the docker file:
FROM ubuntu
#Required environment variables: USERNAME, PASSWORD, DBNAME
# Add the PostgreSQL PGP key to verify their Debian packages.
# It should be the same key as https://www.postgresql.org/media/keys/ACCC4CF8.asc
RUN apt-key adv --keyserver hkp://p80.pool.sks-keyservers.net:80 --recv-keys B97B0AFCAA1A47F044F244A07FCC7D46ACCC4CF8
# Add PostgreSQL's repository. It contains the most recent stable release
# of PostgreSQL, ``9.3``.
RUN echo "deb http://apt.postgresql.org/pub/repos/apt/ precise-pgdg main" > /etc/apt/sources.list.d/pgdg.list
# Install ``python-software-properties``, ``software-properties-common`` and PostgreSQL 9.3
# There are some warnings (in red) that show up during the build. You can hide
# them by prefixing each apt-get statement with DEBIAN_FRONTEND=noninteractive
RUN apt-get update && apt-get install -y python-software-properties software-properties-common postgresql-9.3 postgresql-client-9.3 postgresql-contrib-9.3
# Note: The official Debian and Ubuntu images automatically ``apt-get clean``
# after each ``apt-get``
# Run the rest of the commands as the ``postgres`` user created by the ``postgres-9.3`` package when it was ``apt-get installed``
USER postgres
# Complete configuration
USER root
RUN echo "host all all 0.0.0.0/0 md5" >> /etc/postgresql/9.3/main/pg_hba.conf
RUN echo "listen_addresses='*'" >> /etc/postgresql/9.3/main/postgresql.conf
# Expose the PostgreSQL port
EXPOSE 5432
# Add VOLUMEs to allow backup of config, logs and databases
RUN mkdir -p /var/run/postgresql && chown -R postgres /var/run/postgresql
VOLUME ["/etc/postgresql", "/var/log/postgresql", "/var/lib/postgresql"]
# Run setup script
ADD Setup.sh /
CMD ["sh", "Setup.sh"]
The script 'Setup.sh' is the following:
echo -n " User name: "
read user
echo -n " Password: "
read password
echo -n " Database Name: "
read dbname
/etc/init.d/postgresql start
/usr/lib/postgresql/9.3/bin/psql --command "CREATE USER $user WITH SUPERUSER PASSWORD '$password';"
/usr/lib/postgresql/9.3/bin/createdb -O $user $dbname
exit
Why doesn't this work? (I don't get prompted to enter the text, and it throws an error that the parameters are bad.) What is the proper way to do something like this? It feels like it's probably a pretty common problem to solve, but I cannot for the life of me find any non-convoluted examples of this behaviour.
The main purpose of this is to make life easier for the end user, so if I could just prompt them for the user name, password, and dbname (plus call the correct scripts), that would be ideal.
EDIT:
After running the log file looks like this:
User name:
Password:
Database Name:
Usage: /etc/init.d/postgresql {start|stop|restart|reload|force-reload|status} [version ..]
EDIT 2:
After updating to CMD ["sh", "-x", "Setup.sh"]
I get:
echo -n User name:
+read user
:bad variable nameuser
echo -n Password:
+read password
:bad variable namepassword
echo -n Database Name:
+read dbname
:bad variable dbname

How do I handle passwords and dockerfiles?

I've created an image for docker which hosts a postgresql server. In the Dockerfile, I switch to the postgres USER, and I pass a constant password into a run of psql:
USER postgres
RUN /etc/init.d/postgresql start && psql --command "CREATE USER docker WITH SUPERUSER PASSWORD 'docker';" && createdb -O docker docker
Ideally either before or after calling 'docker run' on this image, I'd like the caller to have to input these details into the command line, so that I don't have to store them anywhere.
I'm not really sure how to go about this. Does docker have any support for reading stdin into an environment variable? Or perhaps there's a better way of handling this altogether?
At build time
You can use build arguments in your Dockerfile:
ARG password=defaultPassword
USER postgres
RUN /etc/init.d/postgresql start && psql --command "CREATE USER docker WITH SUPERUSER PASSWORD '$password';" && createdb -O docker docker
Then build with:
$ docker build --build-arg password=superSecretPassword .
At run time
For setting the password at runtime, you can use an environment variable (ENV) that you can evaluate in an entrypoint script (ENTRYPOINT):
ENV PASSWORD=defaultPassword
ADD entrypoint.sh /docker-entrypoint.sh
USER postgres
ENTRYPOINT ["/docker-entrypoint.sh"]
CMD ["postgres"]
Within the entrypoint script, you can then create a new user with the given password as soon as the container starts:
pg_ctl -D /var/lib/postgresql/data \
  -o "-c listen_addresses='localhost'" \
  -w start
psql --command "CREATE USER docker WITH SUPERUSER PASSWORD '$PASSWORD';"
pg_ctl -D /var/lib/postgresql/data -m fast -w stop
exec "$@"
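The password can then be supplied when starting the container, for example (the image name is a placeholder):

$ docker run -e PASSWORD=superSecretPassword my-postgres-image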
You can also have a look at the Dockerfile and entrypoint script of the official postgres image, from which I've borrowed most of the code in this answer.
A note on security
Storing secrets like passwords in environment variables (both build and run time) is not incredibly secure (unfortunately, to my knowledge, Docker does not really offer any better solution for this, right now). An interesting discussion on this topic can be found in this question.
You could use an environment variable in your Dockerfile and override the default value when you call docker run, using the -e or --env argument.
Also, you will need to amend the init script so that it runs the psql command on startup; the script is referenced by the CMD instruction.
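A minimal sketch of that approach, reusing the psql command from the question (the file name init.sh is illustrative):

# Dockerfile
ENV PASSWORD=defaultPassword
ADD init.sh /init.sh
CMD ["sh", "/init.sh"]

# init.sh
/etc/init.d/postgresql start
psql --command "CREATE USER docker WITH SUPERUSER PASSWORD '$PASSWORD';"

Running docker run -e PASSWORD=mySecret ... then overrides the default at run time.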

.pgpass for PostgreSQL replication in Dockerized environment

I'm trying to set up a PostgreSQL slave using Docker and a bash script (I use CoreOS). I have not found any way to supply a valid .pgpass.
I know I could create a PGPASSWORD environment variable, but I do not wish to do so for security reasons (as stated here: http://www.postgresql.org/docs/current/static/libpq-envars.html), and because this password should be accessible every time the recovery.conf file is used (for the primary_conninfo variable).
Dockerfile
# ...
# apt-get installs and other config
# ...
USER postgres
# Create role and db
RUN /etc/init.d/postgresql start &&\
psql --command "CREATE USER replicator WITH ENCRYPTED PASSWORD 'THEPASSWORD';" &&\
psql --command "CREATE DATABASE db WITH OWNER replicator;"
# Set the pg_pass to allow connection to master
# pgpass.conf comes from my root git folder
ADD ./pgpass.conf /home/postgres/.pgpass
USER root
RUN chmod 0600 /home/postgres/.pgpass
In my bash file
# ...
pg_basebackup -h host.of.master.ip -D /var/pgbackup/backup_data -U replicator -v -P
# ...
The problem seems to be that the pgpass file is not read. It seems I should use the password of the user I'm sudoing to (https://serverfault.com/questions/526170/psql-fe-sendauth-no-password-supplied), but in this case the replicator role is naturally not an available bash user. (Note that neither copying the pgpass to /home/root nor to /home/postgres works.)
Note: my pgpass file and my remote database conf:
# pgpass.conf
host.of.master.ip:5432:replication:replicator:THEPASSWORD
host.of.master.ip:5432:*:replicator:THEPASSWORD
# pg_hba.conf
host replication replicator host.of.slave.ip/24 md5
You have to create a .pgpass in the home folder of the user who's going to be running the commands (in this case, postgres). Each line of the file has to be in the format hostname:port:database:username:password, and wildcards are supported, so you can just set the database to "*", for example.
In my case, I have something like this...
$ echo "${PRIMARY_IP}:5432:*:${REPL_USER}:${REPL_PASS}" | sudo tee /var/lib/postgresql/.pgpass
$ sudo chown postgres:postgres /var/lib/postgresql/.pgpass
$ sudo chmod 0600 /var/lib/postgresql/.pgpass
$ sudo -u postgres pg_basebackup -h $PRIMARY_IP -D /var/lib/postgresql/9.4/main -U ${REPL_USER} -v -P --xlog-method=stream
Those variables (e.g. PRIMARY_IP) are set when I run the docker container with -e PRIMARY_IP=x.x.x.x
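For completeness, the full run command might look like this (the image name my-postgres-slave is a placeholder; the values feed the .pgpass line above):

$ docker run -e PRIMARY_IP=x.x.x.x -e REPL_USER=replicator -e REPL_PASS=THEPASSWORD my-postgres-slave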