docker varnish cmd error - no such file or directory - docker-compose

I'm trying to get a Varnish container running as part of a multi-container Docker environment.
I'm using https://github.com/newsdev/docker-varnish as a base.
My Dockerfile looks like:
FROM newsdev/varnish:4.1.0
COPY start-varnishd.sh /usr/local/bin/start-varnishd
ENV VARNISH_VCL_PATH /etc/varnish/default.vcl
ENV VARNISH_PORT 80
ENV VARNISH_MEMORY 64m
EXPOSE 80
CMD [ "exec /usr/local/sbin/varnishd -j unix,user=varnishd -F -f /etc/varnish/default.vcl -s malloc,64m -a 0.0.0.0:80 -p http_req_hdr_len=16384 -p http_resp_hdr_len=16384" ]
When I run this as part of a docker-compose setup, I get:
ERROR: for eventsapi_varnish_1 Cannot start service varnish: oci
runtime error: container_linux.go:262: starting container process
caused "exec: \"exec /usr/local/sbin/varnishd -j unix,user=varnishd -F
-f /etc/varnish/default.vcl -s malloc,64m -a 0.0.0.0:80 -p http_req_hdr_len=16384 -p http_resp_hdr_len=16384\": stat exec
/usr/local/sbin/varnishd -j unix,user=varnishd -F -f
/etc/varnish/default.vcl -s malloc,64m -a 0.0.0.0:80 -p
http_req_hdr_len=16384 -p http_resp_hdr_len=16384: no such file or
directory"
I get the same if I try
CMD ["start-varnishd"]
(as it is in the base newsdev/docker-varnish)
or
CMD [/usr/local/bin/start-varnishd]
But if I run a bash shell on the container directly:
docker run -t -i eventsapi_varnish /bin/bash
and then run the varnishd command from there, varnish starts up fine (and starts complaining that it can't find the web container, obviously).
What am I doing wrong? What file can't it find? Again, looking around the running container directly, Varnish is where it thinks it should be, and the VCL file is where it thinks it should be... so what's stopping it from running within docker-compose?
Thanks!

I didn't get to the bottom of why I was getting this error, but I "fixed" it by using the (more recent?) fork: https://hub.docker.com/r/tripviss/varnish/. My Dockerfile is now just:
FROM tripviss/varnish:5.1
COPY default.vcl /usr/local/etc/varnish/
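For what it's worth, the original error is likely explained by how Docker's CMD forms work: in the JSON-array (exec) form, each array element is one argument, so the entire quoted string in the original Dockerfile was looked up as a single executable path, hence "stat exec /usr/local/sbin/varnishd ...: no such file or directory". A sketch of the two forms that should both have worked with the original base image (flags copied from the question):

```dockerfile
# Shell form: the string is passed to /bin/sh -c, so `exec` and spaces are fine.
CMD exec /usr/local/sbin/varnishd -j unix,user=varnishd -F \
    -f /etc/varnish/default.vcl -s malloc,64m -a 0.0.0.0:80 \
    -p http_req_hdr_len=16384 -p http_resp_hdr_len=16384

# Exec form: one array element per argument; no shell, so no `exec` needed.
# CMD ["/usr/local/sbin/varnishd", "-j", "unix,user=varnishd", "-F",
#      "-f", "/etc/varnish/default.vcl", "-s", "malloc,64m",
#      "-a", "0.0.0.0:80", "-p", "http_req_hdr_len=16384",
#      "-p", "http_resp_hdr_len=16384"]
```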


OWASP/ZAP dangling when trying to scan

I am trying out OWASP/ZAP to see if it is something we can use for our project, but I cannot make it work, I don't know what I am doing wrong, and the documentation really does not help. What I am trying to do is run a scan against my API, which runs in a Docker container locally on my Windows machine, so I run the command:
docker run -v $(pwd):/zap/wrk/:rw -t owasp/zap2docker-stable zap-baseline.py -t http://172.21.0.2:8080/swagger.json -g gen.conf -r testreport.html
The IP 172.21.0.2 is the IP address of my API container; I even tried with localhost and 127.0.0.1,
but it just hangs at the following log message:
_XSERVTransmkdir: ERROR: euid != 0,directory /tmp/.X11-unix will not be created.
Feb 14, 2019 1:43:31 PM java.util.prefs.FileSystemPreferences$1 run
INFO: Created user preferences directory.
Nothing happens, and my ZAP docker container is in an unhealthy state; after some time it just crashes and ends up with a bunch of NullPointerExceptions. Does the ZAP docker image only work on Linux, or is there something specific I need to do when running it on a Windows machine? I don't get why this is not working even though I am specifically following the guideline at https://github.com/zaproxy/zaproxy/wiki/Docker
Edit 1
My latest try, where I target my host IP address directly and the port I am exposing my API on, gives me the following error:
_XSERVTransmkdir: ERROR: euid != 0,directory /tmp/.X11-unix will not be created.
Feb 14, 2019 2:12:07 PM java.util.prefs.FileSystemPreferences$1 run
INFO: Created user preferences directory.
Total of 3 URLs
ERROR Permission denied
2019-02-14 14:12:57,116 I/O error(13): Permission denied
Traceback (most recent call last):
  File "/zap/zap-baseline.py", line 347, in main
    with open(base_dir + generate, 'w') as f:
IOError: [Errno 13] Permission denied: '/zap/wrk/gen.conf'
Found Java version 1.8.0_151
Available memory: 3928 MB
Setting jvm heap size: -Xmx982m
213 [main] INFO org.zaproxy.zap.DaemonBootstrap
When you run docker with docker run -v $(pwd):/zap/wrk/:rw ...,
you are mapping the /zap/wrk/ directory in the docker image to the current working directory (cwd) of the machine on which you are running docker.
I think the problem is that your current user doesn't have write access to the cwd.
Try the command below; hopefully it resolves the issue.
$ docker run --user $(id -u):$(id -g) -v $(pwd):/zap/wrk/:rw --rm -t owasp/zap2docker-stable zap-baseline.py -t https://your_url -g gen.conf -r testreport.html
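You can also verify this diagnosis locally before involving docker at all, by checking whether the directory you are about to mount is writable by your current user. A minimal sketch (the helper name check_mount_dir is made up for illustration):

```shell
#!/bin/sh
# Check whether a directory is writable by the current user. If it isn't,
# ZAP inside the container will hit "Permission denied" when it tries to
# create files such as gen.conf under the mounted /zap/wrk.
check_mount_dir() {
  dir=$1
  if [ -w "$dir" ]; then
    echo "writable"
  else
    echo "not writable"
  fi
}

check_mount_dir "$(pwd)"
```

If this prints "not writable", fix the directory permissions (or run from a directory you own) before retrying the scan.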
The key error here is:
IOError: [Errno 13] Permission denied: '/zap/wrk/gen.conf'
This means that the script cannot write the gen.conf file to the directory you have mounted at /zap/wrk.
Do you have write access to the cwd when it's not mounted?
The reason is that if you use the -r parameter, ZAP will attempt to generate the report.html file at /zap/wrk/. To make this work, we have to mount a directory to /zap/wrk.
But when you do so, it is important that the ZAP container is able to perform write operations on the mounted directory.
So, below is a working solution using a GitLab CI yml file. I started with the approach of using image: owasp/zap2docker-stable, but then had to fall back to plain docker commands to execute it.
test_site:
  stage: test
  image: docker:latest
  script:
    # The folder zap-reports created locally will be mounted into the owasp/zap2docker container;
    # on execution it will generate the reports in this folder. The current user is passed so reports can be written.
    - mkdir zap-reports
    - cd zap-reports
    - docker pull owasp/zap2docker-stable:latest || echo
    - docker run --name zap-container --rm -v $(pwd):/zap/wrk -u $(id -u ${USER}):$(id -g ${USER}) owasp/zap2docker-stable zap-baseline.py -t "https://example.com" -r report.html
  artifacts:
    when: always
    paths:
      - zap-reports
  allow_failure: true
So the tricks in the above code are:
Mount the local directory zap-reports to /zap/wrk, as in $(pwd):/zap/wrk.
Pass the current user and group of the host machine to the docker container so the process runs as the same user/group. This allows writing the reports to the directory mounted from the local host. This is done by -u $(id -u ${USER}):$(id -g ${USER})
Below is the working code with image: owasp/zap2docker-stable
test_site:
  variables:
    GIT_STRATEGY: none
  stage: test
  image:
    name: owasp/zap2docker-stable:latest
  before_script:
    - mkdir -p /zap/wrk
  script:
    - zap-baseline.py -t "https://example.com" -g gen.conf -I -r testreport.html
    - cp /zap/wrk/testreport.html testreport.html
  artifacts:
    when: always
    paths:
      - zap.out
      - testreport.html

Gitlab CI & Docker Can't make any operation on postgres container

I am trying to configure continuous testing/integration with Odoo and Postgres Docker containers,
but I am stuck on a problem: GitLab CI can't perform any operations on the Postgres container.
My goal is to load a database template into the postgres container after running it and before testing.
I tried the SSH executor and then the shell executor, but I always hit the same problem.
Note that all the commands here complete on the runner without problems; I tested that.
I wrote this yml file:
variables:
  # Configure postgres service (https://hub.docker.com/_/postgres/)
  POSTGRES_DB: db
  POSTGRES_USER: odoo
  POSTGRES_PASSWORD: odoo

before_script:
  # Pull container versions
  - docker pull postgres:9.5
  - docker pull odoo:8.0

after_script:
  # Remove all used containers
  - docker stop $(docker ps -a -q) && docker rm $(docker ps -aq)

stages:
  - prepare

job1:
  stage: prepare
  # prepare postgres db
  script:
    # Launch postgres container
    - docker run -d -e POSTGRES_USER=$POSTGRES_USER -e POSTGRES_PASSWORD=$POSTGRES_PASSWORD --name db postgres:9.5
    # Copy and restore db template
    - docker cp /home/myuser/odoov8_test.dump db:/home
    - docker exec -i db su -s /bin/sh - postgres -c "createdb odoov8_test && pg_restore -d odoov8_test --no-owner --verbose /home/odoov8_test.dump"
    # Launch odoo with own addons folder (/own/path/to/addons:/mnt/extra-addons), test database (-d), modules to install (comma-separated, no spaces) with all dependencies (-i), tests enabled (--test-enable), and stop after init (--stop-after-init)
    - docker run -v $CI_PROJECT_DIR:/mnt/extra-addons -p 8069:8069 --name odoo --link db:db -t odoo:8.0 -- -d odoov8_test.dump -i crm,sale --test-enable --stop-after-init
I got this result:
Running with gitlab-ci-multi-runner 1.11.2 (0489844)
  on Test docker odoo (7fafb15a)
Using Shell executor...
Running on debian-8-clean...
Fetching changes...
HEAD is now at 7d196ea Update .gitlab-ci.yml
From https://myserver.com/APSARL/addons-ext
   7d196ea..47591ac  master     -> origin/master
Checking out 47591ac6 as master...
Skipping Git submodules setup
$ docker pull postgres:9.5
9.5: Pulling from library/postgres
Digest: sha256:751bebbc12716d7d9818678e91cbec8138e52dc4a084f0e81c58cd8b419930e5
Status: Image is up to date for postgres:9.5
$ docker pull odoo:8.0
8.0: Pulling from library/odoo
Digest: sha256:9deda039e0df28aaf515001dd1606ab74a16ed25504447edc2912bca9019cd43
Status: Image is up to date for odoo:8.0
$ docker run -d -e POSTGRES_USER=$POSTGRES_USER -e POSTGRES_PASSWORD=$POSTGRES_PASSWORD --name db postgres:9.5
60a0c75fd55e953e6a25a3cc0f13093ec2f1ee96bfb8384ac19d00f740dd1d4e
$ docker cp /home/myuser/odoov8_test.dump db:/home
$ docker exec -i db su -s /bin/sh - postgres -c "createdb odoov8_test && pg_restore -d odoov8_test --no-owner --verbose /home/odoov8_test.dump"
createdb: could not connect to database template1: could not connect to server: No such file or directory
        Is the server running locally and accepting
        connections on Unix domain socket "/var/run/postgresql/.s.PGSQL.5432"?
Running after script...
$ docker stop $(docker ps -a -q) && docker rm $(docker ps -aq)
60a0c75fd55e
60a0c75fd55e
ERROR: Job failed: exit status 1
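A likely cause of that "could not connect to server" error: docker run -d returns as soon as the container starts, not when postgres is actually ready to accept connections, so the createdb step can race the server's initialization. A hedged sketch of polling pg_isready before the restore step (the container name db comes from the yml above; the 30-attempt limit is an arbitrary choice):

```shell
#!/bin/sh
# Poll pg_isready inside the "db" container until the server accepts
# connections; give up after 30 attempts (roughly one per second).
wait_for_postgres() {
  attempts=0
  while [ "$attempts" -lt 30 ]; do
    if docker exec db pg_isready -U "${POSTGRES_USER:-postgres}" >/dev/null 2>&1; then
      return 0
    fi
    attempts=$((attempts + 1))
    sleep 1
  done
  echo "postgres did not become ready in time" >&2
  return 1
}
```

In the job, such a wait step would go between the docker run that starts the db container and the docker cp / docker exec steps that restore the template.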

Creating multiple PostgreSQL containers in docker in fedora

I want to create two PostgreSQL containers so that one can be used as DEV and the other as DEV_STAGE.
I was able to successfully create one container, and it is assigned to port 5432. But when I try to create the second container, it gets created (sometimes it shows the status as EXITED) but does not start, because of a port number issue.
Below are the commands I ran.
sudo docker run -v "`pwd`/data:/var/lib/pgsql/data:Z" -e POSTGRESQL_USER=user1 -e POSTGRESQL_PASSWORD=password -e POSTGRESQL_DATABASE=test_db -d -p 5432:5432 fedora/postgresql
sudo docker run -v "`pwd`/data_stage:/var/lib/pgsql/data_stage:Z" -e POSTGRESQL_USER=user1 -e POSTGRESQL_PASSWORD=password -e POSTGRESQL_DATABASE=test_db -d -p 5432:5433 fedora/postgresql
I think the port mapping which I'm using is incorrect. But not able to get the correct one.
You have an error in the volume definition of the second container: don't change the path after the colon; it is mandatory that the path be /var/lib/pgsql/data.
Also, you flipped the ports mapping. The correct command is like this:
sudo docker run -v "`pwd`/data_stage:/var/lib/pgsql/data:Z" -e POSTGRESQL_USER=user1 -e POSTGRESQL_PASSWORD=password -e POSTGRESQL_DATABASE=test_db -d -p 5433:5432 fedora/postgresql
If anything goes wrong, inspect the container logs with docker logs CONTAINER_ID
Thanks for the answer. I corrected the path. I think flipping the port number will not work either, because I already have one container mapped to 5432, so I can't map the port to 5432 again. The command below worked for me: first, I changed the Postgres default port to 5433 by exporting the variable PGPORT=5433.
sudo docker run -v "`pwd`/data_stg:/var/lib/pgsql/data:Z" -e PGPORT=5433 -e POSTGRESQL_USER=user1 -e POSTGRESQL_PASSWORD=password -e POSTGRESQL_DATABASE=test_db -d -p 5433:5433 fedora/postgresql

docker exec -it returns "cannot enable tty mode on non tty input"

The docker exec -it command returns the following error:
level="fatal" msg="cannot enable tty mode on non tty input"
I am running Docker (1.4.1) on a CentOS 6.6 box.
I am trying to execute the following command:
docker exec -it containerName /bin/bash
but I am getting the following error:
level="fatal" msg="cannot enable tty mode on non tty input"
Running docker exec -i instead of docker exec -it fixed my issue. Indeed, my script was launched by crontab, which isn't a terminal.
As a reminder:
Usage: docker exec [OPTIONS] CONTAINER COMMAND [ARG...]
Run a command in a running container
  -i, --interactive=false    Keep STDIN open even if not attached
  -t, --tty=false            Allocate a pseudo-TTY
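Since several answers here boil down to "drop -t when stdin is not a terminal", that choice can be scripted. A sketch (the helper name docker_exec_flags is made up; test -t 0 asks whether stdin is a TTY):

```shell
#!/bin/sh
# Pick docker exec flags based on whether stdin is a real terminal.
# Cron jobs, Jenkins builds, and pipes have no TTY, so -t would fail there.
docker_exec_flags() {
  if [ -t 0 ]; then
    echo "-it"   # interactive shell in a real terminal
  else
    echo "-i"    # no TTY available: keep stdin open but skip -t
  fi
}

# Usage: docker exec $(docker_exec_flags) containerName /bin/bash
```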
If you're getting this error in the Windows Docker client, then you may need to use the run command as below:
$ winpty docker run -it ubuntu /bin/bash
Just use "-i":
docker exec -i [your-ps] [command]
If you're on Windows using docker-machine with Git Bash or Cygwin, you'll need to do the following to "get inside" a running container:
docker-machine ssh default to ssh into the virtual machine (Virtualbox most likely)
docker exec -it <container> bash to get into the container.
EDIT:
I've recently discovered that if you use Windows PowerShell you can docker exec directly into the container, with Cygwin or Git Bash you can use winpty docker exec -it <container> bash and skip the docker-machine ssh step above.
I got "cannot enable tty mode on non tty input" for the following command on Windows with boot2docker:
docker exec -it <containerIdOrName> bash
The command below fixed the problem:
winpty docker exec -it <containerIdOrName> bash
docker exec runs a new command in an already-running container. It is not the way to start a new container; use docker run for that.
That may be the cause of the "non tty input" error. Or it could be where you are running docker: is it a true terminal, i.e. is a full tty session available? You might want to check whether you are in an interactive session with
[[ $- == *i* ]] && echo 'Interactive' || echo 'Not interactive'
from https://unix.stackexchange.com/questions/26676/how-to-check-if-a-shell-is-login-interactive-batch
I encountered this same error message on Windows 7 64-bit using the Mintty terminal shipped with Git for Windows.
$ docker run -i -t ubuntu /bin/bash
cannot enable tty mode on non tty input
I tried to prefix the above command with winpty as other answers suggested, but running it showed me another error message:
$ winpty docker run -i -t ubuntu /bin/bash
exec: "D:\\Git\\usr\\bin\\bash": executable file not found in $PATH
docker: Error response from daemon: Container command not found or does not exist..
Then I happened to run the following command, which gave me what I want:
$ winpty docker run -i -t ubuntu bash
root@512997713d49:/# ls
bin   dev  home  lib64  mnt  proc  run   srv  tmp  var
boot  etc  lib   media  opt  root  sbin  sys  usr
root@512997713d49:/#
I was running docker exec -it under Jenkins jobs and getting the error 'cannot enable tty mode on non tty input'; no output from the docker exec command was returned. My job's login sequence was:
jenkins shell -> ssh user@<testdriver> -> ssh root@<sut> -> su - <user> -> docker exec -it <container>
I changed the initial ssh from Jenkins to use the -T flag ("-T: disable pseudo-terminal allocation") and used the -i flag with docker exec instead of -it ("-i: interactive; -t: allocate pseudo-tty"). This seems to have solved my problem:
jenkins shell -> ssh -T user@<testdriver> -> ssh root@<sut> -> su - <user> -> docker exec -i <container>
The behaviour kind of matches this docker exec tty bug: https://github.com/docker/docker/issues/8755. The workaround in that discussion suggests using:
docker exec -it <CONTAINER> script -qc <COMMAND>
Using that workaround didn't solve my problem, though it is interesting. Try these with different flags and under different ssh invocations; you can see 'not a tty' even when using -t with docker exec:
$ docker exec -it <CONTAINER> script -qc 'tty'
/dev/pts/0
$ docker exec -it <CONTAINER> 'tty'
not a tty
$ docker exec -it <CONTAINER> bash -c 'tty'
not a tty

Why can't you start postgres in docker using "service postgres start"?

All the tutorials point to running postgres in the format of
docker run -d -p 5432 \
-t <your username>/postgresql \
/bin/su postgres -c '/usr/lib/postgresql/9.2/bin/postgres \
-D /var/lib/postgresql/9.2/main \
-c config_file=/etc/postgresql/9.2/main/postgresql.conf'
Why can't we in our Docker file have:
ENTRYPOINT ["/etc/init.d/postgresql-9.2", "start"]
And simply start the container by
docker run -d psql
Is that not the purpose of Entrypoint or am I missing something?
The difference is that the init script provided in /etc/init.d is not an entry point. Its purpose is quite different: to get the service started, in the background, and then report the success or failure to the caller. That script causes a postgres process, usually indirectly via pg_ctl, to be started, detached from the controlling terminal.
For docker to work best, it needs to run the application directly, attached to the docker process. That way it can usefully and generically terminate the application when the user asks for it, or quickly discover and respond to the process crashing.
To exemplify what IfLoop said, using CMD in a Dockerfile:
FROM postgres
CMD ["/usr/lib/postgresql/9.2/bin/postgres", "-D", "/var/lib/postgresql/9.2/main", "-c", "config_file=/etc/postgresql/9.2/main/postgresql.conf"]
To run:
$ docker run -d -p 5432:5432 psql
Watching PostgreSQL logs:
$ docker logs -f POSTGRES_CONTAINER_ID