I want to build an image and I have already written my Dockerfile. I already have the container, but building the image is difficult for me. This is the code of the Dockerfile:
FROM nginx:alpine
COPY . /usr/share/nginx/html
But the problem is, I tried to run it in my terminal (I am using the MacBook Terminal) with this command. This is what I wrote in the terminal:
Build an image from a Dockerfile
tiadem_tatie#SIT-SMBP1606 ~ % docker run -p 80:80 countly_new
Unable to find image 'countly_new:latest' locally
docker: Error response from daemon: Get https://registry-1.docker.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers).
See 'docker run --help'.
tiadem_tatie#SIT-SMBP1606 ~ % docker build -t Countly_new .
invalid argument "Countly_new" for "-t, --tag" flag: invalid reference format: repository name must be lowercase
See 'docker build --help'.
tiadem_tatie#SIT-SMBP1606 ~ % docker build -t COUNTLY_NEW .
invalid argument "COUNTLY_NEW" for "-t, --tag" flag: invalid reference format: repository name must be lowercase
See 'docker build --help'.
tiadem_tatie#SIT-SMBP1606 ~ % docker build -t countlynew .
error checking context: 'can't stat '/Users/tiadem_tatie/.Trash''.
tiadem_tatie#SIT-SMBP1606 ~ % docker build -t <countlynew> .
zsh: no such file or directory: countlynew
tiadem_tatie#SIT-SMBP1606 ~ % docker build -t countlynew path
unable to prepare context: path "path" not found
tiadem_tatie#SIT-SMBP1606 ~ % docker build -t image-name path
unable to prepare context: path "path" not found
tiadem_tatie#SIT-SMBP1606 ~ % docker build -t image-name path
unable to prepare context: path "path" not found
tiadem_tatie#SIT-SMBP1606 ~ % docker run --help
The Dockerfile is in my Downloads folder. Can I have the command for that, and, if it is possible, I also need help with how to build a MongoDB and a backend, and then connect the two together.
Thank you
Why would you keep your project files in the Downloads folder? That sounds like a horrible idea.
Still, assuming that both your Dockerfile and your website files are located there, run the following command to build the image:
docker build -t countly_new:latest ~/Downloads
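Once the build succeeds, you should be able to start the container roughly the way you originally tried, as long as the image name matches the tag you just built (a minimal sketch based on the run command from your question):
docker run -p 80:80 countly_new:latest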
Related
I built my own Docker image using jre1.8.0_341 + alpine + glibc (2.33).
First, I downloaded the jre1.8.0_341 archive.
Below is my Dockerfile:
FROM alpine:latest
ADD jre1.8.0_341/jre8.tar.gz /usr/java/jdk/
ENV JAVA_HOME /usr/java/jdk
ENV PATH ${PATH}:${JAVA_HOME}/bin
RUN wget -q -O /etc/apk/keys/sgerrand.rsa.pub https://alpine-pkgs.sgerrand.com/sgerrand.rsa.pub
RUN wget https://github.com/sgerrand/alpine-pkg-glibc/releases/download/2.33-r0/glibc-2.33-r0.apk
RUN apk add glibc-2.33-r0.apk
WORKDIR /
Next, I build the Docker image:
docker build -f Dockerfile -t myself:latest .
When I run this image:
docker run -it myself:latest /bin/bash
I can get the Java version.
but i use "FeignClient" to Request other services
report an error: “javax.net.ssl.SSLHandshakeException: Received fatal alert: handshake_failure”
so i add Java startup parameters -Djavax.net.debug=all
and Compare TLS packets using "openjdk:8-jre-alpine" as the base image
I found that there is no "TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384" in the "cipher suites" parameter during the "Produced ClientHello handshake message" phase
How should I set up cipher suites for Java TLS?
I am trying to run a locustfile in the locustio/locust Docker image, and it cannot find the locustfile, even though the file exists in the locust directory.
~ docker run -p 8089:8089 -v $PWD:/locust locustio/locust locust -f /locust/locustfile.py
Could not find any locustfile! Ensure file ends in '.py' and see --help for available options.
(I'm reposting this question as my own, because the original poster deleted it immediately after getting an answer!)
Remove the extra "locust" from your command, so that it becomes:
docker run ... locustio/locust -f /locust/locustfile.py
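Applied to the original command from the question, that would give something like (same port mapping and volume mount, just without the extra "locust"):
docker run -p 8089:8089 -v $PWD:/locust locustio/locust -f /locust/locustfile.py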
When testing an API using locust in distributed mode without the UI in Docker, the distribution.csv and requests.csv files are generated, but the failures.csv and exception.csv files are not, even though requests.csv shows failures, as given below.
"Method","Name","# requests","# failures","Median response time","Average response time","Min response time","Max response time","Average Content Size","Requests/s"
"POST","/api/something/something",197009,56,470,559,78,156714,1,436.31
Can you please help.
The problem is that the files need to be written to a folder the process has permission to write to, and that folder has to be on a volume mounted from your host. If you add a mounted folder before the file name, it should work. For example:
Dockerfile:
# Set base image
FROM locustio/locust
ADD locustfile.py locustfile.py
Docker create Command:
docker build -t mykey/myimage:1.0 .
Docker run command (on Windows; on Linux, replace %CD% with $(pwd)):
docker run --volume "%CD%:/mnt/locust" -e LOCUSTFILE_PATH=/mnt/locust/locustfile.py -e TARGET_URL=https://example.com -e LOCUST_OPTS="--clients=10 --no-web --run-time=600 --csv=/mnt/locust/output" mykey/myimage:1.0
The files will now write to the same folder where locustfile.py is located.
I am trying out OWASP ZAP to see if it is something we can use for our project, but I cannot make it work, I don't know what I am doing wrong, and the documentation really does not help. What I am trying to do is run a scan on my API, which is running in a Docker container locally on my Windows machine, so I run the command:
docker run -v $(pwd):/zap/wrk/:rw -t owasp/zap2docker-stable zap-baseline.py -t http://172.21.0.2:8080/swagger.json -g gen.conf -r testreport.html
The IP 172.21.0.2 is the IP address of my API container; I even tried with localhost and 127.0.0.1,
but it just hangs at the following log message:
_XSERVTransmkdir: ERROR: euid != 0,directory /tmp/.X11-unix will not be created.
Feb 14, 2019 1:43:31 PM java.util.prefs.FileSystemPreferences$1 run
INFO: Created user preferences directory.
Nothing happens, and my ZAP Docker container is in an unhealthy state; after some time it just crashes and ends up with a bunch of NullPointerExceptions. Does the ZAP Docker image only work on Linux, or is there something specific I need to do when running it on a Windows machine? I don't get why this is not working, even when I am following the guideline at https://github.com/zaproxy/zaproxy/wiki/Docker exactly.
Edit 1
My latest try, where I am targeting my host IP address directly and the port that I am exposing my API on, gives me the following error:
_XSERVTransmkdir: ERROR: euid != 0,directory /tmp/.X11-unix will not be created.
Feb 14, 2019 2:12:07 PM java.util.prefs.FileSystemPreferences$1 run
INFO: Created user preferences directory.
Total of 3 URLs
ERROR Permission denied
2019-02-14 14:12:57,116 I/O error(13): Permission denied
Traceback (most recent call last):
File "/zap/zap-baseline.py", line 347, in main
with open(base_dir + generate, 'w') as f:
IOError: [Errno 13] Permission denied: '/zap/wrk/gen.conf'
Found Java version 1.8.0_151
Available memory: 3928 MB
Setting jvm heap size: -Xmx982m
213 [main] INFO org.zaproxy.zap.DaemonBootstrap
When you run docker with: docker run -v $(pwd):/zap/wrk/:rw ...
you are mapping the /zap/wrk/ directory in the docker image to the current working directory (cwd) of the machine in which you are running docker.
I think the problem is that your current user doesn't have write access to the cwd.
Try the command below; hopefully it resolves the issue.
$docker run --user $(id -u):$(id -g) -v $(pwd):/zap/wrk/:rw --rm -t owasp/zap2docker-stable zap-baseline.py -t https://your_url -g gen.conf -r testreport.html
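If the current working directory itself is not writable, another option might be to run the scan from a dedicated folder you know you can write to (the ~/zap-work path below is just an illustrative name, not something from the question):
mkdir -p ~/zap-work && cd ~/zap-work
docker run --user $(id -u):$(id -g) -v $(pwd):/zap/wrk/:rw --rm -t owasp/zap2docker-stable zap-baseline.py -t https://your_url -g gen.conf -r testreport.html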
The key error here is:
IOError: [Errno 13] Permission denied: '/zap/wrk/gen.conf'
This means that the script cannot write the gen.conf file to the directory you have mounted on /zap/wrk.
Do you have write access to the cwd when it's not mounted?
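One quick way to check might be to try creating and removing a file directly in that directory before mounting it (gen.conf here just mirrors the file ZAP attempts to write):
touch gen.conf && rm gen.conf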
The reason for that is that, if you use the -r parameter, ZAP will attempt to generate the file report.html at /zap/wrk/. In order to make this work, we have to mount a directory to /zap/wrk.
But when you do so, it is important that the ZAP container is able to perform write operations on the mounted directory.
So, below is a working solution using a GitLab CI YAML file. I started with the approach of using image: owasp/zap2docker-stable, but then had to switch to plain docker commands to execute it.
test_site:
stage: test
image: docker:latest
script:
# The folder zap-reports created locally will be mounted to owasp/zap2docker docker container,
# On execution it will generate the reports in this folder. Current user is passed so reports can be generated"
- mkdir zap-reports
- cd zap-reports
- docker pull owasp/zap2docker-stable:latest || echo
- docker run --name zap-container --rm -v $(pwd):/zap/wrk -u $(id -u ${USER}):$(id -g ${USER}) owasp/zap2docker-stable zap-baseline.py -t "https://example.com" -r report.html
artifacts:
when: always
paths:
- zap-reports
allow_failure: true
So the key points in the above code are:
1. Mount the local directory zap-reports to /zap/wrk, as in $(pwd):/zap/wrk
2. Pass the current user and group on the host machine to the Docker container so the process runs as the same user/group. This allows the reports to be written to the directory mounted from the local host. This is done by -u $(id -u ${USER}):$(id -g ${USER})
Below is the working code with image: owasp/zap2docker-stable
test_site:
variables:
GIT_STRATEGY: none
stage: test
image:
name: owasp/zap2docker-stable:latest
before_script:
- mkdir -p /zap/wrk
script:
- zap-baseline.py -t "https://example.com" -g gen.conf -I -r testreport.html
- cp /zap/wrk/testreport.html testreport.html
artifacts:
when: always
paths:
- zap.out
- testreport.html
I'm new to Docker.
I'm trying to run my node app tests in a Docker container.
I want to run the tests with a real postgres db.
I'm creating this container with the following Dockerfile:
# Set image
FROM postgres:alpine
# Install node latest
RUN apk add --update nodejs nodejs-npm
# Set working dir
WORKDIR .
# Copy the current directory contents into the container at .
ADD src src
ADD .env.testing .env
ADD package.json .
ADD package-lock.json .
# Run tests
CMD npm install && npm run coverage
From the image docs, when I run the container with:
$ docker run build-name -d postgres
I see that the container takes some time to start the postgresql service.
When I run the container without the "-d postgres" param:
$ docker run build-name
The service does not start and the tests fail due to "could not connect to server".
Questions:
A. How can I run the tests AFTER the postgresql service starts?
B. I saw some examples using docker-compose, but can I do this without Compose?
Thanks
Thanks to @Bogdan I found the complete solution:
Dockerfile should be:
# Set image
FROM postgres:alpine
# Install node latest
RUN apk add --update nodejs nodejs-npm
# Set working dir
WORKDIR .
# Copy the current directory contents into the container at .
ADD src src
ADD .env.testing .env
ADD package.json .
ADD package-lock.json .
# Install
RUN npm install
# Init container
CMD psql -U postgres -c "SELECT 1;" postgres
Build container:
$ docker build -t test .
Run container:
$ docker run --name startedtest -d test -d postgres
Run tests after the container is running:
$ docker exec startedtest some_create_schema_script && npm run coverage
If the goal is just to run the tests in the Postgres container, one solution could be to install Node.js in your postgres:alpine-derived image and run the container normally. Once the database is up, you can run npm using docker exec like this:
docker exec <container_id> npm run coverage
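If the tests also need to wait until Postgres is actually accepting connections, a rough sketch could be to poll with pg_isready (which ships in the postgres image) before running npm; the container name startedtest is taken from the earlier example:
until docker exec startedtest pg_isready -U postgres; do sleep 1; done
docker exec startedtest npm run coverage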