Makefile: terminates after running "go test ./..." - postgresql

I encountered a problem running "go test" from a makefile. The idea behind all this is to start a docker container, run all tests against it and then stop & remove the container.
The container gets started and the tests run, but the last two commands (docker stop & rm) aren't executed.
Make returns this message:
make: *** [test] Error 1
Is it "go test" which terminates the makefile execution?
.PHONY: up down test

up:
	docker-compose up

down:
	docker-compose down

test:
	docker run -d \
		--name dev \
		--env-file $${HOME}/go/src/test-api/testdata/dbConfigTest.env \
		-p 5432:5432 \
		-v $${HOME}/go/src/test-api/testdata/postgres:/var/lib/postgresql/data postgres
	# runs all tests including integration tests.
	go test ./... --tags=integration -failfast -v
	# stop and remove container
	docker stop `docker ps -aqf "name=dev"`
	docker rm `docker ps -aqf "name=dev"`

Yes: make stops a recipe at the first command that exits with a non-zero status, so when go test fails the docker stop and docker rm lines never run. Assuming that you want make test to return the test status, consider the following change to the Makefile:
test:
	docker run -d \
		--name dev \
		--env-file $${HOME}/go/src/test-api/testdata/dbConfigTest.env \
		-p 5432:5432 \
		-v $${HOME}/go/src/test-api/testdata/postgres:/var/lib/postgresql/data postgres
	# runs all tests including integration tests; capture the exit code instead of letting a failure abort the recipe.
	go test ./... --tags=integration -failfast -v ; echo "$$?" > test.result
	# stop and remove container
	docker stop `docker ps -aqf "name=dev"`
	docker rm `docker ps -aqf "name=dev"`
	# exit with the captured test status so 'make test' still reports failures
	exit $$(cat test.result)
It uses the test.result file to capture the exit code from the test run, so the container is still stopped and removed before the target exits with that code.
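
A slightly tighter variant (just a sketch, assuming the container is always named dev as above) keeps the test status in a shell variable on one logical recipe line, so no temporary file is needed:

test:
	docker run -d \
		--name dev \
		--env-file $${HOME}/go/src/test-api/testdata/dbConfigTest.env \
		-p 5432:5432 \
		-v $${HOME}/go/src/test-api/testdata/postgres:/var/lib/postgresql/data postgres
	# run the tests, remember their status, clean up, then exit with that status
	go test ./... --tags=integration -failfast -v; status=$$?; \
	docker stop dev; docker rm dev; \
	exit $$status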


Azure Devops Container Pipeline job is trying to redundantly give user:1000 sudo privileges

I have a docker image, already made, that another pipeline uses for build jobs. That image already has a user:1000 with passwordless sudo permissions and a home directory. This was done to make manual use of the container more useful; there are applications in the image that prefer to run under a non-root user.
The pipeline using this image finds the existing user (great!) but then tries to give the user sudo permissions that it already has and this breaks the flow...
--<yaml pipeline code>--
container:
  image: acr.url/foo/bar:v1
  endpoint: <svc-connection>
--<pipeline run>--
...
/usr/bin/docker network create --label dc4b27 vsts_network_6b3e...
/usr/bin/docker inspect --format="{{index .Config.Labels \"com.azure.dev.pipelines.agent.handler.node.path\"}}" ***/foo/bar:v1
/usr/bin/docker create --name 9479... --label dc4b27 --network vsts_network_6b3ee... -v "/var/run/docker.sock":"/var/run/docker.sock" -v "/opt/azagent/_work/9":"/__w/9" -v "/opt/azagent/_work/_temp":"/__w/_temp" -v "/opt/azagent/_work/_tasks":"/__w/_tasks" -v "/opt/azagent/_work/_tool":"/__t" -v "/opt/azagent/externals":"/__a/externals":ro -v "/opt/azagent/_work/.taskkey":"/__w/.taskkey" ***/foo/bar:v1 "/__a/externals/node/bin/node" -e "setInterval(function(){}, 24 * 60 * 60 * 1000);"
9056...
/usr/bin/docker start 9056...
9056...
/usr/bin/docker ps --all --filter id=9056... --filter status=running --no-trunc --format "{{.ID}} {{.Status}}"
9056... Up Less than a second
/usr/bin/docker exec 9056... sh -c "command -v bash"
/bin/bash
whoami
devops
id -u devops
1000
Try to create a user with UID '1000' inside the container.
/usr/bin/docker exec 9056... bash -c "getent passwd 1000 | cut -d: -f1 "
/usr/bin/docker exec 9056... id -u viv
1000
Grant user 'viv' SUDO privilege and allow it run any command without authentication.
/usr/bin/docker exec 9056... groupadd azure_pipelines_sudo
groupadd: Permission denied.
groupadd: cannot lock /etc/group; try again later.
##[error]Docker exec fail with exit code 10
Finishing: Initialize containers
I am OK working with user:1000 in the container as the azure agent runs on the host VM under user:1000('devops') and so the id's match inside and outside of the container, getting around a shortcoming of the docker volume mount system.
The question is: Is there a pipeline yaml method or control parameter to tell the run not to try and setup sudo permissions on the discovered user account (uid:1000) in the container?
I am getting around this issue right now by adding options: --user 0 to the container: section in the yaml script but I would prefer not to do that...
Thx.
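
For reference, the workaround described above sits in the container: section of the pipeline yaml. This is only a sketch of that workaround, reusing the image name and service connection from the question, not a way to skip the sudo setup step itself:

container:
  image: acr.url/foo/bar:v1
  endpoint: <svc-connection>
  options: --user 0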

Cannot specify current directory with docker run -v

I am trying to execute the following docker command in PowerShell, but I cannot get it to recognize $(PWD) as the current directory. Help please.
docker run -it -v $(PWD):/app --workdir /app samgentile\aspnetcore
I get:
C:\Program Files\Docker\Docker\Resources\bin\docker.exe: invalid reference format.
See 'C:\Program Files\Docker\Docker\Resources\bin\docker.exe run --help'.
You should use "/" instead of "\" in the image name:
docker run -it -v $PWD:/app --workdir /app samgentile/aspnetcore
Mihai is correct to point out the parentheses. They signify that you want to run the command PWD and use its output, whereas without the parentheses PWD is treated as a variable. A correct invocation would take the form:
docker run -it -v $(pwd):/app --workdir /app samgentile/aspnetcore
Or:
docker run -it -v $PWD:/app --workdir /app samgentile/aspnetcore
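
One more PowerShell detail, sketched with the image name from the question: if the current path contains spaces, quoting the whole volume argument keeps it as a single token, which avoids docker mis-parsing the command line:

docker run -it -v "${PWD}:/app" --workdir /app samgentile/aspnetcore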

How to pass arguments to spark-submit using docker

I have a docker cluster running on my laptop with a master and three workers. I can launch the typical wordcount example by entering the address of the master, using a command like this:
bash-4.3# spark/bin/spark-submit --class com.oreilly.learningsparkexamples.mini.scala.WordCount --master spark://spark-master:7077 /opt/spark-apps/learning-spark-mini-example_2.11-0.0.1.jar /opt/spark-data/README.md /opt/spark-data/output-5
I can see how the files have been generated inside output-5
but when I try to launch the process from outside, using the command:
docker run --network docker-spark-cluster_spark-network -v /tmp/spark-apps:/opt/spark-apps --env SPARK_APPLICATION_JAR_LOCATION=$SPARK_APPLICATION_JAR_LOCATION --env SPARK_APPLICATION_MAIN_CLASS=$SPARK_APPLICATION_MAIN_CLASS -e APP_ARGS="/opt/spark-data/README.md /opt/spark-data/output-5" spark-submit:2.4.0
Where
echo $SPARK_APPLICATION_JAR_LOCATION
/opt/spark-apps/learning-spark-mini-example_2.11-0.0.1.jar
echo $SPARK_APPLICATION_MAIN_CLASS
com.oreilly.learningsparkexamples.mini.scala.WordCount
When I open the page of the worker where the task is attempted, I can see that line 11, the very first line, where the path of the first argument is read, fails with an error like this:
Caused by: java.lang.ArrayIndexOutOfBoundsException: 0
at com.oreilly.learningsparkexamples.mini.scala.WordCount$.main(WordCount.scala:11)
Clearly, position zero of the arguments does not contain the path of the first parameter, the input file I want to run the word count on.
The question is: why is docker not using the arguments passed through -e APP_ARGS="/opt/spark-data/README.md /opt/spark-data/output-5"?
I already tried running the job in the traditional way, logging in to the spark-master driver and running the spark-submit command, but when I try to run the task with docker, it doesn't work.
It must be trivial, but I still don't have a clue. Can anybody help me?
SOLVED
I had to use a command like this:
docker run --network docker-spark-cluster_spark-network -v /tmp/spark-apps:/opt/spark-apps --env SPARK_APPLICATION_JAR_LOCATION=$SPARK_APPLICATION_JAR_LOCATION --env SPARK_APPLICATION_MAIN_CLASS=$SPARK_APPLICATION_MAIN_CLASS --env SPARK_APPLICATION_ARGS="/opt/spark-data/README.md /opt/spark-data/output-6" spark-submit:2.4.0
In short, I had to change -e APP_ARGS to --env SPARK_APPLICATION_ARGS.
-e APP_ARGS is the suggested docker way...
This is the command that solves my problem:
docker run --network docker-spark-cluster_spark-network -v /tmp/spark-apps:/opt/spark-apps --env SPARK_APPLICATION_JAR_LOCATION=$SPARK_APPLICATION_JAR_LOCATION --env SPARK_APPLICATION_MAIN_CLASS=$SPARK_APPLICATION_MAIN_CLASS --env SPARK_APPLICATION_ARGS="/opt/spark-data/README.md /opt/spark-data/output-6" spark-submit:2.4.0
I had to use --env SPARK_APPLICATION_ARGS="args1 args2 argsN" instead of -e APP_ARGS="args1 args2 argsN".
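
Since -e is just the short form of --env in docker run, the short flags work equally well as long as the variable is named SPARK_APPLICATION_ARGS; a sketch with the same values as above:

docker run --network docker-spark-cluster_spark-network \
  -v /tmp/spark-apps:/opt/spark-apps \
  -e SPARK_APPLICATION_JAR_LOCATION=$SPARK_APPLICATION_JAR_LOCATION \
  -e SPARK_APPLICATION_MAIN_CLASS=$SPARK_APPLICATION_MAIN_CLASS \
  -e SPARK_APPLICATION_ARGS="/opt/spark-data/README.md /opt/spark-data/output-6" \
  spark-submit:2.4.0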

Running new meteorhacks/meteord application, don't have meteor app

I am trying to create a new Meteor project with Docker.
I found the repository for this:
https://github.com/meteorhacks/meteord
I created a Dockerfile containing:
FROM meteorhacks/meteord:onbuild
And then ran:
docker run meteorhacks/meteord
docker run mongo
After all the packages had downloaded, I finally ran:
docker run -i -t 807754a01782 -d \
  -e ROOT_URL=http://localhost:3000 \
  -e MONGO_URL=mongodb://127.0.0.1:27017/ \
  -e MONGO_OPLOG_URL=mongodb://127.0.0.1:27017/ \
  -p 8080:80 myapp
Based on this example:
docker run -d \
-e ROOT_URL=http://yourapp.com \
-e MONGO_URL=mongodb://url \
-e MONGO_OPLOG_URL=mongodb://oplog_url \
-p 8080:80 \
yourname/app
Inside the myapp folder, I have a fresh Meteor project.
But as a result, I received
> You don't have an meteor app to run in this image.
Can anyone give me some clues about what I'm doing wrong? Or am I misunderstanding how Docker works with this repository?
EDIT:
The problem was in the command; the correct command is:
docker run -d \
  -e ROOT_URL=http://localhost:3000 \
  -e MONGO_URL=mongodb://127.0.0.1:27017/ \
  -e MONGO_OPLOG_URL=mongodb://127.0.0.1:27017/ \
  -p 8080:80 \
  meteorhacks/meteord:base
But now when I check the status of this container I see that it has exited. How can I check what is causing the problem?
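
When a container exits right after starting, the standard docker commands are usually enough to see why; a minimal sketch (the container ID placeholder is whichever ID docker ps -a reports for the meteord container):

# list all containers, including stopped ones, together with their exit status
docker ps -a
# print the container's output, which normally contains the error that made it exit
docker logs <container-id>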

mongo: disconnect after connecting and executing through the shell in docker

I want to close the mongo shell after executing the following in a docker command:
#!/bin/bash
docker run -it --link sonams-mongo:mongo --rm mongo sh -c 'exec mongo "$MONGO_PORT_27017_TCP_ADDR:$MONGO_PORT_27017_TCP_PORT/test"'
if [ $? -eq 0 ]; then
  echo "connected to mongo successful"
else
  echo "mongo connection NOT successful"
fi
When it connects it goes to a shell prompt within mongo. Is there a way to pass a shell command to do an exit right in or after the docker command?
thanks
Usually (though it depends on the base image you're using) you wouldn't need to invoke "sh -c". Also, the -it combination is usually what makes the shell open and wait for input. Try changing your command a little, as below, without -it and sh -c:
docker run --link sonams-mongo:mongo --rm mongo mongo "$MONGO_PORT_27017_TCP_ADDR:$MONGO_PORT_27017_TCP_PORT/test"
If that doesn't help, try this:
echo "$MONGO_PORT_27017_TCP_ADDR:$MONGO_PORT_27017_TCP_PORT/test" | docker run --link sonams-mongo:mongo --rm mongo mongo