VSCode-Docker Not Invoking "CMD"

TL;DR: The image built by VSCode only executes its CMD when I press the Run button in the Docker Desktop UI.
Hello Folks,
I'm playing around with a Drools image along with Docker Desktop and VSCode.
My devcontainer.json file looks like the following:
{
    "name": "Existing Dockerfile",
    "build": {
        // Sets the run context to one level up instead of the .devcontainer folder.
        "context": "..",
        // Update the 'dockerFile' property if you aren't using the standard 'Dockerfile' filename.
        "dockerfile": "../Dockerfile"
    },
    // Use 'forwardPorts' to make a list of ports inside the container available locally.
    "forwardPorts": [8001, 8080]
}
My Dockerfile is minimalist and looks like the following:
FROM quay.io/kiegroup/business-central-workbench:latest
And my compose.yaml file looks like so:
services:
  app:
    entrypoint:
      - sleep
      - infinity
    image: docker/dev-environments-default:stable-1
    init: true
    volumes:
      - type: bind
        source: /var/run/docker.sock
        target: /var/run/docker.sock
The issue is that when VSCode sends the image to Docker Desktop, the
CMD ["./start_business-central-wb.sh"]
found in the parent image does not seem to get triggered, as seen in the logs.
However, when I click "Run", the command does get kicked off after a new instance is spawned.
What concept am I missing as to why the Docker image doesn't immediately begin running when VSCode sends it to Docker Desktop? I'm super inexperienced with both techs.
Any help is greatly appreciated.
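A likely explanation: the Dev Containers extension normally keeps a dev container alive so VS Code can attach to it, overriding the image's own command (the compose.yaml above does the same thing explicitly with the sleep infinity entrypoint), so the parent image's CMD never fires. Assuming that is the cause here, a minimal sketch of a devcontainer.json that lets the image's own ENTRYPOINT/CMD run would be:

{
    "name": "Existing Dockerfile",
    "build": {
        "context": "..",
        "dockerfile": "../Dockerfile"
    },
    // Let the image's own ENTRYPOINT/CMD run instead of the extension's keep-alive command.
    "overrideCommand": false,
    "forwardPorts": [8001, 8080]
}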

Related

How to set an environment variable when running "az containerapp compose create"

It seems that when my docker-compose.yml is run via "az containerapp compose create", environment variables are not picked up. Is there a way I can set an environment variable so the command picks it up?
I'm seeing this error:
ERROR: The following field(s) are either invalid or missing. Invalid value: "${DOCKER_REGISTRY-}/sample-blazorapp": could not parse reference: ${DOCKER_REGISTRY-}/sample-blazorapp: template.containers.blazorapp.image.
I have set the variable with: export DOCKER_REGISTRY="myregistry"
And when I echo $DOCKER_REGISTRY, the value is returned, so in the bash session it is set. (I tried PowerShell first and thought that was the issue, since ${envvar-} is bash syntax, but the error is the same.)
This is what I have in my compose file (alignment is correct in the file):
blazorapp:
  container_name: "blazorapp"
  image: ${DOCKER_REGISTRY-}sample-blazorapp
  build:
    context: .
    dockerfile: BlazorApp/BlazorApp/Dockerfile
  depends_on:
    - redis
  ports:
    - "55000:443"
If I explicitly set the image name rather than using an env var, then it works, i.e. this change to the image line works:
image: myregistry/sample-blazorapp
I also tried adding the forward slash, but this makes no difference (as expected, it works fine without the slash when running docker compose up).
I can set it explicitly but that would be annoying. I feel like I'm missing something. Any help or guidance is greatly appreciated :)
If the image is defined like this in your docker compose file:
image: ${DOCKER_REGISTRY-}sample-blazorapp
then you must export using a slash at the end of the value:
export DOCKER_REGISTRY="myregistry/"
I discovered the issue: I was missing a colon.
Does not work (produces the error described in the question):
image: ${DOCKER_REGISTRY-}sample-blazorapp
Also does not work:
image: ${DOCKER_REGISTRY-mydefault}sample-blazorapp
Add the magic : in and it works:
image: ${DOCKER_REGISTRY:-}sample-blazorapp
Also works:
image: ${DOCKER_REGISTRY:-mydefault}sample-blazorapp

DevSpace hook for running tests in container after an update to the container

My ultimate goal is to have tests run automatically anytime a container is updated. For example, if I update /api, it should sync the changes between local and the container. After that it should automatically run the tests... ultimately.
I'm starting out with Hello World! though per the example:
# DevSpace --version = 5.16.0
version: v1beta11
...
hooks:
  - command: |
      echo Hello World!
    container:
      imageSelector: ${APP-NAME}/${API-DEV}
    events: ["after:initialSync:${API}"]
...
I've tried all of the following and don't get the desired behavior:
stop:sync:${API}
restart:sync:${name}
after:initialSync:${API}
devCommand:after:sync
At best I can get Hello World! to print on the initial run of devspace dev -b, but nothing prints after I make changes to the files for /api, which causes files to sync.
Suggestions?
You will need a post-sync hook for this, which is separate from the DevSpace lifecycle hooks. You can define it directly within dev.sync and it looks like this:
dev:
  sync:
    - imageSelector: john/devbackend
      onUpload:
        execRemote:
          onBatch:
            command: bash
            args:
              - -c
              - "echo 'Hello World!' && other commands..."
More information in the docs: https://devspace.sh/cli/docs/configuration/development/file-synchronization#onupload
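Adapted to the original goal (running the API tests after each sync instead of printing Hello World!), that would look roughly like the following, reusing the imageSelector from the question; the test command itself is just a placeholder:

dev:
  sync:
    - imageSelector: ${APP-NAME}/${API-DEV}
      onUpload:
        execRemote:
          onBatch:
            command: bash
            args:
              - -c
              # placeholder: replace with whatever actually runs the /api tests
              - "cd /api && npm test"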

ECS Task Definition - When overriding ENTRYPOINT, Docker image's CMD is dropped

I have a Docker Image built with the following CMD
# Dockerfile
...
CMD ["nginx", "-g", "daemon off;"]
When my task definition does not include entryPoint or command the task successfully enters a running state.
{
    "containerDefinitions": [
        {
            "image": "<myregistry>/<image>",
            ...
        }
    ]
}
I need to run an agent in some instances of this container, so I am using an entrypoint for this task to run my agent. The problem is when I add an entryPoint parameter to the task definition, the container starts and immediately stops.
This is what I'm doing to add the entryPoint:
{
    "containerDefinitions": [
        {
            "image": "<myregistry>/<image>",
            ...
            "entryPoint": [
                "custom-entry-point.sh"
            ]
        }
    ]
}
And here is the contents of custom-entry-point.sh:
#!/bin/bash
/myagent &
echo "CMD is: $@"
exec "$@"
To confirm my suspicion that CMD is dropped, the logs just show:
CMD is:
If I add the CMD array from the Dockerfile to the task definition with the command parameter, it works fine and the task starts:
{
    "containerDefinitions": [
        {
            "image": "<myregistry>/<image>",
            ...
            "entryPoint": [
                "custom-entry-point.sh"
            ],
            "command": [
                "nginx",
                "-g",
                "daemon off;"
            ]
        }
    ]
}
And the logs show the expected:
CMD is: nginx -g daemon off;
I have many Docker images with various iterations of CMD, and I do not want to have to copy these into my task definitions. It seems that adding only an entryPoint to a task definition should not override a Docker image's CMD with an empty value.
Hoping some ECS / fargate experts can help shed some light on a path forward.
Some tips:
Check if your entrypoint script is executable
Use an absolute path to your entrypoint script
Check the logs to see the error (hopefully you have the awslogs driver configured?)
Have you successfully run the entrypoint version locally?
Also have a read of this for some useful background:
https://aws.amazon.com/blogs/opensource/demystifying-entrypoint-cmd-docker/
I don't think this has anything to do with ECS. This is how Docker behaves, and there's no way to change it as far as I know.
See https://docs.docker.com/engine/reference/builder/
If CMD is defined from the base image, setting ENTRYPOINT will reset CMD to an empty value. In this scenario, CMD must be defined in the current image to have a value.
This particular snippet only refers to defining a new ENTRYPOINT in the image, but this Github discussion confirms the same behavior holds when overriding ENTRYPOINT at runtime.
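You can reproduce the same behavior with plain Docker, outside of ECS: overriding the entrypoint at run time drops the image's baked-in CMD, so the script receives no arguments unless a command is passed explicitly. A sketch using the placeholders from the question:

# The image's CMD is dropped as soon as --entrypoint is overridden;
# the script logs "CMD is:" and the container exits because exec has nothing to run.
docker run --entrypoint /custom-entry-point.sh <myregistry>/<image>

# Passing the command explicitly restores the arguments,
# matching what the task definition's "command" parameter does.
docker run --entrypoint /custom-entry-point.sh <myregistry>/<image> nginx -g 'daemon off;'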
I got the same problem, with my entrypoint and command attributes being something like sh -c .... I needed to delete sh -c, put the commands directly, and add #!/bin/sh at the top of my scripts.

Cypress and Docker - Can't run because no spec files were found

I'm trying to run cypress tests inside a docker container. I've simplified my setup so I can just try to get a simple container instance running and a few tests executed.
I'm using docker-compose
version: '2.1'
services:
  e2e:
    image: test/e2e:v1
    command: ["./node_modules/.bin/cypress", "run", "--spec", "integration/mobile/all.js"]
    build:
      context: .
      dockerfile: Dockerfile-cypress
    container_name: cypress
    network_mode: host
and my Dockerfile-cypress:
FROM cypress/browsers:chrome69
RUN mkdir /usr/src/app
WORKDIR /usr/src/app
RUN npm install cypress@3.1.0
COPY cypress /usr/src/app/cypress
COPY cypress.json /usr/src/app/cypress
RUN ./node_modules/.bin/cypress verify
when I run docker-compose up after I build my image I see
cypress | name_to_handle_at on /dev: Operation not permitted
cypress | Can't run because no spec files were found.
cypress |
cypress | We searched for any files matching this glob pattern:
cypress |
cypress | integration/mobile/all-control.js
cypress exited with code 1
I've verified that my cypress files and folders have been copied over, and I can verify that my test files exist. I've been stuck on this for a while and I'm unsure what to do besides giving up.
Any guidance appreciated.
Turns out Cypress automatically checks the /cypress/integration folder. Moving all my cypress files inside this folder got it working.
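For reference, the location is also configurable instead of moving the files: in Cypress 3.x (the version used above) the spec directory is controlled by integrationFolder in cypress.json, so a sketch pointing Cypress at a hypothetical tests/e2e directory would be:

{
  "integrationFolder": "tests/e2e"
}

(In Cypress 10 and later this option was replaced by specPattern.)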
The problem: no Cypress spec files were found in your automation suite.
Solution: Cypress Test files are located in cypress/integration by default, but can be configured to another directory.
In this folder insert your suites per section:
for example:
- cypress/integration/billing-scenarios-suite
- cypress/integration/user-management-suite
- cypress/integration/proccesses-and-handlers-suite
I assume that these suite directories contain sub-directories (which represent some micro logic), therefore you need to run it recursively to gather all the files:
cypress run --spec \
cypress/integration/<your-suite>/**/* , \
cypress/integration/<your-suite>/**/**/*
If you run Cypress in Docker, verify that the cypress volume containing the tests is mapped and mounted into the container in the volumes section (in the Dockerfile / docker-compose.yml file), then run it properly:
docker exec -it <container-id> cypress run --spec \
cypress/integration/<your-suite>/**/* , \
cypress/integration/<your-suite>/**/**/*
I noticed that if you use the CLICK AND DRAG method to get a file path in VS Code, it generates the path with a SMALL c drive letter, and this causes the error: "Can't run because no spec files were found. We searched for specs matching this glob pattern:"
e.g. by click and drag I get:
cypress run --spec c:\Users\dmitr\Desktop\cno-dma-replica-for-cy-test\cypress\integration\dma-playground.spec.js
Notice SMALL c, in above.
BUT if I use right click 'get path', I get BIG C, and it works for some reason:
cypress run --spec C:\Users\dmitr\Desktop\cno-dma-replica-for-cy-test\cypress\integration\dma-playground.spec.js
and this causes it to work.
It's strange, I know, but there you go.
but if you just use:

Docker-compose mount directory

OK, this is driving me nuts already. All I want to do is launch the php:5.6-apache image and mount my ./web to /var/www/html with the following docker-compose.yml file:
version: '2'
services:
  apache:
    image: php:5.6-apache
    volumes:
      - ./web:/var/www/html
    ports:
      - 8081:80
Launching it with docker-compose up.
For some unknown reason this results in empty /var/www/html folder, although it should contain what I have in ./web.
Or am I doing it wrong?
Well, it turned out that for some reason the Windows firewall prevented folder sharing. It seems that it was because the DockerNAT network was listed among Public networks, so I had to run the following commands in an elevated PowerShell:
$Profile = Get-NetConnectionProfile -InterfaceAlias "vEthernet (DockerNAT)"
$Profile.NetworkCategory = "Private"
Set-NetConnectionProfile -InputObject $Profile
Then I was able to enable drive sharing in docker settings and then mounted folders became filled with files.
[UPDATE 2018-05-03] There's a good gist that will set the DockerNAT network to private when you restart Docker. All you have to do is modify the MobyLinux.ps1 file located at C:\Program Files\Docker\Docker\resources by adding an include at line 86, a function at lines 182-186, and modifying the try/catch statement at lines 399-409 to include Set-Switch-Private function calls.