Fig up error exec: "bundle": executable file not found in $PATH - sinatra

I'm trying to run a Dockerized Sinatra app with no database using fig, but I keep getting this error:
$ fig up
Recreating my_web_1...
Cannot start container 93f4a091bd6387bd28d8afb8636d2b14623a08d259fba383e8771fee811061a3: exec: "bundle": executable file not found in $PATH
Here is the Dockerfile:
FROM ubuntu-nginx
MAINTAINER Ben Bithacker ben@bithacker.org
COPY Gemfile /app/Gemfile
COPY Gemfile.lock /app/Gemfile.lock
WORKDIR /app
RUN ["/bin/bash", "-l", "-c", "bundle install"]
ADD config/container/start-server.sh /usr/bin/start-server
RUN chmod +x /usr/bin/start-server
ADD . /app
EXPOSE 9292
CMD ["/usr/bin/start-server"]
The config/container/start-server.sh looks like this:
#!/bin/bash
cd /app
source /etc/profile.d/rvm.sh
bundle exec rackup config.ru
The fig.yml looks like this:
web:
  build: .
  command: bundle exec rackup config.ru
  volumes:
    - .:/app
  ports:
    - "3000:3000"
  environment:
    - SOME_VAR=adsfasdfgasdfdfd
    - SOME_VAR2=ba2gezcjsdhwzhlz24zurg5ira

I think there are a couple of problems with this setup. Where is bundler installed? Normally you would apt-get install ruby-bundler and it would always be on your PATH.
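If bundler isn't already baked into the ubuntu-nginx base image, one option (a sketch, assuming the image's Ruby comes from rvm, since start-server.sh sources /etc/profile.d/rvm.sh) is to install it in a login shell before the bundle install step:
RUN ["/bin/bash", "-l", "-c", "gem install bundler"]
RUN ["/bin/bash", "-l", "-c", "bundle install"]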
I believe your immediate problem is that the command in the fig.yml overrides the CMD from the Dockerfile. I'm assuming (based on the contents of start-server.sh) that you need the rvm environment loaded for bundle to be on the PATH, so you should remove the command line from the fig.yml.
You're also overriding the /app directory in the container with the volumes: .:/app in the fig.yml. You probably also want to remove that line.
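With both of those lines removed, a minimal fig.yml sketch (keeping your ports and environment values) would look like this:
web:
  build: .
  ports:
    - "3000:3000"
  environment:
    - SOME_VAR=adsfasdfgasdfdfd
    - SOME_VAR2=ba2gezcjsdhwzhlz24zurg5ira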

Related

Moving a file from host with docker-compose volume before Dockerfile is built

I have a few Dockerfiles that are dependent on a "pat" (Personal Access Token) file to be able to access a private NuGet feed. I have taken some inspiration from somakdas to get this working.
To run my single Dockerfile I first create a "pat" file containing my token and build with docker build -f Services/User.API/Dockerfile -t userapi:dev --secret id=pat,src=pat .
This works as intended, but my issue is getting this to work using a docker-compose.yml file.
First I took a look at using docker-compose secrets, but it came to my attention that docker-compose secrets are accessed at runtime, not at build time. https://github.com/docker/compose/issues/6358
So now I'm trying to create a volume containing my pat file but I get cat: /pat: No such file or directory when the command RUN --mount=type=secret... is running. This may not be secure but it will only be running locally.
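For reference, a compose-level secret (the kind the linked issue says is only available at runtime, under /run/secrets inside the running container, and therefore not during dotnet restore at build time) would look roughly like this sketch:
services:
  user.api:
    build:
      context: .
      dockerfile: Services/User.API/Dockerfile
    secrets:
      - pat
secrets:
  pat:
    file: ./pat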
My Dockerfile
FROM mcr.microsoft.com/dotnet/aspnet:5.0 AS base
WORKDIR /app
EXPOSE 80
FROM mcr.microsoft.com/dotnet/sdk:5.0 AS build
RUN wget -qO- https://raw.githubusercontent.com/Microsoft/artifacts-credprovider/master/helpers/installcredprovider.sh | bash
WORKDIR /src
COPY ["User.API.csproj", "/Services/User.API"]
RUN --mount=type=secret,id=pat,dst=/pat export ENV VSS_NUGET_EXTERNAL_FEED_ENDPOINTS="{\"endpointCredentials\": [{\"endpoint\":\"<feed>\", \"username\":\"<user>\", \"password\":\"`cat /pat`\"}]}" \
&& dotnet restore "User.API.csproj" \
&& unset VSS_NUGET_EXTERNAL_FEED_ENDPOINTS
...
My docker-compose.yml
services:
  user.api:
    container_name: User.API
    image: ${DOCKER_REGISTRY-}userapi
    build:
      context: .
      dockerfile: Services/User.API/Dockerfile
    networks:
      - app_network
    volumes:
      - ./pat:/app/src/pat
Am I only able to access docker-compose volumes after the Dockerfile is built?
I solved this by attacking the problem in a different way. As the main goal was to get this working locally I created Dockerfile.Local and docker-compose.local.yml. Together with this I created an .env file containing the "pat".
The docker-compose.local.yml passes the "pat" as an argument to the Dockerfile.Local where it's used. I also discarded --mount=type=secret and set the value to VSS_NUGET_EXTERNAL_FEED_ENDPOINTS directly.
.env file:
PAT_TOKEN=<personal access token>
docker-compose.local.yml:
services:
  user.api:
    container_name: User.API
    image: ${DOCKER_REGISTRY-}userapi
    build:
      context: .
      dockerfile: Services/User.API/Dockerfile.Local
      args:
        - PAT=${PAT_TOKEN}
    networks:
      - app_network
    volumes:
      - ./pat:/app/src/pat
Dockerfile.Local:
FROM mcr.microsoft.com/dotnet/aspnet:5.0 AS base
WORKDIR /app
EXPOSE 80
FROM mcr.microsoft.com/dotnet/sdk:5.0 AS build
RUN wget -qO- https://raw.githubusercontent.com/Microsoft/artifacts-credprovider/master/helpers/installcredprovider.sh | bash
WORKDIR /src
COPY ["User.API.csproj", "/Services/User.API"]
ARG PAT
ENV VSS_NUGET_EXTERNAL_FEED_ENDPOINTS="{\"endpointCredentials\": [{\"endpoint\":\"<feed>\", \"username\":\"<user>\", \"password\":\"${PAT}\"}]}"
RUN dotnet restore "User.API.csproj"
...
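With these files in place, the local image builds with (run from the directory containing docker-compose.local.yml):
docker-compose -f docker-compose.local.yml build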
Note: The .env file was added to .gitignore because it contains sensitive information, and we don't want that in our repository.

Can't run EF in a docker container as non root

I have the following Dockerfile:
FROM mcr.microsoft.com/dotnet/core/sdk:3.1-buster AS build
WORKDIR /src
COPY ["myapp.database/src/myapp.database.csproj", "myapp.database/"]
COPY ["myapp.database/src/NuGet.config", "myapp.database/"]
RUN dotnet restore "myapp.database/myapp.database.csproj"
COPY myapp.database/src myapp.database
WORKDIR /src/myapp.database
RUN dotnet build "myapp.database.csproj" -c Release -o /app/build
FROM build AS publish
RUN dotnet publish "myapp.database.csproj" -c Release -o /app/publish
FROM build AS final
RUN groupadd -g 500 dotnetuser && \
useradd -r -u 500 -g dotnetuser dotnetuser
RUN dotnet tool install --global dotnet-ef --version 3.1.0
ENV PATH="${PATH}:/root/.dotnet/tools"
WORKDIR /app
COPY --from=publish /app/publish .
ENTRYPOINT ["dotnet", "myapp.database.dll"]
I create a user "dotnetuser" with uid as 500. I specify the following in my docker-compose:
version: '3.4'

services:
  postgres:
    image: postgres:12.1-alpine
    ports:
      - "5432:5432"
    environment:
      POSTGRES_USER: myapp
      POSTGRES_PASSWORD: myapp
      POSTGRES_DB: myapp
    volumes:
      - postgresvolume:/var/lib/postgresql/data
    networks:
      - dockercomposedatabase_default
  myapp.database:
    depends_on:
      - postgres
    user: "500"
    build:
      context: ..
      dockerfile: myapp.database/build/Dockerfile
    environment:
      DOTNET_CLI_HOME: "/tmp/DOTNET_CLI_HOME"
    networks:
      - dockercomposedatabase_default

volumes:
  postgresvolume:
    external: false

networks:
  dockercomposedatabase_default:
    external: true
However, I can only run EF commands from my container if I run the container as root.
If I run as dotnetuser, then I get the following error:
Could not execute because the specified command or file was not found.
Possible reasons for this include:
* You misspelled a built-in dotnet command.
* You intended to execute a .NET Core program, but dotnet-ef does not exist.
* You intended to run a global tool, but a dotnet-prefixed executable with this name could not be found on the PATH.
I've tried various ways to get dotnetuser to run EF commands as non-root, but without any success :(
Even if I install the tools to the dotnetuser home path, I still get permission issues.
How can I run dotnet ef database update if I run the container as non-root?
It turns out that what I wanted to do wasn't possible. That is because EF will try to build even if you ship the compiled binaries. On immutable images, you won't be able to execute the EF tools. https://github.com/dotnet/efcore/issues/13339
However, there is a workaround I found on this Blog mentioned on the github issue: https://mattmillican.com/blog/ef-core-migrations-on-linux
Of course in my case, things were a little different. I changed my Dockerfile and created a migration.sh script.
FROM mcr.microsoft.com/dotnet/core/sdk:3.1-buster AS build
WORKDIR /src
COPY ["myapp.database/src/myapp.database.csproj", "myapp.database/"]
COPY ["myapp.database/src/NuGet.config", "myapp.database/"]
COPY ["myapp.database/src/migration.sh", "myapp.database/"]
RUN dotnet restore "myapp.database/myapp.database.csproj"
COPY myapp.database/src myapp.database
WORKDIR /src/myapp.database
RUN dotnet build "myapp.database.csproj" -c Release -o /app/build
FROM build AS publish
RUN dotnet publish "myapp.database.csproj" -c Release -o /app/publish
RUN chmod +x /app/publish/migration.sh
FROM build AS final
RUN groupadd -g 500 dotnetuser && \
useradd -r -u 500 -g dotnetuser dotnetuser
RUN mkdir /tools
RUN dotnet tool install dotnet-ef --tool-path /tools --version 3.1.0
ENV PATH="${PATH}:/tools:/app"
WORKDIR /app
COPY --chown=dotnetuser:dotnetuser --from=publish /app/publish .
RUN rm -r /src
RUN chown dotnetuser:dotnetuser -R /tools
RUN chown dotnetuser:dotnetuser -R /app
USER dotnetuser
ENTRYPOINT ["migration.sh"]
I used my migration.sh script to execute the migration and then run myapp.database.dll to apply my fixtures.
#!/bin/sh
# MIGRATION_NAME is an environment variable specifying which migration to migrate to.
# It lets you migrate forward and back. Leave it unset to migrate to the latest migration.
# This path is important but may change between upgrades!!!
EF_DLL_PATH=/tools/.store/dotnet-ef/3.1.0/dotnet-ef/3.1.0/tools/netcoreapp3.1/any/tools/netcoreapp2.0/any/ef.dll
# standard compiled files for dotnet
DEPS_FILE=myapp.database.deps.json
RUNTIME_CONFIG=myapp.database.runtimeconfig.json
# assembly name
PROJECT_NAME=myapp.database
cd /app
echo "Executing the migration script..."
dotnet exec --depsfile ${DEPS_FILE} --runtimeconfig ${RUNTIME_CONFIG} "${EF_DLL_PATH}" database update ${MIGRATION_NAME} --context MyAppDbContext --assembly ${PROJECT_NAME}.dll --startup-assembly ${PROJECT_NAME}.dll --root-namespace ${PROJECT_NAME} || {
echo "Could not execute the migration!"
exit 1
}
echo "migration to ${MIGRATION} complete!"
echo "Run the fixtures..."
dotnet myapp.database.dll
echo "Fixtures applied!"
Hope this helps someone.

How to seed a docker container in Windows

I intended to install a mongodb docker container from Docker Hub, and then insert some data into it. Obviously, a mongodb seed container is needed. So I did the following:
created a Dockerfile for the Mongo seed container in mongodb_seed/Dockerfile; the code in the Dockerfile is the following:
FROM mongo:latest
WORKDIR /tmp
COPY data/shops.json .
COPY import.sh .
CMD ["/bin/bash", "-c", "source import.sh"]
The code of import.sh is the following:
#!/bin/bash
ls .
mongoimport --host mongodb --db data --collection shops --file shops.json
the shops.json file contains the data to be imported to Mongo
created a docker-compose.yml file in the current working directory, and the code is the following:
version: '3.4'
services:
  mongodb:
    image: mongo:latest
    ports:
      - "27017:27017"
    container_name: mongodb
  mongodb_seed:
    build: mongodb_seed
    links:
      - mongodb
The setup above successfully ran import.sh in the mongodb_seed service to import the JSON data from shops.json. It works perfectly on my Ubuntu machine. However, when I tried to run the command docker-compose up -d --build mongodb_seed on Windows, the import of data failed with these error logs:
Attaching to linux_mongodb_seed_1
mongodb_seed_1 | ls: cannot access '.'$'\r': No such file or directory
mongodb_seed_1 | 2019-04-02T08:33:45.552+0000 Failed: open shops.json: no such file or directory
mongodb_seed_1 | 2019-04-02T08:33:45.552+0000 imported 0 documents
Does anyone have any idea why this happens, and how to fix it so that it works on Windows as well?
You can try to change line endings to UNIX in your script file.
Notice the error ls: cannot access '.'$'\r': No such file or directory.
One of the issues with Docker (or any Linux/macOS based system) on Windows is the difference in how line endings are handled.
Windows ends lines in a carriage return and a linefeed \r\n while Linux and macOS only use a linefeed \n. This becomes a problem when you try to create a file in Windows and run it on a Linux/macOS system, because those systems treat the \r as a piece of text rather than a newline.
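You can spot the stray carriage returns with cat -A, which prints each \r as ^M (sample output, assuming the file was saved with CRLF endings):
$ cat -A import.sh
#!/bin/bash^M$
ls .^M$
mongoimport --host mongodb --db data --collection shops --file shops.json^M$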
Make sure to run dos2unix on the script file whenever anyone edits it in any editor on Windows. Even if the script file was created in Git Bash, don't forget to run dos2unix:
dos2unix import.sh
See https://willi.am/blog/2016/08/11/docker-for-windows-dealing-with-windows-line-endings/
In your case:
FROM mongo:latest
RUN apt-get update && apt-get install -y dos2unix
WORKDIR /tmp
COPY data/shops.json .
COPY import.sh .
RUN dos2unix import.sh && apt-get --purge remove -y dos2unix
CMD ["/bin/bash", "-c", "source import.sh"]

Problems connecting Docker container to PostgreSQL in App Engine using Flask

I want to deploy a Flask application that uses Orator as the ORM, and I'm having problems connecting to a SQL instance in Google Cloud Platform. I've already set up the IAM permissions needed as explained here, but I'm still not able to connect to the instance. If I manually add the instance's IP to the firewall rules, the connection succeeds, but if the IP changes (it does, several times) I can no longer connect.
This is my Dockerfile:
FROM gcr.io/google-appengine/python
RUN virtualenv /env
ENV VIRTUAL_ENV /env
ENV PATH /env/bin:$PATH
ADD requirements.txt /app/requirements.txt
RUN pip install -r /app/requirements.txt
ADD . /app
CMD gunicorn -b :$PORT main:app
This is my app.yaml:
runtime: custom
env: flex

env_variables:
  POSTGRES_HOST: <SQL-INSTANCE-IP>
  POSTGRES_DB: <MY-POSTGRES-DB>
  POSTGRES_USER: <MY-POSTGRES-USER>
  POSTGRES_PASSWORD: <MY-POSTGRES-PASSWORD>

automatic_scaling:
  min_num_instances: 1
  max_num_instances: 1
The problem was that cloud_sql_proxy was not being run inside my Docker image. To fix this I had to create a script like this:
run_app.sh
#!/bin/bash
/app/cloud_sql_proxy -dir=/cloudsql -instances=<INSTANCE-CONNECTION-NAME> -credential_file=<CREDENTIAL-FILE> &
gunicorn -b :$PORT main:app
Then give it execution permission:
chmod +x run_app.sh
Then I changed my Dockerfile so it downloads cloud_sql_proxy, creates the /cloudsql directory, and executes the new script:
FROM gcr.io/google-appengine/python
RUN virtualenv /env
ENV VIRTUAL_ENV /env
ENV PATH /env/bin:$PATH
ADD requirements.txt /app/requirements.txt
RUN pip install -r /app/requirements.txt
RUN wget https://dl.google.com/cloudsql/cloud_sql_proxy.linux.amd64 -O /app/cloud_sql_proxy
RUN chmod +x /app/cloud_sql_proxy
RUN mkdir /cloudsql; chmod 777 /cloudsql
ADD . /app
CMD /app/run_app.sh
And finally changed the POSTGRES_HOST in my app.yaml:
runtime: custom
env: flex

env_variables:
  POSTGRES_HOST: "/cloudsql/<INSTANCE-CONNECTION-NAME>"
  POSTGRES_DB: <MY-POSTGRES-DB>
  POSTGRES_USER: <MY-POSTGRES-USER>
  POSTGRES_PASSWORD: <MY-POSTGRES-PASSWORD>

automatic_scaling:
  min_num_instances: 1
  max_num_instances: 1
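As an aside, on the App Engine flexible environment you may also be able to have the platform provision the /cloudsql socket for you via beta_settings instead of starting the proxy yourself (a sketch; verify against the current Cloud SQL documentation for your runtime):
runtime: custom
env: flex

beta_settings:
  cloud_sql_instances: <INSTANCE-CONNECTION-NAME>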
Cheers

CircleCI 2.0 testing with docker-compose and code checkout

This is my circle.yml:
version: 2
jobs:
  build:
    working_directory: /app
    docker:
      - image: docker:stable-git
    steps:
      - checkout
      - setup_remote_docker
      - run:
          name: Install dependencies
          command: |
            apk add --no-cache py-pip bash
            pip install docker-compose
      - run:
          name: Start service containers and run tests
          command: |
            docker-compose -f docker-compose.test.yml up -d db es redis
            docker-compose run web bash -c "cd myDir && ./manage.py test"
This works fine in that it brings up my service containers (db, es, redis) and I build a new image for my web container. However, my working code is not inside the freshly built image (so "cd myDir" always fails).
I figured the following lines in my Dockerfile would make my code available when the image is built, but it appears it doesn't work like that:
ENV APPLICATION_ROOT /app/
RUN mkdir -p $APPLICATION_ROOT
WORKDIR $APPLICATION_ROOT
ADD . $APPLICATION_ROOT
What am I doing wrong and how can I make my code available inside my test container?
Thanks,
Use COPY. Keep in mind that with setup_remote_docker your compose containers run on a separate remote Docker host, so host volume mounts won't contain your checkout; the code has to be copied into the image at build time. Your Dockerfile should look something like this.
FROM <base-image>
COPY . /opt/app
WORKDIR /opt/app
# ...more commands...
ENTRYPOINT ["<your-entrypoint>"]
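And make sure docker-compose.test.yml builds the web service from that Dockerfile without masking /opt/app with a host volume (the service images below are hypothetical stand-ins for your actual ones):
version: '2'
services:
  web:
    build: .
  db:
    image: postgres
  es:
    image: elasticsearch
  redis:
    image: redis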