Dockerfile for Go and Postgres

I need to make a Docker container with a Go app and a Postgres DB with migrated tables. All I could find are combinations of these topics (1, 2)
Dockerfile
FROM golang:alpine as builder
RUN apk update && apk add --no-cache git
WORKDIR /app
COPY go.mod go.sum ./
RUN go mod download
COPY . .
RUN CGO_ENABLED=0 GOOS=linux GOARCH=amd64 go build -a -installsuffix cgo -o main .
FROM alpine:latest
RUN apk --no-cache add ca-certificates
WORKDIR /root/
COPY --from=builder /app/main .
COPY --from=builder /app/.env .
EXPOSE 8080
RUN chmod +x ./main
CMD ["./main"]
And I get the error standard_init_linux.go:219: exec user process caused: exec format error
I found that this is probably due to an incompatibility of systems and architectures (I have Windows and an Intel x64 CPU), but none of the suggested solutions helped.
Can you help me with the Dockerfile or with my error?
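Not part of the original question, but since the error points at an architecture mismatch, a minimal diagnostic sketch (assuming the image is tagged myapp and is built and run on the same x64 Docker Desktop host) would be:
# check which OS/architecture the image was actually built for
docker image inspect myapp --format '{{.Os}}/{{.Architecture}}'
# if it does not match the host, force the platform explicitly at build time
docker build --platform linux/amd64 -t myapp .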

Related

Running Elixir on Buildkite with docker-compose fails with dependencies

I have the following Dockerfile for an Elixir + Phoenix app:
FROM elixir:latest as build_base
RUN apt-get -y update
RUN apt-get -y install inotify-tools curl
ARG TARGETARCH
RUN if [ ${TARGETARCH} = arm64 ]; then \
        curl -L -o /tmp/dart-sass.tar.gz https://github.com/sass/dart-sass/releases/download/1.54.5/dart-sass-1.54.5-linux-${TARGETARCH}.tar.gz; \
    else \
        curl -L -o /tmp/dart-sass.tar.gz https://github.com/sass/dart-sass/releases/download/1.54.5/dart-sass-1.54.5-linux-x64.tar.gz; \
    fi
RUN tar -xvf /tmp/dart-sass.tar.gz -C /tmp
RUN mv /tmp/dart-sass/sass /usr/local/bin/sass
RUN mkdir -p /app
WORKDIR /app
COPY mix.* ./
RUN mix local.hex --force
RUN mix archive.install hex phx_new --force
RUN mix local.rebar --force
RUN mix deps.clean --all
RUN mix deps.get
RUN mix --version
RUN mix deps.compile
COPY assets assets
COPY vendor vendor
COPY lib lib
COPY config config
COPY priv priv
COPY test test
RUN mix compile
The docker-compose file looks like the following:
services:
  web:
    build:
      context: .
      dockerfile: Dockerfile
      target: build_base
    volumes:
      - ./:/app
    ports:
      - "80:80"
    command: mix phx.server
I'm trying to run docker-compose as part of the build step in Buildkite. This is an extract of the step:
- label: "run web"
  key: "web"
  commands:
    - mix phx.server
  plugins:
    - docker-compose#v4.9.0:
        run: web
        config: docker-compose.yml
However, when running web I see that everything happens properly, including the package installation; but when the application starts I see the following error:
web_1 | Unchecked dependencies for environment dev:
web_1 | * telemetry_metrics (Hex package)
web_1 | the dependency is not available, run "mix deps.get"
and the list goes on and on. This works fine on my local machine; it only fails when running on Buildkite. Does anyone have any idea how to fix this?
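Not from the original post, but one frequent cause of this symptom is the ./:/app bind mount hiding the deps and _build directories that were compiled into the image. A minimal sketch of the web service, assuming the layout above, that keeps them in named volumes instead:
services:
  web:
    build:
      context: .
      dockerfile: Dockerfile
      target: build_base
    volumes:
      - ./:/app
      # keep dependencies and build artifacts produced inside the image
      # from being shadowed by the host bind mount (hypothetical fix)
      - deps:/app/deps
      - build:/app/_build
    ports:
      - "80:80"
    command: mix phx.server
volumes:
  deps:
  build: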

Docker-Compose up Failed Because `Service 'nginx' failed to build`

I'm new to Docker and have been trying to troubleshoot this error for a while. I've read similar posts and nothing seems to work.
Full error:
failed to solve with frontend dockerfile.v0: failed to create LLB definition: failed to copy: httpReadSeeker: failed open: could not fetch content descriptor sha256:eff196a3849ad6541fd3afe676113896be214753740e567575bb562986bd2cd4 (application/vnd.docker.distribution.manifest.v1+json) from remote: not found
ERROR: Service 'nginx' failed to build : Build failed
I have three Dockerfiles: one for my React frontend, one for my Django backend, and one for Nginx.
Frontend Dockerfile:
COPY ./react_app/package.json .
RUN apk add --no-cache --virtual .gyp \
python \
make \
g++ \
&& npm install \
&& apk del .gyp
COPY ./react_app .
ARG API_SERVER
ENV REACT_APP_API_SERVER=${API_SERVER}
RUN REACT_APP_API_SERVER=${API_SERVER} \
npm run build
WORKDIR /usr/src/app
RUN npm install -g serve
COPY --from=builder /usr/src/app/build ./build
Backend Dockerfile
###########
# BUILDER #
###########
# pull official base image
FROM python:3.7.9-slim-stretch as builder
# set work directory
WORKDIR /usr/src/app
# set environment variables
ENV PYTHONDONTWRITEBYTECODE 1
ENV PYTHONUNBUFFERED 1
# install dependencies
COPY ./requirements.txt .
RUN pip wheel --no-cache-dir --no-deps --wheel-dir /usr/src/app/wheels -r requirements.txt
#########
# FINAL #
#########
# pull official base image
FROM python:3.7.9-slim-stretch
# installing netcat (nc) since we are using that to listen to postgres server in entrypoint.sh
RUN apt-get update && apt-get install -y --no-install-recommends netcat && \
apt-get autoremove -y && \
apt-get clean && \
rm -rf /var/lib/apt/lists/*
# install dependencies
COPY --from=builder /usr/src/app/wheels /wheels
COPY --from=builder /usr/src/app/requirements.txt .
RUN pip install --no-cache /wheels/*
# set work directory
WORKDIR /usr/src/app
# copy entrypoint.sh
COPY ./entrypoint.sh /usr/src/app/entrypoint.sh
# copy our django project
COPY ./django_app .
# run entrypoint.sh
RUN chmod +x /usr/src/app/entrypoint.sh
ENTRYPOINT ["/usr/src/app/entrypoint.sh"]
Nginx Dockerfile
FROM nginx:1.19.0-alpine
RUN rm /etc/nginx/conf.d/default.conf
COPY nginx.conf /etc/nginx/conf.d
WORKDIR /usr/src/app
I don't know where to go from here. I've tried following 5 or 6 similar Stack Overflow answers and many more GitHub issues, to no avail. Thanks, please let me know.
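Not part of the original question, but since the error complains about a manifest in the legacy application/vnd.docker.distribution.manifest.v1+json format, one workaround sketch (assuming the service is named nginx as above) is to clear the builder cache and retry, optionally falling back to the legacy builder:
# drop any stale build cache that may reference the old manifest
docker builder prune --all
# retry the build with BuildKit temporarily disabled
DOCKER_BUILDKIT=0 COMPOSE_DOCKER_CLI_BUILD=0 docker-compose build nginx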

Can't run EF in a Docker container as non-root

I have the following Dockerfile:
FROM mcr.microsoft.com/dotnet/core/sdk:3.1-buster AS build
WORKDIR /src
COPY ["myapp.database/src/myapp.database.csproj", "myapp.database/"]
COPY ["myapp.database/src/NuGet.config", "myapp.database/"]
RUN dotnet restore "myapp.database/myapp.database.csproj"
COPY myapp.database/src myapp.database
WORKDIR /src/myapp.database
RUN dotnet build "myapp.database.csproj" -c Release -o /app/build
FROM build AS publish
RUN dotnet publish "myapp.database.csproj" -c Release -o /app/publish
FROM build AS final
RUN groupadd -g 500 dotnetuser && \
useradd -r -u 500 -g dotnetuser dotnetuser
RUN dotnet tool install --global dotnet-ef --version 3.1.0
ENV PATH="${PATH}:/root/.dotnet/tools"
WORKDIR /app
COPY --from=publish /app/publish .
ENTRYPOINT ["dotnet", "myapp.database.dll"]
I create a user "dotnetuser" with UID 500. I specify the following in my docker-compose file:
version: '3.4'
services:
  postgres:
    image: postgres:12.1-alpine
    ports:
      - "5432:5432"
    environment:
      POSTGRES_USER: myapp
      POSTGRES_PASSWORD: myapp
      POSTGRES_DB: myapp
    volumes:
      - postgresvolume:/var/lib/postgresql/data
    networks:
      - dockercomposedatabase_default
  myapp.database:
    depends_on:
      - postgres
    user: "500"
    build:
      context: ..
      dockerfile: myapp.database/build/Dockerfile
    environment:
      DOTNET_CLI_HOME: "/tmp/DOTNET_CLI_HOME"
    networks:
      - dockercomposedatabase_default
volumes:
  postgresvolume:
    external: false
networks:
  dockercomposedatabase_default:
    external: true
However, I can only run EF commands from my container if I run the container as root.
If I run as dotnetuser, then I get the following error:
Could not execute because the specified command or file was not found.
Possible reasons for this include:
* You misspelled a built-in dotnet command.
* You intended to execute a .NET Core program, but dotnet-ef does not exist.
* You intended to run a global tool, but a dotnet-prefixed executable with this name could not be found on the PATH.
I've tried various ways to get dotnetuser to run EF commands as non-root, but without any success :(
Even if I install the tools to dotnetuser's home path, I still get permission issues.
How can I run dotnet ef database update if I run the container as non-root?
It turns out that what I wanted to do wasn't possible. That is because EF will try to build even if you ship the compiled binaries, so on immutable images you won't be able to execute the EF tools. https://github.com/dotnet/efcore/issues/13339
However, there is a workaround I found in this blog post mentioned on the GitHub issue: https://mattmillican.com/blog/ef-core-migrations-on-linux
Of course, in my case things were a little different. I changed my Dockerfile and created a migration.sh script.
FROM mcr.microsoft.com/dotnet/core/sdk:3.1-buster AS build
WORKDIR /src
COPY ["myapp.database/src/myapp.database.csproj", "myapp.database/"]
COPY ["myapp.database/src/NuGet.config", "myapp.database/"]
COPY ["myapp.database/src/migration.sh", "myapp.database/"]
RUN dotnet restore "myapp.database/myapp.database.csproj"
COPY myapp.database/src myapp.database
WORKDIR /src/myapp.database
RUN dotnet build "myapp.database.csproj" -c Release -o /app/build
FROM build AS publish
RUN dotnet publish "myapp.database.csproj" -c Release -o /app/publish
RUN chmod +x /app/publish/migration.sh
FROM build AS final
RUN groupadd -g 500 dotnetuser && \
useradd -r -u 500 -g dotnetuser dotnetuser
RUN mkdir /tools
RUN dotnet tool install dotnet-ef --tool-path /tools --version 3.1.0
ENV PATH="${PATH}:/tools:/app"
WORKDIR /app
COPY --chown=dotnetuser:dotnetuser --from=publish /app/publish .
RUN rm -r /src
RUN chown dotnetuser:dotnetuser -R /tools
RUN chown dotnetuser:dotnetuser -R /app
USER dotnetuser
ENTRYPOINT ["migration.sh"]
I used my migration.sh script to execute the migration and then run my myapp.database.dll to apply my fixtures:
#!/bin/sh
# MIGRATION_NAME is an environment variable specifying which migration to migrate to. It allows you to migrate forward and back. Leave it unset to migrate to the latest migration.
# This path is important but may change between upgrades!!!
EF_DLL_PATH=/tools/.store/dotnet-ef/3.1.0/dotnet-ef/3.1.0/tools/netcoreapp3.1/any/tools/netcoreapp2.0/any/ef.dll
# standard compiled files for dotnet
DEPS_FILE=myapp.database.deps.json
RUNTIME_CONFIG=myapp.database.runtimeconfig.json
# assembly name
PROJECT_NAME=myapp.database
cd /app
echo "Executing the migration script..."
dotnet exec --depsfile ${DEPS_FILE} --runtimeconfig ${RUNTIME_CONFIG} "${EF_DLL_PATH}" database update ${MIGRATION_NAME} --context MyAppDbContext --assembly ${PROJECT_NAME}.dll --startup-assembly ${PROJECT_NAME}.dll --root-namespace ${PROJECT_NAME} || {
echo "Could not execute the migration!"
exit 1
}
echo "migration to ${MIGRATION_NAME} complete!"
echo "Run the fixtures..."
dotnet myapp.database.dll
echo "Fixtures applied!"
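For completeness (not part of the original answer), a hedged example of how MIGRATION_NAME could be supplied from the compose file above; the migration name here is hypothetical:
  myapp.database:
    environment:
      DOTNET_CLI_HOME: "/tmp/DOTNET_CLI_HOME"
      # roll forward/back to a specific migration; omit to apply the latest
      MIGRATION_NAME: "20200101000000_InitialCreate"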
Hope this helps someone.

Can't run command from Dockerfile

I'm currently trying to dockerize a .NET Core API on my laptop, but when I build the Dockerfile I run into an issue.
The build fails when it tries to execute commands like choco or wget: it says they are not recognized. I did install them and added them to my environment variables, and they work when I execute them independently in a terminal.
Here is my Dockerfile:
FROM microsoft/dotnet:2.2-sdk AS dotnet-builder
ARG nuget_pat
# Set environment variables
ENV NUGET_CREDENTIALPROVIDER_SESSIONTOKENCACHE_ENABLED true
ENV VSS_NUGET_EXTERNAL_FEED_ENDPOINTS '{"endpointCredentials":[{"endpoint":"https://isirac.pkgs.visualstudio.com/_packaging/RentacarMicroserviceNuget/nuget/v3/index.json","username":"NoRealUserNameAsIsNotRequired","password":"'${nuget_pat}'"}]}'
RUN choco install wget
# Get and install the Artifact Credential provider
RUN wget -O - https://raw.githubusercontent.com/Microsoft/artifacts-credprovider/master/helpers/installcredprovider.sh | bash
# Restore your nugets from nuget.org and your private feed.
# RUN dotnet restore -s "https://isirac.pkgs.visualstudio.com/_packaging/RentacarMicroserviceNuget/nuget/v3/index.json" -s "https://api.nuget.org/v3/index.json" "Suzuki.csproj"
FROM mcr.microsoft.com/dotnet/core/aspnet:3.1-nanoserver-1909 AS base
WORKDIR /app
EXPOSE 80
EXPOSE 443
FROM mcr.microsoft.com/dotnet/core/sdk:3.1-nanoserver-1909 AS build
WORKDIR "/src/Suzuki/Suzuki/"
COPY ["*.csproj", "./"]
# COPY --from=nuget-config NuGet.config ./
RUN dotnet restore --interactive "Suzuki.csproj" -s "https://isirac.pkgs.visualstudio.com/_packaging/RentacarMicroserviceNuget/nuget/v3/index.json" -s "https://api.nuget.org/v3/index.json"
COPY . .
WORKDIR "/src/Suzuki/Suzuki/"
RUN dotnet build "Suzuki.csproj" -c Release -o /app/build
FROM build AS publish
RUN dotnet publish "Suzuki.csproj" -c Release -o /app/publish
FROM base AS final
WORKDIR /app
COPY --from=publish /app/publish .
ENTRYPOINT ["dotnet", "Suzuki.dll"]
Does anyone have a clue?
Thanks
Hi and welcome to Stack Overflow!
I can see some issues with your Dockerfile, and the first one should be related to your error: you use choco but you didn't install it first.
You could add a RUN instruction to install it:
RUN powershell -Command \
    iex ((new-object net.webclient).DownloadString('https://chocolatey.org/install.ps1')); \
    choco feature disable --name showDownloadProgress
Another thing I see is how you use Docker's multi-stage build. It's not a bug or anything like that, but it could be simpler and easier to read.
Every time you add a FROM instruction, you start a new image, and you can copy files from previous images.
In your Dockerfile, you have one stage that is not really used:
FROM mcr.microsoft.com/dotnet/core/aspnet:3.1-nanoserver-1909 AS base
WORKDIR /app
EXPOSE 80
EXPOSE 443
In this stage you don't copy anything and don't run anything. You reuse it later, but a reader has to scroll back up to find it and may miss it. Below, I merge it with the final stage.
Another stage I don't understand:
FROM build AS publish
RUN dotnet publish "Suzuki.csproj" -c Release -o /app/publish
Why not run the command directly in the build stage?
Your files are available in the publish stage because it starts from build, which already has them, but in this case you can simply keep using build directly.
And finally, the last stage:
FROM base AS final
WORKDIR /app
COPY --from=publish /app/publish .
ENTRYPOINT ["dotnet", "Suzuki.dll"]
If we use the base stage declared previously, you just add some instructions to it and end up with a duplicate: WORKDIR is declared in two places with the same value. I think the best approach is to have a final stage that merges the two, like this:
FROM mcr.microsoft.com/dotnet/core/aspnet:3.1-nanoserver-1909
EXPOSE 80
EXPOSE 443
WORKDIR /app
COPY --from=publish /app/publish .
ENTRYPOINT ["dotnet", "Suzuki.dll"]
You don't need to name it since you won't use it afterwards.
So, to sum up, this is the Dockerfile I would propose:
FROM microsoft/dotnet:2.2-sdk AS dotnet-builder
ARG nuget_pat
# Set environment variables
ENV NUGET_CREDENTIALPROVIDER_SESSIONTOKENCACHE_ENABLED true
ENV VSS_NUGET_EXTERNAL_FEED_ENDPOINTS '{"endpointCredentials":[{"endpoint":"https://isirac.pkgs.visualstudio.com/_packaging/RentacarMicroserviceNuget/nuget/v3/index.json","username":"NoRealUserNameAsIsNotRequired","password":"'${nuget_pat}'"}]}'
RUN powershell -Command \
    iex ((new-object net.webclient).DownloadString('https://chocolatey.org/install.ps1')); \
    choco feature disable --name showDownloadProgress; \
    choco install wget
# Get and install the Artifact Credential provider
RUN wget -O - https://raw.githubusercontent.com/Microsoft/artifacts-credprovider/master/helpers/installcredprovider.sh | bash
# Restore your nugets from nuget.org and your private feed.
# RUN dotnet restore -s "https://isirac.pkgs.visualstudio.com/_packaging/RentacarMicroserviceNuget/nuget/v3/index.json" -s "https://api.nuget.org/v3/index.json" "Suzuki.csproj"
WORKDIR "/src/Suzuki/Suzuki/"
COPY *.csproj ./
# COPY --from=nuget-config NuGet.config ./
RUN dotnet restore --interactive Suzuki.csproj -s "https://isirac.pkgs.visualstudio.com/_packaging/RentacarMicroserviceNuget/nuget/v3/index.json" -s "https://api.nuget.org/v3/index.json"
COPY . .
RUN dotnet build "Suzuki.csproj" -c Release -o /app/build
RUN dotnet publish "Suzuki.csproj" -c Release -o /app/publish
FROM mcr.microsoft.com/dotnet/core/aspnet:3.1-nanoserver-1909
EXPOSE 80
EXPOSE 443
WORKDIR /app
COPY --from=dotnet-builder /app/publish .
ENTRYPOINT ["dotnet", "Suzuki.dll"]
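Not part of the original answer, but since the Dockerfile expects the nuget_pat build argument, a hedged usage example (the token value and image tag are placeholders) would be:
# pass the private-feed personal access token at build time
docker build --build-arg nuget_pat=<personal-access-token> -t suzuki-api .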

CircleCI 2.0 testing with docker-compose and code checkout

This is my circle.yml:
version: 2
jobs:
  build:
    working_directory: /app
    docker:
      - image: docker:stable-git
    steps:
      - checkout
      - setup_remote_docker
      - run:
          name: Install dependencies
          command: |
            apk add --no-cache py-pip bash
            pip install docker-compose
      - run:
          name: Start service containers and run tests
          command: |
            docker-compose -f docker-compose.test.yml up -d db es redis
            docker-compose run web bash -c "cd myDir && ./manage.py test"
This works fine in that it brings up my service containers (db, es, redis) and builds a new image for my web container. However, my working code is not inside the freshly built image (so "cd myDir" always fails).
I figured the following lines in my Dockerfile would make my code available when it's built, but it appears it doesn't work like that:
ENV APPLICATION_ROOT /app/
RUN mkdir -p $APPLICATION_ROOT
WORKDIR $APPLICATION_ROOT
ADD . $APPLICATION_ROOT
What am I doing wrong and how can I make my code available inside my test container?
Thanks,
Use COPY. Your Dockerfile should look something like this:
FROM <base image>
COPY . /opt/app
WORKDIR /opt/app
# ... more commands ...
ENTRYPOINT ["<command>"]
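For context (an assumption, not stated in the original answer): with CircleCI's setup_remote_docker, volume mounts from the job's filesystem are generally not available to the remote Docker engine, so the checked-out code has to be baked into the image via COPY at build time. A minimal sketch of the web service in docker-compose.test.yml that relies on that build rather than a bind mount (service names follow the question):
services:
  web:
    # build the image from the checked-out code so COPY picks it up
    build:
      context: .
      dockerfile: Dockerfile
    depends_on:
      - db
      - es
      - redis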