I'm trying to run a database in Docker together with a Python script that stores MQTT messages in it. This gave me the idea to use Docker Compose, since it seemed logical that the two belong together. The issue I'm having is that the containers do run, but nothing gets stored in the database.
When I run my script locally it does store messages, so my hunch is that the Compose file is not correct.
Is this the correct way to compose a Python script which stores messages in a DB together with the database itself (with a .js file for the credentials)? Any feedback would be appreciated!
version: '3'
services:
  storing_script:
    build:
      context: .
      dockerfile: Dockerfile
    depends_on: [mongo]
  mongo:
    image: mongo:latest
    environment:
      MONGO_INITDB_ROOT_USERNAME: xx
      MONGO_INITDB_ROOT_PASSWORD: xx
      MONGO_INITDB_DATABASE: motionDB
    volumes:
      - ${PWD}/mongo-data:/data/db
      - ./mongo-init.js:/docker-entrypoint-initdb.d/mongo-init.js:ro
    ports:
      - 27018:27018
    restart: unless-stopped
The Dockerfile I'm using to build:
# set base image (host OS)
FROM python:3.8-slim
# set the working directory in the container
WORKDIR /code
# copy the dependencies file to the working directory
COPY requirements.txt .
# install dependencies
RUN pip install -r requirements.txt
# copy the content of the local src directory to the working directory
COPY src/ .
# command to run on container start
CMD [ "python", "./main.py" ]
I think this may be due to user permissions.
What I did for my docker-compose deployment is also mount the passwd file after creating a mongodb user:
volumes:
  - /etc/passwd:/etc/passwd:ro
This was the most straightforward solution that worked for me.
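A minimal sketch of how that could look in the compose file above (the user: line and the mongodb user name are my assumptions; the idea is that the script container runs as a user created on the host, and the read-only passwd mount lets the container resolve that user by name):

services:
  storing_script:
    build:
      context: .
      dockerfile: Dockerfile
    user: "mongodb"
    volumes:
      # read-only, so the container can look up the host-created user
      - /etc/passwd:/etc/passwd:ro
    depends_on: [mongo]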
Related
I have created a fresh SilverStripe project using Composer, and I want to get my containers up and running via docker-compose up.
I have written a very basic Dockerfile:
FROM brettt89/silverstripe-web:7.4-apache
ENV DOCUMENT_ROOT /var/www/html/public
COPY . $DOCUMENT_ROOT
WORKDIR $DOCUMENT_ROOT
RUN chown www-data:www-data $DOCUMENT_ROOT
USER www-data
as well as a simple Compose YAML file which specifies almost all the required services for it to work. Here's what it looks like:
version: "3.8"
services:
silverstripe:
build:
context: .
volumes:
- .:/var/www/html
depends_on:
- database
environment:
- DOCUMENT_ROOT=/var/www/html/public
- SS_TRUSTED_PROXY_IPS=*
- SS_ENVIRONMENT_TYPE=dev
- SS_DATABASE_SERVER=database
- SS_DATABASE_NAME=SS_mysite
- SS_DATABASE_USERNAME=root
- SS_DATABASE_PASSWORD=
- SS_DEFAULT_ADMIN_USERNAME=admin
- SS_DEFAULT_ADMIN_PASSWORD=password
ports:
- 8088:80
database:
image: mysql:5.7
environment:
- MYSQL_ALLOW_EMPTY_PASSWORD=yes
volumes:
- db-data:/var/lib/mysql
volumes:
db-data:
I can get my containers up and running. But when I go to 127.0.0.1:8088/dev/build, it raises a mkdir(): Permission denied warning.
I can see the files in the container have 1000:1000 ownership, which I assume is still root?
So I'm wondering how I can fix this. I have seen examples of setting things up so that containers are created via docker build, but I just want to be able to run things via docker-compose up.
I am using Ubuntu 20.04 and the project was created by $USER.
The quickest trick to fix this for a local environment is to change the www-data user's UID to 1000 (your host user's UID) using the usermod command:
RUN usermod -u 1000 www-data
Then, of course, you can skip the last two lines of your Dockerfile.
You can find more info here:
https://blog.gougousis.net/file-permissions-the-painful-side-of-docker/
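For reference, the adjusted Dockerfile from the question would then look something like this (assuming your host user has UID 1000, as stated in the question):

FROM brettt89/silverstripe-web:7.4-apache
ENV DOCUMENT_ROOT /var/www/html/public
COPY . $DOCUMENT_ROOT
WORKDIR $DOCUMENT_ROOT
# give www-data the same UID as the host user so files in the
# bind mount are writable; the chown and USER lines are no longer needed
RUN usermod -u 1000 www-data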
I'm trying to build a docker-compose file that will spin up my EF Core Web API project, connected to my Postgres database.
I'm having a hard time getting the EF project to connect to the database.
This is what I currently have for my docker-compose.yml:
version: '3.8'
services:
  web:
    container_name: 'mybackendcontainer'
    image: 'myuser/mybackend:0.0.6'
    build:
      context: .
      dockerfile: backend.dockerfile
    ports:
      - 8080:80
    depends_on:
      - postgres
    networks:
      - mybackend-network
  postgres:
    container_name: 'postgres'
    image: 'postgres:latest'
    environment:
      - POSTGRES_USER=username
      - POSTGRES_PASSWORD=MySuperSecurePassword!
      - POSTGRES_DB=MyDatabase
    networks:
      - mybackend-network
    expose:
      - 5432
    volumes:
      - ./db-data/:/var/lib/postgresql/data/
  pgadmin:
    image: dpage/pgadmin4
    ports:
      - 15433:80
    env_file:
      - .env
    depends_on:
      - postgres
    networks:
      - mybackend-network
    volumes:
      - ./pgadmin-data/:/var/lib/pgadmin/
networks:
  mybackend-network:
    driver: bridge
And my web project Dockerfile looks like this:
# Get base SDK image from Microsoft
FROM mcr.microsoft.com/dotnet/sdk:6.0 AS build-env
WORKDIR /app
# Copy the CSPROJ file and restore any dependencies (via NUGET)
COPY *.csproj ./
RUN dotnet restore
# Copy the project files and build our release
COPY . ./
RUN dotnet publish -c Release -o out
# Generate runtime image - do not include the whole SDK to save image space
FROM mcr.microsoft.com/dotnet/aspnet:6.0
WORKDIR /app
EXPOSE 80
COPY --from=build-env /app/out .
ENTRYPOINT ["dotnet", "MyBackend.dll"]
And my connection string looks like this:
User ID=bootcampdb;Password=MySuperSecurePassword!;Server=postgres;Port=5432;Database=MyDatabase;Integrated Security=true;Pooling=true;
Currently I have two problems:
I'm getting Npgsql.PostgresException (0x80004005): 57P03: the database system is starting up when I do docker-compose up. I tried to add a healthcheck to my postgres db but that did not work. When I go to my Docker Desktop app and start my backend again, that message goes away and I get my second problem...
Secondly, after the DB has started it says: FATAL: password authentication failed for user "username". It looks like it's not creating my user for the database. I even changed it to not use .env files and put the values directly in my docker-compose file, but it's still not working. I've tried docker-compose down -v to ensure my volumes get deleted.
Sorry if these are silly questions; I'm still new to containerization and trying to get this to work.
Any help will be appreciated!
Problem 1: Having depends_on only means that docker-compose will wait until your postgres container is started before it starts the web container. The postgres container needs some time to get ready to accept connections and if you attempt to connect before it's ready, you get the error you're seeing. You need to code your backend in a way that it'll wait until Postgres is ready by retrying the connection with a delay.
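If you'd rather not touch the backend code, a healthcheck on the postgres service combined with the long depends_on syntax can hold the web container back until Postgres actually accepts connections. A minimal sketch, assuming a docker-compose version that supports depends_on conditions (service and credential names taken from the question):

  postgres:
    image: 'postgres:latest'
    healthcheck:
      # pg_isready ships with the postgres image and exits 0 once
      # the server accepts connections
      test: ["CMD-SHELL", "pg_isready -U username -d MyDatabase"]
      interval: 5s
      timeout: 5s
      retries: 10
  web:
    depends_on:
      postgres:
        condition: service_healthy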
Problem 2: Postgres only creates the user and database if no database already exists. You probably have an existing database in ./db-data/ on the host. Try deleting ./db-data/ and Postgres should create the user and database using the environment variables you've set.
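For local development, a reset along these lines should force Postgres to re-run its initialization (destructive: it deletes the local database files):

docker-compose down
rm -rf ./db-data
docker-compose up --build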
I have the following docker-compose file
version: '3'
services:
  web:
    container_name: chatt-friends-api
    volumes:
      - .:/usr/src/appDev
    ports:
      - 3000:3000
    env_file: docker/web-Dockerfile.env
    build:
      context: .
      dockerfile: docker/web-Dockerfile.yml
with the following directory structure
Now when I run docker-compose up, move into the chatt-friends-api container, and navigate into the appDev folder, I see two folders inside: docker and node_modules. I have no idea why this is. For your reference, this is what web-Dockerfile.yml looks like. I'm copying the whole src folder there anyway, but was trying to see if the volumes get mounted properly as well.
FROM node:latest
# Create app directory
WORKDIR /usr/src/app
# Install app dependencies
# A wildcard is used to ensure both package.json AND package-lock.json are copied
# where available (npm#5+)
COPY package*.json ./
RUN npm install
COPY . .
EXPOSE 3000
CMD [ "npm", "start" ]
I am going to post the answer here in case someone else has this problem. I was using VirtualBox on Windows 8, and this has to do with shared folders. You need to share the folder and map it to a path Unix understands.
The answer is explained here quite well
Docker volumes mounting on Windows 8 is not working
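For reference, sharing a host folder with the VirtualBox VM looks roughly like this (default is the usual docker-machine VM name; the share name and host path here are hypothetical placeholders):

VBoxManage sharedfolder add default --name "project" --hostpath "C:\path\to\project" --automount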
I have built a RESTful API web service using the Flask framework, with Redis as the main database, MongoDB as a backup store, and Celery as a task queue to store data into MongoDB in the background.
Then I dockerized my application using docker-compose. Here is my docker-compose.yml:
version: '3'
services:
  web:
    build: .
    ports:
      - "5000:5000"
    volumes:
      - .:/app
  redis:
    image: "redis:alpine"
    ports:
      - "6379:6379"
  mongo:
    image: "mongo:3.6.5"
    ports:
      - "27017:27017"
    environment:
      MONGO_INITDB_DATABASE: syncapp
Here is my Dockerfile:
# base image
FROM python:3.5-alpine
MAINTAINER xhoix <145giakhang@gmail.com>
# copy just the requirements.txt first to leverage Docker cache
# install all dependencies for Python app
COPY ./requirements.txt /app/requirements.txt
WORKDIR /app
# install dependencies in requirements.txt
RUN pip install -r requirements.txt
# copy all content to work directory /app
COPY . /app
# specify the port number the container should expose
EXPOSE 5000
# run the application
CMD ["python", "/app/app.py"]
After running docker-compose up, the app server, Redis, and Mongo containers all run fine. But when I use Postman or curl to call the API, for example http://127.0.0.1:5000/sync/api/v1.0/users, which should return all users in JSON format, the result is Could not get any response: There was an error connecting to http://127.0.0.1:5000/sync/api/v1.0/users.
I have no idea why this happens.
Thanks for any help and suggestion!
I found the cause of the issue:
After an hour of debugging, it turned out that I only needed to change the app host to 0.0.0.0. The server was binding to 127.0.0.1, which is only reachable from inside the container; binding to 0.0.0.0 makes it reachable through the published port. When I run docker-compose ps, the PORTS column of each container has the format 0.0.0.0:<port> -> <port>. I'm not sure this fully explains the issue, but I did it and the problem is solved.
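A minimal sketch of the change, assuming the app is started through Flask's built-in server in app.py:

# app.py
from flask import Flask

app = Flask(__name__)

if __name__ == '__main__':
    # 0.0.0.0 binds to all interfaces, so the port published by
    # docker-compose ("5000:5000") is reachable from the host
    app.run(host='0.0.0.0', port=5000)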
If the operating system is Linux, use:
ifconfig -a
If the operating system is Windows, use:
ipconfig /all
Then check the interface named docker (or a similar virtualization interface) and use its IPv4/inet address.
Or just use the docker command:
docker network inspect bridge
Then use the gateway IP listed under IPAM.
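For example, the gateway can be pulled out directly with a Go template (on a default installation this typically prints 172.17.0.1, though that is not guaranteed):

docker network inspect bridge --format '{{range .IPAM.Config}}{{.Gateway}}{{end}}'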
I'm running HapiJS via docker-compose 2+.
.env
NODE_VIEWS_PATH=../
NODE_PUBLIC_PATH=../
MONGODB_URI=mongodb://127.0.0.1:27017/mahrio
WEB_DOMAIN=http://127.0.0.1:6085
Deep down somewhere I am configuring the HapiJS settings via these .env files, but I understand I need to make some changes for Docker. No problem, I made a Docker-specific version:
docker.env
NODE_VIEWS_PATH=../
NODE_PUBLIC_PATH=../
MONGODB_URI=mongodb://mongo:27017/mahrio
WEB_DOMAIN=http://0.0.0.0:6085
I've tried 0.0.0.0 and 127.0.0.1; neither works.
I can see everything seems to work, however when I go to localhost:6085 I get no response:
127.0.0.1 didn’t send any data.
Dockerfile
FROM node:carbon
# Create app directory
RUN mkdir -p /usr/src/mahrio
WORKDIR /usr/src/mahrio
COPY package*.json /usr/src/mahrio
RUN npm install
# If you are building your code for production
# RUN npm install --only=production
COPY . /usr/src/mahrio
EXPOSE 6085
CMD ["npm", "start"]
docker-compose.yml
version: "2"
services:
app:
build: .
container_name: mahrio
depends_on:
- mongo
env_file:
- docker.env
ports:
- "6085:6085"
restart: always
mongo:
container_name: mongo
image: mongo
volumes:
- ./tmp:/data/db
ports:
- "27017:27017"
Any ideas? No errors are coming from Node.js, everything looks okay in the console, and I know it works just fine outside Docker.
Edit: added the docker ps output
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
25e7a4c3f350 mahriomedium_app "npm start" 24 hours ago Up About a minute 0.0.0.0:6085->6085/tcp mahrio
c8d691777aa0 mongo "docker-entrypoint..." 3 days ago Up About a minute 0.0.0.0:27017->27017/tcp mongo
docker logs
> mahrio-medium#0.0.1 start /usr/src/mahrio
> node server/index.js
Running Development!
MongoDB Config...
Server running at: http://127.0.0.1:6085
MongoDB connected!
db connection opened
It turned out WEB_DOMAIN was the wrong env var.
The right var to set to 0.0.0.0 is NODE_URI.
All works now.
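So the working docker.env would look something like this (assuming the app reads its bind address from NODE_URI, as described above; the remaining values are unchanged from the question):

NODE_VIEWS_PATH=../
NODE_PUBLIC_PATH=../
MONGODB_URI=mongodb://mongo:27017/mahrio
NODE_URI=0.0.0.0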