I want to run my image, which is a BERT model served with FastAPI, in a way that uses the GPU, but it fails to use it. Is there anything wrong with the yml file?
version: '3.9'
services:
  fastapi:
    image: bert-with-fastapi:latest
    build: .
    ports:
      - 8000:8000
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              capabilities: [gpu]
    command: serve
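For comparison, the form shown in Docker's GPU documentation adds an explicit count (or device_ids) to the reservation and assumes the NVIDIA Container Toolkit is installed and working on the host. A sketch of the same file along those lines (not a guaranteed fix):

# sketch: requires the NVIDIA Container Toolkit on the host
# sanity check outside Compose: docker run --rm --gpus all <any CUDA base image> nvidia-smi
version: '3.9'
services:
  fastapi:
    image: bert-with-fastapi:latest
    build: .
    ports:
      - 8000:8000
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              count: all              # or device_ids: ['0'] for one specific GPU
              capabilities: [gpu]
    command: serve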
General Information:
OS: Ubuntu 20.04 LTS
Docker: v20.10.6
docker-compose: v1.29.2
Soundcard: Steinberg UR22mkII
Drivers: ALSA
We are developing a web service to record audio signals and display various properties and analyses for anomaly detection. For some analyses larger window sizes are necessary, but some real-time plots are also included. The real-time plots are done via JavaScript (the p5 module), while everything else is processed via Flask and Python and visualized via Grafana.
We have now encountered the problem that these two different clients cannot access the same audio device at the same time. On the host system this can be solved with the dsnoop plugin from asoundrc (https://www.alsa-project.org/wiki/Asoundrc), but so far we have not been able to implement this functionality in the Docker environment.
We have already tried to tunnel the virtual audio devices via the docker-compose file, but without success (the compose file is enclosed). The ALSA drivers inside the container are installed correctly. We suspect that it has something to do with the setup of the internal Docker environment, but we are stuck at this point.
We are grateful for any tips or hints!
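As a point of reference, a dsnoop configuration of the kind mentioned above could be bind-mounted into the webapp container as /etc/asound.conf. This is only a sketch: it assumes the UR22 shows up as hw:1,0 inside the container (check with arecord -l), and the compose fragment at the end shows one possible way to mount it.

# asound.conf (sketch) - allows several capture clients to share the same card
pcm.dsnooped {
    type dsnoop
    ipc_key 1024            # any unique integer
    slave {
        pcm "hw:1,0"        # adjust to the card/device reported by arecord -l
    }
}

# optional: route the default capture device through dsnoop
pcm.!default {
    type asym
    playback.pcm "hw:1,0"
    capture.pcm "dsnooped"
}

# possible addition to the webapp service in the compose file below
    volumes:
      - ./asound.conf:/etc/asound.conf:ro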
Docker-Compose File:
version: "3.8"
services:
webapp:
build: .
restart: always
depends_on:
- influxdb
- grafana
ports:
- 5000:5000
volumes:
- ./:/aad
devices:
- /dev/snd:/dev/snd
environment:
# output gets written to the docker-compse console without buffering it
- PYTHONUNBUFFERED=1
logging:
driver: "json-file"
options:
max-size: "200k"
max-file: "10"
influxdb:
image: influxdb
restart: always
ports:
- 8086:8086
volumes:
# mount volumes to store data and configuration files. Dirs are createt if necessary
- ./influxdb/data:/var/lib/influxdb2
- ./influxdb/config:/etc/influxdb2
# mount script to be executed (only) after initial setup is done
- ./influxdb/scripts:/docker-entrypoint-initdb.d
environment:
# setup of database is only executet if no boltdb file is found in the specified path so the container with influx can be rebooted same as once setup
DOCKER_INFLUXDB_INIT_USERNAME: aad
DOCKER_INFLUXDB_INIT_PASSWORD: .......
DOCKER_INFLUXDB_INIT_ORG: aaddev
DOCKER_INFLUXDB_INIT_BUCKET: training
DOCKER_INFLUXDB_INIT_ADMIN_TOKEN: .......
DOCKER_INFLUXDB_INIT_MODE: setup
logging:
driver: "json-file"
options:
max-size: "200k"
max-file: "10"
grafana:
image: grafana/grafana
# port mapping only need for easieser debugging
ports:
- 3000:3000
restart: always
depends_on:
- influxdb
volumes:
- grafana-storage:/var/lib/grafana
- ./grafana/provisioning:/etc/grafana/provisioning
environment:
GF_SECURITY_ADMIN_USER: aad
GF_SECURITY_ADMIN_PASSWORD: .......
GF_PATHS_CONFIG: /etc/grafana/grafana.ini
GF_USERS_DEFAULT_THEME: light
GF_AUTH_ANONYMOUS_ENABLED: "True"
GF_SECURITY_ALLOW_EMBEDDING: "True"
GF_AUTH_ANONYMOUS_ORG_NAME: Main Org.
GF_AUTH_ANONYMOUS_ORG_ROLE: Viewer
GF_DASHBOARDS_MIN_REFRESH_INTERVAL: 1s
GF_AUTH_BASIC_ENABLED: "True"
GF_DISABLE_LOGIN_FORM: "True"
logging:
driver: "json-file"
options:
max-size: "200k"
max-file: "10"
volumes:
grafana-storage:
Python Environment:
name: aad
channels:
  - anaconda
  - conda-forge
  - defaults
dependencies:
  - portaudio=19.6.0=h7b6447c_4
  - flask=1.1.2=pyhd3eb1b0_0
  - librosa=0.8.0=pyh9f0ad1d_0
  - matplotlib=3.3.4=py38h06a4308_0
  - numpy=1.20.1=py38h93e21f0_0
  - pandas=1.2.4=py38h2531618_0
  - pip=21.0.1=py38h06a4308_0
  - pyaudio=0.2.11=py38h7b6447c_2
  - python=3.8.8=hdb3f193_5
  - scikit-learn=0.24.1=py38ha9443f7_0
  - scipy=1.6.2=py38had2a1c9_1
  - tqdm=4.59.0=pyhd3eb1b0_1
  - werkzeug=1.0.1=pyhd3eb1b0_0
  - pip:
      - influxdb-client==1.16.0
      - rx==3.2.0
I made two yml files. When I run docker-compose -f postgresql.yml up, it starts OK,
but when I then run docker-compose -f postgresql2.yml up, the first one exits with code 0.
Is it even possible to run the same image twice?
My main purpose is to run the same web app source twice, each with its own database, on the same server PC
(1 web app source, 2 instances, each with its own DB, on one server, if that is a clearer definition).
Maybe there is a better approach and I am doing and thinking about everything the wrong way.
# This configuration is intended for development purpose, it's **your** responsibility to harden it for production
version: '3.8'
services:
  freshhipster-postgresql:
    image: postgres:13.1
    environment:
      - POSTGRES_USER=FreshHipster
      - POSTGRES_PASSWORD=
      - POSTGRES_HOST_AUTH_METHOD=trust
    # If you want to expose these ports outside your dev PC,
    # remove the "127.0.0.1:" prefix
    ports:
      - 5432:5432
and this one, with no big difference:
postgresql2.yml
# This configuration is intended for development purpose, it's **your** responsibility to harden it for production
version: '3.8'
services:
  freshhipster-postgresql:
    image: postgres:13.1
    container_name: postgres2
    volumes:
      - pgdata:/var/lib/postgresql/data_vol2/
    environment:
      - POSTGRES_USER=FreshHipster
      - POSTGRES_PASSWORD=
      - POSTGRES_HOST_AUTH_METHOD=trust
    # If you want to expose these ports outside your dev PC,
    # remove the "127.0.0.1:" prefix
    ports:
      - 5433:5432
volumes:
  pgdata:
    external: true
Just use another service name, freshhipster-postgresql2, in postgresql2.yml:
version: '3.8'
services:
  freshhipster-postgresql2:
    image: postgres:13.1
    container_name: postgres2
    volumes:
      - pgdata:/var/lib/postgresql/data_vol2/
    environment:
      - POSTGRES_USER=FreshHipster
      - POSTGRES_PASSWORD=
      - POSTGRES_HOST_AUTH_METHOD=trust
    # If you want to expose these ports outside your dev PC,
    # remove the "127.0.0.1:" prefix
    ports:
      - 5433:5432
volumes:
  pgdata:
    external: true
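For context: both files are run from the same directory, so they share the same default Compose project name, and with the same service name the second up replaces the first container. Renaming the service as above avoids this. An alternative sketch is to give each run its own project name with -p (the names app1/app2 here are arbitrary):

docker-compose -p app1 -f postgresql.yml up -d
docker-compose -p app2 -f postgresql2.yml up -d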
I'm building a Dockerfile, but I've run into a problem. It says:
/bin/sh: 1: mongod: not found
My Dockerfile:
FROM mongo:latest
FROM node
RUN mongod
COPY . .
RUN node ./scripts/import-data.js
Here is what happens when docker build runs:
Sending build context to Docker daemon 829.5MB
Step 1/8 : FROM rabbitmq
---> e8261c2af9fe
Step 2/8 : FROM portainer/portainer
---> 00ead811e8ae
Step 3/8 : FROM docker.elastic.co/elasticsearch/elasticsearch:6.5.1
---> 32f93c89076d
Step 4/8 : FROM mongo:latest
---> 5976dac61f4f
Step 5/8 : FROM node
---> b074182f4154
Step 6/8 : RUN mongod
---> Running in 0a4b66a77178
/bin/sh: 1: mongod: not found
The command '/bin/sh -c mongod' returned a non-zero code: 127
Any idea?
The problem is that you are using two FROM instructions, which is referred to as a multi-stage build. The final image will be based on the node image, which doesn't contain the mongo database.
* Edit *
here are more details about what is happening:
FROM mongo:latest
the base image is mongo:latest
FROM node
now the base image is node:latest. The previous image is just standing there...
RUN mongod
COPY . .
RUN node ./scripts/import-data.js
now you run mongod and the other commands in your final image that is based on node (which doesn't contain mongo)
This happens because multiple FROM instructions are meant for multi-stage builds (check the documentation) and NOT for creating one image that contains all of the listed applications.
Multi-stage builds let you delegate parts of the build process to a container's environment without installing the tools locally or shipping them in the final image.
FROM rabbitmq
...some instructions require rabbitmq...
FROM mongo:latest
...some instructions require mongo...
In other words, if you want to create an image with rabbitmq, mongo and the other applications, you have to choose one base image and install the rest manually.
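To illustrate what multi-stage builds are actually for, here is a minimal sketch (the paths, the "build" script and the dist/ output directory are hypothetical):

# build stage: full node image with the build toolchain
FROM node:12 AS builder
WORKDIR /app
COPY package*.json ./
RUN npm ci
COPY . .
RUN npm run build              # assumes the project defines a "build" script

# final stage: only the built output is copied over
FROM node:12-slim
WORKDIR /app
COPY --from=builder /app/dist ./dist
CMD ["node", "dist/index.js"]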
Use docker-compose (https://docs.docker.com/compose/install/) to run the images rather than attempting to build a new image from a collection of existing images. Your docker-compose.yml might look something like:
version: '3.7'
services:
  portainer:
    image: 'portainer/portainer'
    container_name: 'portainer'
    hostname: 'portainer'
    domainname: 'example.com'
    volumes:
      - '/var/run/docker.sock:/var/run/docker.sock'
      - 'portainer_data:/data'
    ports:
      - '9000:9000'
  rabbitmq:
    image: 'rabbitmq'
    container_name: 'rabbitmq'
    hostname: 'rabbitmq'
    domainname: 'example.com'
    volumes:
      - 'rabbitmq_data:/var/lib/rabbitmq'
  elasticsearch:
    image: 'elasticsearch:7.1.1'
    container_name: 'elasticsearch'
    hostname: 'elasticsearch'
    domainname: 'example.com'
    environment:
      - 'discovery.type=single-node'
    volumes:
      - 'elasticsearch_data:/usr/share/elasticsearch/data'
    ports:
      - '9200:9200'
      - '9300:9300'
  node:
    image: 'node:12'
    container_name: 'node'
    hostname: 'node'
    domainname: 'example.com'
    user: 'node'
    working_dir: '/home/node/app'
    environment:
      - 'NODE_ENV=production'
    volumes:
      - './my-app:/home/node/app'
    ports:
      - '3000:3000'
    command: 'npm start'
  mongo:
    image: 'mongo'
    container_name: 'mongo'
    hostname: 'mongo'
    domainname: 'example.com'
    restart: 'always'
    environment:
      - 'MONGO_INITDB_ROOT_USERNAME=root'
      - 'MONGO_INITDB_ROOT_PASSWORD=example'
    volumes:
      - 'mongo_data:/data/db'
volumes:
  portainer_data:
  rabbitmq_data:
  elasticsearch_data:
  mongo_data:
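Assuming this is saved as docker-compose.yml in the project directory, the whole stack can then be started with:

docker-compose up -d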
I see, quite simple.
Step 1: create this Dockerfile:
FROM mongo:latest
Step 2: build an image from this Dockerfile:
docker build . -t my_mongo_build
This is equivalent to docker run ..... mongo:latest, so it is only useful in some strange scenario.
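For completeness, running the image built above would look something like this (the container name and port mapping are just examples):

docker run -d --name my_mongo -p 27017:27017 my_mongo_build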
I am using the docker-compose up command to spin up a few containers on an AWS AMI RHEL 7.6 instance. I observe that whichever containers have a volume mount exit with status Exited (1) immediately after starting, while the remaining containers stay up. I tried using tty: true and stdin_open: true, but it didn't help. Surprisingly, the setup works fine on another instance, which is basically what I am trying to replicate on this new one.
The stopped containers are Fabric v1.2 peers, CAs and the orderer.
The docker-compose.yml file in the root folder where I run the docker-compose up command:
version: '2.1'
networks:
  gcsbc:
    name: gcsbc
services:
  ca.org1.example.com:
    extends:
      file: fabric/docker-compose.yml
      service: ca.org1.example.com
fabric/docker-compose.yml
version: '2.1'
networks:
  gcsbc:
services:
  ca.org1.example.com:
    image: hyperledger/fabric-ca
    environment:
      - FABRIC_CA_HOME=/etc/hyperledger/fabric-ca-server
      - FABRIC_CA_SERVER_CA_NAME=ca-org1
      - FABRIC_CA_SERVER_CA_CERTFILE=/etc/hyperledger/fabric-ca-server-config/ca.org1.example.com-cert.pem
      - FABRIC_CA_SERVER_TLS_ENABLED=true
      - FABRIC_CA_SERVER_TLS_CERTFILE=/etc/hyperledger/fabric-ca-server-config/ca.org1.example.com-cert.pem
    ports:
      - '7054:7054'
    command: sh -c 'fabric-ca-server start -b admin:adminpw -d'
    volumes:
      - ./artifacts/channel/crypto-config/peerOrganizations/org1.example.com/ca/:/etc/hyperledger/fabric-ca-server-config
    container_name: ca_peerorg1
    networks:
      - gcsbc
    hostname: ca.org1.example.com
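A first debugging step, using the service and container names from the files above: check why the container exited, and keep in mind that on RHEL an SELinux-enforcing host is a common reason why bind-mounted host paths are unreadable inside a container.

# show the CA container's logs and exit state
docker-compose logs ca.org1.example.com
docker inspect --format '{{.State.ExitCode}} {{.State.Error}}' ca_peerorg1

# if getenforce reports Enforcing, the bind mount may need an SELinux label, e.g.
#   <host path>:/etc/hyperledger/fabric-ca-server-config:z
getenforce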
I use a docker-compose v3 file to deploy services on a Docker swarm mode cluster.
My services are Elasticsearch and Kibana. I want Kibana to be accessible from outside, while Elasticsearch should only be reachable by Kibana and not be visible or accessible from outside. To achieve this, I created two overlay networks called 'external' and 'elk_only'. I put Elasticsearch on the 'elk_only' network and placed Kibana on both 'elk_only' and 'external'. But things do not work: when I go to localhost:5601 (Kibana's port), I get the message 'localhost refused to connect'.
The command I use to deploy the services is
docker stack deploy --compose-file=elastic-compose.yml elkstack
The content of the elastic-compose.yml file:
version: "3"
services:
elasticsearch:
image: elasticsearch:5.1
expose:
- 9200
networks:
- elk_only
deploy:
restart_policy:
condition: on-failure
kibana:
image: kibana:5.1
ports:
- 5601:5601
volumes:
- ./kibana/kibana.yml:/etc/kibana/kibana.yml
depends_on:
- elasticsearch
networks:
- external
- elk_only
deploy:
restart_policy:
condition: on-failure
networks:
elk_only:
driver: overlay
external:
driver: overlay
The content of kibana.yml is
server.port: 5601
server.host: "0.0.0.0"
elasticsearch.url: "http://elkstack_elasticsearch:9200"
Could you help me to solve this problem and understand what's going wrong? Any help would be appreciated!
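A quick way to see what is actually failing, assuming the stack name elkstack from the deploy command above:

# are the kibana/elasticsearch tasks running, and on which nodes?
docker stack ps elkstack --no-trunc
# what do the services log on startup?
docker service logs elkstack_kibana
docker service logs elkstack_elasticsearch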