I am trying to convert my docker-compose.yml file, which works perfectly fine, so that I can run it in minikube with kubectl.
I have installed minikube, kubectl, and kompose. When I try to run kompose I get the following error:
C:\Users\test\Downloads\get-string-master\get-string-master> .\kompose-windows-amd64.exe -f .\docker-compose.yml up
INFO Build key detected. Attempting to build and push image 'mainapp'
INFO Building image 'mainapp' from directory 'mainapp'
FATA Error while deploying application: k.Transform failed: Unable to build Docker image for service mainapp: open \tmp\kompose-image-build-399841535: The system cannot find the path specified.
Here is my docker compose file:
version: '3'
services:
  mainapp:
    container_name: mainapp
    restart: always
    build: ./mainapp
    image: mainapp
    ports:
      - "8000:8000"
    command: gunicorn -c gunicorn.ini mainapp.application:app
  reverseapp:
    container_name: reverseapp
    restart: always
    build: ./reverseapp
    image: reverseapp
    ports:
      - "8001:8001"
    depends_on:
      - mainapp
    command: gunicorn -c gunicorn.ini reverseapp.application:app
  nginx:
    container_name: nginx
    restart: always
    build: ./nginx
    image: nginx1
    ports:
      - "80:80"
    depends_on:
      - reverseapp
Judging by the exception you have posted:
\tmp\kompose-image-build-399841535: The system cannot find the path specified
I suggest you check that this directory exists and that your user has sufficient permissions to access it.
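On Windows, a rootless path like \tmp appears to be resolved against the current drive, so one thing worth trying (a sketch, assuming your working directory is on C:) is to create that directory yourself and re-run the build:
mkdir C:\tmp
.\kompose-windows-amd64.exe -f .\docker-compose.yml up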
Hi everyone, I am working on a Django project with Docker, and I have a problem connecting the Docker container to a Postgres database. One way to connect is network_mode: host, but I want to connect using a custom network.
version: '3.3'
services:
  web:
    restart: always
    container_name: main
    command:
      - /bin/bash
      - -c
      - |
        python manage.py makemigrations accounts
        python manage.py migrate
        python manage.py runserver
    image: main
    build: .
    volumes:
      - .:/main
    ports:
      - "8000:8000"
    extra_hosts:
      - "dbhost:172.17.0.1"
    networks:
      - backend
networks:
  backend:
volumes:
  static:
If you are on Docker Desktop with WSL2 you can use host.docker.internal; otherwise you need to get the gateway IP from docker network inspect bridge, which is typically 172.17.0.1. Also, to do this you don't want to use a custom network in your compose file, but rather the default one.
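As a sketch of that, assuming Docker Engine 20.10+ (where the special host-gateway value is available), you can map a hostname to the host's gateway via extra_hosts and drop the custom network:
services:
  web:
    # ... image, build, command, ports as before
    extra_hosts:
      # host-gateway is resolved by Docker to the host's gateway IP
      - "dbhost:host-gateway"
    # no networks: entry, so the service joins the compose default network
Postgres on the host must also accept connections on that interface (listen_addresses in postgresql.conf plus a matching pg_hba.conf entry).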
I tried to adjust the indentation for my YAML file but when I run docker-compose, I'm still seeing the "service must be a mapping, not a NoneType." error.
Pasted YAML file here:
version: "3.1" services: postgres-source:
image: postgres:12.6
ports:
- "5439:5432"
environment: - POSTGRES_PASSWORD=postgres postgres-target:
image: postgres:12.6
ports:
- "5440:5432"
environment: - POSTGRES_PASSWORD=postgres command: [ "postgres", "-c", "wal_level=logical" ]
The indentation of your YAML file doesn't look correct. Please try this format instead:
version: "3.1"
services:
postgres-source:
image: postgres:12.6
ports:
- "5439:5432"
environment:
- POSTGRES_PASSWORD=postgres
postgres-target:
image: postgres:12.6
ports:
- "5440:5432"
environment:
- POSTGRES_PASSWORD=postgres
command: [ "postgres", "-c", "wal_level=logical" ]
Warning: I am fairly new to Docker and cloud hosting, so this is likely a dumb question.
I have a local web app which has 3 images associated with it: the app itself, the db, and a phpMyAdmin image. All works well locally, and if I transfer all the files to my DigitalOcean droplet and bring up my containers it works fine there as well, but this is not how I want to deploy, with every file from every library residing in my droplet.
I have been experimenting with creating a docker-machine on my droplet and deploying my containers remotely to it. This seems to work fine, except that my db image does not contain my database and is simply an empty db. I tried to migrate the db in this fashion, which I saw in a tutorial:
docker-compose run --rm web db:create db:migrate
But I got the following error. I assume this is because my dev machine is running Windows 10, not Linux, but I cannot find anywhere what the equivalent command would be for a Windows machine.
Error response from daemon: OCI runtime create failed: container_linux.go:346: starting container process caused "exec: \"db:create\": executable file not found in $PATH": unknown
I know I am probably missing something really simple, but I am having difficulty figuring out how to migrate the data for my db image. Thanks in advance.
UPDATE:
As requested, here is my docker-compose:
version: "3.4"
services:
phpmyadmin:
image: phpmyadmin/phpmyadmin
environment:
- PMA_ARBITRARY=1
- PMA_HOST=db
restart: always
ports:
- 80:80
volumes:
- /sessions
depends_on:
- db
db:
image: mysql:latest
environment:
MYSQL_ROOT_PASSWORD: mypass
MYSQL_DATABASE: mydb
ports:
- "3306:3306"
volumes:
- ./data:/docker-entrypoint-initdb.d
restart: always
web:
depends_on:
- db
build: .
ports:
- "8080:8080"
restart: always
volumes:
data:
UPDATE #2:
I transferred the db file to /docker-entrypoint-initdb.d (I tried this yesterday too but couldn't get it working) and created a new production file, docker-compose-prod.yml. I must still be missing something, though, as the DB is still empty. Below is my new docker-compose-prod.yml:
version: "3.4"
services:
phpmyadmin:
image: phpmyadmin/phpmyadmin
environment:
- PMA_ARBITRARY=1
- PMA_HOST=db
restart: always
ports:
- 80:80
volumes:
- /sessions
depends_on:
- db
db:
image: mysql:latest
environment:
MYSQL_ROOT_PASSWORD: mypass
MYSQL_DATABASE: mydb
ports:
- "3306:3306"
volumes:
- /docker-entrypoint-initdb.d
restart: always
web:
depends_on:
- db
build: .
ports:
- "8080:8080"
restart: always
Your strategy is sound.
Actually, you can take it a step further by automating the Droplet provisioning, e.g. to use a container-oriented OS and access your Compose file. But that's not this question ;-)
I think the fact that you're using Windows is probably not relevant and makes little difference; it may require some answer tweaks, but that's about it.
The challenge is that you need to move (or recreate) the database state on the remote machine. There are several ways that the DB state could be persisted: in-container (not ideal), using volume mounts (good), or otherwise.
Each is "moveable", but it would help if you could add your Compose file to your question so that we can see which approach is being used.
In full disclosure, I'm not familiar with the approach that you referenced, but that does not mean it's inaccurate; I'm just not familiar with it.
Update: docker-entrypoint-initdb.d
See: "Initializing a fresh instance" on MySQL
So, any files within that directory are run to initialize the database container when it's created from the image.
In your Compose file you mount your host's ./data directory into this file. Presumably that directory contains >=1 file that performs your intended initialization.
NB The section volumes: data: at the end of the Compose file appears redundant. You're actually using a host-mounted directory ./data not this volume.
When you run the Compose file on the Droplet, those files aren't present and you'll need to copy them.
The simplest way to do this is to use scp, and there are two alternatives:
Either retain the data directory:
IP=[DROPLET-IP]
scp -r ./data root@${IP}:/data
NB The remote destination is /data, not ./data. You will need to revise the Compose file on the Droplet (!) too: volumes: - /data:/docker-entrypoint-initdb.d
Or move the files directly to the Droplet's /docker-entrypoint-initdb.d:
scp -r ./data root@${IP}:/docker-entrypoint-initdb.d
NB Now there's no need for the volume mapping. You may remove: volumes: - ./data:/docker-entrypoint-initdb.d
Update: repro (works)
I used a tweaked docker-compose.yaml but it's essentially the same:
version: "3.4"
services:
db:
image: mysql:latest
environment:
MYSQL_ROOT_PASSWORD: mypass
MYSQL_DATABASE: mydb
ports:
- "3306:3306"
volumes:
- ${PWD}/docker-entrypoint-initdb.d:/docker-entrypoint-initdb.d
restart: always
adminer:
image: adminer
restart: always
ports:
- 8080:8080
Then I ran mkdir ${PWD}/docker-entrypoint-initdb.d and created a file in it called freddie.sql:
create database if not exists frederik;
use frederik;
create table treats (
  TreatID INT NOT NULL AUTO_INCREMENT,
  TreatName VARCHAR(255) NOT NULL,
  PRIMARY KEY (TreatID));
insert into treats (TreatName)
values
  ("Dried Salmon"),
  ("Meatballs");
Then docker-compose rm --force && docker-compose up
I was able to browse the adminer UI (:8080), log in (root|mypass), and browse the database frederik.
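If you'd rather verify from the command line than through the adminer UI, a quick check (a sketch, assuming the db service name and credentials above) is:
docker-compose exec db mysql -uroot -pmypass -e 'show databases; select * from frederik.treats;'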
I'm new to Docker and am trying to make a composed image consisting of services: nginx and a PostgreSQL database. I'm following the tutorial here: http://www.patricksoftwareblog.com/how-to-use-docker-and-docker-compose-to-create-a-flask-application/
I have been successful up to adding PostgreSQL, where I'm having difficulties and questions.
My docker-compose.yml:
version: '2'
services:
  web:
    restart: always
    build: ./home/admin/
    expose:
      - "8000"
  nginx:
    restart: always
    build: ./etc/nginx
    ports:
      - "80:80"
    volumes:
      - /www/static
    volumes_from:
      - web
    depends_on:
      - web
  data:
    image: postgres:9.6
    volumes:
      - /var/lib/postgresql
    command: "true"
  postgres:
    restart: always
    build: ./var/lib/postgresql
    volumes_from:
      - data
    ports:
      - "5432:5432"
I have included his Docker generator script under /var/lib/postgresql but keep facing ERROR: Dockerfile parse error line 1: unknown instruction: IMPORT when I run docker-compose build.
If I leave in the 'data' section and remove the 'postgres' section in my docker-compose.yml file, my containers seemingly run fine, but I'm unsure whether PostgreSQL is properly running at all. I'm able to GET using curl, but I'm still unsure how to confirm Postgres specifics for a proper environment, and I would appreciate examples on this topic in particular.
I was also wondering whether running my docker-compose containers and then simply running a separate PostgreSQL container could also work, provided the correct ports.
Thank you!
Check the content of your docker-compose.yml for:
- YAML format (see for instance codebeautify.org/yaml-validator)
- EOL or encoding issues
- multi-line instructions
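To answer the "is Postgres properly running" part: you can check readiness and open a psql session inside the container. A minimal sketch, assuming the service is named postgres and the default postgres superuser exists:
docker-compose exec postgres pg_isready
docker-compose exec postgres psql -U postgres -c '\l'
pg_isready reports whether the server accepts connections; psql -c '\l' lists the databases, which confirms the server is up and reachable.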
So I currently can use docker-compose up test, which only runs my database and my testing scripts. I want to be able to use, say, docker-compose up app, or something like that, that runs everything besides testing. That way I'm not running unnecessary containers. I'm not sure if there's a way, but that's what I was wondering. If possible I'd appreciate some links to projects that already do that, and I can figure out the rest. Basically: can I run only certain containers with a single command, without running the others?
YAML:
version: '3'
services:
  webapp:
    build: ./literate-app
    command: nodemon -e vue,js,css start.js
    depends_on:
      - postgres
    links:
      - postgres
    environment:
      - DB_HOST=postgres
    ports:
      - "3000:3000"
    networks:
      - literate-net
  server:
    build: ./readability-server
    command: nodemon -L --inspect=0.0.0.0:5555 server.js
    networks:
      - literate-net
  redis_db:
    image: redis:alpine
    networks:
      - literate-net
  postgres:
    restart: 'always'
    #image: 'bitnami/postgresql:latest'
    volumes:
      - /bitnami
    ports:
      - "5432:5432"
    networks:
      - literate-net
    environment:
      - "FILLA_DB_USER=my_user"
      - "FILLA_DB_PASSWORD=password123"
      - "FILLA_DB_DATABASE=my_database"
      - "POSTGRES_PASSWORD=password123"
    build: './database-creation'
  test:
    image: node:latest
    build: ./test
    working_dir: /literate-app/test
    volumes:
      - .:/literate-app
    command:
      npm run mocha
    networks:
      - literate-net
    depends_on:
      - postgres
    environment:
      - DB_HOST=postgres
networks:
  literate-net:
    driver: bridge
I can run docker-compose up test, which only runs postgres and the tests. But I'd like to be able to just run my app without having to run my testing container.
Edit
Thanks to @ideam for the link.
I was able to create an additional YAML file for just testing.
For those that don't want to look it up, simply create a new YAML file like so:
docker-compose.dev.yml
Replace dev with whatever you like, except override: a file named docker-compose.override.yml is applied automatically by docker-compose up unless otherwise specified.
To run the new file, simply call:
docker-compose -f docker-compose.dev.yml up
The -f flag selects a specific file to run. You can also run multiple files to set up different environments, as shown below.
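For example, combining a base file with the dev file (later files override and extend earlier ones, so the order of the -f flags matters):
docker-compose -f docker-compose.yml -f docker-compose.dev.yml up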
Appreciate the help
docker-compose up <service_name> will start only the service you have specified and its dependencies (those specified in the depends_on option).
You may also specify multiple services in the docker-compose up command:
docker-compose up <service_name> <service_name>
Note: what does it mean to "start the service and its dependencies"?
Usually your production services (containers) are attached to each other via the depends_on chain, so you can start only the last container of the chain. For example, take the following compose file:
version: '3.7'
services:
  frontend:
    image: efrat19/vuejs
    ports:
      - "80:8080"
    depends_on:
      - backend
  backend:
    image: nginx:alpine
    depends_on:
      - fpm
  fpm:
    image: php:7.2
  testing:
    image: hze∂ƒxhbd
    depends_on:
      - frontend
All the services are chained via the depends_on option, while the testing container sits below the frontend. So when you hit docker-compose up frontend, Docker will run fpm first, then backend, then frontend, and it will ignore the testing container, which is not required for running the frontend.
Starting with docker-compose 1.28.0, the new service profiles are made just for that! With profiles you can mark services to be started only in specific profiles:
services:
  webapp:
    # ...
  server:
    # ...
  redis_db:
    # ...
  postgres:
    # ...
  test:
    profiles: ["test"]
    # ...
docker-compose up # start only your app services
docker-compose --profile test up # start app and test services
docker-compose run test # run test service
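The profile can also be selected through an environment variable, which should be equivalent to passing the --profile flag and is convenient in CI:
COMPOSE_PROFILES=test docker-compose up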
Maybe you want to share your docker-compose.yml for a better answer than this.
For reusing docker-compose configurations, have a look at https://docs.docker.com/compose/extends/#example-use-case, which explains how to combine multiple configuration files for different use cases (test, production, etc.).