Run the same docker-compose setup on k8s (Kubernetes)

Is there a way to run the docker-compose app identically on k8s?
Currently the content of my docker-compose.yml file is as follows:
version: "3"
services:
registry:
restart: always
image: registry:2
ports:
- 5000:5000
environment:
REGISTRY_HTTP_TLS_CERTIFICATE: /certs/domain.crt
REGISTRY_HTTP_TLS_KEY: /certs/domain.key
REGISTRY_AUTH: htpasswd
REGISTRY_AUTH_HTPASSWD_PATH: /auth/htpasswd
REGISTRY_AUTH_HTPASSWD_REALM: Registry Realm
volumes:
- /home/app/docker/test/data:/var/lib/registry
- /home/app/docker/test/certs:/certs
- /home/app/docker/test/auth:/auth
nginx:
container_name: "nginx"
build:
context: ./nginx
dockerfile: Dockerfile
ports:
- '80:80'
depends_on:
- node
node:
container_name: "node"
build:
context: ./web
dockerfile: Dockerfile
volumes:
- ./volumes/index.html:/usr/src/node/index.html
expose:
- "3000"
Can you change this to work with k8s?

Try Kompose (GitHub):
kompose convert -f docker-compose.yaml
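Kompose reads the compose file and generates Kubernetes manifests, typically one Deployment and one Service per compose service, which you then deploy with kubectl apply -f. To give a rough idea of the output, a hand-written equivalent of the registry service above might look like this (a sketch only: names are illustrative, and the host-path volumes from the compose file would need to become hostPath volumes or PersistentVolumeClaims):
apiVersion: apps/v1
kind: Deployment
metadata:
  name: registry
spec:
  replicas: 1
  selector:
    matchLabels:
      app: registry
  template:
    metadata:
      labels:
        app: registry
    spec:
      containers:
        - name: registry
          image: registry:2
          ports:
            - containerPort: 5000
          env:
            - name: REGISTRY_HTTP_TLS_CERTIFICATE
              value: /certs/domain.crt
            - name: REGISTRY_HTTP_TLS_KEY
              value: /certs/domain.key
---
apiVersion: v1
kind: Service
metadata:
  name: registry
spec:
  selector:
    app: registry
  ports:
    - port: 5000
      targetPort: 5000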

Not sure if there's an easy way to do that, but I've heard about a tool called Kompose that might help you (I've never tried it).

Related

Docker pgadmin 4 - error: "does not appear to be a valid email address. Please reset the PGADMIN_DEFAULT_EMAIL environment variable"

Please bear with me, I'm rather new to Docker.
I've got the following docker-compose.yaml file from my colleague, who runs this on Windows, apparently without problems:
version: "3.3"
services:
mysql-server:
image: mysql:8.0.19
restart: always
environment:
MYSQL_ROOT_PASSWORD: secret
volumes:
- mysql-data:/var/lib/mysql
ports:
- "33061:33061"
phpmyadmin:
image: phpmyadmin/phpmyadmin:5.1.1
restart: always
environment:
PMA_HOST: mysql-server
PMA_USER: ${PMA_USER}
PMA_PASSWORD: ${PMA_PASSWORD}
UPLOAD_LIMIT: 256M
MAX_EXECUTION_TIME: 0
ports:
- "8080:80"
volumes:
- ./database/config.user.inc.php:/etc/phpmyadmin/config.user.inc.php
postgresdb:
container_name: pg_container
image: postgres:latest
restart: always
ports:
- "54321:54321"
environment:
- POSTGRES_USER=${POSTGRES_USER}
- POSTGRES_PASSWORD=${POSTGRES_PASSWORD}
- POSTGRES_DB=${POSTGRES_DB}
volumes:
- postgres:/var/lib/postgresql/data
pgadmin:
container_name: pgadmin_container
depends_on:
- postgresdb
image: dpage/pgadmin4:5
restart: always
ports:
- "5556:80"
environment:
- PGADMIN_DEFAULT_EMAIL=${PGADMIN_DEFAULT_EMAIL}
- PGADMIN_DEFAULT_PASSWORD=${PGADMIN_DEFAULT_PASSWORD}
volumes:
- pgadmin:/var/lib/pgadmin
web:
build:
context: .
dockerfile: dockerfile-python
command: python3 manage.py runserver 0.0.0.0:8000
container_name: python_myApp
volumes:
- .:/theApp
ports:
- "8000:8000"
depends_on:
- postgresdb
volumes:
mysql-data:
postgres:
pgadmin:
I run it on Linux; my Docker version is 20.10.9, build c2ea9bc.
The problem is that the pgadmin container won't start up. It gives me the following error:
'"server#myapp.de"' does not appear to be a valid email address. Please reset the PGADMIN_DEFAULT_EMAIL environment variable and try again.
The .env file looks like this:
PMA_USER="root"
PMA_PASSWORD="XXXX"
POSTGRES_DB='postgres'
POSTGRES_USER='admin'
POSTGRES_PASSWORD='XXXX'
PGADMIN_DEFAULT_EMAIL="server#myapp.de"
PGADMIN_DEFAULT_PASSWORD="XXXX"
I tried to reset everything by doing a
docker system prune
docker volume prune
but the error persists. What's going wrong here?
Thanks!
You don't need any quotes in env files; Compose passes them through to the container literally, which is exactly why pgadmin sees '"server#myapp.de"', quotes included. Just remove them:
PMA_USER=root
PMA_PASSWORD=XXXX
POSTGRES_DB=postgres
POSTGRES_USER=admin
POSTGRES_PASSWORD=XXXX
PGADMIN_DEFAULT_EMAIL=server#myapp.de
PGADMIN_DEFAULT_PASSWORD=XXXX
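To verify what Compose will actually pass to the container before starting anything, you can render the resolved configuration, e.g.:
docker compose config | grep PGADMIN_DEFAULT_EMAIL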

AWS ECS "invalid reference format" on multicontainer app deploy

I cannot run compose up in the ECS context for my multi-container app; in the default context it runs fine. Where am I going wrong?
$ docker compose up
mysql:8.0.23 resolved to docker.io/library/mysql:8.0.23@sha256:...
mongo:4.4.3-bionic resolved to docker.io/library/mongo:4.4.3-bionic@sha256:...
invalid reference format
My docker-compose file:
version: '3.8'
services:
  app:
    container_name: tsb-app
    build:
      context: ..
      dockerfile: .docker/Dockerfile
    depends_on:
      - mongo
      - mysql
    volumes:
      - tsb_modules:/node_modules
    expose:
      - "80"
  mongo:
    container_name: tsb-mongo
    image: mongo:4.4.3-bionic
    environment:
      MONGO_INITDB_ROOT_USERNAME: asdasdasd
      MONGO_INITDB_ROOT_PASSWORD: asdasdasd
    volumes:
      - tsb_data:/data/db
  mysql:
    container_name: tsb-mysql
    image: mysql:8.0.23
    environment:
      MYSQL_ROOT_PASSWORD: asdasdasd
      MYSQL_USER: asdasdasd
      MYSQL_PASSWORD: asdasdasd
    volumes:
      - tsb_logs:/var/lib/
      # Create the required schemas and tables on first start
      - ./setup.sql:/docker-entrypoint-initdb.d/setup.sql
    ports:
      - 3300:3306
volumes:
  tsb_data:
  tsb_logs:
  tsb_modules:
My .docker/Dockerfile:
FROM node:15.8.0-alpine3.10
WORKDIR /app
COPY package.json .yarnrc yarn.lock prod.env ./.docker/setup.sql ./
RUN yarn install --production --frozen-lockfile
COPY build/ ./build
CMD node build/index.js

docker compose phpmyadmin php_network_getaddresses: getaddrinfo failed: Temporary failure in name resolution

I am trying to set up a Docker stack with Laravel, MariaDB, nginx, Redis and phpMyAdmin. The Laravel webspace works fine, but if I switch to port 10081, as configured in the docker-compose.yml, I am not able to log in with the root account.
It says "mysqli::real_connect(): php_network_getaddresses: getaddrinfo failed: Temporary failure in name resolution".
I already tried to configure a "my-network" which links all of the containers, but if I understand Docker correctly there is already a "default" network which does this. It didn't change the error message anyway.
Here is my full docker-compose file:
version: "3.8"
services:
redis:
image: redis:6.0-alpine
expose:
- "6380"
db:
image: mariadb:10.4
ports:
- "3307:3306"
environment:
MYSQL_USERNAME: root
MYSQL_ROOT_PASSWORD: secret
MYSQL_DATABASE: laravel
volumes:
- db-data:/var/lib/mysql
nginx:
image: nginx:1.19-alpine
build:
context: .
dockerfile: ./docker/nginx.Dockerfile
restart: always
depends_on:
- php
ports:
- "10080:80"
networks:
- default
environment:
VIRTUAL_HOST: cockpit.example.de
volumes:
- ./docker/nginx.conf:/etc/nginx/nginx.conf:ro
- ./public:/app/public:ro
php:
build:
target: dev
context: .
dockerfile: ./docker/php.Dockerfile
working_dir: /app
env_file: .env
restart: always
expose:
- "9000"
depends_on:
- composer
- redis
- db
volumes:
- ./:/app
- ./docker/www.conf:/usr/local/etc/php-fpm.d/www.conf:ro
links:
- db:mysql
phpmyadmin:
image: phpmyadmin/phpmyadmin:latest
ports:
- 10081:80
restart: always
environment:
PMA_HOST : db
MYSQL_USERNAME: root
MYSQL_ROOT_PASSWORD: secret
depends_on:
- db
#user: "109:115"
links:
- db:mysql
node:
image: node:12-alpine
working_dir: /app
volumes:
- ./:/app
command: sh -c "npm install && npm run watch"
composer:
image: composer:1.10
working_dir: /app
#environment:
#SSH_AUTH_SOCK: /ssh-auth.sock
volumes:
- ./:/app
#- "$SSH_AUTH_SOCK:/ssh-auth.sock"
- /etc/passwd:/etc/passwd:ro
- /etc/group:/etc/group:ro
command: composer install --ignore-platform-reqs --no-scripts
volumes:
db-data:
Make sure you have defined all attributes correctly for the phpmyadmin container; in this case the network definition was missing:
phpmyadmin:
  image: phpmyadmin/phpmyadmin:latest
  container_name: phpmyadmin
  restart: always
  ports:
    # 8080 is the host port and 80 is the container port
    - 8080:80
  environment:
    - PMA_ARBITRARY=1
    - PMA_HOST=mysql
    - MYSQL_USERNAME=root
    - MYSQL_ROOT_PASSWORD=secret
  depends_on:
    - mysql
  networks:
    # define your network where all containers are connected to each other
    - laravel
  volumes:
    # define the directory path where you store phpmyadmin's persistent data
    # and config files
    - ./docker/phpmyadmin
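For the laravel network referenced above to exist, it also has to be declared at the top level of the compose file, and the database service has to join it as well; a minimal sketch (the mysql service name matches this answer's depends_on):
services:
  mysql:
    # ... image, environment, volumes as usual
    networks:
      - laravel
networks:
  laravel:
    driver: bridge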
Maybe your container cannot start because its volume contains incompatible data; this can happen if you downgrade the version of the mysql or mariadb image.
You can resolve the problem by removing the volume and importing the database again. You may want to create a backup first.
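A sketch of that procedure, using the service and volume names from the compose file above (the exact volume name carries your project-directory prefix, so check docker volume ls first):
docker compose exec db mysqldump -uroot -psecret laravel > backup.sql   # backup while the old image still runs
docker compose down                                                     # stop and remove the containers
docker volume rm myproject_db-data                                      # remove the incompatible volume
docker compose up -d                                                    # recreate with a fresh volume, then re-import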

Docker compose for MongoDB ReplicaSet

I have been trying to dockerize my Spring Boot application, which depends on Redis, Kafka and MongoDB.
Following is the docker-compose.yml:
version: '3.3'
services:
  my-service:
    image: my-service
    build:
      context: ../../
      dockerfile: Dockerfile
    restart: always
    container_name: my-service
    environment:
      KAFKA_CONFLUENT_BOOTSTRAP_SERVERS: kafka:9092
      MONGO_HOSTS: mongodb:27017
      REDIS_HOST: redis
      REDIS_PORT: 6379
    volumes:
      - /private/var/log/my-service/:/var/log/my-service/
    ports:
      - 8080:8090
      - 1053:1053
    depends_on:
      - redis
      - kafka
      - mongodb
  portainer:
    image: portainer/portainer
    command: -H unix:///var/run/docker.sock
    restart: always
    container_name: portainer
    ports:
      - 9000:9000
      - 9001:8000
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
  redis:
    image: redis
    container_name: redis
    restart: always
    ports:
      - 6379:6379
  zookeeper:
    image: wurstmeister/zookeeper
    ports:
      - 2181:2181
    container_name: zookeeper
  kafka:
    image: wurstmeister/kafka
    ports:
      - 9092:9092
    container_name: kafka
    environment:
      KAFKA_CREATE_TOPICS: "cms.entity.change:1:1" # topic:partition:replicas
      KAFKA_ADVERTISED_HOST_NAME: kafka
      KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
      KAFKA_ADVERTISED_PORT: 9092
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
    depends_on:
      - "zookeeper"
  mongodb:
    image: mongo:latest
    container_name: mongodb
    environment:
      MONGO_INITDB_ROOT_USERNAME:
      MONGO_INITDB_ROOT_PASSWORD:
    ports:
      - 27017:27017
    volumes:
      - ./data/db:/data/db
The issue is that this starts up mongo as a STANDALONE instance. So the APIs in my service that persist data are failing as mongo needs to start as a REPLICA_SET.
How can I edit my docker-compose file to start mongo as a REPLICA_SET?
I had the same issue and ended up on this Stack Overflow post.
We had a requirement to use the official MongoDB Docker image (https://hub.docker.com/_/mongo) and couldn't use Bitnami as suggested in Vahid's answer.
This answer isn't exactly what the question asked for, and it comes six months too late, but it should give directions to anyone who needs a throwaway standalone MongoDB replica-set instance for integration-testing purposes. If you need to use it in production, you'll have to provide environment variables for volumes and auth as per Vahid's answer.
version: '3.7'
services:
  mongodb:
    image: mongo:latest
    container_name: myservice-mongodb
    networks:
      - myServiceNetwork
    expose:
      - 27017
    command: --replSet singleNodeReplSet
  mongodb-replicaset:
    container_name: mongodb-replicaset-helper
    depends_on:
      - mongodb
    networks:
      - myServiceNetwork
    image: mongo:latest
    command: bash -c "sleep 5 && mongo --host myservice-mongodb --port 27017 --eval \"rs.initiate()\" && sleep 2 && mongo --host myservice-mongodb --port 27017 --eval \"rs.status()\" && sleep infinity"
  my-service:
    depends_on:
      - mongodb-replicaset
    image: myserviceimage
    container_name: myservicecontainer
    networks:
      - myServiceNetwork
    environment:
      myservice__Database__ConnectionString: mongodb://myservice-mongodb:27017/?connect=direct&replicaSet=singleNodeReplSet&readPreference=primary
      myservice__Database__Name: myserviceDb
networks:
  myServiceNetwork:
    driver: bridge
NOTE: Look at how the connection string is passed as an env variable to the service that depends on the Mongo replica-set instance. You have to ensure that the name used when setting up the replica set (in my case singleNodeReplSet) is passed on to the service depending on it.
Edited:
My previous answer was far off, so I changed it. I managed to make it work using 'bitnami/mongodb:4.0'. Not sure if that helps you, but maybe it gives you some ideas. They have a docker-compose file ready for replica-set mode:
version: '3'
services:
  mdb-primary:
    image: 'bitnami/mongodb:4.0'
    environment:
      - MONGODB_REPLICA_SET_MODE=primary
      - MONGODB_ROOT_PASSWORD=somepassword
      - MONGODB_REPLICA_SET_KEY=replicasetkey
      - MONGODB_ADVERTISED_HOSTNAME=mdb-primary
  mdb-secondary:
    image: 'bitnami/mongodb:4.0'
    depends_on:
      - mdb-primary
    environment:
      - MONGODB_PRIMARY_HOST=mdb-primary
      - MONGODB_REPLICA_SET_MODE=secondary
      - MONGODB_PRIMARY_ROOT_PASSWORD=somepassword
      - MONGODB_REPLICA_SET_KEY=replicasetkey
      - MONGODB_ADVERTISED_HOSTNAME=mdb-secondary
  mdb-arbiter:
    image: 'bitnami/mongodb:4.0'
    depends_on:
      - mdb-primary
    environment:
      - MONGODB_PRIMARY_HOST=mdb-primary
      - MONGODB_REPLICA_SET_MODE=arbiter
      - MONGODB_PRIMARY_ROOT_PASSWORD=somepassword
      - MONGODB_REPLICA_SET_KEY=replicasetkey
      - MONGODB_ADVERTISED_HOSTNAME=mdb-arbiter
  mongo-cli:
    image: 'bitnami/mongodb:latest'
Don't forget to add volumes and map them to /bitnami on the primary node.
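For example (a sketch; the volume name mdb-data is illustrative):
  mdb-primary:
    image: 'bitnami/mongodb:4.0'
    volumes:
      - 'mdb-data:/bitnami'
    # environment: ... as above
volumes:
  mdb-data: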
The last container, mongo-cli, is for testing purposes, so you can connect to the replica set using the CLI; there is an argument about that here if you like to read about it.
$ docker-compose exec mongo-cli bash
$ mongo "mongodb://mdb-primary:27017/test?replicaSet=replicaset"

Save Postgres Data to Directory in Docker Named Volume

Problem
I have an application with Postgres. I want to be able to back up the initial database data so that I don't have to re-enter it on each deployment. However, despite having a named volume set up in my compose file, nothing shows up in the associated directory.
What I'm not sure of is how to have Postgres save its data into the directory associated with the volume. I'm also not sure exactly how to associate a directory with the named volume in the first place. What I want is for the Docker host to be able to see the Postgres data in the named volume's associated directory.
Could someone please provide an explanation and some examples of how to handle this? Right now, even though the volume is attached to the db service in the compose file, no data is written to the database_volume/ directory. This is what I would like to address.
Code
Here's my Dockerfile:
FROM python:3.6
ARG requirements=requirements/production.txt
ENV DJANGO_SETTINGS_MODULE=sasite.settings.production_test
WORKDIR /app
COPY manage.py /app/
COPY requirements/ /app/requirements/
RUN pip install -r $requirements
COPY config config
COPY sasite sasite
COPY templates templates
COPY logs logs
ADD /scripts/docker-entrypoint.sh /docker-entrypoint.sh
RUN chmod a+x /docker-entrypoint.sh
EXPOSE 8001
ENTRYPOINT ["/docker-entrypoint.sh"]
CMD ["/usr/local/bin/gunicorn", "--config", "config/gunicorn.conf", "--log-config", "config/logging.conf", "-e", "DJANGO_SETTINGS_MODULE=sasite.settings.production_test", "-w", "4", "-b", "0.0.0.0:8001", "sasite.wsgi:application"]
And my docker-compose.yml:
version: "3.2"
services:
app:
restart: always
build:
context: .
dockerfile: Dockerfile.prodtest
args:
requirements: requirements/production.txt
container_name: dj01
environment:
- DJANGO_SETTINGS_MODULE=sasite.settings.production_test
- PYTHONDONTWRITEBYTECODE=1
volumes:
- ./:/app
- /static:/static
- /media:/media
networks:
- main
depends_on:
- db
db:
restart: always
image: postgres:10.1-alpine
container_name: ps01
environment:
POSTGRES_DB: sasite_db
POSTGRES_USER: pguser
POSTGRES_PASSWORD: pguser123
ports:
- "5432:5432"
volumes:
- database_volume:/var/lib/postgresql/data
networks:
- main
nginx:
restart: always
image: nginx
container_name: ng01
volumes:
- ./config/nginx-prodtest.conf:/etc/nginx/conf.d/default.conf:ro
- ./static:/usr/share/nginx/sasite/static
- ./media:/usr/share/nginx/sasite/media
ports:
- "80:80"
- "443:443"
networks:
- main
depends_on:
- app
networks:
main:
volumes:
database_volume:
driver_opts:
type: none
device: ./database_volume
o: bind
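One caveat with this kind of bind-backed named volume: with type: none and o: bind, Docker expects device to be an absolute path to a directory that already exists on the host; a relative path such as ./database_volume will typically make the mount fail or leave the directory empty. A sketch of the corrected volume definition (the absolute path is illustrative):
volumes:
  database_volume:
    driver_opts:
      type: none
      device: /home/user/project/database_volume  # must be absolute and pre-existing
      o: bind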