I am facing an issue with Docker installed on an EC2 instance in AWS.
I have installed ELK in Docker using docker-compose and am now able to see the logs using a TCP filter (winston3-npm). I have also attached an EBS volume to this EC2 instance. Now I want to persist the logs on this EBS volume, so that even if I terminate the EC2 instance and spawn a new one using the same EBS volume, I can still see all the old logs.
The problem is that I am not able to mount the EBS volume into Docker so that all my data is preserved.
Below is my docker-compose file.
Could anyone help me with this?
version: '3.2'
services:
  elasticsearch:
    build:
      context: elasticsearch/
      args:
        ELK_VERSION: $ELK_VERSION
    volumes:
      - type: bind
        source: ./elasticsearch/config/elasticsearch.yml
        target: /usr/share/elasticsearch/config/elasticsearch.yml
        read_only: true
      - /data:/usr/share/elasticsearch/data/:rw
      #- type: volume
      #  source: elasticsearch
      #  target: /usr/share/elasticsearch/data
    ports:
      - "9200:9200"
      - "9300:9300"
    environment:
      ES_JAVA_OPTS: "-Xmx256m -Xms256m"
      ELASTIC_PASSWORD: changeme
    networks:
      - elk
  logstash:
    build:
      context: logstash/
      args:
        ELK_VERSION: $ELK_VERSION
    volumes:
      - type: bind
        source: ./logstash/config/logstash.yml
        target: /usr/share/logstash/config/logstash.yml
        read_only: true
      - type: bind
        source: ./logstash/pipeline
        target: /usr/share/logstash/pipeline
        read_only: true
    ports:
      - "5000:5000"
      - "9600:9600"
    environment:
      LS_JAVA_OPTS: "-Xmx256m -Xms256m"
    networks:
      - elk
    depends_on:
      - elasticsearch
  kibana:
    build:
      context: kibana/
      args:
        ELK_VERSION: $ELK_VERSION
    volumes:
      - type: bind
        source: ./kibana/config/kibana.yml
        target: /usr/share/kibana/config/kibana.yml
        read_only: true
    ports:
      - "5601:5601"
    networks:
      - elk
    depends_on:
      - elasticsearch
networks:
  elk:
    driver: bridge
volumes:
  elasticsearch:
EBS is attached at the EC2 level; there is no direct way to mount an EBS volume inside a Docker container running on your EC2 instance.
What you can do is mount the EBS volume on the EC2 instance itself and point Docker's persistent storage at that mount point.
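A minimal sketch, assuming the EBS volume shows up as /dev/xvdf and you want it mounted at /data (check lsblk for the actual device name on your instance):

sudo mkfs -t ext4 /dev/xvdf       # first use only -- this erases the volume
sudo mkdir -p /data
sudo mount /dev/xvdf /data
echo '/dev/xvdf /data ext4 defaults,nofail 0 2' | sudo tee -a /etc/fstab   # remount on reboot

With /data mounted this way, the /data:/usr/share/elasticsearch/data/:rw bind in the compose file above already lands on the EBS volume.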
I was able to resolve the issue: I put the EBS-mounted directory path into the docker.service file (which lives in /lib/systemd/system/), and I can now see all of Docker's data on the EBS volume.
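The exact change isn't quoted in the answer; a sketch, assuming Docker's data should live under /data/docker on the EBS mount (the precise ExecStart line varies by Docker version):

# /lib/systemd/system/docker.service (excerpt)
[Service]
ExecStart=/usr/bin/dockerd --data-root /data/docker -H fd://

# apply the change
sudo systemctl daemon-reload
sudo systemctl restart docker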
Thank you all for helping me.
I have a few config files that have to be mapped to files inside the container. I want to be able to change these config files on the host and have that reflected in the container. These are basically connection-string files that I want to swap without having to rebuild the containers. What I have in my docker-compose.yml is:
services:
  portal:
    container_name: portal
    image: portal
    build:
      context: .
    extra_hosts:
      - "host.docker.internal:host-gateway"
    volumes:
      - ./:/var/www/portal
      - type: volume
        source: ./local/parameters.local.yml
        target: /var/www/portal/s/config/parameters.yml
      - type: volume
        source: ./portal.conf
        target: /etc/apache2/sites-available/portal.conf
      - awscreds:/root/.aws:ro
I fail to get this to work... I saw some examples where they did not supply the type (or instead of volume they made it "bind"), but nothing seems to work for me.
If I build the images with docker compose up and then do docker inspect portal, I can see that it has: "Mounts": []
My final plan is to have a docker-compose.yml with a service called portal that mounts 2 or more files inside the container (NOT copies, so that I can change them on my host at will), as well as a few directories. What is kicking me in the face is the files that have to be mapped into the container.
I think you need to change type: volume to type: bind; the volume type expects a named volume as the source, not a host path.
services:
  portal:
    container_name: portal
    image: portal
    build:
      context: .
    extra_hosts:
      - "host.docker.internal:host-gateway"
    volumes:
      - ./:/var/www/portal
      - type: bind
        source: ./local/parameters.local.yml
        target: /var/www/portal/s/config/parameters.yml
      - type: bind
        source: ./portal.conf
        target: /etc/apache2/sites-available/portal.conf
      - awscreds:/root/.aws:ro
volumes:
  awscreds:
Also, you can add read_only: true to both of those mounts if you don't want the services to be able to modify parameters.yml or portal.conf.
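For example, the portal.conf mount from above would become:

      - type: bind
        source: ./portal.conf
        target: /etc/apache2/sites-available/portal.conf
        read_only: true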
Just mapping with the short syntax should do the job, as long as the files and folders on the left-hand side exist on your local machine:
services:
  portal:
    container_name: portal
    image: portal
    build:
      context: .
    extra_hosts:
      - "host.docker.internal:host-gateway"
    volumes:
      - ./:/var/www/portal
      - ./local/parameters.local.yml:/var/www/portal/s/config/parameters.yml
      - ./portal.conf:/etc/apache2/sites-available/portal.conf
      - awscreds:/root/.aws:ro
volumes:
  awscreds:
When using a Docker ACI (Azure Container Instances) context, the following docker-compose file fails: the mongodb container continuously restarts.
version: "3.9"
services:
mongodb:
image: mongo:5.0.6
env_file: mongo.env
environment:
- MONGO_INITDB_ROOT_USERNAME=root
- MONGO_INITDB_ROOT_PASSWORD=changeit
ports:
- 27017
volumes:
- dbdata:/data/db
volumes:
dbdata:
driver: azure_file
driver_opts:
share_name: mongodb-data
storage_account_name: kpncoqyuxumoetuftz
If I don't use the azure_file storage it runs fine (but of course the data isn't persistent).
I am not sure why I couldn't mount to the default directory /data/db, but to get this to work I had to mount to a different directory and then replace the default command with one that passes --dbpath.
The working version is below:
version: "3.9"
services:
mongodb:
image: mongo:5.0.6
command: ["mongod", "--dbpath=/dbdata"]
environment:
- MONGO_INITDB_ROOT_USERNAME=root
- MONGO_INITDB_ROOT_PASSWORD=changeit
ports:
- 27017
volumes:
- dbdata:/dbdata
volumes:
dbdata:
driver: azure_file
driver_opts:
share_name: mongodb-data
storage_account_name: kpncoqyuxumoetuftz
I have a Docker Compose file with some services. One of them is the database, whose volumes I would like to back up so I can migrate all the data to another machine.
My docker-compose.yml looks like this:
version: '3'
services:
  service1:
    ...
  serviceN:
  db:
    image: postgres:11
    ports:
      - 5432:5432
    networks:
      - postgresnet
    environment:
      - POSTGRES_USER=postgres
      - POSTGRES_PASSWORD=postgres
    volumes:
      - postgresql:/var/lib/postgresql
      - postgresql_data:/var/lib/postgresql/data
volumes:
  postgresql:
  postgresql_data:
networks:
  postgresnet:
    driver: bridge
How can I back up the data of the postgresql and postgresql_data volumes and migrate them to another machine?
The easiest way is to share an external volume between your docker-compose files.
First, create the volume:
docker volume create shared-data
Next, modify your yml:
...
volumes:
  postgresql:
  postgresql_data:
    external:
      name: shared-data
...
Now your postgresql_data is mapped to the external volume, and everything you save there is visible from outside the compose project. Just use the same configuration in another docker-compose.yml and enjoy.
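To actually move the data to another machine, one common approach (not part of the answer above; the archive name and paths here are just examples) is to tar the volume's contents through a throwaway container:

# on the source machine: archive the volume's contents
docker run --rm -v shared-data:/volume -v $(pwd):/backup alpine tar czf /backup/shared-data.tar.gz -C /volume .
# copy shared-data.tar.gz to the other machine (scp, rsync, ...), then restore:
docker volume create shared-data
docker run --rm -v shared-data:/volume -v $(pwd):/backup alpine tar xzf /backup/shared-data.tar.gz -C /volume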
I use a docker-compose v3 file to deploy services on a Docker swarm mode cluster.
My services are elasticsearch and kibana. I want kibana to be accessible from outside, and elasticsearch to be reachable by kibana but neither visible nor accessible from outside. To get this behavior I created 2 overlay networks called 'external' and 'elk_only'. I put elasticsearch on the 'elk_only' network and placed kibana on both the 'elk_only' and 'external' networks. But things don't work: when I go to localhost:5601 (kibana's port), I get the message 'localhost refused to connect'.
The command I use to deploy the services is
docker stack deploy --compose-file=elastic-compose.yml elkstack
The content of the elastic-compose.yml file:
version: "3"
services:
elasticsearch:
image: elasticsearch:5.1
expose:
- 9200
networks:
- elk_only
deploy:
restart_policy:
condition: on-failure
kibana:
image: kibana:5.1
ports:
- 5601:5601
volumes:
- ./kibana/kibana.yml:/etc/kibana/kibana.yml
depends_on:
- elasticsearch
networks:
- external
- elk_only
deploy:
restart_policy:
condition: on-failure
networks:
elk_only:
driver: overlay
external:
driver: overlay
The content of kibana.yml is
server.port: 5601
server.host: "0.0.0.0"
elasticsearch.url: "http://elkstack_elasticsearch:9200"
Could you help me to solve this problem and understand what's going wrong? Any help would be appreciated!
I'm trying to persist postgres data in a docker container so that after you docker-compose down and docker-compose up -d you don't lose data from your previous session. So far I haven't been able to make it do much of anything: taking the containers down and back up again routinely deletes the data.
Here's my current docker-compose.yml:
version: '2'
services:
  api:
    build: .
    ports:
      - '8245:8245'
    volumes:
      - .:/home/app/api
      - /home/app/api/node_modules
      - /home/app/api/public/src/bower_components
    links:
      - db
  db:
    build: ./database
    env_file: .env
    ports:
      - '8246:5432'
    volumes_from:
      - dbdata
  dbdata:
    image: "postgres:9.5.2"
    volumes:
      - /var/lib/postgresql/data
Help?
According to the documentation of Docker Compose, when you write something like:
volumes:
  - /var/lib/postgresql/data
it creates a new anonymous docker volume and maps it to /var/lib/postgresql/data inside the container.
Therefore, each time you run docker-compose up and docker-compose down, a new volume is created. You can confirm this behavior with docker volume ls.
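For example, to list the anonymous volumes that are no longer attached to a container (docker volume ls supports a dangling filter):

docker volume ls --filter dangling=true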
To avoid this, you have two options:
(A) Map a host directory into the container
You can map a directory of the host into the container using <HOST_PATH>:<CONTAINER_PATH>:
volumes:
  - /path/to/your/host/directory:/var/lib/postgresql/data
The postgresql data will be saved into /path/to/your/host/directory on the container host.
(B) Use an external volume
docker-compose has an option to mark a volume as external.
When it is set to true, docker-compose won't try to create the volume itself.
Here's an example.
version: '2'
services:
  dbdata:
    image: postgres:9.5.2
    volumes:
      - mypostgresdb:/var/lib/postgresql/data
volumes:
  mypostgresdb:
    external: true
With external: true, docker-compose won't create the mypostgresdb volume, so you have to create it yourself with the following command:
docker volume create --name=mypostgresdb
The postgresql data will be saved into the docker volume named mypostgresdb. Read the reference for more detail.