Monitor Container is not running - ceph

I pulled the latest ceph/daemon image from Docker Hub. I run the container like this:
docker run -d --net=host \
-v ~/ceph-container1/etc/ceph:/etc/ceph \
-v ~/ceph-container1/var/lib/ceph/:/var/lib/ceph/ \
-e MON_IP=192.168.0.20 \
-e CEPH_PUBLIC_NETWORK=192.168.0.0/24 \
ceph/daemon mon
The container exits immediately after it is created. I cannot use ceph -v or ceph -s to check whether I deployed it correctly or not. The same thing happens with the OSD and MDS containers as well; only the MGR container keeps running after it is created.
My system is Arch Linux. Did I miss anything else needed to keep it running? Thanks.
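Not part of the original post, but a generic first step when a container dies right after starting is to look at its exit code and logs; <container-id> below is only a placeholder for whatever docker ps reports:
# List all containers, including exited ones, created from the ceph/daemon image and note the exit code
docker ps -a --filter ancestor=ceph/daemon
# Print the container's output to see why the monitor exited
docker logs <container-id>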

Related

Cannot connect to volume in Docker (Windows)

I am trying to run PostgreSQL in Docker using this command in a terminal:
winpty docker run -it \
-e POSTGRES_USER="root" \
-e POSTGRES_PASSWORD="root" \
-e POSTGRES_DB="ny_taxi" \
-v C:\Users\SomeUser\OneDrive\Documents\ny_taxi_postgres_data:/var/lib/postgresql/data \
-p 5432:5432 \
postgres:13
and I keep running into this error: Error response from daemon: The system cannot find the file specified.
I have looked up this error, and the solutions I found online (such as restarting Docker Desktop, reinstalling Docker, and updating Docker) did not work for me.
I think the issue is with the volume part (designated by -v) because when I remove it, it works just fine. However, I want to be able to store the contents in a volume permanently, so running it without the -v is not a long-term solution.
Has anyone run into a similar issue before?
Check whether you can access this path on the host:
dir C:\Users\SomeUser\OneDrive\Documents\ny_taxi_postgres_data
Then check whether you can access the volume inside a container:
winpty docker run -v C:\Users\SomeUser\OneDrive\Documents\ny_taxi_postgres_data:/data alpine ls /data
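If the host path is what trips it up, one variant worth trying in Git Bash (my assumption, not part of the checks above) is the MSYS-style form of the same path, since backslash-style Windows paths are sometimes mangled before they reach the Docker daemon:
# Same command as in the question, but with the mount path written the way Git Bash expects
winpty docker run -it \
-e POSTGRES_USER="root" \
-e POSTGRES_PASSWORD="root" \
-e POSTGRES_DB="ny_taxi" \
-v "/c/Users/SomeUser/OneDrive/Documents/ny_taxi_postgres_data":/var/lib/postgresql/data \
-p 5432:5432 \
postgres:13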

I cannot run more than 2 containers in Docker

I'm running Docker for a project on GitHub (https://github.com/mydexchain/mydexchain).
I run the command below, which pulls the image and creates a container.
docker run -d --rm -p 2020:2020 -v mydexchain:/var/lib/postgresql/ --privileged --log-driver=none --name mydexchain mydexchain/mydexchain:latest
I set my tracker address on port 2020.
I attach to the container with "docker attach mydexchain".
I run the command below to create a second container.
docker run -d --rm -p 2021:2020 -v mydexchain1:/var/lib/postgresql/ --privileged --log-driver=none --name mydexchain1 mydexchain/mydexchain:latest
I set my tracker address on port 2021.
I attach to the container with "docker attach mydexchain1".
So far, everything is normal.
I run the command below to create a third container.
docker run -d --rm -p 2022:2020 -v mydexchain2:/var/lib/postgresql/ --privileged --log-driver=none --name mydexchain2 mydexchain/mydexchain:latest
I check the containers with the docker ps command.
As soon as I try to do anything with this container, such as attach or set the tracker, the container disappears.
I also checked the container logs (output not reproduced here).
When I did these steps for the first time, I did not encounter any errors.
I would be glad if you could help. I have been working on this for a week and could not find a solution.
Have you ensured that all volumes are clean and set up the same way? Usually the path /var/lib/postgresql/data is the one mounted into the container.
It might also be related to pausing or closing your previous attach session, which kills that container, since you are missing the -i and -t flags when launching it. Those should be used to prevent it from closing; see the documentation of the attach command for more details.
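A rough sketch of that suggestion, reusing the third container's command from the question with the -i and -t flags added:
# -i keeps STDIN open and -t allocates a TTY, so an attached session can be detached safely
docker run -dit --rm -p 2022:2020 -v mydexchain2:/var/lib/postgresql/ --privileged --log-driver=none --name mydexchain2 mydexchain/mydexchain:latest
docker attach mydexchain2
# Detach without stopping the container by pressing Ctrl-P followed by Ctrl-Q instead of Ctrl-C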

How to run pg_rewind in postgresql docker container

I have the same question. I am running PostgreSQL replication clusters in Docker containers with the official postgres image, and I am now trying to work out an approach for failover.
When I run pg_rewind against the previous primary container without stopping the PostgreSQL service, it fails with:
pg_rewind: fatal: target server must be shut down cleanly
But if I run:
docker exec <container-name> pg_ctl stop -D <datadir>
the container is restarted because of its unless-stopped restart policy.
I found the answer myself.
Just stop the existing container and run the command in a new container using the same image and volume mounts, something like:
docker run -it --rm --name pg_rewind --network=host \
--env="PGDATA=/var/lib/postgresql/data/pgdata" \
--volume=/var/lib/postgresql/data:/var/lib/postgresql/data:rw \
--volume=/var/lib/postgresql:/var/lib/postgresql:rw \
--volume=/var/run/postgresql:/var/run/postgresql:rw \
postgres:12.4 \
pg_rewind --target-pgdata=/var/lib/postgresql/data/pgdata --source-server='host=10.0.0.55 port=5432 dbname=postgres user=replicator password=password'
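For context, the steps around that one-off container might look roughly like this; the name old-primary is a placeholder rather than something from the answer:
# Stop the old primary first so its data directory is no longer in use
docker stop old-primary
# Run the pg_rewind container shown above, then configure the old primary as a standby
# (standby.signal plus primary_conninfo on PostgreSQL 12) before bringing it back up
docker start old-primary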

PostgreSQL docker container not writing data to disk

I am having some difficulty with Docker and the postgres image from Docker Hub. I am developing an app and using the postgres image to store my development data. I am using the following command to start my container:
sudo docker run --name some-postgresql -e POSTGRES_DB=AppDB -e POSTGRES_PASSWORD=App123! -e POSTGRES_USER=appuser -e PGDATA="/pgdata" --mount source=mydata,target=/home/myuser/pgdata -p 5432:5432/tcp postgres
When I finish working on my app, I usually have to run "docker container prune" in order to free up the container name and be able to run it again later. This worked until recently, when I upgraded my postgres image to run version 11 of PostgreSQL. Now, when I start my container and create data in it, the data is gone the next time I use it. I've been reading about volumes in the Docker documentation but cannot find anything that tells me why this is not working. Can anyone please shed some light on this?
Specify a volume mount with -v $PGDATA_HOST:/var/lib/postgresql/data.
The default PGDATA inside the container is /var/lib/postgresql/data so there is no need to change that if you're not modifying the Docker image.
e.g. to mount the data directory on the host at /srv/pgdata/:
$ PGDATA_HOST=/srv/pgdata/
$ docker run -d -p 5432:5432 --name=some-postgres \
-e POSTGRES_PASSWORD=secret \
-v $PGDATA_HOST:/var/lib/postgresql/data \
postgres
The \ are only needed if you break the command over multiple lines, which I did here for the sake of clarity.
Since you specified -e PGDATA="/pgdata", the database data will be written to /pgdata within the container. If you want the files in /pgdata to survive container deletion, that location must be a Docker volume. To make it one, use --mount source=mydata,target=/pgdata.
In the end, it would be simpler to just run:
sudo docker run --name some-postgresql -e POSTGRES_DB=AppDB -e POSTGRES_PASSWORD=App123! -e POSTGRES_USER=appuser --mount source=mydata,target=/var/lib/postgresql/data -p 5432:5432/tcp postgres
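As an extra sanity check (not part of the answer itself), you can confirm the named volume outlives the container before relying on it:
# The named volume exists independently of any container
docker volume inspect mydata
# Remove the container, re-run the command above, and data created earlier should still be there
docker rm -f some-postgresql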

Docker container exits immediately after running or restarting PostgreSQL image

I am a beginner with Docker, and I am stuck due to a container restarting problem.
The problem occurs when I try to restart an existing exited container, or create a new one (after deleting the old one) by running:
docker run -d --name mempostgres \
-v "/home/lukasz/lc_pg_data:/var/lib/pgsql/data:Z" \
-e POSTGRES_USER=postgres \
-e POSTGRES_PASSWORD=password \
-e POSTGRES_DB=dbName \
-p 5432:5432 \
fedora/postgresql
My container always exits immediately with status "Exited (1)".
Inside the logs of my container I have the following (output not reproduced here).
However, I don't have any PostgreSQL server running at this moment.
You need to kill that postmaster process.
cat .../postmaster.pid
The first number in this file is the PID of the postmaster process.
Then kill that process using:
kill PID
Finally, run the container again; your problem should be fixed.
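A condensed sketch of those steps, assuming the stale postmaster.pid sits directly under the host directory mounted in the question (the exact location can vary by image):
# The first line of postmaster.pid holds the PID of the leftover postmaster process
cat /home/lukasz/lc_pg_data/postmaster.pid
kill "$(head -n 1 /home/lukasz/lc_pg_data/postmaster.pid)"
# Then start (or re-run) the container
docker start mempostgres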
Postgres should have the password environment variable set, as below:
-e POSTGRES_PASSWORD=postgres
Also, pgAdmin should have two environment variables (email and password) set, as below:
-e 'PGADMIN_DEFAULT_EMAIL=address#email.something' -e 'PGADMIN_DEFAULT_PASSWORD=postgresmaster'
This is the email address used when setting up the initial administrator account to login to pgAdmin. This variable is required and must be set at launch time.
If these details are not given, the postgres and pgAdmin containers will go to the exited state.
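A minimal sketch of that advice; the dpage/pgadmin4 image and host port 8080 are my choices, not part of the answer:
# PostgreSQL needs at least a password to start
docker run -d --name pg -e POSTGRES_PASSWORD=postgres postgres
# pgAdmin needs both the default email and the default password at launch time
docker run -d --name pgadmin -p 8080:80 -e PGADMIN_DEFAULT_EMAIL=admin@example.com -e PGADMIN_DEFAULT_PASSWORD=postgresmaster dpage/pgadmin4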