I am getting an error while creating a pgAdmin Docker container.
I am trying to create a default server and add a database connection when the Docker container is created.
Please check the logs below from the pgAdmin container.
Success. You can now start the database server using:
pg_ctl -D /var/lib/postgresql/data -l logfile start
initdb: warning: enabling "trust" authentication for local connections
You can change this by editing pg_hba.conf or using the option -A, or
--auth-local and --auth-host, the next time you run initdb.
waiting for server to start....2023-01-26 16:43:57.062 UTC [36] LOG: starting PostgreSQL 12.13 on aarch64-unknown-linux-musl, compiled by gcc (Alpine 11.2.1_git20220219) 11.2.1 20220219, 64-bit
2023-01-26 16:43:57.064 UTC [36] LOG: listening on Unix socket "/var/run/postgresql/.s.PGSQL.5432"
2023-01-26 16:43:57.076 UTC [37] LOG: database system was shut down at 2023-01-26 16:43:56 UTC
2023-01-26 16:43:57.083 UTC [36] LOG: database system is ready to accept connections
done
server started
/usr/local/bin/docker-entrypoint.sh: ignoring /docker-entrypoint-initdb.d/*
2023-01-26 16:43:57.171 UTC [36] LOG: received fast shutdown request
waiting for server to shut down....2023-01-26 16:43:57.173 UTC [36] LOG: aborting any active transactions
2023-01-26 16:43:57.176 UTC [36] LOG: background worker "logical replication launcher" (PID 43) exited with exit code 1
2023-01-26 16:43:57.179 UTC [38] LOG: shutting down
2023-01-26 16:43:57.198 UTC [36] LOG: database system is shut down
done
server stopped
PostgreSQL init process complete; ready for start up.
Docker-compose file:
postgres:
  # image: postgres:14.1-alpine
  build:
    context: ./
    dockerfile: ./Docker/Postgres/Dockerfile
  container_name: postgres
  image: postgres
  restart: always
  ports:
    - 5432:5432
  networks:
    - 'default'
  environment:
    POSTGRES_USER: "postgres"
    POSTGRES_PASSWORD: "postgres"
  expose:
    - 5432
pgadmin4:
  build:
    context: .
    dockerfile: ./Docker/PgAdmin/Dockerfile
  container_name: pgadmin
  image: pgadmin
  depends_on:
    - postgres
  environment:
    - PGADMIN_DEFAULT_EMAIL=test#test.com
    - PGADMIN_DEFAULT_PASSWORD=test
  volumes:
    - ./Docker/PgAdmin/servers.json:/pgadmin4/servers.json
  networks:
    - 'default'
  ports:
    - 9050:80
  restart: unless-stopped
  healthcheck:
    test: wget localhost/misc/ping -q -O - > /dev/null 2>&1 || exit 1
    interval: 5s
    timeout: 10s
    retries: 5
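For reference, the servers.json that gets mounted above follows pgAdmin's documented server import format; a minimal sketch (the connection values are illustrative, matching the postgres service, not my exact file):

{
  "Servers": {
    "1": {
      "Name": "postgres",
      "Group": "Servers",
      "Host": "postgres",
      "Port": 5432,
      "MaintenanceDB": "postgres",
      "Username": "postgres",
      "SSLMode": "prefer"
    }
  }
}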
I am using the postgres:12-alpine3.16 and dpage/pgadmin4 images.
Can someone please help me resolve the issue?
I'm using the 'singlestore/cluster-in-a-box:latest' image and trying to map the SingleStore data to my host machine (Windows) in a custom location, but no luck so far.
I've tried it the way I do it in any other container:
volumes:
  - ./data/singlestore-data:/var/lib/memsql
but this gives me the following errors when I run it:
singlestoreissue | ✓ Set up runtime environment
singlestoreissue | Starting Memsql nodes
singlestoreissue | ✘ Failed to start node with node ID 9DA037A1695B23997F48D58E5098A1B88F108D47 (1/2)
singlestoreissue | Starting nodes
singlestoreissue | ✘ Failed to start node with node ID F395C09DFE9841A27B18DF7F1FC5B9DD27DEB389 (2/2)
singlestoreissue | Starting nodes
singlestoreissue | ✓ Started 0 nodes
singlestoreissue | Latest errors from the database's tracelog:
singlestoreissue |
singlestoreissue | : Failed to connect to the database: process exited: exit status 255
singlestoreissue exited with code 1
I've also tried other ways:
volumes:
  - singlestore-data:/var/lib/memsql
....
volumes:
  singlestore-data:
    driver: local
    driver_opts:
      o: bind
      type: none
      device: ./data/singlestore-data
Error response from daemon: failed to mount local volume: mount data/singlestore-data:/var/lib/docker/volumes/singlestoreissue2_singlestore-data/_data, flags: 0x1000: no such file or directory
volumes:
  - singlestore-data:/var/lib/memsql
....
volumes:
  singlestore-data:
    driver: local
    driver_opts:
      o: bind
      type: none
      device: ${PWD}/data/singlestore-data
time="2023-01-11T16:29:34Z" level=warning msg="The "PWD" variable is not set. Defaulting to a blank string."
Running 0/0
Container singlestoreneat Creating 0.1s
Error response from daemon: failed to mount local volume: mount data/singlestore-data:/var/lib/docker/volumes/singlestoreissue2_singlestore-data/_data, flags: 0x1000: no such file or directory
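For reference, from those two errors it looks like the local driver's bind option wants an absolute host path that already exists, and that ${PWD} simply resolves to an empty string in Windows shells. A sketch of the shape I would expect to be needed (the SINGLESTORE_DATA variable and the Windows path are placeholders I made up, not tested values):

# .env -- point the variable at an existing absolute path on the host
SINGLESTORE_DATA=C:/Users/me/project/data/singlestore-data

# docker-compose.yml
volumes:
  singlestore-data:
    driver: local
    driver_opts:
      o: bind
      type: none
      device: ${SINGLESTORE_DATA}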
Am I missing something here?
I have the following compose file:
version: '3'
services:
  worker:
    ports:
      - "8089:8089"
    image: locustio/locust
    volumes:
      - ./:/mnt/locust
    command: --config=/mnt/locust/devndb.conf --users 1 --spawn-rate 1 --run-time 10s --host http://devndb-ci:8081 --headless -f /mnt/locust/rest/normId_by_external_identifier.py
When I run the container, the --host argument is not evaluated in my locust file.
When I run the locust file directly without a container it works:
locust --headless --users 10 --spawn-rate 1 -H http://devndb-ci:8081 --run-time 10s -f rest/normId_by_external_identifier.py
Is this a bug in the locust docker image?
I am expecting the locust container to take the --host argument into account and build the REST URLs accordingly.
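For what it's worth, this is not the actual rest/normId_by_external_identifier.py (its contents aren't shown here), just a stripped-down sketch that prints which host Locust ends up resolving from the command line, the config file, or the class attribute:

from locust import HttpUser, task, events

@events.test_start.add_listener
def on_test_start(environment, **kwargs):
    # print the host value Locust resolved after merging CLI options and the config file
    print("resolved host:", environment.host)

class NormIdUser(HttpUser):
    # no `host` attribute set here, so the value has to come from --host/-H or the config file
    @task
    def get_norm_id(self):
        self.client.get("/")  # placeholder path; the real endpoints are not shown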
Running a CentOS 8 Stream server with nfs-utils version 2.3.3-57.el8 and using ansible-playbook core version 2.11.12 with a test playbook
- hosts: server-1
  tasks:
    - name: Collect status
      service_facts:
      register: services_state
    - name: Print service_facts
      debug:
        var: services_state
    - name: Collect systemd status
      ansible.builtin.systemd:
        name: "nfs-server"
      register: sysd_service_state
    - name: Print systemd state
      debug:
        var: sysd_service_state
will render the following results
service_facts
...
"nfs-server.service": {
"name": "nfs-server.service",
"source": "systemd",
"state": "stopped",
"status": "disabled"
},
...
ansible.builtin.systemd
...
"name": "nfs-server",
"status": {
"ActiveEnterTimestamp": "Tue 2022-10-04 10:03:17 UTC",
"ActiveEnterTimestampMonotonic": "7550614760",
"ActiveExitTimestamp": "Tue 2022-10-04 09:05:43 UTC",
"ActiveExitTimestampMonotonic": "4096596618",
"ActiveState": "active",
...
The NFS server is very much running/active, but service_facts fails to report it as such.
Other services, such as httpd, report the correct state in service_facts.
Have I misunderstood or done something wrong here? Or have I run into an anomaly?
Running RHEL 7.9, nfs-utils 1.3.0-0.68.el7, and ansible 2.9.27, I was able to reproduce the behavior you are observing.
It seems to be caused by the "two" redundant service files (or rather, the symlink).
ll /usr/lib/systemd/system/nfs*
...
-rw-r--r--. 1 root root 1044 Oct 1 2022 /usr/lib/systemd/system/nfs-server.service
lrwxrwxrwx. 1 root root 18 Oct 1 2022 /usr/lib/systemd/system/nfs.service -> nfs-server.service
...
diff /usr/lib/systemd/system/nfs.service /usr/lib/systemd/system/nfs-server.service; echo $?
0
Obviously, a status request to systemd will produce the expected result for either name.
systemctl status nfs
● nfs-server.service - NFS server and services
Loaded: loaded (/usr/lib/systemd/system/nfs-server.service; enabled; vendor preset: disabled)
Drop-In: /run/systemd/generator/nfs-server.service.d
└─order-with-mounts.conf
Active: active (exited) since Sat 2022-10-01 22:00:00 CEST; 4 days ago
Process: 1080 ExecStartPost=/bin/sh -c if systemctl -q is-active gssproxy; then systemctl reload gssproxy ; fi (code=exited, status=0/SUCCESS)
Process: 1070 ExecStart=/usr/sbin/rpc.nfsd $RPCNFSDARGS (code=exited, status=0/SUCCESS)
Process: 1065 ExecStartPre=/usr/sbin/exportfs -r (code=exited, status=0/SUCCESS)
Main PID: 1070 (code=exited, status=0/SUCCESS)
CGroup: /system.slice/nfs-server.service
systemctl status nfs-server
● nfs-server.service - NFS server and services
Loaded: loaded (/usr/lib/systemd/system/nfs-server.service; enabled; vendor preset: disabled)
Drop-In: /run/systemd/generator/nfs-server.service.d
└─order-with-mounts.conf
Active: active (exited) since Sat 2022-10-01 22:00:00 CEST; 4 days ago
Process: 1080 ExecStartPost=/bin/sh -c if systemctl -q is-active gssproxy; then systemctl reload gssproxy ; fi (code=exited, status=0/SUCCESS)
Process: 1070 ExecStart=/usr/sbin/rpc.nfsd $RPCNFSDARGS (code=exited, status=0/SUCCESS)
Process: 1065 ExecStartPre=/usr/sbin/exportfs -r (code=exited, status=0/SUCCESS)
Main PID: 1070 (code=exited, status=0/SUCCESS)
CGroup: /system.slice/nfs-server.service
However, a test playbook
---
- hosts: nfs_server
  become: true
  gather_facts: false
  tasks:
    - name: Gather Service Facts
      service_facts:
    - name: Show Facts
      debug:
        var: ansible_facts
called via
sshpass -p ${PASSWORD} ansible-playbook --user ${ACCOUNT} --ask-pass service_facts.yml | grep -A4 nfs
will result in the following output:
PLAY [nfs_server] ***************
TASK [Gather Service Facts] *****
ok: [test.example.com]
--
...
nfs-server.service:
  name: nfs-server.service
  source: systemd
  state: stopped
  status: enabled
...
nfs.service:
  name: nfs.service
  source: systemd
  state: active
  status: enabled
and report the correct state only for the first service file found(?), nfs.service.
Workaround
You could just check nfs.service, the alias name, in the service_facts result instead; see the task sketch after the output below.
systemctl show nfs-server.service -p Names
Names=nfs-server.service nfs.service
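A minimal task sketch of that workaround (assuming service_facts has already run earlier in the play; the key is the alias unit name shown above):

- name: Check NFS state via the alias unit reported by service_facts
  ansible.builtin.debug:
    var: ansible_facts.services['nfs.service'].state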
Further Investigation
Ansible Issue #73215 "ansible_facts.service returns incorrect state"
Ansible Issue #67262 "service_facts does not return correct state for services"
It might be that this is somehow related to What does status "active (exited)" mean for a systemd service? and to def _list_rh(self, services), even though RemainAfterExit=yes is already set within the .service files.
systemctl list-units --no-pager --type service --all | grep nfs
nfs-config.service loaded inactive dead Preprocess NFS configuration
nfs-idmapd.service loaded active running NFSv4 ID-name mapping service
nfs-mountd.service loaded active running NFS Mount Daemon
● nfs-secure-server.service not-found inactive dead nfs-secure-server.service
nfs-server.service loaded active exited NFS server and services
nfs-utils.service loaded inactive dead NFS server and client services
For further tests
systemctl list-units --no-pager --type service --state=running
# versus
systemctl list-units --no-pager --type service --state=exited
one may also read about NFS server active (exited) and Service Active but (exited).
... please take note that I haven't done further investigation on the issue yet. It is currently also not fully clear to me which part of the code in ansible/modules/service_facts.py might be causing this.
I'm using docker-compose to stand up an Express/React/Mongo app. I can currently stand up everything using retry logic in the express app. However, I would prefer to use Docker's healthcheck to prevent the string of errors when the containers initially spin up. However, when I add a healthcheck in my docker-compose.yml, it hangs for the interval/retry time limit and exits with:
ERROR: for collector Container "70e7aae49c64" is unhealthy.
ERROR: for server Container "70e7aae49c64" is unhealthy.
ERROR: Encountered errors while bringing up the project.
It seems that my healthcheck never returns a healthy status, and I'm not entirely sure why. The entirety of my docker-compose.yml:
version: "2.1"
services:
mongo:
image: mongo
volumes:
- ./data/mongodb/db:/data/db
ports:
- "${DB_PORT}:${DB_PORT}"
healthcheck:
test: echo 'db.runCommand("ping").ok' | mongo mongo:27017/test --quiet 1
interval: 10s
timeout: 10s
retries: 5
collector:
build: ./collector/
environment:
- DB_HOST=${DB_HOST}
- DB_PORT=${DB_PORT}
- DB_NAME=${DB_NAME}
volumes:
- ./collector/:/app
depends_on:
mongo:
condition: service_healthy
server:
build: .
environment:
- SERVER_PORT=$SERVER_PORT
volumes:
- ./server/:/app
ports:
- "${SERVER_PORT}:${SERVER_PORT}"
depends_on:
mongo:
condition: service_healthy
For the test, I've also tried:
["CMD", "nc", "-z", "localhost", "27017"]
And:
["CMD", "bash", "/mongo-healthcheck"]
I've also tried ditching the healthcheck altogether, following the advice of this guy. Everything stands up, but I get the dreaded errors in the output before a successful connection:
collector_1 | MongoDB connection error: MongoNetworkError: failed to connect to server [mongo:27017] on first connect [MongoNetworkError: connect
ECONNREFUSED 172.21.0.2:27017]
collector_1 | MongoDB connection with retry
collector_1 | MongoDB connection error: MongoNetworkError: failed to connect to server [mongo:27017] on first connect
The ultimate goal is a clean startup output when running the docker-compose up --build. I've also looked into some of the solutions in this question, but I haven't had much luck with wait-for-it either. What's the correct way to wait for Mongo to be up and running before starting the other containers, and achieving a clean startup?
Firstly, I'd suggest updating the docker-compose.yaml file version to at least 3.4 (version: "3.5"), then adding the start_period option to your mongo healthcheck.
Note: start_period is only supported for v3.4 and higher of the compose file format.
start period provides initialization time for containers that need time to bootstrap. Probe failure during that period will not be counted towards the maximum number of retries. However, if a health check succeeds during the start period, the container is considered started and all consecutive failures will be counted towards the maximum number of retries.
So it would look something like this:
healthcheck:
  test: echo 'db.runCommand("ping").ok' | mongo mongo:27017/test --quiet
  interval: 10s
  timeout: 10s
  retries: 5
  start_period: 40s
We can use MongoDB's serverStatus command to do the health check; the MongoDB documentation puts it this way:
Monitoring applications can run this command at a regular interval to collect statistics about the instance.
Because the serverStatus command requires authentication, you need to set up the health check similar to the configuration shown below:
version: '3.4'
services:
  mongo:
    image: mongo
    restart: always
    healthcheck:
      test: echo 'db.runCommand({serverStatus:1}).ok' | mongo admin -u $MONGO_INITDB_ROOT_USERNAME -p $MONGO_INITDB_ROOT_PASSWORD --quiet | grep 1
      interval: 10s
      timeout: 10s
      retries: 3
      start_period: 20s
    environment:
      MONGO_INITDB_ROOT_USERNAME: root
      MONGO_INITDB_ROOT_PASSWORD: example
That's it. If your MongoDB instance is healthy, you will see something similar to mine:
$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
01ed0e02aa70 mongo "docker-entrypoint.s…" 11 minutes ago Up 11 minutes (healthy) 27017/tcp demo_mongo_1
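One small follow-up in case Compose warns about unset variables when it parses that test line: doubling the dollar signs defers the substitution to the shell inside the container instead of to Compose itself, e.g.:

test: echo 'db.runCommand({serverStatus:1}).ok' | mongo admin -u $$MONGO_INITDB_ROOT_USERNAME -p $$MONGO_INITDB_ROOT_PASSWORD --quiet | grep 1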
The mongo shell is removed from MongoDB 6.0. The replacement is mongosh.
Check if this works for you:
mongo:
  image: mongo
  healthcheck:
    test: echo 'db.runCommand("ping").ok' | mongosh localhost:27017/test --quiet
Note that you should probably use mongosh if you use newer versions of MongoDB:
healthcheck:
  test: ["CMD", "mongosh", "--eval", "db.adminCommand('ping')"]
  interval: 5s
  timeout: 5s
  retries: 3
  start_period: 5s
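If the check still reports unhealthy, inspecting the container's health log shows the actual probe output, which makes debugging easier (replace the container name with your own):

docker inspect --format "{{json .State.Health}}" <your-mongo-container>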
I found a solution here:
https://github.com/docker-library/healthcheck/tree/master/mongo
Note that it also explains why a health check is not included in the official image:
https://github.com/docker-library/cassandra/pull/76#issuecomment-246054271
docker-healthcheck
#!/bin/bash
set -eo pipefail
if mongo --quiet "localhost/test" --eval 'quit(db.runCommand({ ping: 1 }).ok ? 0 : 2)'; then
exit 0
fi
exit 1
In the example from the link, they use a host variable:
host="$(hostname --ip-address || echo '127.0.0.1')"
if mongo --quiet "$host/test" --eval 'quit(db.runCommand({ ping: 1 }).ok ? 0 : 2)'; then
# continues the same code
It did not work for me, so I replaced the host with localhost.
In docker-compose:
mongo:
  build:
    context: "./mongodb"
    dockerfile: Dockerfile
  container_name: crm-mongo
  restart: always
  healthcheck:
    test: ["CMD", "docker-healthcheck"]
    interval: 10s
    timeout: 2s
    retries: 10
Alternatively, you can execute the health check inside the container. Change the Dockerfile for that:
FROM mongo:4
ADD docker-healthcheck /usr/local/bin/
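If the docker-healthcheck script is not already executable on the host, the check will fail with a permission error, so it may be worth setting the bit explicitly in the image (assuming the same path as the ADD line above):

RUN chmod +x /usr/local/bin/docker-healthcheck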
When I execute the echo 'db.runCommand("ping").ok' | mongo localhost:27017/test --quiet 1 command in the Docker container, the result is:
2019-04-19T02:39:19.770+0000 E - [main] file [1] doesn't exist
failed to load: 1
Try this:
healthcheck:
  test: bash -c "if mongo --eval 'quit(db.runCommand({ ping: 1 }).ok ? 0 : 2)'; then exit 0; fi; exit 1;"
This one worked for me:
healthcheck:
  test: ["CMD", "mongo", "--eval", "db.adminCommand('ping')"]
  interval: 10s
  timeout: 10s
  retries: 5