Failed to mount volume with SingleStore "cluster-in-a-box" Docker image - docker-compose

I'm using the 'singlestore/cluster-in-a-box:latest' image and trying to map the SingleStore data to a custom location on my host machine (Windows), but no luck so far.
I've tried the same approach I use for any other container:
volumes:
  - ./data/singlestore-data:/var/lib/memsql
but this gives me the following errors when I run it:
singlestoreissue | ✓ Set up runtime environment
singlestoreissue | Starting Memsql nodes
singlestoreissue | ✘ Failed to start node with node ID 9DA037A1695B23997F48D58E5098A1B88F108D47 (1/2)
singlestoreissue | Starting nodes
singlestoreissue | ✘ Failed to start node with node ID F395C09DFE9841A27B18DF7F1FC5B9DD27DEB389 (2/2)
singlestoreissue | Starting nodes
singlestoreissue | ✓ Started 0 nodes
singlestoreissue | Latest errors from the database's tracelog:
singlestoreissue |
singlestoreissue | : Failed to connect to the database: process exited: exit status 255
singlestoreissue exited with code 1
I've also tried other ways:
volumes:
  - singlestore-data:/var/lib/memsql
....
volumes:
  singlestore-data:
    driver: local
    driver_opts:
      o: bind
      type: none
      device: ./data/singlestore-data
Error response from daemon: failed to mount local volume: mount data/singlestore-data:/var/lib/docker/volumes/singlestoreissue2_singlestore-data/_data, flags: 0x1000: no such file or directory
volumes:
  - singlestore-data:/var/lib/memsql
....
volumes:
  singlestore-data:
    driver: local
    driver_opts:
      o: bind
      type: none
      device: ${PWD}/data/singlestore-data
time="2023-01-11T16:29:34Z" level=warning msg="The "PWD" variable is not set. Defaulting to a blank string."
Running 0/0
Container singlestoreneat Creating 0.1s
Error response from daemon: failed to mount local volume: mount data/singlestore-data:/var/lib/docker/volumes/singlestoreissue2_singlestore-data/_data, flags: 0x1000: no such file or directory
Am I missing something here?
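For what it's worth, two details in the attempts above suggest likely causes: the local volume driver's bind option requires an absolute path to a directory that already exists (hence the "no such file or directory" errors for the relative ./data/... path and the blank ${PWD}), and Windows bind mounts often lack the ownership and permission semantics a database engine expects. A minimal sketch of the plain named-volume variant, which lets Docker manage the storage location (no driver_opts, so no host path is involved; the service name is illustrative):
services:
  singlestore:
    image: singlestore/cluster-in-a-box:latest
    # (image-specific environment variables omitted for brevity)
    volumes:
      # Docker-managed named volume: persists across container restarts
      # and sidesteps Windows bind-mount permission problems.
      - singlestore-data:/var/lib/memsql
volumes:
  singlestore-data: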

Related

File permission issue with mongo docker image

I'm trying to instantiate a MongoDB docker image using the following command:
docker run -e MONGO_INITDB_ROOT_USERNAME=root -e MONGO_INITDB_ROOT_PASSWORD=password mongo
The command fails instantly because of a Permission denied:
2019-11-12T20:16:29.503+0000 I CONTROL [main] ERROR: Cannot write pid file to /tmp/docker-entrypoint-temp-mongod.pid: Permission denied
The weird thing is that the same command works fine on some other machines that have the same users, groups... The only thing that differs is the Docker version.
I don't understand why the mongo instance does not run, as I have no volumes or user specified on the command line.
Here is my docker info
Client:
 Debug Mode: false

Server:
 Containers: 29
  Running: 1
  Paused: 0
  Stopped: 28
 Images: 87
 Server Version: 19.03.4
 Storage Driver: overlay2
  Backing Filesystem: extfs
  Supports d_type: true
  Native Overlay Diff: true
 Logging Driver: json-file
 Cgroup Driver: cgroupfs
 Plugins:
  Volume: local
  Network: bridge host ipvlan macvlan null overlay
  Log: awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog
 Swarm: inactive
 Runtimes: runc
 Default Runtime: runc
 Init Binary: docker-init
 containerd version: b34a5c8af56e510852c35414db4c1f4fa6172339
 runc version: 3e425f80a8c931f88e6d94a8c831b9d5aa481657
 init version: fec3683
 Security Options:
  seccomp
   Profile: default
 Kernel Version: 4.9.0-11-amd64
 Operating System: Debian GNU/Linux 9 (stretch)
 OSType: linux
 Architecture: x86_64
 CPUs: 4
 Total Memory: 25.52GiB
 Name: jenkins-vm
 ID: YIGQ:YOVJ:2Y7F:LM77:VHK6:ICMY:QDGA:5EFD:ZYDD:EQM5:DR77:DANT
 Docker Root Dir: /data/var/lib/docker
 Debug Mode: false
 Registry: https://index.docker.io/v1/
 Labels:
 Experimental: false
 Insecure Registries:
  127.0.0.0/8
 Live Restore Enabled: false

WARNING: No swap limit support
And as suggested by @jan-garaj, here is the result of docker run -e MONGO_INITDB_ROOT_USERNAME=root -e MONGO_INITDB_ROOT_PASSWORD=password mongo id:
uid=0(root) gid=0(root) groups=0(root)
What could be the reason for this failure?
You may have a problem with some security configuration. Check and compare the docker info outputs. You may have enabled user namespaces (userns-remap), a special seccomp profile, SELinux policies, a weird storage driver, a full disk, ...
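One way to follow that advice and spot the difference, assuming SSH access to one of the machines where the image runs fine (the hostname is illustrative):
# Diff the local daemon's configuration against the working machine's.
diff <(docker info) <(ssh user@working-host docker info)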
It could be because of the SELinux policy. Edit the config file at /etc/selinux/config and set:
SELINUX=disabled
Then reboot your system and try to run the image again.
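Before disabling SELinux permanently, it may be worth checking whether it is really the culprit; switching to permissive mode takes effect immediately, without a reboot:
# Temporarily set SELinux to permissive mode, then retry the container.
# Revert with `sudo setenforce 1`.
sudo setenforce 0
docker run -e MONGO_INITDB_ROOT_USERNAME=root -e MONGO_INITDB_ROOT_PASSWORD=password mongo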

Why does docker-compose fail and docker run succeed with PostgreSQL?

I am trying to start a PostgreSQL docker container on my Mac; I use OSX 10.11.16 El Capitan with Docker Toolbox 19.03.01.
If I run:
docker run --name my_postgres -v my_dbdata:/var/lib/postgresql/data -p 54320:5432 postgres:11
everything works and I get:
my_postgres | 2019-09-17 04:51:48.908 UTC [41] LOG: database system is ready to accept connections
but if i use an .yml file like this one:
docker-compose.yml:
version: "3"
services:
db:
image: "postgres:11"
container_name: "my_postgres"
ports:
- "54320:5432"
volumes:
- my_dbdata:/var/lib/postgresql/data
volumes:
my_dbdata:
and run
docker-compose up
I get instead:
my_postgres | /usr/local/bin/docker-entrypoint.sh: ignoring /docker-entrypoint-initdb.d/*
my_postgres |
my_postgres | 2019-09-17 04:51:49.009 UTC [41] LOG: received fast shutdown request
my_postgres | 2019-09-17 04:51:49.011 UTC [41] LOG: aborting any active transactions
my_postgres | waiting for server to shut down....2019-09-17 04:51:49.087 UTC [41] LOG: background worker "logical replication launcher" (PID 48) exited with exit code 1
my_postgres | 2019-09-17 04:51:49.091 UTC [43] LOG: shutting down
my_postgres | 2019-09-17 04:51:49.145 UTC [41] LOG: database system is shut down
Why does the same thing fail with docker-compose?
Many thanks in advance.
Try the one below; it worked for me:
version: '3.1'
services:
  db:
    image: postgres
    restart: always
    environment:
      POSTGRES_PASSWORD: mypassword
    volumes:
      - ./postgres-data:/var/lib/postgresql/data
    ports:
      - 5432:5432
Then use docker-compose up. To follow the logs after running the previous command, use:
docker-compose logs -f
If you are trying to access an existing volume on the host machine, you need to specify that the volume was created outside the Compose file with the external keyword, like this:
version: "3.7"
services:
db:
image: postgres
volumes:
- data:/var/lib/postgresql/data
volumes:
data:
external: true
I took the example from the Compose file reference https://docs.docker.com/compose/compose-file/.
Also double-check the contents of your external volume between runs, to see if it was overridden.
Please also double-check your quotes; you don't need to put the image name in quotes, but I don't think that's the issue.
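Note that Compose does not create an external volume; it must already exist or docker-compose up fails, so create it up front:
docker volume create data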
The my_dbdata named volume is not the same in the two cases.
docker run creates a volume named my_dbdata, whereas docker-compose by default creates a volume called <dir>_my_dbdata.
Run docker volume ls to list the volumes:
docker volume ls | grep my_dbdata
I suspect the volume created by docker-compose has issues, and as a consequence postgres doesn't start correctly. The initialization of the database in the my_postgres container is done only once.
Try to remove the container and the volume created by docker-compose:
docker rm my_postgres
docker volume rm <dir>_my_dbdata
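Alternatively, assuming the stack was started with docker-compose up, both steps can be combined:
# Stops and removes the compose containers and networks, and (-v) also
# removes the named volumes declared in the compose file.
docker-compose down -v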
Hope it helps

Docker [for Mac] file system became read-only, which breaks almost all features of Docker

My Docker ran into an error state where I cannot use it anymore.
output of docker system info:
Containers: 14
 Running: 2
 Paused: 0
 Stopped: 12
Images: 61
Server Version: 18.03.1-ce
Storage Driver: overlay2
 Backing Filesystem: extfs
 Supports d_type: true
 Native Overlay Diff: true
Logging Driver: json-file
Cgroup Driver: cgroupfs
Plugins:
 Volume: local
 Network: bridge host macvlan null overlay
 Log: awslogs fluentd gcplogs gelf journald json-file logentries splunk syslog
Swarm: error
 NodeID:
 Error: open /var/lib/docker/swarm/worker/tasks.db: read-only file system
 Is Manager: false
 Node Address: 192.168.65.3
Runtimes: runc
Default Runtime: runc
Init Binary: docker-init
containerd version: 773c489c9c1b21a6d78b5c538cd395416ec50f88
runc version: 4fc53a81fb7c994640722ac585fa9ca548971871
init version: 949e6fa
Security Options:
 seccomp
  Profile: default
Kernel Version: 4.9.87-linuxkit-aufs
Operating System: Docker for Mac
OSType: linux
Architecture: x86_64
CPUs: 2
Total Memory: 1.952GiB
Name: linuxkit-025000000001
ID: MCSC:SFXH:R3JC:NU4D:OJ5V:K4B5:LPMJ:2BFL:LHT3:LYCI:XKY2:DTE6
Docker Root Dir: /var/lib/docker
Debug Mode (client): false
Debug Mode (server): false
HTTP Proxy: docker.for.mac.http.internal:3128
HTTPS Proxy: docker.for.mac.http.internal:3129
Registry: https://index.docker.io/v1/
Labels:
Experimental: false
Insecure Registries:
 127.0.0.0/8
Live Restore Enabled: false
This behaviour occurred after I built the following Dockerfile:
FROM perl:5.20
RUN apt-get update && apt-get install -y libsoap-lite-perl \
    && rm -rf /var/lib/apt/lists/*
RUN cpan SOAP::LITE
The error message when I try to build an image, run a container, or remove an image is always similar to this:
Error: open /var/lib/docker/swarm/worker/tasks.db: read-only file system
for example if I try to execute this command:
docker container run -it perl:5.20 bash
I get this error:
docker: Error response from daemon: mkdir /var/lib/docker/overlay2/1b966e163e500a8c78a64e8d0f14984b091c1c5fe188a60b8bd030672d3138d9-init: read-only file system.
How can I reset my docker so these errors go away?
Go to your Docker for Mac icon in the menu bar (top right), click on it, and then click Restart.
After that Docker works as expected.
This seems to be a temporary issue, since I cannot reproduce it after restarting Docker. My guess is that I had a network communication breakdown while Docker tried to download and install the packages in the Dockerfile.
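If the menu-bar icon itself is unresponsive, restarting from a terminal should achieve the same thing (a sketch for Docker for Mac; it assumes the application is named "Docker"):
# Quit the Docker Desktop application, then relaunch it.
osascript -e 'quit app "Docker"'
open -a Docker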

Seed MongoDB replica set

I want to create a replica set of 3 nodes using docker-compose and seed initial data into them. If I remove --replSet and seed data without specifying hosts, I have no problems.
docker-compose.yml
master:
  image: 'mongo:3.4'
  ports:
    - '50000:27017'
  volumes:
    - './restaurants.json:/restaurants.json'
    - './setup.js:/docker-entrypoint-initdb.d/00_setup.js'
    - './seed.sh:/docker-entrypoint-initdb.d/01_seed.sh'
  command: '--replSet rs'
slave1:
  image: 'mongo:3.4'
  ports:
    - '50001:27017'
  command: '--replSet rs'
slave2:
  image: 'mongo:3.4'
  ports:
    - '50002:27017'
  command: '--replSet rs'
seed.sh
# ...
_wait "slave1"
_wait "slave2"
echo "Starting to import data..."
mongoimport --host="rs/master:27017,slave1:27017,slave2:27017" --db db --collection restaurants --drop --file /restaurants.json > /dev/null
echo "Done."
Log
master_1 | /usr/local/bin/docker-entrypoint.sh: running /docker-entrypoint-initdb.d/01_seed.sh
master_1 | Waiting for slave1...
master_1 | .
master_1 | Done.
master_1 | Waiting for slave2...
master_1 | Done.
master_1 | Starting to import data...
master_1 | 2017-11-26T16:06:39.148+0000 [........................] db.restaurants 0B/11.3MB (0.0%)
master_1 | 2017-11-26T16:06:39.653+0000 [........................] db.restaurants 0B/11.3MB (0.0%)
master_1 | 2017-11-26T16:06:39.653+0000 Failed: error connecting to db server: no reachable servers
master_1 | 2017-11-26T16:06:39.653+0000 imported 0 documents
mongoreplication_master_1 exited with code 1
This question is old, but I ran into the same issue recently. It's worth noting that the mongo docker-entrypoint.sh script strips the --replSet argument during the initdb phase, see:
https://github.com/docker-library/mongo/blob/master/3.6/docker-entrypoint.sh#L237
So you can't connect to the host that is running the init scripts. However, you can create another container with the sole purpose of initializing the replica set, and override its docker-entrypoint.sh.
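A sketch of that separate-initializer idea, in the same v1 compose format as the question (the service name and the sleep-based wait are illustrative assumptions; a retry loop would be more robust):
rs-init:
  image: 'mongo:3.4'
  links:
    - master
    - slave1
    - slave2
  # Overriding the entrypoint bypasses the initdb machinery entirely;
  # this container only initiates the replica set and then exits.
  entrypoint:
    - bash
    - -c
    - >
      sleep 10 &&
      mongo --host master:27017 --eval
      'rs.initiate({_id: "rs", members: [
        {_id: 0, host: "master:27017"},
        {_id: 1, host: "slave1:27017"},
        {_id: 2, host: "slave2:27017"}]})'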

Volume mount haproxy socket from HAProxy docker container

I want to give a peer container access to /var/run/haproxy.sock. Unfortunately, it throws an error when I try to do this through bind mounting with a named volume. Is it possible to share haproxy.sock with other containers? I assume it is, so I wonder which piece I am missing here. Probably permissions, but how do I set them correctly?
worker1 | <7>haproxy-systemd-wrapper: executing /usr/local/sbin/haproxy -p /run/haproxy.pid -f /usr/local/etc/haproxy/haproxy.cfg -Ds
worker1 | [ALERT] 182/075644 (6) : Starting frontend GLOBAL: error when trying to preserve previous UNIX socket [/var/run/haproxy.sock]
worker1 | <5>haproxy-systemd-wrapper: exit, haproxy RC=1
I have the following config in haproxy.cfg:
global
    maxconn 8204
    tune.ssl.default-dh-param 2048
    stats socket /var/run/haproxy.sock mode 660 level admin
    stats timeout 30s
I use docker-compose to start my containers in swarm mode:
version: '3.2'
services:
  haproxy:
    image: haproxy:1.7.7
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - "/home/ubuntu/haproxy/haproxy.cfg:/usr/local/etc/haproxy/haproxy.cfg:ro"
      - "socket:/var/run/haproxy.sock:rw"
    ulimits:
      nofile:
        soft: 16479
        hard: 16479
    deploy:
      placement:
        constraints:
          - node.hostname==worker1
volumes:
  socket: {}
Named volumes can only be directories, not single files. As a result, this line:
"socket:/var/run/haproxy.sock:rw"
will attempt to mount a directory (the "socket" volume) at the location /var/run/haproxy.sock inside the container.
If the location of haproxy.sock is configurable, you may try something like:
"socket:/my-haproxy-socket-directory"
(the socket itself would be located at /my-haproxy-socket-directory/haproxy.sock inside the container)
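Putting that together, a sketch of the directory-based approach (the peer service name, image, and socket directory are illustrative assumptions): point the stats socket at a file inside a dedicated directory in haproxy.cfg, then mount the same named volume into both containers.
# haproxy.cfg: the socket now lives in a directory that can be shared
global
    stats socket /var/run/haproxy-socket/haproxy.sock mode 660 level admin

# docker-compose.yml (abridged)
version: '3.2'
services:
  haproxy:
    image: haproxy:1.7.7
    volumes:
      - "socket:/var/run/haproxy-socket"
  peer:
    image: my-peer-image  # illustrative
    volumes:
      # Same named volume: the peer sees /var/run/haproxy-socket/haproxy.sock
      - "socket:/var/run/haproxy-socket"
volumes:
  socket: {}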