Unable to access Redis console in docker container (Mac M1) - docker-compose

version: "3.9"
services:
redis:
image: "redislabs/redismod:latest"
container_name: redis-lab
ports:
- "6379:6379"
platform: linux/amd64
volumes:
- ./data:/data
entrypoint: >
redis-server
--loadmodule /usr/lib/redis/modules/redisai.so
ONNX redisai_onnxruntime/redisai_onnxruntime.so
TF redisai_tensorflow/redisai_tensorflow.so
TFLITE redisai_tflite/redisai_tflite.so
TORCH redisai_torch/redisai_torch.so
--loadmodule /usr/lib/redis/modules/redisearch.so
--loadmodule /usr/lib/redis/modules/redisgraph.so
--loadmodule /usr/lib/redis/modules/redistimeseries.so
--loadmodule /usr/lib/redis/modules/rejson.so
--loadmodule /usr/lib/redis/modules/redisbloom.so
--loadmodule /var/opt/redislabs/lib/modules/redisgears.so
--appendonly yes
#--platform linux/amd64 :
# redis The requested image's platform (linux/amd64) does not match the detected host platform
#(linux/arm64/v8) and no specific platform was requested 0.0s
deploy:
replicas: 1
restart_policy:
condition: on-failure
manager:
container_name: manager_redis
image: "redislabs/redisinsight:latest"
ports:
- "8001:8001"
Fast memory test PASSED, however your memory can still be broken.
Please run a memory test for several hours if possible.
------ DUMPING CODE AROUND EIP ------
Symbol: gsignal (base: 0x40021a1ba0)
Module: /lib/x86_64-linux-gnu/libc.so.6 (base 0x4002166000)
$ xxd -r -p /tmp/dump.hex /tmp/dump.bin
$ objdump --adjust-vma=0x40021a1ba0 -D -b binary -m i386:x86-64 /tmp/dump.bin
Function at 0x40022736f0 is __stack_chk_fail
=== REDIS BUG REPORT END. Make sure to include from START to END. ===
Please report the crash by opening an issue on github:
    http://github.com/redis/redis/issues
Suspect RAM error? Use redis-server --test-memory to verify it.
qemu: uncaught target signal 6 (Aborted) - core dumped
I have the container deployed, but the server inside it crashes (see the bug report above). I can't connect to the Redis console, and I also can't connect through RedisInsight (the manager service) because the connection is refused.
Does anyone have any idea how to fix this, and why it is happening?
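A quick way to confirm that the container is crash-looping under emulation (standard docker commands; redis-lab is the container_name from the compose file above):

docker compose ps                          # is redis-lab restarting or exited?
docker logs redis-lab                      # the qemu "signal 6 (Aborted)" trace above comes from here
docker exec -it redis-lab redis-cli ping   # only answers PONG while the server is actually up

The crash happens inside qemu while emulating linux/amd64 on the arm64 host, so an image that ships a native arm64 build (for example redis/redis-stack-server, which comes up in an answer below) may avoid the emulation entirely; that is an assumption, not a confirmed fix.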

Related

Docker compose read connection reset by peer error on pipeline

When running a docker compose in a pipeline, I'm getting this error when the tests in the pipeline make use of mycontainer's API.
panic: Get "http://localhost:8080/api/admin/folder": read tcp 127.0.0.1:60066->127.0.0.1:8080: read: connection reset by peer [recovered]
panic: Get "http://localhost:8080/api/admin/folder": read tcp 127.0.0.1:60066->127.0.0.1:8080: read: connection reset by peer
This is my docker compose file:
version: "3"
volumes:
postgres_vol:
driver: local
driver_opts:
o: size=1000m
device: tmpfs
type: tmpfs
networks:
mynetwork:
driver: bridge
services:
postgres:
image: postgres:14
container_name: postgres
restart: always
environment:
- POSTGRES_USER=xxx
- POSTGRES_PASSWORD=xxx
- POSTGRES_DB=newdatabase
volumes:
#- ./postgres-init-db.sql:/docker-entrypoint-initdb.d/postgres-init-db.sql
- "postgres_vol:/var/lib/postgresql/data"
ports:
- 5432:5432
networks:
- mynetwork
mycontainer:
image: myprivaterepo/mycontainer-image:1.0.0
container_name: mycontainer
restart: always
environment:
- DATABASE_HOST=postgres
- DATABASE_PORT=5432
- DATABASE_NAME=newdatabase
- DATABASE_USERNAME=xxx
- DATABASE_PASSWORD=xxx
- DATABASE_SSL=false
depends_on:
- postgres
ports:
- 8080:8080
networks:
- mynetwork
mycontainer is listening on port 8080, and locally everything works fine.
However, when I run the pipeline that brings up this docker compose, I get the error above.
Basically, I'm running some tests in the pipeline that make use of mycontainer's API (http://localhost:8080/api/admin/folder).
If I run the docker compose locally and reproduce the steps followed in my pipeline to use the API, everything works fine: I can communicate locally with both containers through localhost.
I also tried using healthchecks on the containers and binding 127.0.0.1:8080:8080 on mycontainer and 127.0.0.1:5432:5432 on postgres (including 0.0.0.0:8080:8080 and 0.0.0.0:5432:5432, just in case).
Any idea about that?
I was able to reproduce your error in a pipeline.
Make sure that you are not caching anything (e.g. the code that's interacting with your container's API).
You did not mention anything related to your pipeline, but just in case, remove the caching in your pipeline.
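Independent of caching, a common cause of read: connection reset by peer in CI is tests starting before the service in the container is ready to accept connections. A minimal wait loop to run in the pipeline before the tests (a sketch; the endpoint is the one from the question and may need auth headers in practice):

until curl -fsS http://localhost:8080/api/admin/folder >/dev/null; do
  sleep 1
done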

Rundeck from docker: no logs?

I'm running Rundeck from docker (default backend), but I noticed there are no logs. This documentation seems incomplete / not valid for a docker deployment: https://docs.rundeck.com/docs/administration/maintenance/logs.html
All the logs inside docker:/home/rundeck/server/logs have size 0.
How can I review the logs when running in docker?
Thanks,
The execution logs are stored at the /home/rundeck/var/logs/rundeck path, so a good idea is to mount it as a volume (to see them in your local filesystem). Take a look at this docker-compose example:
version: '3'
services:
  rundeck:
    image: rundeck/rundeck:4.2.1
    environment:
      RUNDECK_GRAILS_URL: http://localhost:4440
      RUNDECK_DATABASE_DRIVER: org.mariadb.jdbc.Driver
      RUNDECK_DATABASE_USERNAME: rundeck
      RUNDECK_DATABASE_PASSWORD: rundeck
      RUNDECK_DATABASE_URL: jdbc:mariadb://mysql/rundeck?autoReconnect=true&useSSL=false&allowPublicKeyRetrieval=true
      RUNDECK_LOGGING_STRATEGY: FILE
    volumes:
      - ./data/logs/:/home/rundeck/var/logs/rundeck/
    ports:
      - 4440:4440
    tty: true
  mysql:
    image: mysql:8
    expose:
      - 3306
    environment:
      - MYSQL_ROOT_PASSWORD=root
      - MYSQL_DATABASE=rundeck
      - MYSQL_USER=rundeck
      - MYSQL_PASSWORD=rundeck
The service.log is available via the docker logs command; to follow it, just do docker logs <container_id> -f.
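With the volume mapping from the compose file above, execution logs also land on the host, so you can inspect them without entering the container (a quick check, assuming the stack is up and a job has run):

docker compose up -d
# run a job in the web UI, then list the mounted log directory on the host:
ls -R ./data/logs/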

How to load rejson.so with docker-compose

I want to store the JSON type in Redis, so I set up the RedisJSON module with docker-compose. But I keep failing to run it; the code is below.
I also tried using a redis.conf filled with the same parameters as the command, but a segmentation fault occurred.
What's wrong with my setup?
docker-compose.yml
version: '3.8'
services:
  redis:
    container_name: redis
    hostname: redis
    image: redis:7.0.0-alpine
    command: redis-server --loadmodule /etc/redis/modules/rejson.so
    volumes:
      - /etc/redis/redis.conf:/etc/redis/redis.conf
      - /etc/redis/modules/rejson.so:/etc/redis/modules/rejson.so
Environment
Node.js Version: 16.14.1
Node Redis Version: 4.0.6
Platform: Mac OS 12.3.1
Edited
The segmentation fault was caused by a nonexistent includes option.
The messages below were repeated.
What does Exec format error mean?
# oO0OoO0OoO0Oo Redis is starting oO0OoO0OoO0Oo
# Redis version=7.0.0, bits=64, commit=00000000, modified=0, pid=1, just started
# Configuration loaded
* monotonic clock: POSIX clock_gettime
# Warning: Could not create server TCP listening socket ::1:6380: bind: Address not available
* Running mode=standalone, port=6380.
# Server initialized
# Module /etc/redis/modules/rejson.so failed to load: Error loading shared library /etc/redis/modules/rejson.so: Exec format error
# Can't load module from /etc/redis/modules/rejson.so: server aborting
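For context: an Exec format error from the dynamic loader usually means the shared object was built for a different CPU architecture than the container is running on (on Apple Silicon, linux containers run as arm64 by default, while many prebuilt rejson.so binaries are x86-64). One way to check, assuming the file utility is available where rejson.so lives:

file /etc/redis/modules/rejson.so
# e.g. "ELF 64-bit LSB shared object, x86-64, ..." cannot be loaded by an arm64 redis-server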
After a lot of trial and error, I found out that, as of this time, the redis v7 docker images seem to lack the rejson module.
I am now using redis/redis-stack, which already includes all modules (including rejson). It is based upon redis v6; see https://hub.docker.com/r/redis/redis-stack or https://hub.docker.com/r/redis/redis-stack-server
My compose.yaml now looks like this:
version: "3"
services:
redis:
image: redis/redis-stack
volumes:
- redis_data:/data:rw
ports:
- 6379:6379
restart: unless-stopped
volumes:
redis_data:
In the redis-cli you can then give it a try:
127.0.0.1:6379> JSON.SET employee_profile $ '{ "employee": { "name": "alpha", "age": 40,"married": true } } '
OK
127.0.0.1:6379> JSON.GET employee_profile
"{\"employee\":{\"name\":\"alpha\",\"age\":40,\"married\":true}}"
If you still have Docker volumes from previous installations, you should delete them first; otherwise your new container might have problems reading an older database.
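Cleaning them up only needs standard docker commands (careful: this permanently deletes the old database):

docker compose down -v                # removes this project's containers and named volumes
docker volume ls                      # or locate the old volume by hand ...
docker volume rm <old_volume_name>    # ... and remove it (the name depends on your project)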

CouchDB with docker-compose not reachable from host (but from localhost)

I am setting up CouchDB using docker-compose with the docker-compose.yml below (a minimal example):
version: "3.6"
services:
couchdb:
container_name: couchdb
image: apache/couchdb:2.2.0
restart: always
ports:
- 5984:5984
volumes:
- ./test/couchdb/data:/opt/couchdb/data
environment:
- 'COUCHDB_USER=admin'
- 'COUCHDB_PASSWORD=password'
couchdb_setup:
depends_on: ['couchdb']
container_name: couchdb_setup
image: apache/couchdb:2.2.0
command: ['/bin/bash', '-x', '-c', 'cat /usr/local/bin/couchdb_setup.sh | tr -d "\r" | bash']
volumes:
- ./scripts/couchdb_setup.sh:/usr/local/bin/couchdb_setup.sh:ro
environment:
- 'COUCHDB_USER=admin'
- 'COUCHDB_PASSWORD=password'
The second container executes the script ./scripts/couchdb_setup.sh, which starts with:
until curl -f http://couchdb:5984; do
sleep 1
done
Now, the issue is that the curl call always returns The requested URL returned error: 502 Bad Gateway. I figured out that CouchDB is only listening on http://localhost:5984 but not on http://couchdb:5984, as is evident when I bash into the couchdb container and issue both curls: for http://localhost:5984 I get the expected response, while http://couchdb:5984 as well as http://<CONTAINER_IP>:5984 (that's http://192.168.32.2:5984 in my case) respond with server 192.168.32.2 is unreachable.
I looked into the configs, especially the [chttpd] settings and their bind_address argument. By default, bind_address is set to any, but I have also tried 0.0.0.0, to no avail.
I'm looking for hints on what I did wrong and advice on how to set up CouchDB with docker-compose. Any help is appreciated.
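One thing worth checking (a sketch, not a confirmed fix): the official apache/couchdb image reads extra config from /opt/couchdb/etc/local.d/, so the bind address can be pinned explicitly by mounting a small override ini; the file name 10-bind.ini is arbitrary:

; 10-bind.ini
[chttpd]
bind_address = 0.0.0.0

and in the couchdb service of the compose file, an additional volume entry:

    volumes:
      - ./10-bind.ini:/opt/couchdb/etc/local.d/10-bind.ini:ro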

nvidia-docker-compose launches a container, but it exits soon

My docker-compose.yml file:
version: '2'
services:
  zl:
    image: zl/caffe-torch-gpu:12.27
    ports:
      - "8801:8888"
      - "6001:6008"
    devices:
      - /dev/nvidia0
    volumes:
      - ~/dl-data:/root/dl-data
After nvidia-docker-compose up -d, the container launched but exited soon after.
But when I launch a container the nvidia-docker way, it works well:
nvidia-docker run -itd -p 6008:6006 -p 8808:8888 -v `pwd`:/root/dl-data --name zl_test
You don't have to use nvidia-docker-compose.
By configuring the nvidia-docker plugin correctly, you can just use docker-compose!
Via the nvidia-docker git repo (I can confirm it works for me):
Step 1:
Figure out the nvidia driver version (it matters).
Run:
nvidia-smi
output:
+-----------------------------------------------------------------+
| NVIDIA-SMI 367.57                       Driver Version: 367.57  |
|-------------------------------+----------------------+----------+
Step 2:
Create a docker volume that uses the nvidia-docker plugin. This must be done outside of compose, as compose will mangle the volume name if it creates it.
docker volume create --name=nvidia_driver_367.57 -d nvidia-docker
Step 3:
In the docker-compose.yml file:
version: '2'
volumes:
  nvidia_driver_367.57: # same name as the one created above
    external: true # this will use the volume we created above
services:
  cuda:
    command: nvidia-smi
    devices: # this is required
      - /dev/nvidiactl
      - /dev/nvidia-uvm
      - /dev/nvidia0 # in general: /dev/nvidia# where # depends on which gpu card is to be used
    image: nvidia/cuda
    volumes:
      - nvidia_driver_367.57:/usr/local/nvidia/:ro
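With the external volume from Step 2 in place, plain docker-compose can start the GPU container; nvidia-smi inside the cuda service should print the same table as on the host (a quick smoke test against the compose file above):

docker-compose up cuda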