How to load rejson.so with docker-compose

I want to store JSON in Redis, so I set it up to use the RedisJSON module with docker-compose. But I keep failing to get it running. The configuration is below.
I also tried using a redis.conf filled with the same parameters as the command, but a segmentation fault occurred.
What is wrong with my setup?
docker-compose.yml
version: '3.8'
services:
  redis:
    container_name: redis
    hostname: redis
    image: redis:7.0.0-alpine
    command: redis-server --loadmodule /etc/redis/modules/rejson.so
    volumes:
      - /etc/redis/redis.conf:/etc/redis/redis.conf
      - /etc/redis/modules/rejson.so:/etc/redis/modules/rejson.so
Environment
Node.js Version: 16.14.1
Node Redis Version: 4.0.6
Platform: Mac OS 12.3.1
Edit:
The segmentation fault was caused by a nonexistent includes option in redis.conf.
The messages below were repeated.
What does "Exec format error" mean?
# oO0OoO0OoO0Oo Redis is starting oO0OoO0OoO0Oo
# Redis version=7.0.0, bits=64, commit=00000000, modified=0, pid=1, just started
# Configuration loaded
* monotonic clock: POSIX clock_gettime
# Warning: Could not create server TCP listening socket ::1:6380: bind: Address not available
* Running mode=standalone, port=6380.
# Server initialized
# Module /etc/redis/modules/rejson.so failed to load: Error loading shared library /etc/redis/modules/rejson.so: Exec format error
# Can't load module from /etc/redis/modules/rejson.so: server aborting

After a lot of trial and error, I found out that, as of this writing, the Redis v7 Docker images do not include the rejson module.
I am now using redis/redis-stack-server, which already includes all modules (including rejson). It is based on Redis v6; see https://hub.docker.com/r/redis/redis-stack or https://hub.docker.com/r/redis/redis-stack-server
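As for the "Exec format error" itself: it generally means the shared library was built for a different CPU architecture than the one the container runs on (for example, an x86_64 build of rejson.so loaded by an arm64 Redis on an Apple Silicon Mac). A quick way to check, as a sketch (the module path and container name are the ones from the question):
# On the host: which architecture was the module built for?
file /etc/redis/modules/rejson.so
# Which architecture does the container actually run?
docker exec redis uname -m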
My compose.yaml now looks like this:
version: "3"
services:
redis:
image: redis/redis-stack
volumes:
- redis_data:/data:rw
ports:
- 6379:6379
restart: unless-stopped
volumes:
redis_data:
In the redis-cli you can then give it a try:
127.0.0.1:6379> JSON.SET employee_profile $ '{ "employee": { "name": "alpha", "age": 40,"married": true } } '
OK
127.0.0.1:6379> JSON.GET employee_profile
"{\"employee\":{\"name\":\"alpha\",\"age\":40,\"married\":true}}"
If you still have Docker volumes from previous installations, you should delete them first. Otherwise your new container might have problems reading an older database.
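To confirm that the modules actually loaded, you can ask the server directly; ReJSON should show up in the output (assuming the port mapping from the compose file above):
$ redis-cli -p 6379 MODULE LIST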

Related

Unable to access Redis console in docker container (Mac M1)

version: "3.9"
services:
redis:
image: "redislabs/redismod:latest"
container_name: redis-lab
ports:
- "6379:6379"
platform: linux/amd64
volumes:
- ./data:/data
entrypoint: >
redis-server
--loadmodule /usr/lib/redis/modules/redisai.so
ONNX redisai_onnxruntime/redisai_onnxruntime.so
TF redisai_tensorflow/redisai_tensorflow.so
TFLITE redisai_tflite/redisai_tflite.so
TORCH redisai_torch/redisai_torch.so
--loadmodule /usr/lib/redis/modules/redisearch.so
--loadmodule /usr/lib/redis/modules/redisgraph.so
--loadmodule /usr/lib/redis/modules/redistimeseries.so
--loadmodule /usr/lib/redis/modules/rejson.so
--loadmodule /usr/lib/redis/modules/redisbloom.so
--loadmodule /var/opt/redislabs/lib/modules/redisgears.so
--appendonly yes
#--platform linux/amd64 :
# redis The requested image's platform (linux/amd64) does not match the detected host platform
#(linux/arm64/v8) and no specific platform was requested 0.0s
deploy:
replicas: 1
restart_policy:
condition: on-failure
manager:
container_name: manager_redis
image: "redislabs/redisinsight:latest"
ports:
- "8001:8001"
Fast memory test PASSED, however your memory can still be broken.
Please run a memory test for several hours if possible.
------ DUMPING CODE AROUND EIP ------
Symbol: gsignal (base: 0x40021a1ba0)
Module: /lib/x86_64-linux-gnu/libc.so.6 (base 0x4002166000)
$ xxd -r -p /tmp/dump.hex /tmp/dump.bin
$ objdump --adjust-vma=0x40021a1ba0 -D -b binary -m i386:x86-64 /tmp/dump.bin
Function at 0x40022736f0 is __stack_chk_fail
=== REDIS BUG REPORT END. Make sure to include from START to END. ===
Please report the crash by opening an issue on github:
http://github.com/redis/redis/issues
Suspect RAM error? Use redis-server --test-memory to verify it.
qemu: uncaught target signal 6 (Aborted) - core dumped
I have the container deployed, but there is an error inside it, since the memory test failed. I can't connect to the Redis console, and I also can't connect through the manager (RedisInsight), because the connection is refused.
Does anyone have any ideas how to fix this and why it is happening?
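Not a definitive answer, but the qemu line at the end suggests the amd64 image is being emulated on the arm64 host and crashes inside the emulation. One thing worth trying, as a sketch, is a multi-arch image such as redis/redis-stack from the answer above, which runs natively on arm64 (note it bundles the search, JSON, time series, and Bloom modules, but not RedisAI or RedisGears):
services:
  redis:
    image: redis/redis-stack:latest
    ports:
      - "6379:6379"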

How to resolve docker name resolution failure?

Below is my docker compose file:
version: '3.7'
# Run tests in much the same way as circleci
# docker-compose -f docker-compose.test.yml up
# TODO check aws versions
services:
  db:
    # image: circleci/postgres:11-alpine
    image: kotify/postgres-non-durable:11.2
    env_file: .env-test
    container_name: limetonic_db
    ports:
      - 5433:5432
  redis:
    image: circleci/redis:6.2.1-alpine
  selenium:
    image: selenium/standalone-chrome:89.0
    container_name: limetonic_selenium
    shm_size: '2gb'
    environment:
      TZ: "Australia/Sydney"
    ports:
      - 4444:4444
  test:
    build:
      context: .
      dockerfile: Dockerfile
    env_file: .env-test
    depends_on:
      - redis
      - db
      - selenium
    command: bash -c "make ci.test"
The db, redis, and selenium containers get Docker's embedded DNS (127.0.0.11), while test gets 8.8.8.8. I am spinning up a Django server at test:8000, but it does not come up, failing with a name-resolution error, which I understand comes from the 8.8.8.8 DNS.
I have read many questions on SO, but none of the solutions work. I have modified DOCKER_OPTS, changed dnsmasq, etc. The problem occurs on a stock Ubuntu installation, and none of the changes I made make a difference.
No matter what DNS server the test container uses, it won't resolve test.
Note that db, selenium, and redis can ping each other, but obviously test cannot.
My systemd-resolved has 4.2.2.2 and 8.8.8.8 as DNS servers, which is why test is not resolving. I understand that Docker does not use 127.0.0.11 as the host's DNS. But if that is the case, how can the other containers resolve names with the local DNS? And even if I set the test container's DNS to 127.0.0.11, it still does not resolve test.
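A sketch of how one might narrow this down (service names are the ones from the compose file above; inside containers, Docker's embedded DNS appears as 127.0.0.11 in /etc/resolv.conf and only serves user-defined networks):
# Compare the resolver configuration of a working and a failing container
docker-compose exec redis cat /etc/resolv.conf
docker-compose exec test cat /etc/resolv.conf
# Check whether the test container can resolve a peer by name
# (getent may be missing from minimal images)
docker-compose exec test getent hosts db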

How to make graylog 4 and elasticsearch 7 work with docker compose

I am trying to make a local setup of graylog 4 with elasticsearch 7 and mongo 4 using docker-compose. I am working on a Mac.
Here is my docker-compose.yml: https://gist.github.com/gandra/dc649b37e165d8e3fc5b20c30a8b5a79
After running:
docker-compose up -d --build
I cannot see any data on http://localhost:9000/
When open that url I see :
localhost didn’t send any data.
ERR_EMPTY_RESPONSE
Any idea how to make it work?
Here's the configuration I'm using in my project to get it working (compose v3).
###################################
# Graylog container logging start #
###################################
# Taken from https://docs.graylog.org/en/4.0/pages/installation/docker.html
# MongoDB: https://hub.docker.com/_/mongo/
mongo:
  image: mongo:4.2
# Elasticsearch: https://www.elastic.co/guide/en/elasticsearch/reference/7.10/docker.html
elasticsearch:
  image: docker.elastic.co/elasticsearch/elasticsearch-oss:7.10.2
  environment:
    - http.host=0.0.0.0
    - transport.host=localhost
    - network.host=0.0.0.0
    - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
  ulimits:
    memlock:
      soft: -1
      hard: -1
  deploy:
    resources:
      limits:
        memory: 1g
# Graylog: https://hub.docker.com/r/graylog/graylog/
graylog:
  image: graylog/graylog:4.0
  environment:
    # CHANGE ME (must be at least 16 characters)!
    - GRAYLOG_PASSWORD_SECRET=somepasswordpepper
    # Password: admin
    - GRAYLOG_ROOT_PASSWORD_SHA2=8c6976e5b5410415bde908bd4dee15dfb167a9c873fc4bb8a81f6f2ab448a918
    - GRAYLOG_HTTP_EXTERNAL_URI=http://127.0.0.1:9000/
  entrypoint: /usr/bin/tini -- wait-for-it elasticsearch:9200 -- /docker-entrypoint.sh
  restart: always
  depends_on:
    - mongo
    - elasticsearch
  ports:
    # Graylog web interface and REST API
    - 9000:9000
    # Syslog TCP
    - 1514:1514
    # Syslog UDP
    - 1514:1514/udp
    # GELF TCP
    - 12201:12201
    # GELF UDP
    - 12201:12201/udp
###################################
# Graylog container logging end   #
###################################
I will say, this took a fair bit of time to start. The logs ran for a while as Graylog, MongoDB, and Elasticsearch did their setup work. At the end of it, though, the interface did eventually become available (it took about two full minutes). Until it was ready, I saw the same response that you did.
Graylog does not support Elasticsearch versions 7.11 or greater, so you'll need to change the Elasticsearch version to 7.10.2. Beyond that, what are you seeing in Graylog's server.log?
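If it helps, the easiest way to watch that startup progress is to follow the container logs (the service name graylog matches the compose fragment above):
docker-compose logs -f graylog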

Works on mac, but on windows get ECONNREFUSED, docker-toolbox

This is my docker-compose file, which runs fine when I do docker-compose up -d on Mac. I am now trying this on Windows with docker-toolbox (as Docker Desktop isn't supported on my Windows machine). I run my application on http://localhost:1337, and the application needs to talk to the services inside these containers. It works totally fine on Mac.
version: '3.4'
services:
  # Add a redis instance to which our app can connect. Quite simple.
  redis-dev:
    image: redis:5.0.5-alpine
    ports:
      - 6379:6379
  # Add a postgres instance as our primary data store
  postgres-dev:
    image: postgres:11.5-alpine
    environment:
      - POSTGRES_DB=the-masjid-app
    ports:
      - 5432:5432
    volumes:
      # Here we specify that docker should keep postgres data,
      # so the next time we start docker-compose,
      # our data is intact.
      - the-masjid-app-pgdata-dev:/var/lib/postgresql/data
  # Add a postgres instance as our test data store
  postgres-test:
    image: postgres:11.5-alpine
    environment:
      - POSTGRES_DB=the-masjid-app
    ports:
      - 5433:5432
# Here we can configure settings for the default network
networks:
  default:
# Here we can configure settings for the postgres data volume where our data is kept.
volumes:
  the-masjid-app-pgdata-dev:
Doing the same thing in Windows is giving me:
Error: Redis connection to localhost:6379 failed - connect
ECONNREFUSED 127.0.0.1:6379 at TCPConnectWrap.afterConnect [as
oncomplete] (net.js:1141:16)
Any ideas on how to fix?
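One thing to check, as a sketch: with Docker Toolbox, containers run inside a VirtualBox VM, so ports published by docker-compose are reachable at the VM's IP address rather than at 127.0.0.1 on the Windows host:
# Print the IP of the Docker Toolbox VM (the default machine name is "default")
docker-machine ip default
# Then point the app at <vm-ip>:6379 instead of localhost:6379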

How do I properly set up my Keystone.js app to run in docker with mongo?

I have built my app which runs fine locally. When I try to run it in docker (docker-compose up) it appears to start, but then throws an error message:
Creating mongodb ... done
Creating webcms ... done
Attaching to mongodb, webcms
...
Mongoose connection "error" event fired with:
MongoError: failed to connect to server [localhost:27017] on first connect
...
webcms exited with code 1
I have read that with Keystone.js you need to configure the Mongo location in the .env file, which I have:
MONGO_URI=mongodb://localhost:27017
Here is my Dockerfile:
# Use node 9.4.0
FROM node:9.4.0
# Copy source code
COPY . /app
# Change working directory
WORKDIR /app
# Install dependencies
RUN npm install
# Expose API port to the outside
EXPOSE 3000
# Launch application
CMD ["node","keystone"]
...and my docker-compose
version: "2"
services:
# NodeJS app
web:
container_name: webcms
build: .
ports:
- 3000:3000
depends_on:
- mongo
# MongoDB
mongo:
container_name: mongo
image: mongo
volumes:
- ./data:/data/db/mongo
ports:
- 27017:27017
When I run docker ps it confirms that mongo is up and running in a container...
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
f3e06e4a5cfe mongo "docker-entrypoint.s…" 2 hours ago Up 2 hours 0.0.0.0:27017->27017/tcp mongodb
I am either missing some config or I have it configured incorrectly. Could someone tell me what that is?
Any help would be appreciated.
Thanks!
It is not working properly because you are using the wrong host.
Your container does not understand what localhost:27017 is, since that is your computer's address, not the container's address.
It is important to understand that each service runs in its own container with a different IP.
The beauty of docker-compose is that you do not need to know your container's address; knowing your service name is enough:
version: "2"
volumes:
db-data:
driver: local
services:
web:
build: .
ports:
- 3000:3000
depends_on:
- mongo
environment:
- MONGO_URI=mongodb://mongo:27017
mongo:
image: mongo
volumes:
- "db-data:/data/db/mongo"
ports:
- 27017:27017
Just run docker-compose up and you are all set.
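If you want to verify the name resolution before wiring up the app, something like this should work (web and mongo are the service names from the compose file above):
# From inside the web container, resolve the mongo service by name
docker-compose exec web getent hosts mongo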
A couple of things that may help.
First: I am not sure what your error logs look like, but buried in my error logs was:
...Error: The cookieSecret config option is required when running Keystone in a production environment.Update your app or environment config so this value is supplied to the Keystone constructor....
To solve this problem, in your Keystone entry file (e.g. index.js) make sure your Keystone constructor has the cookieSecret parameter set correctly: process.env.NODE_ENV === 'production'
Next, change the Mongo URI from the one Keystone generated (mongoUri: mongodb://localhost/my-keystone) to mongoUri: 'mongodb://mongo:27017'. Docker needs this because it is the mongo container's address. This change should also be reflected in your docker-compose file under the MONGO_URI environment variable:
...
environment:
  - MONGO_URI=mongodb://mongo:27017
...
After these changes your Keystone constructor should look like this:
const keystone = new Keystone({
  adapter: new Adapter(adapterConfig),
  cookieSecret: process.env.NODE_ENV === 'production',
  sessionStore: new MongoStore({ url: 'mongodb://mongo:27017' }),
});
And your docker-compose file should look something like this (I used a network instead of links, as Docker has stated that links are a legacy option; I've included mine in case it's useful for anyone else):
version: "3.3"
services:
mongo:
image: mongo
networks:
- appNetwork
ports:
- "27017:27017"
environment:
- MONGO_URI=mongodb://mongo:27017
appservice:
build:
context: ./my-app
dockerfile: Dockerfile
networks:
- appNetwork
ports:
- "3000:3000"
networks:
appNetwork:
external: false
It is better to use MongoDB Atlas if you do not want complications. You can use it both locally and in deployment.
Simple steps to get the Mongo URL are available at https://www.mongodb.com/cloud/atlas
Then add an env variable:
CONNECT_TO=mongodb://your_url
To pass the .env file to Docker, use:
docker run --publish 8000:3000 --env-file .env --detach --name kb keystoneblog:1.0
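If you are running through docker-compose instead of plain docker run, the equivalent is the env_file option (the image name and port mapping here mirror the docker run example above):
services:
  kb:
    image: keystoneblog:1.0
    env_file: .env
    ports:
      - "8000:3000"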