If I use the default redis docker image configured with docker compose as such:
redis-storage:
image: redis:7.0
container_name: 'redis-storage'
command: ["redis-server", "--save", "1200", "32", "--loglevel", "warning"]
volumes:
- redis-storage-data:/data
it starts up fine and writes to disk every 20 minutes if there have been at least 32 changes.
But if I use the same command with image: redis/redis-stack-server:latest, it appears to start okay, but it actually goes into protected mode and becomes inaccessible. If I comment out the command, everything works fine.
What is the correct command, in docker-compose format, that will allow altering the default save-to-disk parameters?
(Also tried alternative syntax: command: redis-server --save 1200 32)
Working solution for docker-compose schema '3.8':
redis-stack-svc:
image: redis/redis-stack-server:latest
# use REDIS_ARGS for redis-stack-server instead of command arguments
environment:
- REDIS_ARGS=--save 1200 32
volumes:
- my-redis-data:/data
It was not easy to find a clear, non-conflicting example, and this turns out to be something of a historical bug.
For redis-stack-server (when not using a local redis-stack.conf file mounted into the container), configuration for the underlying redis can be passed in via the REDIS_ARGS environment variable instead of directly to the command. (There are also environment vars for the stack modules, such as REDISJSON_ARGS, etc.)
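For a configuration parameter that takes a single value this is straightforward; for example (an illustrative sketch with arbitrary values, not taken from my setup):
environment:
  - REDIS_ARGS=--maxmemory 256mb --appendonly yes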
However, 'save' is particularly fussy. It expects two arguments (seconds, changes), whereas most configuration parameters expect one. Some ways of quoting the arguments make the pair look like a single argument, in which case the setting is either silently ignored or the underlying argument parser reports 'wrong number of arguments' and puts the server into protected mode.
For save, you can also specify several conditionals. For example, the default is:
save 3600 1 300 100 60 10000
(Save after 1hr if 1 write, after 5min if 100 writes, after 60 sec if 10000 writes)
For the original redis container, you can specify this in docker-compose as command line arguments using the following format:
redis-storage:
image: redis:7.0
command: ["redis-server", "--save", "3600", "1", "300", "100", "60", "10000"]
volumes:
- my-redis-data:/data
However, the underlying argument parsing logic creates a problem for redis-stack.
Both of these formats will be parsed incorrectly:
# (valid syntax but ignored...'save' is actually set to 'nil')
environment:
- REDIS_ARGS=--save 3600 1 300 100 60 10000
# ('invalid number of arguments', server not started)
environment:
- REDIS_ARGS="--save 3600 1 300 100 60 10000"
The correct syntax is obscure:
# (using non-default values here to validate the behavior)
environment:
- REDIS_ARGS=--save 3602 1 --save 302 100 --save 62 10000
If you docker exec into the running container and invoke redis-cli CONFIG GET save it will return:
root@f45860:/data# redis-cli CONFIG GET save
1) "save"
2) "3602 1 302 100 62 10000"
There is also an alternative compose syntax example in the Redis developer docs:
environment:
- REDIS_ARGS:--save 20 1
but compose schema 3.8 will complain about it (the example uses schema 3.9).
I am trying to set up a local Beam runner for easier testing/developing.
I'd like to be able to test a Python pipeline that uses Kafka IO locally on my Mac.
Here is what my current plan for the entire framework looks like:
Here's my current docker-compose
services:
zookeeper:
image: wurstmeister/zookeeper
container_name: zookeeper
ports:
- "2181:2181"
kafka:
image: wurstmeister/kafka
container_name: kafka
environment:
KAFKA_ADVERTISED_HOST_NAME: kafka
KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
ports:
- "9092:9092"
jobmanager:
image: flink_image
command: ['jobmanager']
environment:
FLINK_PROPERTIES: "jobmanager.rpc.address: jobmanager\nparallelism.default: 2"
ports:
- "8081:8081"
taskmanager:
image: flink_image
scale: 1
depends_on:
- jobmanager
command: ['taskmanager']
environment:
FLINK_PROPERTIES: "jobmanager.rpc.address: jobmanager\ntaskmanager.numberOfTaskSlots: 2\nparallelism.default: 2"
beam-jobserver:
image: flink_image
ports:
- "8097:8097"
- "8098:8098"
- "8099:8099"
entrypoint:
- java
- -cp
- /target/flink/flink-web-upload/beam-runner.jar
- org.apache.beam.runners.flink.FlinkJobServerDriver
- --flink-master=jobmanager
- --job-host=0.0.0.0
And my pipeline looks like this:
LOCAL_ARGS = [
'--streaming',
'--runner=portableRunner',
'--environment_type=LOOPBACK',
'--job_endpoint=localhost:8099',
'--artifact_endpoint=localhost:8098',
'--defaultEnvironmentType=EXTERNAL',
'--defaultEnvironmentConfig=host.docker.internal:5000',
]
with beam.Pipeline(options=PipelineOptions(LOCAL_ARGS)) as pipeline:
result = (
pipeline
| "Kafka Read" >> ReadFromKafka(
consumer_config={"bootstrap.servers": "kafka:9092", 'auto.offset.reset': 'earliest'},
topics=["test.topic"],
with_metadata=False,
expansion_service=default_io_expansion_service(
append_args=[
'--defaultEnvironmentType=PROCESS',
"--defaultEnvironmentConfig={\"command\":\"/opt/apache/beam/java_boot\"}",
'--experiments=use_deprecated_read',
]
)
)
| "logging" >> beam.Map(lambda x: logging.info(f"logged: {x}"))
)
However, it looks like LOOPBACK tries to open a port on my host machine and asks the task manager to talk back to it via localhost:<randomPort>, which is not accessible from inside the container.
Unfortunately, host networking is not supported by Docker on Mac, so I need to find a way to override the LOOPBACK settings so that it connects to host.docker.internal:<dedicated_pool> instead of a random port on my host machine. Or are there other suggested workarounds? Thanks!
(The entire infra can be found here: https://gist.github.com/lydian/0db7614652c2ccdc733884134bf67f9b)
It looks like this is not supported. LOOPBACK mode mostly targets very simple setups.
You could come close by starting the worker manually, e.g.
python -m apache_beam.runners.worker.worker_pool_main --service_port=PORT
and then passing --environment_type=EXTERNAL --environment_config=host.docker.internal:PORT.
I was facing similar struggles recently. Luckily, there are two environment variables that facilitate testing on Docker for Mac. Unfortunately, there's not much documentation around them currently.
DOCKER_MAC_CONTAINER=1 limits the ports for communication with SDK workers to the range 8100 - 8200 instead of using random ports. Ports of that range are used in a round-robin fashion and have to be published.
BEAM_WORKER_POOL_IN_DOCKER_VM=1 tells an SDK worker to communicate with a runner node using host.docker.internal / via the docker host instead of using localhost.
Here's an example of how to use these with Spark, but Flink shouldn't be any different.
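As a very rough, untested sketch of how this could map onto the compose file from the question: my assumption is that DOCKER_MAC_CONTAINER belongs on the runner side (the taskmanager, and possibly the beam-jobserver), with the 8100-8200 range published there, while BEAM_WORKER_POOL_IN_DOCKER_VM is exported wherever the SDK worker pool itself runs (e.g. before starting worker_pool_main on the host).
taskmanager:
  image: flink_image
  depends_on:
    - jobmanager
  command: ['taskmanager']
  environment:
    FLINK_PROPERTIES: "jobmanager.rpc.address: jobmanager\ntaskmanager.numberOfTaskSlots: 2\nparallelism.default: 2"
    # limit SDK-worker communication to ports 8100-8200 instead of random ports
    DOCKER_MAC_CONTAINER: "1"
  ports:
    # the 8100-8200 range is used round-robin and has to be published
    - "8100-8200:8100-8200"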
EDIT: the weird DNS behavior was some kind of transient issue, and now RHEL/podman works the same way as Ubuntu/podman. I can't reproduce the issue, which makes most of this question (though not 100% of it) moot.
I am trying to use Podman and docker-compose to create a compose stack with multiple replicas of a backend container, and I'm having a hard time with it.
I use Podman because I have to (it comes with the Red Hat platform), and I picked docker-compose because it is familiar and I use it on my local dev host, too. I know there are alternatives (podman-compose etc.). I learned that Podman 4.1 supports Docker Compose, so this sounded like a good candidate.
As an example I have docker-compose.yml with one frontend container and 3 backend containers:
version: "3"
services:
frontend:
image: "nginx:latest"
ports:
- "3000:80"
depends_on:
- backend
backend:
image: "tomcat:latest"
ports:
- "8180-8280:8080"
scale: 3
Note: this stack is just an example. Its purpose is only to highlight the networking aspects of a multi-replica docker-compose setup. I could use something other than nginx:latest, and using e.g. Traefik can solve some of the problems...but sometimes you want to connect directly from one container to a service with multiple container replicas.
Docker & docker-compose (Ubuntu)
Running this on a host which has docker and docker-compose is straightforward.
Backend containers get assigned random ports from the range 8180-8280.
$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
98941117708c nginx:latest "/docker-entrypoint.…" 25 seconds ago Up 22 seconds 0.0.0.0:3000->80/tcp, :::3000->80/tcp example-docker-compose-frontend-1
3d749e25eaba tomcat:latest "catalina.sh run" 26 seconds ago Up 23 seconds 0.0.0.0:8193->8080/tcp, :::8193->8080/tcp example-docker-compose-backend-1
854ba8f60cb3 tomcat:latest "catalina.sh run" 26 seconds ago Up 23 seconds 0.0.0.0:8192->8080/tcp, :::8192->8080/tcp example-docker-compose-backend-2
e57e32181e8e tomcat:latest "catalina.sh run" 26 seconds ago Up 23 seconds 0.0.0.0:8194->8080/tcp, :::8194->8080/tcp example-docker-compose-backend-3
Logging into the frontend container, the service name backend resolves to all 3 backend containers:
dig backend
; <<>> DiG 9.16.27-Debian <<>> backend
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 31602
;; flags: qr rd ra; QUERY: 1, ANSWER: 3, AUTHORITY: 0, ADDITIONAL: 0
;; QUESTION SECTION:
;backend. IN A
;; ANSWER SECTION:
backend. 600 IN A 172.19.0.3
backend. 600 IN A 172.19.0.2
backend. 600 IN A 172.19.0.4
curl backend:8080 works
Podman & docker-compose (RHEL 8)
Podman came preinstalled; I added docker-compose (standalone) and podman-docker:
# curl -SL https://github.com/docker/compose/releases/download/v2.10.2/docker-compose-linux-x86_64 -o /usr/local/bin/docker-compose
# chmod a+x /usr/local/bin/docker-compose
# sudo yum install podman-docker
And I activated the rootless podman socket so that podman and docker-compose can talk to each other:
# systemctl --user enable podman.socket
# systemctl --user start podman.socket
# systemctl --user status podman.socket
# export DOCKER_HOST=unix:///run/user/$UID/podman/podman.sock
I also switched the network backend to netavark; DNS did not work without that change:
$ podman info |grep -i networkbackend
networkBackend: netavark
1. Port ranges are not supported
Podman does not like ports: - "8180-8280:8080" due to this bug: https://github.com/containers/podman/issues/15111
[+] Running 1/0
⠿ Network example-docker-compose_default Created 0.0s
⠋ Container example-docker-compose-backend-3 Creating 0.0s
⠋ Container example-docker-compose-backend-1 Creating 0.0s
⠋ Container example-docker-compose-backend-2 Creating 0.0s
Error response from daemon: make cli opts(): strconv.Atoi: parsing "8180-8280": invalid syntax
2. Without port range, address already in use conflict
I changed docker-compose.yml to remove the port range:
ports:
- "8080:8080"
This results in a port conflict; all 3 backend containers try to bind to host port 8080:
Catalina.startup.Catalina.start Server startup in [190] milliseconds
Error response from daemon: rootlessport listen tcp 0.0.0.0:8080: bind: address already in use
3. Scale = 1
Let's try with just one backend container. The system starts up:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
217c3e726431 docker.io/library/tomcat:latest catalina.sh run 7 seconds ago Up 6 seconds ago 0.0.0.0:8080->8080/tcp example-docker-compose-backend-1
9c3f86676bde docker.io/library/nginx:latest nginx -g daemon o... 6 seconds ago Up 6 seconds ago 0.0.0.0:3000->80/tcp example-docker-compose-frontend-1
DNS from the frontend container looks weird. What are all these different backend IP addresses, 10.89.0.3 - 10.89.0.12? Only the last of them responds when I curl 10.89.0.x. Still, curl backend:8080 works fine?
4. Scale up from command line
I remove ports and scale from docker-compose.yml and start the compose stack with the scale=3 option:
version: "3"
services:
frontend:
image: "nginx:latest"
ports:
- "3000:80"
depends_on:
- backend
backend:
image: "tomcat:latest"
$ docker-compose up --scale backend=3
[+] Running 4/4
⠿ Container example-docker-compose-backend-2 Recreated 0.3s
⠿ Container example-docker-compose-backend-3 Recreated 0.2s
⠿ Container example-docker-compose-backend-1 Recreated 0.3s
⠿ Container example-docker-compose-frontend-1 Recreated 0.3s
Attaching to example-docker-compose-backend-1, example-docker-compose-backend-2, example-docker-compose-backend-3, example-docker-compose-frontend-1
Now the compose stack starts nicely with 3 backend containers.
$ docker ps
Emulate Docker CLI using podman. Create /etc/containers/nodocker to quiet msg.
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
b81c44246ae7 docker.io/library/tomcat:latest catalina.sh run 22 minutes ago Up 22 minutes ago example-docker-compose-backend-3
814dc5d307f7 docker.io/library/tomcat:latest catalina.sh run 22 minutes ago Up 22 minutes ago example-docker-compose-backend-2
fb0a5090a456 docker.io/library/tomcat:latest catalina.sh run 22 minutes ago Up 22 minutes ago example-docker-compose-backend-1
c0219d7fded4 docker.io/library/nginx:latest nginx -g daemon o... 22 minutes ago Up 22 minutes ago 0.0.0.0:3000->80/tcp example-docker-compose-frontend-1
DNS from the frontend has even more entries:
# dig backend
; <<>> DiG 9.16.27-Debian <<>> backend
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 29526
;; flags: qr rd ad; QUERY: 1, ANSWER: 11, AUTHORITY: 0, ADDITIONAL: 1
;; WARNING: recursion requested but not available
;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 4096
; COOKIE: 410008265c2b0edb (echoed)
;; QUESTION SECTION:
;backend. IN A
;; ANSWER SECTION:
backend. 86400 IN A 10.89.0.3
backend. 86400 IN A 10.89.0.4
backend. 86400 IN A 10.89.0.6
backend. 86400 IN A 10.89.0.7
backend. 86400 IN A 10.89.0.15
backend. 86400 IN A 10.89.0.16
backend. 86400 IN A 10.89.0.18
backend. 86400 IN A 10.89.0.19
backend. 86400 IN A 10.89.0.20
backend. 86400 IN A 10.89.0.21
backend. 86400 IN A 10.89.0.22
And curl backend:8080 does not work (not sure which port I should use now)
Questions
What's going on here?
Can I achieve a setup of 3 backend containers, so that the DNS name backend resolves to them, with podman & docker-compose?
Podman seems to support docker-compose (or vice versa), but only to a degree. Is there some documentation that says which docker-compose features are supported on Podman, and which are not?
Podman is my container runtime of choice...unless I'm working with docker-compose, in which case I have found it to be "close but not quite" in terms of its docker API support.
However, for what you're trying to do, you could replace Nginx with Traefik, and let Traefik handle the load balancing. Traefik is a dynamic proxy that uses the Docker API and container labelling to discover containers and configure the proxy rules.
For example:
version: "3"
services:
frontend:
image: "docker.io/traefik:v2.8"
ports:
- "3000:80"
- "127.0.0.1:3080:8080"
command:
- --api.insecure=true
- --providers.docker
volumes:
- /run/user/$UID/podman/podman.sock:/var/run/docker.sock
backend:
labels:
traefik.http.routers.backend.rule: Host(`localhost`)
image: "quay.io/larsks/demoserver:latest"
scale: 3
Here we're mapping all requests for Host: localhost to our backend containers. This is just for the purposes of a demonstration (since I'll be running curl localhost/...); a more realistic configuration would use specific hostnames, or paths, etc. (there's a sketch of a path-based rule at the end of this answer). You can read more in the Routing configuration section of Traefik's Docker documentation, and also in the general Router documentation.
With this configuration, we see the following containers running:
$ podman ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
c29de1d137e2 docker.io/library/traefik:v2.8 --api.insecure=tr... 5 seconds ago Up 6 seconds ago 0.0.0.0:3000->80/tcp, 127.0.0.1:3080->8080/tcp demoserver_frontend_1
f4fac24cb494 quay.io/larsks/demoserver:latest /usr/local/bin/st... 5 seconds ago Up 6 seconds ago demoserver_backend_2
d9d388202be2 quay.io/larsks/demoserver:latest /usr/local/bin/st... 5 seconds ago Up 5 seconds ago demoserver_backend_1
5a3e6330739d quay.io/larsks/demoserver:latest /usr/local/bin/st... 5 seconds ago Up 5 seconds ago demoserver_backend_3
And we can see that requests on port 3000 cycle between the available backends. Running this script:
for i in {1..10}; do
curl http://localhost:3000/hostname
done
Produces as output:
5a3e6330739d
d9d388202be2
f4fac24cb494
5a3e6330739d
d9d388202be2
f4fac24cb494
5a3e6330739d
d9d388202be2
f4fac24cb494
5a3e6330739d
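As mentioned above, a more realistic configuration would use specific hostnames or paths rather than Host(`localhost`). A sketch of a path-based variant (untested; the /api prefix and the explicit service port are hypothetical and depend on what the backend actually listens on):
backend:
  image: "quay.io/larsks/demoserver:latest"
  scale: 3
  labels:
    traefik.http.routers.backend.rule: PathPrefix(`/api`)
    # tell Traefik explicitly which container port to forward to
    traefik.http.services.backend.loadbalancer.server.port: "8080"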
I'm getting an error when the configuration file is set.
My host is Ubuntu 22.04.
Inside the docker container the user is rabbitmq; using id -u rabbitmq, the UID is 999.
I changed the file's owner using: chown 999 advanced.config
But the same error still persists.
Failed to load advanced configuration file "/etc/rabbitmq/advanced.config": unknown POSIX error
Error during startup: {error,failed_to_read_advanced_configuration_file}
version: "3.2"
services:
rabbitmq2:
image: rabbitmq:3-management
hostname: rabbitmq2
container_name: 'rabbitmq2'
ports:
- "5672:5672"
- "15672:15672"
- "5552:5552"
volumes:
- ./advanced/rabbitmq2/advanced.config:/etc/rabbitmq/advanced.config
# or using:
# - type: bind
# source: $PWD/advanced/rabbitmq2/advanced.config
# target: /etc/rabbitmq/advanced.config
environment:
- RABBITMQ_ADVANCED_CONFIG_FILE=/etc/rabbitmq/advanced.config
If I put the file in another place, or use another file name, the container runs, but RabbitMQ doesn't load the configuration file.
I changed the content of the file and it didn't work (RabbitMQ can't load the file). I also tried a blank file, and some sample configurations, for example:
[
%% 4 replicas by default, only makes sense for nine node clusters
{rabbit, [{quorum_cluster_size, 4},
{quorum_commands_soft_limit, 512}]}
]
Be sure the format is correct:
[
%% 4 replicas by default, only makes sense for nine node clusters
{rabbit, [{quorum_cluster_size, 4},
{quorum_commands_soft_limit, 512}]}
].
Note the trailing period.
NOTE: the RabbitMQ team monitors the rabbitmq-users mailing list and only sometimes answers questions on StackOverflow.
I'm creating a test.conf configuration file using a Python script; the generated file will be volume-mounted via docker-compose.
In the Python script there are a few configuration values for which I need to check the state inside the container and generate the config accordingly.
Python snippet to generate the config file:
import os
import re

service_config = []

# read the DNS resolver IP from inside the container
dns_ip = os.popen('cat /etc/resolv.conf').read()
dns_match = re.search(r'(\d+\.\d+\.\d+\.\d+)', dns_ip)
if dns_match:
    dns = dns_match.group(1)
    service_config.append('add service dns_service ' + dns + ' DNS 53 -healthmonitor NO')
service_config.append (.....
.......

#### Creating config file ######
with open('test.conf', 'w') as f:
    for i in service_config:
        f.write(str(i) + "\n")
docker-compose file where the test.conf generated by the above Python script will be volume-mounted:
test-docker:
image: test
network_mode: bridge
ports:
- '444:8443'
privileged: yes
ulimits:
core: -1
volumes:
- /test.conf:/config/test.conf
Please suggest how such configuration can be applied after checking the values inside the container. Can I pass these details as environment variables, and if so, how can I achieve that?
Still learning Ansible. I'm trying to automate a MongoDB restore.
I have three servers which run MongoDB. After the restore, the status of the MongoDB servers can be output with a shell command (see below).
What I want Ansible to do is perform a task depending on whether the string 'lastHeartbeatMessage' is still present in the output after 10 minutes.
- name: Register MongoDB sync status
shell: mongo --eval "printjson(rs.status())"
register: mongoReplInfo
- debug: var=mongoReplInfo
- name: Copy rs.status to local log
local_action: copy content={{ mongoReplInfo }} dest=/tmp/mongoStatus
- name: Copy rs.status to server
copy: src=/tmp/mongoStatus dest=/tmp/mongoStatus
- name: Check if slave is still syncing
wait_for: path=/tmp/mongoStatus search_regex=lastHeartbeatMessage
- name: Successful sync
shell: 'run_successful_command'
when: lastHeartbeatMessage is absent after 10 min
- name: Failed sync
shell: 'run_failed_command'
when: lastHeartbeatMessage is present after 10 min
Right now I'm using wait_for. But the status is only written once to the file and is not updated. Which module should I use to repeat the tasks that output rs.status to the server?
Or am I taking this playbook the whole wrong way?
That's a use case for a do-until loop rather than wait_for.
The following will register mongoReplInfo twice: immediately and after 600 seconds. Then you can check the value for your condition.
- name: Register MongoDB sync status
shell: mongo --eval "printjson(rs.status())"
register: mongoReplInfo
until: false
retries: 2
delay: 600
But you should rather increase the number of retries and check for the condition in the until parameter, so that the loop exits as soon as the condition is met, just like in the linked doc chapter.
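A sketch of that approach, reusing the placeholder commands from the question (polling every 30 seconds for up to 10 minutes; adjust retries and delay to taste):
- name: Wait for MongoDB sync to finish
  shell: mongo --eval "printjson(rs.status())"
  register: mongoReplInfo
  # exit the loop as soon as lastHeartbeatMessage disappears from rs.status()
  until: "'lastHeartbeatMessage' not in mongoReplInfo.stdout"
  retries: 20
  delay: 30
  # don't fail the play if the condition is never met; the next tasks decide
  ignore_errors: true

- name: Successful sync
  shell: 'run_successful_command'
  when: "'lastHeartbeatMessage' not in mongoReplInfo.stdout"

- name: Failed sync
  shell: 'run_failed_command'
  when: "'lastHeartbeatMessage' in mongoReplInfo.stdout"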