LocalStack SNS: Unable to send message to ElasticMQ

I have 2 applications:
The first application listens for messages on an ElasticMQ queue.
The second application publishes messages to an SNS topic.
I am able to subscribe the ElasticMQ queue to the SNS topic, but when I publish to the topic, LocalStack is unable to send the message to ElasticMQ, even though the subscription was successful.
awslocal sns list-subscriptions-by-topic --topic-arn arn:aws:sns:us-east-1:123456789012:classification-details-topic
{
    "Subscriptions": [
        {
            "SubscriptionArn": "arn:aws:sns:us-east-1:123456789012:classification-details-topic:ea470c5a-c352-472e-9ae0-a1386044b750",
            "Owner": "",
            "Protocol": "sqs",
            "Endpoint": "http://elasticmq-service:9324/queue/test",
            "TopicArn": "arn:aws:sns:us-east-1:123456789012:classification-details-topic"
        }
    ]
}
Below is the error message I receive:
awslocal sns publish --topic-arn arn:aws:sns:us-east-1:123456789012:classification-details-topic --message "My message"

An error occurred (InvalidParameter) when calling the Publish operation: An error occurred (AWS.SimpleQueueService.NonExistentQueue) when calling the SendMessage operation: AWS.SimpleQueueService.NonExistentQueue; see the SQS docs.
Am I wrong in subscribing an ElasticMQ queue to a topic on LocalStack?
I am running LocalStack using this docker-compose file:
version: '2.1'
services:
  localstack:
    image: localstack/localstack
    ports:
      - "4567-4584:4567-4584"
      - "${PORT_WEB_UI-8001}:${PORT_WEB_UI-8080}"
    environment:
      - SERVICES=${SERVICES- }
      - DEBUG=${DEBUG- }
      - DATA_DIR=${DATA_DIR- }
      - PORT_WEB_UI=${PORT_WEB_UI- }
      - LAMBDA_EXECUTOR=${LAMBDA_EXECUTOR- }
      - KINESIS_ERROR_PROBABILITY=${KINESIS_ERROR_PROBABILITY- }
      - DOCKER_HOST=unix:///var/run/docker.sock
    volumes:
      - "${TMPDIR:-/tmp/localstack}:/tmp/localstack"
      - "/var/run/docker.sock:/var/run/docker.sock"
networks:
  default:
    external:
      name: my_network
I have ElasticMQ and the other services in a different docker-compose file that uses the same Docker network, my_network.
Below is the complete docker-compose file; I tried reproducing the issue by combining the entries into a single file.
Steps to reproduce
version: '3'
services:
  elasticmq:
    build: ./elasticmq
    ports:
      - '9324:9324'
    networks:
      - my_network
    dns:
      - 172.16.198.101
  localstack:
    image: localstack/localstack
    ports:
      - "4567-4584:4567-4584"
      - "${PORT_WEB_UI-8001}:${PORT_WEB_UI-8080}"
    environment:
      - SERVICES=${SERVICES- }
      - DEBUG=${DEBUG- }
      - DATA_DIR=${DATA_DIR- }
      - PORT_WEB_UI=${PORT_WEB_UI- }
      - LAMBDA_EXECUTOR=${LAMBDA_EXECUTOR- }
      - KINESIS_ERROR_PROBABILITY=${KINESIS_ERROR_PROBABILITY- }
      - DOCKER_HOST=unix:///var/run/docker.sock
    volumes:
      - "${TMPDIR:-/tmp/localstack}:/tmp/localstack"
      - "/var/run/docker.sock:/var/run/docker.sock"
    links:
      - elasticmq:elasticmq-service
    networks:
      - my_network
    dns:
      - 172.16.198.101
networks:
  my_network:
    driver: bridge
    ipam:
      config:
        - subnet: 172.16.198.0/24
After this, one can run the following set of commands:
awslocal sqs create-queue --queue-name test --endpoint-url http://elasticmq:9324/
awslocal sns create-topic --name test-topic
awslocal sns subscribe --topic-arn arn:aws:sns:us-east-1:123456789012:test-topic --protocol sqs --notification-endpoint http://elasticmq-service:9324/queue/test
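As a sanity check before publishing, you can verify that the queue actually exists at the URL the subscription points to by querying ElasticMQ's SQS API directly from the host (a quick sketch; it assumes port 9324 is published as in the compose file above, and that ElasticMQ accepts any dummy credentials):

AWS_ACCESS_KEY_ID=x AWS_SECRET_ACCESS_KEY=x \
aws --endpoint-url=http://localhost:9324 sqs get-queue-url --queue-name test --region us-east-1

If this returns a QueueUrl, the queue exists, and the problem is more likely resolution of the elasticmq-service hostname from inside the LocalStack container.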

Based on your comments, I would hazard a guess that the networks of your two docker-compose files are not set up correctly.
For simplicity's sake I would merge the elasticmq service into the above docker-compose file and try again (if you post your second docker-compose file and the exact aws command used to create the subscription, someone can try it locally).
If you really want to keep two separate docker-compose files, then if the above works you can at least pinpoint your problem. I'm afraid I am not too familiar with setting this up, but this answer might help.
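For reference, the usual pattern for sharing one network across two docker-compose files is to create the network once on the host:

docker network create my_network

and then declare it as external in both files, exactly as the first compose file above already does:

networks:
  default:
    external:
      name: my_network

One caveat worth checking in your setup: a links alias such as elasticmq-service only exists for the container that declares the link, so containers from the other compose file must address the queue by the real service name (elasticmq) instead.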
EDIT:
Thanks for the additional details. I have a simplified version of a docker-compose file that works for me. First of all, according to this, you will need to create a config file to set the hostname of your ElasticMQ instance, since it will not pick up the container_name from docker-compose (similar to the HOSTNAME environment variable in LocalStack, which I set below as you will see). The contents of this file, named elasticmq.conf (in a folder named config), are:
include classpath("application.conf")

node-address {
    host = elasticmq
}

queues {
    test-queue {}
}
With that in place, the following docker-compose publishes the message without any errors:
version: '3'
services:
  elasticmq:
    image: s12v/elasticmq
    container_name: elasticmq
    ports:
      - '9324:9324'
    volumes:
      - ./config/elasticmq.conf:/etc/elasticmq/elasticmq.conf
  localstack:
    image: localstack/localstack
    container_name: localstack
    environment:
      - SERVICES=sns
      - DEBUG=1
      - PORT_WEB_UI=${PORT_WEB_UI- }
      - HOSTNAME=localstack
    ports:
      - "4575:4575"
      - "8080:8080"
  awscli:
    image: garland/aws-cli-docker
    container_name: awscli
    depends_on:
      - localstack
      - elasticmq
    environment:
      - AWS_DEFAULT_REGION=eu-west-2
      - AWS_ACCESS_KEY_ID=xxx
      - AWS_SECRET_ACCESS_KEY=xxx
    command:
      - /bin/sh
      - -c
      - |
        sleep 20
        aws --endpoint-url=http://localstack:4575 sns create-topic --name test_topic
        aws --endpoint-url=http://localstack:4575 sns subscribe --topic-arn arn:aws:sns:eu-west-2:123456789012:test_topic --protocol http --notification-endpoint http://elasticmq:9324/queue/test-queue
        aws --endpoint-url=http://localstack:4575 sns publish --topic-arn arn:aws:sns:eu-west-2:123456789012:test_topic --message "My message"
The publish then completes without errors. Admittedly, at this point I did not check ElasticMQ to see if the message actually got consumed, but I leave that to you.
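If you do want to check, one option is to read the queue directly from the host, since ElasticMQ's port 9324 is published (a sketch; any dummy credentials will do):

AWS_ACCESS_KEY_ID=x AWS_SECRET_ACCESS_KEY=x \
aws --endpoint-url=http://localhost:9324 sqs receive-message \
    --queue-url http://localhost:9324/queue/test-queue --region eu-west-2

If the notification was delivered, the message body will contain the SNS envelope with "My message" inside.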

Related

How to set up kafka-kinesis-connector if I use a Kafka container?

I'm a little bit confused. I've been following this guide to get started, using the installation method it describes:
https://aws.amazon.com/premiumsupport/knowledge-center/kinesis-kafka-connector-msk/
Set up the AWS CLI and configure it
Install Maven
Compile the connector
Set the classpath with the generated jar
Set up the connector's properties file
FYI: I have a docker-compose file that creates all my containers (Kafka, MQTT, etc.), and all of the above is set up on-premise.
And then, I executed all of this on my machine itself, not in the Kafka container, so for the last step, how would that work when I try to run it standalone?
version: '3'
services:
  nodered:
    container_name: nodered
    image: nodered/node-red
    ports:
      - "1880:1880"
    volumes:
      - ./nodered:/data
    depends_on:
      - mosquitto
    environment:
      - TZ=America/Toronto
      - NODE_RED_ENABLE_PROJECTS=true
    restart: always
  mosquitto:
    image: eclipse-mosquitto
    container_name: mqtt
    restart: always
    ports:
      - "1883:1883"
    volumes:
      - "./mosquitto/config:/mosquitto/config"
      - "./mosquitto/data:/mosquitto/data"
      - "./mosquitto/log:/mosquitto/log"
    environment:
      - TZ=America/Toronto
    user: "${PUID}:${PGID}"
  portainer:
    ports:
      - "9000:9000"
    container_name: portainer
    restart: always
    volumes:
      - "/var/run/docker.sock:/var/run/docker.sock"
      - "./portainer/portainer_data:/data"
    image: portainer/portainer-ce
  zookeeper:
    image: zookeeper:3.4
    container_name: zookeeper
    ports:
      - "2181:2181"
    volumes:
      - "zookeeper_data:/data"
  kafka:
    image: wurstmeister/kafka:1.0.0
    container_name: kafka
    ports:
      - "9092:9092"
      - "9093:9093"
    volumes:
      - "kafka_data:/data"
    environment:
      - KAFKA_ZOOKEEPER_CONNECT=10.0.0.129:2181
      - KAFKA_ADVERTISED_HOST_NAME=10.0.0.129
      - JMX_PORT=9093
      - KAFKA_ADVERTISED_PORT=9092
      - KAFKA_LOG_RETENTION_HOURS=1
      - KAFKA_MESSAGE_MAX_BYTES=10000000
      - KAFKA_REPLICA_FETCH_MAX_BYTES=10000000
      - KAFKA_GROUP_MAX_SESSION_TIMEOUT_MS=60000
      - KAFKA_NUM_PARTITIONS=2
      - KAFKA_DELETE_RETENTION_MS=1000
    depends_on:
      - zookeeper
    restart: on-failure
  cmak:
    image: hlebalbau/kafka-manager:1.3.3.16
    container_name: kafka-manager
    restart: always
    depends_on:
      - kafka
      - zookeeper
    ports:
      - "9080:9080"
    environment:
      - ZK_HOSTS=10.0.0.129
      - APPLICATION_SECRET=letmein
    command: -Dconfig.file=/kafka-manager/conf/application.conf -Dapplication.home=/kafkamanager -Dhttp.port=9080
volumes:
  zookeeper_data:
    driver: local
  kafka_data:
    driver: local
I guess I have to go into my Kafka container and run the code below, but how can I reference my machine's path... I'm stuck here, or perhaps I'm missing something:
./bin/connect-standalone.sh {{path_from_machine_where_jar_is}}/kinesis-kafka-connector/config/worker.properties {{path_from_machine_where_jar_is}}/kinesis-kafka-connector/config/kinesis-streams-kafka-connector.properties
Or do I have to run all the previous steps directly in my Kafka container?
I was also thinking of just copying my jar file into my Kafka container:
docker cp /hostfile (container_id):/(to_the_place_you_want_the_file_to_be)
Thank you!
guess I have to go into my Kafka container and run the below code

No. Containers should only run one process, which here is the Kafka server itself.
So either you download Kafka locally and run the Connect scripts from your host, or you simply add a new container for Kafka Connect, which will run Connect in distributed mode rather than standalone.
In either case, yes, you need to copy (or mount) the jar into Connect's plugin path; a rough sketch of such a container follows below.
Alternatively, run MSK rather than Kinesis, and produce your Kafka data there rather than locally.
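For illustration, a Connect service added under services: in your compose file could look roughly like this. Treat it as a sketch, not a drop-in: the image tag, topic names, and the /opt/connectors mount path are assumptions to adapt, and the bootstrap address matches the KAFKA_ADVERTISED_HOST_NAME from your file:

  kafka-connect:
    image: confluentinc/cp-kafka-connect:7.2.1
    container_name: kafka-connect
    depends_on:
      - kafka
    ports:
      - "8083:8083"
    environment:
      CONNECT_BOOTSTRAP_SERVERS: 10.0.0.129:9092   # what your broker advertises
      CONNECT_GROUP_ID: kinesis-connect
      CONNECT_CONFIG_STORAGE_TOPIC: _connect-configs
      CONNECT_OFFSET_STORAGE_TOPIC: _connect-offsets
      CONNECT_STATUS_STORAGE_TOPIC: _connect-status
      CONNECT_CONFIG_STORAGE_REPLICATION_FACTOR: 1
      CONNECT_OFFSET_STORAGE_REPLICATION_FACTOR: 1
      CONNECT_STATUS_STORAGE_REPLICATION_FACTOR: 1
      CONNECT_KEY_CONVERTER: org.apache.kafka.connect.storage.StringConverter
      CONNECT_VALUE_CONVERTER: org.apache.kafka.connect.storage.StringConverter
      CONNECT_REST_ADVERTISED_HOST_NAME: kafka-connect
      CONNECT_PLUGIN_PATH: /usr/share/java,/opt/connectors
    volumes:
      # mount the host directory that contains the jar built by mvn package
      - ./kinesis-kafka-connector/target:/opt/connectors/kinesis-kafka-connector

In distributed mode you then submit the connector's properties as JSON to the Connect REST API on port 8083, instead of running connect-standalone.sh with host paths.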

What can be the reason why I can't connect to CrateDB with Grafana, and locally with DBeaver?

Cheers,
I am currently trying to reproduce the tutorial for developing an application/solution with FIWARE described in Desarrolla tu primer aplicación en Fiware, and I am having difficulty connecting Grafana with the Crate database (recommended by FIWARE).
My docker-compose file configuration is:
version: "3.5"
services:
orion:
image: fiware/orion
hostname: orion
container_name: fiware-orion
networks:
- fiware_network
depends_on:
- mongo-db
ports:
- "1026:1026"
command: -dbhost mongo-db -logLevel DEBUG -noCache
mongo-db:
image: mongo:3.6
hostname: mongo-db
container_name: db-mongo
networks:
- fiware_network
ports:
- "27017:27017"
command: --bind_ip_all --smallfiles
volumes:
- mongo-db:/data
grafana:
image: grafana/grafana:8.2.6
container_name: fiware-grafana
networks:
- fiware_network
ports:
- "3000:3000"
depends_on:
- crate
quantumleap:
image: fiware/quantum-leap
container_name: fiware-quantumleap
networks:
- fiware_network
ports:
- "8668:8668"
depends_on:
- mongo-db
- orion
- crate
environment:
- CRATE_HOST=crate
crate:
image: crate:1.0.5
networks:
- fiware_network
ports:
# Admin UI
- "4200:4200"
# Transport protocol
- "4300:4300"
command: -Ccluster.name=democluster -Chttp.cors.enabled=true -Chttp.cors.allow-origin="*"
volumes:
- cratedata:/data
volumes:
mongo-db:
cratedata:
networks:
fiware_network:
driver: bridge
After starting the containers, I get a positive response from the OCB (Orion) and from QuantumLeap, and after creating the subscription between Orion and QuantumLeap, the data is stored and updated correctly in the Crate database.
Unfortunately, I am not able to visualize the data in Grafana.
I thought the reason was that CrateDB was removed as a Grafana plugin several versions ago, but after researching how to connect CrateDB with Grafana through a PostgreSQL data source (I read Visualizing time series data with Grafana and CrateDB), I'm still having difficulty getting the connection; Grafana reports "Query error: dbquery error: EOF".
(screenshot: Grafana settings)
The difference with respect to the guide is the listening port: with input port 5432, I get a response indicating that it is not possible to connect to the server, so I'm using port 4300.
After configuring this and trying to query from Grafana, I get the mentioned EOF error.
(screenshot: EOF error in Grafana)
I tried to connect from a database IDE (DBeaver), and I get exactly the same problem.
(screenshot: EOF error in DBeaver)
Is there something I am missing?
What should I change in my docker configuration, or ports, or anything else to fix this?
I think it is worth mentioning that I am studying this because I am being asked in a project to visualize context switches in real time with FIWARE and Grafana.
Thanks in advance
The PostgreSQL port 5432 must be exposed by the CrateDB container as well.
Additionally, I highly recommend using a recent (or just the latest) CrateDB version; the current stable is 5.0.0, and your version 1.0.5 is very old and no longer maintained.
Full example entry:
crate:
  image: crate:latest
  networks:
    - fiware_network
  ports:
    # Admin UI
    - "4200:4200"
    # Transport protocol
    - "4300:4300"
    # PostgreSQL protocol
    - "5432:5432"
  command: -Ccluster.name=democluster -Chttp.cors.enabled=true -Chttp.cors.allow-origin="*"
  volumes:
    - cratedata:/data
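If you prefer configuring Grafana from a file rather than the UI, the data source can be provisioned. The sketch below follows Grafana's provisioning layout; database doc and user crate are the CrateDB defaults, assuming no password is set:

# mounted into the grafana container at /etc/grafana/provisioning/datasources/cratedb.yaml
apiVersion: 1
datasources:
  - name: CrateDB
    type: postgres
    url: crate:5432
    user: crate
    database: doc
    jsonData:
      sslmode: disable

Note that the host is the service name crate with the PostgreSQL port 5432, not 4300: port 4300 speaks CrateDB's internal transport protocol rather than PostgreSQL, which is exactly the kind of mismatch that produces an EOF in the client.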

Kafka Docker image that works without ZooKeeper

I read that Kafka no longer requires ZooKeeper, so I don't want to have ZooKeeper in my docker-compose file. But I don't know which Kafka image can work without ZooKeeper. Can anyone give a hint?
Here's a Kafka Docker image which doesn't require ZooKeeper (as described above):
https://hub.docker.com/r/bashj79/kafka-kraft
Disclaimer: I'm the author.
Confluent published a working docker-compose.yaml without ZooKeeper in their cp-all-in-one repository.
A script is used as a workaround:
#!/bin/sh
# Docker workaround: Remove check for KAFKA_ZOOKEEPER_CONNECT parameter
sed -i '/KAFKA_ZOOKEEPER_CONNECT/d' /etc/confluent/docker/configure
# Docker workaround: Ignore cub zk-ready
sed -i 's/cub zk-ready/echo ignore zk-ready/' /etc/confluent/docker/ensure
# KRaft required step: Format the storage directory with a new cluster ID
echo "kafka-storage format --ignore-formatted -t $(kafka-storage random-uuid) -c /etc/kafka/kafka.properties" >> /etc/confluent/docker/ensure
which is called in the command of the docker-compose setup:
broker:
  image: confluentinc/cp-kafka:7.2.x-latest
  hostname: broker
  container_name: broker
  ports:
    - "9092:9092"
    - "9101:9101"
  environment:
    KAFKA_BROKER_ID: 1
    KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: 'CONTROLLER:PLAINTEXT,PLAINTEXT:PLAINTEXT,PLAINTEXT_HOST:PLAINTEXT'
    KAFKA_ADVERTISED_LISTENERS: 'PLAINTEXT://broker:29092,PLAINTEXT_HOST://localhost:9092'
    KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1
    KAFKA_GROUP_INITIAL_REBALANCE_DELAY_MS: 0
    KAFKA_TRANSACTION_STATE_LOG_MIN_ISR: 1
    KAFKA_TRANSACTION_STATE_LOG_REPLICATION_FACTOR: 1
    KAFKA_JMX_PORT: 9101
    KAFKA_JMX_HOSTNAME: localhost
    KAFKA_PROCESS_ROLES: 'broker,controller'
    KAFKA_NODE_ID: 1
    KAFKA_CONTROLLER_QUORUM_VOTERS: '1@broker:29093'
    KAFKA_LISTENERS: 'PLAINTEXT://broker:29092,CONTROLLER://broker:29093,PLAINTEXT_HOST://0.0.0.0:9092'
    KAFKA_INTER_BROKER_LISTENER_NAME: 'PLAINTEXT'
    KAFKA_CONTROLLER_LISTENER_NAMES: 'CONTROLLER'
    KAFKA_LOG_DIRS: '/tmp/kraft-combined-logs'
  volumes:
    - ./update_run.sh:/tmp/update_run.sh
  command: "bash -c 'if [ ! -f /tmp/update_run.sh ]; then echo \"ERROR: Did you forget the update_run.sh file that came with this docker-compose.yml file?\" && exit 1 ; else /tmp/update_run.sh && /etc/confluent/docker/run ; fi'"
I read that Kafka no longer requires zookeeper
You may well have read that in the future Apache Kafka will not need Zookeeper - this is detailed in KIP-500
However, this is not yet implemented, so for the time being (January 2021) you will still need a Zookeeper in your Docker Compose ensemble.
You can use this image to run without ZooKeeper:
https://hub.docker.com/r/bitnami/kafka
Here is an example YAML:
version: "3"
services:
kafka:
image: 'bitnami/kafka:3.2.3'
restart: "no"
privileged: true
ports:
- 2181:2181
- 19092:19092
environment:
- KAFKA_ENABLE_KRAFT=yes
- KAFKA_CFG_PROCESS_ROLES=broker,controller
- KAFKA_CFG_CONTROLLER_LISTENER_NAMES=CONTROLLER
- KAFKA_CFG_LISTENERS=PLAINTEXT://:19092,CONTROLLER://:2181
- KAFKA_CFG_LISTENER_SECURITY_PROTOCOL_MAP=CONTROLLER:PLAINTEXT,PLAINTEXT:PLAINTEXT
- KAFKA_CFG_ADVERTISED_LISTENERS=PLAINTEXT://127.0.0.1:19092
- KAFKA_BROKER_ID=1
- KAFKA_CFG_CONTROLLER_QUORUM_VOTERS=1#127.0.0.1:2181
- ALLOW_PLAINTEXT_LISTENER=yes
According to "What’s New in Apache Kafka 3.3" document and "KIP-833: Mark KRaft as Production Ready" Kafka can work without Zookeeper (but there are some features yet works only by Apache ZooKeeper (ZK) mode).
Example (docker-compose.yml):
version: "2.5"
volumes:
volume1:
services:
kafka1:
image: 'bitnami/kafka:3.3.1'
container_name: kafka
environment:
- KAFKA_ENABLE_KRAFT=yes
- KAFKA_CFG_PROCESS_ROLES=broker,controller
- KAFKA_CFG_CONTROLLER_LISTENER_NAMES=CONTROLLER
- KAFKA_CFG_LISTENERS=PLAINTEXT://:9092,CONTROLLER://:9093
- KAFKA_CFG_LISTENER_SECURITY_PROTOCOL_MAP=CONTROLLER:PLAINTEXT,PLAINTEXT:PLAINTEXT
- KAFKA_CFG_ADVERTISED_LISTENERS=PLAINTEXT://kafka1:9092
- KAFKA_CFG_BROKER_ID=1
- KAFKA_CFG_CONTROLLER_QUORUM_VOTERS=1#kafka1:9093
- ALLOW_PLAINTEXT_LISTENER=yes
- KAFKA_KRAFT_CLUSTER_ID=r4zt_wrqTRuT7W2NJsB_GA
volumes:
- volume1:/bitnami/kafka
kafka-ui:
container_name: kafka-ui
image: 'provectuslabs/kafka-ui:latest'
ports:
- "8080:8080"
environment:
- KAFKA_CLUSTERS_0_BOOTSTRAP_SERVERS=kafka1:9092
- KAFKA_CLUSTERS_0_NAME=r4zt_wrqTRuT7W2NJsB_GA
You could try localhost:8080 and you will see that it works perfectly.
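To verify the broker itself without the UI, you can create and list a topic from inside the container (a sketch; /opt/bitnami/kafka/bin is where the Bitnami image ships the Kafka CLI tools):

docker exec -it kafka /opt/bitnami/kafka/bin/kafka-topics.sh \
  --bootstrap-server localhost:9092 --create --topic smoke-test
docker exec -it kafka /opt/bitnami/kafka/bin/kafka-topics.sh \
  --bootstrap-server localhost:9092 --list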

Setting up localstack resources in docker compose file results in connection aborted failure

I have a docker compose file that looks like the following:
version: "3"
services:
localstack:
image: localstack/localstack:latest
ports:
- "4567-4597:4567-4597"
- "${PORT_WEB_UI-8080}:${PORT_WEB_UI-8080}"
environment:
- SERVICES=s3
- DOCKER_HOST=unix:///var/run/docker.sock
volumes:
- "/var/run/docker.sock:/var/run/docker.sock"
- "/private${TMPDIR}:/tmp/localstack"
networks:
- my_awesome_network
setup-resources:
image: mesosphere/aws-cli
volumes:
- ./dev_env:/project/dev_env
environment:
- AWS_ACCESS_KEY_ID=dummyaccess
- AWS_SECRET_ACCESS_KEY=dummysecret
- AWS_DEFAULT_REGION=us-east-1
entrypoint: /bin/sh -c
command: >
"
sleep 10;
# aws kinesis create-stream --endpoint-url=http://localstack:4568 --stream-name my_stream --shard-count 1;
aws --endpoint-url=http://localhost:4572 s3 mb s3://demo-bucket
"
networks:
- my_awesome_network
depends_on:
- localstack
networks:
my_awesome_network:
which I copied from this blog post that I found. But when I run docker-compose up, the bucket fails to create with the following error: ('Connection aborted.', error(99, 'Address not available'))
I ran it with small changes and it works correctly; I changed localhost to localstack:
version: "3"
services:
localstack:
image: localstack/localstack:latest
ports:
- '4568-4576:4568-4576'
- '8055:8080'
environment:
- SERVICES=s3
- DOCKER_HOST=unix:///var/run/docker.sock
- DEFAULT_REGION=us-east-1
- DEBUG=1
volumes:
- "/var/run/docker.sock:/var/run/docker.sock"
- "/private${TMPDIR}:/tmp/localstack"
networks:
- my_awesome_network
setup-resources:
image: mesosphere/aws-cli
volumes:
- ./dev_env:/project/dev_env
environment:
- AWS_ACCESS_KEY_ID=dummyaccess
- AWS_SECRET_ACCESS_KEY=dummysecret
- AWS_DEFAULT_REGION=us-east-1
entrypoint: /bin/sh -c
command: >
"
sleep 10;
aws --endpoint-url=http://localstack:4572 s3 mb s3://demo-bucket
"
networks:
- my_awesome_network
depends_on:
- localstack
networks:
my_awesome_network:
It's a small detail, but it should not be aws --endpoint-url=http://localhost:4572 s3 mb s3://demo-bucket; it should instead be aws --endpoint-url=http://localstack:4572 s3 mb s3://demo-bucket. That's right: localhost becomes localstack.
Starting with version 0.11.0, all APIs are exposed via a single edge service, which is accessible on http://localhost:4566 by default (EDGE_PORT=4566).
Found in this article.
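So on a current LocalStack, the same bucket creation from the setup container would target the edge port instead (sketch):

aws --endpoint-url=http://localstack:4566 s3 mb s3://demo-bucket

From the host, the equivalent endpoint is http://localhost:4566.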

RabbitMQ refuses connection when run in Docker

My docker-compose file looks like this:
version: '2'
services:
explore:
image: explore
build:
context: ./Explore
dockerfile: VsDockerfile
environment:
- "ElasticUrl=http://localhost:9200"
- "RabbitMq/Host=localhost"
- "RabbitMq/Username=guest"
- "RabbitMq/Password=guest"
networks:
- localnet
elasticsearch:
image: docker.elastic.co/elasticsearch/elasticsearch:5.4.3
container_name: elasticsearch
environment:
- "ES_JAVA_OPTS=-Xms512m -Xmx512m"
ports:
- 9200:9200
volumes:
- ./esdata:/usr/share/elasticsearch/data
networks:
- localnet
rabbit:
image: rabbitmq:3.6.7-management
hostname: rabbit
ports:
- 15672:15672
- 5672:5672
networks:
- localnet
networks:
localnet:
external:
name: localnet
If I type http://localhost:15672 in the browser, I get the RabbitMQ management interface, but if I try to connect from my Explore project like this:
public SqlToRabbitProcessor(SqlToRabbitRepository sqlToRabbitRepository)
{
    _sqlToRabbitRepository = sqlToRabbitRepository;
    var factory = new ConnectionFactory
    {
        HostName = Environment.GetEnvironmentVariable("RabbitMq/Host"),
        UserName = Environment.GetEnvironmentVariable("RabbitMq/Username"),
        Password = Environment.GetEnvironmentVariable("RabbitMq/Password")
    };
    var rabbit = factory.CreateConnection();
    channel = rabbit.CreateModel();
}
Then it breaks in the line
var rabbit = factory.CreateConnection();
with the error saying
ExtendedSocketException: Connection refused 127.0.0.1:5672
System.Net.Sockets.Socket.EndConnect(IAsyncResult asyncResult)
ConnectFailureException: Connection failed
RabbitMQ.Client.EndpointResolverExtensions.SelectOne(IEndpointResolver resolver, Func selector)
BrokerUnreachableException: None of the specified endpoints were reachable
RabbitMQ.Client.ConnectionFactory.CreateConnection(IEndpointResolver endpointResolver, string clientProvidedName)
As my comment under the question suggested, it's because the "localhost" defined in the web application part is the container's own localhost, not the Docker host's.
I just needed to change
- "ElasticUrl=http://localhost:9200"
- "RabbitMq/Host=localhost"
to
- "ElasticUrl=http://elasticsearch:9200"
- "RabbitMq/Host=rabbit"
I had the same issue with docker-compose.
I solved it with a hostname:
rabbit:
  hostname: rabbit
  command: sh -c "rabbitmq-plugins enable rabbitmq_management; rabbitmq-server"
  image: rabbitmq
  environment:
    RABBITMQ_DEFAULT_USER: admin
    RABBITMQ_DEFAULT_PASS: admin
  ports:
    - 5672:5672
    - 15672:15672
Follow the instructions in this post.
Just to benefit people who stumble upon this question: the --link feature is now considered legacy and is a prime candidate for deprecation by Docker.
The easiest way is to use depends_on.
In order to do this, it's recommended to first create a network like so:
docker network create <network_name>
Then use docker-compose up to spawn services that bind with each other. Look at the example below, where I've bound my Spring Boot app to RabbitMQ. You can clone my repo from here.
version: "3.1"
services:
rabbitmq-container:
image: rabbitmq:3.5.3-management
hostname: rabbitmq-container
ports:
- 5673:5673
- 5672:5672
- 15672:15672
networks:
- resolute
resolute-container:
build: .
ports:
- 8080:8080
environment:
- spring_rabbitmq_host=rabbitmq-container
- spring_rabbitmq_port=5672
- spring_rabbitmq_username=guest
- spring_rabbitmq_password=guest
- resolute_rabbitmq_publishQueueName=resolute-run-request
- resolute_rabbitmq_exchange=resolute
depends_on:
- rabbitmq-container
volumes:
- /var/run/docker.sock:/var/run/docker.sock
networks:
- resolute
networks:
resolute:
external:
name: resolute
See how I've created a network called resolute and bound the apps to the same network. I've also given my rabbitmq-container a hostname; this is because Docker Compose prefixes container names with the project name, which makes it difficult to bind services by name.