I'm trying to create a Kinesis stream using Localstack running on Docker.
My docker-compose.yml looks like this:
version: '3.2'
services:
  localstack:
    image: localstack/localstack:latest
    container_name: localstack_test_serialize
    ports:
      - '4563-4599:4563-4599'
      - '8055:8080'
    environment:
      - SERVICES=s3,kinesis:4569
      - DEBUG=1
      - DATA_DIR=/tmp/localstack/data
    volumes:
      - './.localstack:/tmp/localstack'
      - '/var/run/docker.sock:/var/run/docker.sock'
Running docker-compose up -d starts everything just fine, and I'm able to create an S3 bucket on the normal S3 port.
However, when I try to run
aws --endpoint-url=http://localhost:4569 kinesis create-stream --stream-name sample-application-stream --shard-count 1
to create a Kinesis stream, I end up getting a timeout message for port 4569.
Any idea what I'm doing wrong or why Localstack isn't letting me create this stream?
You could use port 4568 instead.
The LocalStack documentation marks this as the port for Kinesis.
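For example, the same create-stream call pointed at that port (a sketch; you may also need to remove the :4569 override from SERVICES so Kinesis binds to its documented port):

aws --endpoint-url=http://localhost:4568 kinesis create-stream --stream-name sample-application-stream --shard-count 1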
I am trying to get a zookeeper/kafka non-clustered setup to be able to talk to containers with python scripts. I want to be able to run a zookeeper/kafka container and 2 or more containers with python scripts communicating with zookeeper/kafka, all running in containers or container groups on Azure.
To test this, I have created the below docker container group, with zookeeper and kafka as 2 services and a 3rd service that starts a simple python script to produce a steady pace of messages to a kafka topic. The docker-compose.yml that I am using is as follows:
version: '2'
services:
  zookeeper:
    image: confluentinc/cp-zookeeper:latest
    container_name: zookeeper
    environment:
      ZOOKEEPER_CLIENT_PORT: 2181
      ZOOKEEPER_TICK_TIME: 2000
    ports:
      - 22181:2181
    networks:
      - my-network
  kafka:
    image: confluentinc/cp-kafka:latest
    container_name: kafka
    depends_on:
      - zookeeper
    ports:
      - 29092:29092
    networks:
      - my-network
    environment:
      KAFKA_BROKER_ID: 1
      KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://kafka:9092,PLAINTEXT_HOST://localhost:29092
      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: PLAINTEXT:PLAINTEXT,PLAINTEXT_HOST:PLAINTEXT
      KAFKA_INTER_BROKER_LISTENER_NAME: PLAINTEXT
      KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1
  kafka_producer:
    build: ../kafka_producer
    image: annabotkafka.azurecr.io/kafka_producer:v1
    container_name: kafka_producer
    depends_on:
      - kafka
    volumes:
      - .:/usr/src/kafka_producer
    networks:
      - my-network
    environment:
      KAFKA_SERVERS: kafka:9092
networks:
  my-network:
    driver: bridge
The kafka_producer.py script is as follows:
import os
from time import sleep
import json
from confluent_kafka import Producer

def acked(err, msg):
    if err is not None:
        print("Failed to deliver message: {0}: {1}"
              .format(msg.value(), err.str()))
    else:
        print("Message produced: {0}".format(msg.value()))

# Function to send a status message out on the status topic
def send_status(producer, counter):
    msg = {'counter': counter}
    json_dump = json.dumps(msg)
    producer.produce("counter", json_dump.encode('utf-8'), callback=acked)
    producer.poll()

# Define kafkaProducer to push messages to the status topic
producer = Producer({'bootstrap.servers': 'kafka:9092'})

for j in range(9999):
    print("Iteration", j)
    send_status(producer, j)
    sleep(2)
When I 'docker-compose up' this on my Ubuntu 20.04 dev machine, I get the expected behaviour: a steady stream of messages sent to the Kafka topic.
After I 'docker-compose push' this to Azure Container Instances and create a container group in Azure with the image, the kafka_producer script appears to no longer be able to connect to the kafka broker at kafka:9092.
These are the logs from the container group after startup:
Iteration 0
%3|1629363616.468|FAIL|rdkafka#producer-1| [thrd:kafka:9092/bootstrap]: kafka:9092/bootstrap: Failed to resolve 'kafka:9092': Name or service not known (after 25ms in state CONNECT)
%3|1629363618.465|FAIL|rdkafka#producer-1| [thrd:kafka:9092/bootstrap]: kafka:9092/bootstrap: Failed to resolve 'kafka:9092': Name or service not known (after 22ms in state CONNECT, 1 identical error(s) suppressed)
Iteration 1
Iteration 2
I had understood that the container group is on the same network subnet and on a single host, so I would expect this to operate the same as on my dev machine locally.
My next step will be to have separate containers with different python scripts that I will want to communicate with kafka in this container group. Having the producer script within the same container group is not my long-term expectation, but I believed this simpler setup should work.
Any suggestions for where I am going wrong?
From the Azure documentation:
Within a container group, container instances can reach each other via localhost on any port, even if those ports aren't exposed externally on the group's IP address or from the container.
This makes it sound like the containers are using a host network, not a Docker bridge like the one you've set up in Compose (where your code works fine).
Therefore, you ought to connect with localhost:29092
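For example, the only change needed in kafka_producer.py would be the bootstrap address (a sketch; everything else stays as in the question, and it works because KAFKA_ADVERTISED_LISTENERS already advertises PLAINTEXT_HOST://localhost:29092):

# inside the Azure container group, reach the broker via the advertised host listener
producer = Producer({'bootstrap.servers': 'localhost:29092'})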
If you don't actually need message persistence, then I'd suggest using sockets via HTTP, gRPC, or ZeroMQ between your scripts rather than a Kafka container.
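For instance, a ZeroMQ push/pull pair needs only a few lines (a sketch assuming the pyzmq package; you would add a matching PULL socket in the consumer script):

import zmq  # pyzmq

ctx = zmq.Context()
sock = ctx.socket(zmq.PUSH)
sock.bind("tcp://*:5555")        # a consumer in the group connects to localhost:5555
sock.send_json({"counter": 1})   # pyzmq serializes the dict as JSON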
I have 2 applications:
One application uses an ElasticMQ queue for listening to messages.
The 2nd application publishes the messages on an SNS topic.
I am able to subscribe the ElasticMQ queue to the SNS topic. But when I publish on the topic, LocalStack is unable to send the message to ElasticMQ, even though the subscription was successful.
awslocal sns list-subscriptions-by-topic --topic-arn arn:aws:sns:us-east-1:123456789012:classification-details-topic
{
    "Subscriptions": [
        {
            "SubscriptionArn": "arn:aws:sns:us-east-1:123456789012:classification-details-topic:ea470c5a-c352-472e-9ae0-a1386044b750",
            "Owner": "",
            "Protocol": "sqs",
            "Endpoint": "http://elasticmq-service:9324/queue/test",
            "TopicArn": "arn:aws:sns:us-east-1:123456789012:classification-details-topic"
        }
    ]
}
Below is the error message I receive:
awslocal sns publish --topic-arn arn:aws:sns:us-east-1:123456789012:classification-details-topic --message "My message"

An error occurred (InvalidParameter) when calling the Publish operation: An error occurred (AWS.SimpleQueueService.NonExistentQueue) when calling the SendMessage operation: AWS.SimpleQueueService.NonExistentQueue; see the SQS docs.
Am I wrong in having ElasticMQ subscribed via LocalStack?
I am running localstack using the following docker-compose file:
version: '2.1'
services:
  localstack:
    image: localstack/localstack
    ports:
      - "4567-4584:4567-4584"
      - "${PORT_WEB_UI-8001}:${PORT_WEB_UI-8080}"
    environment:
      - SERVICES=${SERVICES- }
      - DEBUG=${DEBUG- }
      - DATA_DIR=${DATA_DIR- }
      - PORT_WEB_UI=${PORT_WEB_UI- }
      - LAMBDA_EXECUTOR=${LAMBDA_EXECUTOR- }
      - KINESIS_ERROR_PROBABILITY=${KINESIS_ERROR_PROBABILITY- }
      - DOCKER_HOST=unix:///var/run/docker.sock
    volumes:
      - "${TMPDIR:-/tmp/localstack}:/tmp/localstack"
      - "/var/run/docker.sock:/var/run/docker.sock"
networks:
  default:
    external:
      name: my_network
I have elasticmq and the other services as part of a different docker-compose file that uses the same Docker network, "my_network".
Below is the complete docker-compose; I tried reproducing the issue by combining the entries into one docker-compose file.
Steps to reproduce
version: '3'
services:
  elasticmq:
    build: ./elasticmq
    ports:
      - '9324:9324'
    networks:
      - my_network
    dns:
      - 172.16.198.101
  localstack:
    image: localstack/localstack
    ports:
      - "4567-4584:4567-4584"
      - "${PORT_WEB_UI-8001}:${PORT_WEB_UI-8080}"
    environment:
      - SERVICES=${SERVICES- }
      - DEBUG=${DEBUG- }
      - DATA_DIR=${DATA_DIR- }
      - PORT_WEB_UI=${PORT_WEB_UI- }
      - LAMBDA_EXECUTOR=${LAMBDA_EXECUTOR- }
      - KINESIS_ERROR_PROBABILITY=${KINESIS_ERROR_PROBABILITY- }
      - DOCKER_HOST=unix:///var/run/docker.sock
    volumes:
      - "${TMPDIR:-/tmp/localstack}:/tmp/localstack"
      - "/var/run/docker.sock:/var/run/docker.sock"
    links:
      - elasticmq:elasticmq-service
    networks:
      - my_network
    dns:
      - 172.16.198.101
networks:
  my_network:
    driver: bridge
    ipam:
      config:
        - subnet: 172.16.198.0/24
After this, one can run the following set of commands:
awslocal sqs create-queue --queue-name test --endpoint-url http://elasticmq:9324/
awslocal sns create-topic --name test-topic
awslocal sns subscribe --topic-arn arn:aws:sns:us-east-1:123456789012:test-topic --protocol sqs --notification-endpoint http://elasticmq-service:9324/queue/test
Based on your comments, I would hazard a guess that the networks of your 2 docker-compose files are not set up correctly.
For simplicity's sake I would merge the elasticmq service into the above docker-compose and try again (if you post your second docker-compose and the exact aws command used to create the subscription, someone can try it locally).
If you really want to keep 2 separate docker-compose files and the merged version works, then at least you can pinpoint your problem. I'm afraid I am not too familiar with setting this up, but this answer might help.
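If you do keep two files, the second one would need to join the first file's network rather than create its own. A minimal sketch, assuming the network ends up named my_network:

networks:
  my_network:
    external: true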
EDIT:
Thanks for the additional details. I have a simplified version of a docker-compose that works for me. First of all, according to this, you will need to create a config file to set the hostname of your elasticmq instance, since it will not pick up the container_name from docker-compose (similar to the HOSTNAME environment variable in LocalStack, which I set below as you will see). The contents of this file, named elasticmq.conf (in a folder named config), are:
include classpath("application.conf")

node-address {
    host = elasticmq
}

queues {
    test-queue {}
}
With that in place, the following docker-compose publishes the message without any errors:
version: '3'
services:
  elasticmq:
    image: s12v/elasticmq
    container_name: elasticmq
    ports:
      - '9324:9324'
    volumes:
      - ./config/elasticmq.conf:/etc/elasticmq/elasticmq.conf
  localstack:
    image: localstack/localstack
    container_name: localstack
    environment:
      - SERVICES=sns
      - DEBUG=1
      - PORT_WEB_UI=${PORT_WEB_UI- }
      - HOSTNAME=localstack
    ports:
      - "4575:4575"
      - "8080:8080"
  awscli:
    image: garland/aws-cli-docker
    container_name: awscli
    depends_on:
      - localstack
      - elasticmq
    environment:
      - AWS_DEFAULT_REGION=eu-west-2
      - AWS_ACCESS_KEY_ID=xxx
      - AWS_SECRET_ACCESS_KEY=xxx
    command:
      - /bin/sh
      - -c
      - |
        sleep 20
        aws --endpoint-url=http://localstack:4575 sns create-topic --name test_topic
        aws --endpoint-url=http://localstack:4575 sns subscribe --topic-arn arn:aws:sns:eu-west-2:123456789012:test_topic --protocol http --notification-endpoint http://elasticmq:9324/queue/test-queue
        aws --endpoint-url=http://localstack:4575 sns publish --topic-arn arn:aws:sns:eu-west-2:123456789012:test_topic --message "My message"
With that, the publish goes through without errors (output omitted here).
Admittedly, at this point I did not check elasticmq to see if the message got consumed, but I leave that to you.
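To check it yourself, something like this against the published ElasticMQ port should return the message (a sketch; the queue URL follows from the config above):

aws --endpoint-url=http://localhost:9324 sqs receive-message --queue-url http://localhost:9324/queue/test-queue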
I'm having trouble accessing a database created from a docker-compose file.
Given the following compose file, I should be able to connect to it from java using something like:
jdbc:postgresql://eprase:eprase@database:7000/eprase
However, the connection is rejected. I can't even use PGAdmin to connect to it using the same details to create a new server.
I've entered the database container and run psql commands to verify that the eprase user and database have been created according to the postgres Docker documentation; everything seems fine. I can't tell if the problem is within the database container or something I need to change in the compose network.
The client & server services can largely be ignored; the server is a Java-based web API and the client is an Angular app.
Compose file:
version: "3"
services:
client:
image: eprase/client:latest
build: ./client/eprase-app
networks:
api:
ports:
- "5000:80"
tty: true
depends_on:
- server
server:
image: eprase/server:latest
build: ./server
networks:
api:
ports:
- "6000:8080"
depends_on:
- database
database:
image: postgres:9
volumes:
- "./database/data:/var/lib/postgresql/data"
environment:
- "POSTGRES_USER=eprase"
- "POSTGRES_PASSWORD=eprase"
- "POSTGRES_DB=eprase"
networks:
api:
ports:
- "7000:5432"
restart: unless-stopped
pgadmin:
image: dpage/pgadmin4:latest
environment:
- "PGADMIN_DEFAULT_EMAIL=admin#eprase.com"
- "PGADMIN_DEFAULT_PASSWORD=eprase"
networks:
api:
ports:
- "8000:80"
depends_on:
- database
networks:
api:
The PostgreSQL database is listening on container port 5432. The 7000:5432 line is mapping host port 7000 to container port 5432. That allows you to connect to the database on port 7000. But, your services on a common network (api) should communicate with each other via the container ports.
So, from the perspective of the containers for the client and server services, the connection string should be:
jdbc:postgresql://eprase:eprase@database:5432/eprase
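A quick way to sanity-check both paths (a sketch; the host command assumes psql is installed there, while the second one uses the psql shipped in the postgres image):

# from the host, through the published port:
psql "postgresql://eprase:eprase@localhost:7000/eprase" -c 'SELECT 1;'
# from a container on the api network, through the container port:
docker-compose exec database psql "postgresql://eprase:eprase@database:5432/eprase" -c 'SELECT 1;'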
I have a docker compose file that looks like:
version: "3"
services:
redis:
image: 'redis:3.2.7'
# command: redis-server --requirepass redispass
postgres:
image: postgres:9.6
ports:
- "5432:5432"
environment:
- POSTGRES_USER=airflow
- POSTGRES_PASSWORD=airflow
- POSTGRES_DB=airflow
# - PGDATA=/var/lib/postgresql/data
webserver:
image: airflow:develop
depends_on:
- postgres
ports:
- "8080:8080"
command:
- webserver
After I run docker-compose up I see all the services started and seemingly working well. My webserver service connects to postgres with the following sqlalchemy connection string: postgresql+psycopg2://airflow:airflow@postgres/airflow
Whenever I kill the composition (with ctrl-c or docker-compose stop) and then restart it, the data in postgres seems to persist. The reason I believe it persists is that my webserver starts with data from the previous session.
I read the docs for the postgres docker image and found the PGDATA environment variable. I tried to force-set it to the default (as seen in the commented line in my docker-compose file), but that didn't help. I'm not sure how else to debug why data seems to be persisting between container starts.
How can I force my postgres container to start fresh with each new initialization?
Your data persisted because you did not destroy the postgres container. You used docker-compose stop, which only stops containers and keeps their state for the next start. Use docker-compose down instead. It completely removes the containers (but not the images).
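So the reset cycle looks like this (a sketch; the -v variant is only needed if you later declare named volumes):

docker-compose down       # removes the containers; the next up re-initializes postgres
docker-compose down -v    # additionally removes volumes, if any are declared
docker-compose up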
These days I am trying to deploy my Spring Boot OAuth2 project. It has 3 different modules (Authentication Server, Resource Server and Front-end).
The Authentication and Resource servers have their own *.yml files for configuration, such as the mongodb name and port, server profile and IP, etc.
What am I trying to do exactly? I want to deploy the Spring Boot application on Docker, but I don't want to put my database (mongodb) on Docker as a container.
I am not sure whether this structure is possible or not.
When I run mongodb locally (localhost:27017) and then try to deploy the Spring Boot application on local Docker as a container, I get a timeout exception for MongoDB. The application couldn't connect to the external MongoDB (non-Docker container).
What should I do? Should I run mongodb on Docker? I tried that as well; Mongo runs successfully, but the Spring container still couldn't run and connect to Mongo.
I tried running another Spring Boot app without mongodb; it works fine, and when I make a request from the browser by IP and port, I get the response from the application as expected.
*** MONGO URL ****
mongodb://127.0.0.1:27017/db-localhost
**** Authentication server .yml file ****
server:
  port: 9080
  contextPath: /auth-service
  tomcat:
    access_log_enabled: true
    basedir: target/tomcat
security:
  basic:
    enabled: false
spring:
  profiles:
    active: development
  thymeleaf:
    cache: false
mongo:
  db:
    server: 127.0.0.1
    port: 27017
logging:
  level:
    org.springframework.security: DEBUG
---
spring:
  profiles: development
  data:
    mongodb:
      database: db-localhost
---
spring:
  profiles: production
  data:
    mongodb:
      database: db-prod
---
***** DOCKER FILE *******
FROM java:8
VOLUME /tmp
ADD auth-server-1.0-SNAPSHOT.jar app.jar
EXPOSE 9080
RUN bash -c 'touch /app.jar'
ENTRYPOINT ["java","-Djava.security.egd=file:/dev/./urandom","-jar","/app.jar"]
**** DOCKER COMMAND *******
docker run -it -P --name authserver authserver
The issue with your configuration is that you reference mongodb from inside the authserver container at 127.0.0.1, which is the loopback interface inside that container. So you are telling your Spring application that mongodb is running in the same container as the authserver Spring application, which is not the case.
Either run your database as its own container (which requires handling the data volumes correctly) and reference it using the container name as hostname (via a link), or reference the externally running mongodb instance with the correct address. That would be the IP address of the machine running the Docker daemon (for your local environment I assume something like 192.168.0.xxx).
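Concretely, the mongo section of the question's .yml would then look something like this (a sketch; the placeholder address must be replaced with your machine's real IP):

mongo:
  db:
    server: 192.168.0.xxx   # the Docker host's address, not 127.0.0.1
    port: 27017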
Question: What should I do?
At least for development purposes I would recommend also using Docker for your mongodb instance. I had a similar setup with RabbitMQ in addition, and it solved a lot of problems when I used Docker for those as well. Using docker-compose to set everything up makes it even easier. Later you can still specify which mongodb instance you want to use through your Spring properties.
Problem: I tried it also, Mongo runs successfully but still spring container couldn't run and connect to mongo
The problem is probably that you have not set up any networks or hostnames for your services. Your Spring application cannot resolve the hostname of your mongo server, since you specified 127.0.0.1 for your mongodb server in your properties.
I would recommend using Docker for your mongodb and a docker-compose.yml file like this to set everything up:
version: '3.7'
services:
  resource-server:
    image: demo/resource-server:latest
    container_name: resource-server
    depends_on:
      - mongodb-example
    networks:
      - your-network
    ports:
      - 8080:8080
  auth-server:
    image: demo/auth-server:latest
    container_name: auth-server
    depends_on:
      - mongodb-example
    networks:
      - your-network
    ports:
      - 8081:8080
  mongodb-example:
    image: mongo:latest
    container_name: mongo-example
    hostname: mongo-example
    networks:
      - your-network
    ports:
      - 27017:27017
networks:
  your-network:
    name: network-name
Of course you then need to adapt your property file or specify environment variables through your docker-compose.yml file.
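For example, with the standard Spring Data properties an override in the compose file could look like this (a sketch; the question's custom mongo.db.server key would need its own matching variable instead):

auth-server:
  # ...
  environment:
    - SPRING_DATA_MONGODB_HOST=mongodb-example
    - SPRING_DATA_MONGODB_PORT=27017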