Automatically create SQS queue using localstack and docker-compose

Is there any way to automatically create SQS queues using localstack with docker-compose.yml?
My docker-compose.yml:
version: '3.8'
services:
  localstack:
    image: localstack/localstack
    ports:
      - "4566:4566"
      - "4571:4571"
      - "${PORT_WEB_UI-8080}:${PORT_WEB_UI-8080}"
    environment:
      - SERVICES=${SERVICES- }
      - DEBUG=${DEBUG- }
      - DATA_DIR=${DATA_DIR- }
      - PORT_WEB_UI=${PORT_WEB_UI- }
      - LAMBDA_EXECUTOR=${LAMBDA_EXECUTOR- }
      - KINESIS_ERROR_PROBABILITY=${KINESIS_ERROR_PROBABILITY- }
      - DOCKER_HOST=unix:///var/run/docker.sock
    volumes:
      - "${TMPDIR:-/tmp/localstack}:/tmp/localstack"
I would like to have some queues created when docker-compose starts, instead of creating them manually.

If you want to automatically bootstrap all needed queues on docker up,
you can add a shell script that will be run by localstack on docker container start.
Here is an example.
Add to your volumes the following:
- ./localstack_bootstrap:/docker-entrypoint-initaws.d/
Then add to the directory specified above (localstack_bootstrap in my case) a shell script with any name you like (I decided to call it sqs_bootstrap.sh) with the following contents:
#!/usr/bin/env bash
set -euo pipefail

# enable debug
# set -x

echo "configuring sqs"
echo "==================="

LOCALSTACK_HOST=localhost
AWS_REGION=eu-central-1

create_queue() {
  local QUEUE_NAME_TO_CREATE=$1
  awslocal --endpoint-url=http://${LOCALSTACK_HOST}:4566 sqs create-queue --queue-name ${QUEUE_NAME_TO_CREATE} --region ${AWS_REGION} --attributes VisibilityTimeout=30
}

create_queue "queue1"
create_queue "queue2"
Don't forget to run chmod +x ./localstack_bootstrap/sqs_bootstrap.sh.
I found more details here: https://joerg-pfruender.github.io/software/docker/microservices/testing/2020/01/25/Localstack_in_Docker.html
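To verify the result, you can list the queues from the host once the container logs show it is ready (a minimal check; it assumes the aws cli is installed on the host and uses the same endpoint and region as the script):
aws --endpoint-url=http://localhost:4566 sqs list-queues --region eu-central-1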

Localstack currently does not have anything to do this automatically at start-up.
For now I suggest either:
Create a script that starts docker-compose and calls the aws cli tool to create the queues you need. This needs a sleep in the script :( (see the sketch below)
Build an image based on localstack that has an extra startup script with your additional setup stuff.
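A minimal sketch of the first option, with its unavoidable wait (the health endpoint path and the polling loop are assumptions that vary by localstack version; adjust to taste):
#!/usr/bin/env bash
# Start the stack in the background, wait until the edge port answers, then create the queue.
docker-compose up -d
until curl -s http://localhost:4566/health > /dev/null 2>&1; do
  sleep 1  # the dreaded sleep; a real script should also time out eventually
done
aws --endpoint-url=http://localhost:4566 sqs create-queue --queue-name queue1 --region eu-central-1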

Related

Sort out docker container permission when running silverstripe dev/build

I have created a fresh SilverStripe project using composer and I want to have my containers up and running via docker-compose up.
I have written a very basic Dockerfile:
FROM brettt89/silverstripe-web:7.4-apache
ENV DOCUMENT_ROOT /var/www/html/public
COPY . $DOCUMENT_ROOT
WORKDIR $DOCUMENT_ROOT
RUN chown www-data:www-data $DOCUMENT_ROOT
USER www-data
as well as a simple compose YAML file which specifies almost all the required services for it to work. Here's what it looks like:
version: "3.8"
services:
silverstripe:
build:
context: .
volumes:
- .:/var/www/html
depends_on:
- database
environment:
- DOCUMENT_ROOT=/var/www/html/public
- SS_TRUSTED_PROXY_IPS=*
- SS_ENVIRONMENT_TYPE=dev
- SS_DATABASE_SERVER=database
- SS_DATABASE_NAME=SS_mysite
- SS_DATABASE_USERNAME=root
- SS_DATABASE_PASSWORD=
- SS_DEFAULT_ADMIN_USERNAME=admin
- SS_DEFAULT_ADMIN_PASSWORD=password
ports:
- 8088:80
database:
image: mysql:5.7
environment:
- MYSQL_ALLOW_EMPTY_PASSWORD=yes
volumes:
- db-data:/var/lib/mysql
volumes:
db-data:
I can get my containers up and running. But when I go to 127.0.0.1:8088/dev/build, it raises a mkdir(): permission denied warning.
I can see my files in the container have 1000:1000 ownership, which I assume is still root?
So I'm wondering how I can fix this. I have seen examples of setting things up such that containers could be created via docker build, but I just want to be able to run things via docker-compose up.
I am using Ubuntu 20.04 and the project was created by $USER.
The quickest trick to fix this, for setting up your local environment, is to change the www-data user's UID to 1000 (your host user's UID) using the usermod command:
RUN usermod -u 1000 www-data
Then, of course, you can skip your last two lines (the chown and USER directives).
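Applied to the Dockerfile from the question, the result could look like this (a sketch; it assumes your host user really has UID 1000, which is the default for the first user on Ubuntu):
FROM brettt89/silverstripe-web:7.4-apache
# Align www-data with the host user's UID so bind-mounted files stay writable
RUN usermod -u 1000 www-data
ENV DOCUMENT_ROOT /var/www/html/public
COPY . $DOCUMENT_ROOT
WORKDIR $DOCUMENT_ROOT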
You can find more info here:
https://blog.gougousis.net/file-permissions-the-painful-side-of-docker/

localstack+ssm how to configure parameters from docker compose

I am trying to run localstack locally with SSM; the default port I am getting is 4566.
But when trying to init params via docker-compose, I just can't figure out how to do it from the docker-compose file.
This is what I have:
localstack:
  image: 'localstack/localstack'
  ports:
    - '4566:4566'
  environment:
    - SERVICES=lambda,ssm
    - DEBUG=1
    - DATA_DIR=${DATA_DIR- }
    - PORT_WEB_UI=${PORT_WEB_UI- }
    - LAMBDA_EXECUTOR=local
    - KINESIS_ERROR_PROBABILITY=${KINESIS_ERROR_PROBABILITY- }
    - DOCKER_HOST=unix:///var/run/docker.sock
    - HOST_TMP_FOLDER=${TMPDIR}
  volumes:
    - "${TMPDIR:-/tmp/localstack}:/tmp/localstack"
    - "/var/run/docker.sock:/var/run/docker.sock"
I am trying to figure out how to pass multiple values from the docker-compose file.
I am aware it can be done afterwards via the aws cli:
aws --endpoint-url=http://localhost:4566 ssm put-parameter --name "/dev/some_key" --type String --value "vaue" --overwrite --region "us-east-1"
any thoughts?
I know this is an old question, but for anyone who is still looking for answers:
LocalStack has a few lifecycle stages which we can hook on to. In your case, since you want to create SSM parameters in LocalStack after LocalStack is ready, you would want to use the init/ready.d hook. That means: create a script with your awslocal commands and mount it into /etc/localstack/init/ready.d/. If you watch the logs after LocalStack is up and ready, you will see the script being applied and the SSM parameters being created.
volumes:
  - "/path/to/init-aws.sh:/etc/localstack/init/ready.d/init-aws.sh"

How can I run mysql router in Docker with compose file with bootstrap mode

MySQL Router has a command to bootstrap itself:
mysqlrouter --bootstrap root@127.0.0.1:3306 --directory /tmp/router
After the bootstrap, Router exits, and we should run it again with the config file generated by the bootstrap, since I will modify this file:
mysqlrouter --config /tmp/router/mysqlrouter.conf
This works fine in a plain Linux environment, but not in Docker. Below is my docker-compose file:
version: '2'
services:
  common: &baseDefine
    environment:
      MYSQL_HOST: "192.168.213.6"
      MYSQL_PORT: 3306
      MYSQL_USER: 'root'
      MYSQL_PASSWORD: 'urpwd.root'
      MYSQL_INNODB_CLUSTER_MEMBERS: 3
      MYSQL_CREATE_ROUTER_USER: 0
    image: "docker.io/mysql/mysql-router:latest"
    volumes:
      - "./conf:/tmp/myrouter"
    network_mode: "host"
  boot:
    container_name: "mysql_router_boot"
    command: ["mysqlrouter", "--bootstrap", "root@192.168.213.7:3306", "--directory", "/tmp/myrouter",
              "--conf-use-sockets", "--conf-skip-tcp", "--force", "--strict", "--user", "mysqlrouter",
              "--account", "sqlrouter", "--account-create", "if-not-exists"]
    <<: *baseDefine
  run:
    container_name: "mysql_router"
    restart: always
    command: ["mysqlrouter", "--config", "/tmp/myrouter/mysqlrouter.conf"]
    <<: *baseDefine
First, I call the boot service to bootstrap and generate the configuration into the specified dir:
docker-compose run --rm boot
After this command, the config file is generated OK. Then I execute:
docker-compose run --name mysql_router run
It works, but not in the way I expected.
Without Docker, the second step runs mysqlrouter with the config only, without bootstrapping.
But with Docker and these commands, the second step bootstraps again.
I know this is because the 2 services run in 2 containers.
Are there any ideas to make this flow more suitable?
Such as running 2 services in one container?
Or running a service in an existing container?
It's OK with the following: modify the yml's command for the run service:
run:
  command: /bin/bash -c "mysqlrouter --config /tmp/myrouter/mysqlrouter.conf"
Using bash to run mysqlrouter delays it long enough to recognize the existing conf file; it works.
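An alternative sketch that keeps the whole flow in one service: bootstrap only when no config file exists yet, then start the router (paths and credentials are taken from the compose file above; treat this as an illustration, not a tested setup):
run:
  container_name: "mysql_router"
  command: >
    /bin/bash -c '
      if [ ! -f /tmp/myrouter/mysqlrouter.conf ]; then
        mysqlrouter --bootstrap root@192.168.213.7:3306 --directory /tmp/myrouter --conf-use-sockets --conf-skip-tcp --force --user mysqlrouter;
      fi;
      exec mysqlrouter --config /tmp/myrouter/mysqlrouter.conf'
  <<: *baseDefine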

How do I get crond to autostart on Alpine in a Docker container?

I want to be able to run a simple bash script within a container service on the hour using cron. I'm using Alpine Linux via docker-compose with a custom Dockerfile to produce a php-fpm based image, on which I hope to get crond running as well - except I can't.
Executing ps aux | grep cron on the container once built returns nothing.
From what I understand, the usual Linux startup processes don't exist in Docker containers - fine - so how do I auto-start crond? Its dirs under /etc/periodic/ are created automatically, so I don't understand why the applicable process that consumes those dirs isn't also running.
I tried creating a dedicated service definition within docker-compose.yml, which actually worked, but the shell script to be run hourly needs access to a php binary that lives in a different container, so this isn't a viable solution.
If I shell into the container and run rc-service crond start I get this - but it never "finishes":
/var/www/html # rc-service crond start
* WARNING: crond is already starting
#> docker --version
Docker version 19.03.8, build afacb8b7f0
#> docker-compose --version
docker-compose version 1.23.2, build 1110ad01
I need a solution that I can place into my Dockerfile or docker-compose.yml files.
Dockerd is running on Ubuntu Xenial FWIW.
To run a cronjob container (Alpine), you need to make sure that the command of your Docker container is
exec crond -f
If you want to add this to a Dockerfile, use the binary directly (exec is a shell builtin, so it can't be the first element of an exec-form CMD):
CMD ["crond", "-f"]
You also may need to update the cron files before running the above command.
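On Alpine, "updating the cron files" can be as simple as dropping a script into one of the /etc/periodic/ directories mentioned in the question, e.g. in the Dockerfile (the file name is illustrative; note that run-parts typically skips files with extensions, hence no .sh suffix on the target):
COPY hourly-task.sh /etc/periodic/hourly/hourly-task
RUN chmod +x /etc/periodic/hourly/hourly-task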
Update based on the Dockerfile and compose file
To solve your issue, you need to update your docker-compose file to have two containers, one for cron and one for the web server:
service_php_cron:
  build:
    context: .
    dockerfile: .docker/services/php/Dockerfile.dev
  # distinct name; two services cannot share one container_name
  container_name: base_service_php_cron
  command: 'cron_jobs'
  volumes:
    - ./app:/var/www/html/public
  env_file:
    - ./.env
  # Low level container logging
  logging:
    driver: "json-file"
    options:
      max-size: "1m"
      max-file: "5"
service_php:
  build:
    context: .
    dockerfile: .docker/services/php/Dockerfile.dev
  ports:
    - "9000:9000"
  command: 'web_server'
  container_name: base_service_php
  volumes:
    - ./app:/var/www/html/public
  env_file:
    - ./.env
  # Low level container logging
  logging:
    driver: "json-file"
    options:
      max-size: "1m"
      max-file: "5"
You also need to update your Dockerfile to handle multiple commands using a Docker entrypoint.
Add the lines below to your Dockerfile and remove the CMD one:
COPY ./docker-entrypoint.sh /
RUN chmod a+x /docker-entrypoint.sh
ENTRYPOINT ["/docker-entrypoint.sh"]
and finally, create the entrypoint (make sure it has execute permissions):
#!/bin/sh -e

case $1 in
  web_server)
    YOUR WEB SERVER COMMAND
    ;;
  cron_jobs)
    exec crond -f
    ;;
  *)
    exec "$@"
    ;;
esac

exit 0
You can check this link for more info about entrypoints.
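Thanks to the catch-all *) case, one-off commands still pass straight through the entrypoint, e.g. (assuming a php binary, which a php-fpm base image provides):
docker-compose run --rm service_php php -v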

Postgres inside docker; reload database / init script every time the container is started

Following the official postgres Docker image, you can set up an entrypoint directory where you put your initialization scripts.
This works fine. For development/testing, I want a clean database on every container startup, not only on its first.
All scripts inside docker-entrypoint-initdb.d are only run once (the first time the container is started).
Is there an easy way to execute the scripts every time the container is started via docker-compose?
I put DROP TABLE IF EXISTS in front of every CREATE TABLE, so the .sql script will work even on a repeated startup.
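For illustration, each statement in such a .sql script follows this pattern (the table itself is made up):
DROP TABLE IF EXISTS users;
CREATE TABLE users (
    id SERIAL PRIMARY KEY,
    name TEXT NOT NULL
);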
Relevant part of the docker-compose if anyone needs that:
postgres-myname:
  image: postgres:12.1-alpine
  environment:
    - POSTGRES_USER=postgres
    - POSTGRES_PASSWORD=postgres
    - POSTGRES_DB=postgres-db
  ports:
    - "54320:5432"
  build:
    context: .
    dockerfile: postgresql-config/Dockerfile
  networks:
    - my-network
You need a "cycle script" for restarting, which should contain:
docker-compose rm -vs postgres-myname
docker volume prune -f --filter label=postgres-myname
docker-compose up -d
I recommend exploring docker volume prune before using it in a script.
I also recommend having a named volume mapped to postgres data directory (/var/lib/postgresql/data) and removing the volume explicitly instead of pruning.
# docker-compose.override.yml
postgres-myname:
  volumes:
    - pgdata:/var/lib/postgresql/data
volumes:
  pgdata:
And then
docker volume rm -f "$(basename "$(pwd)")_pgdata"
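Putting the pieces together, a cycle script based on the explicit volume removal could look like this (a sketch; it assumes the default compose project name, i.e. the basename of the current directory):
#!/bin/sh
# Stop and remove the container, drop the named data volume, start fresh
docker-compose rm -vsf postgres-myname
docker volume rm -f "$(basename "$(pwd)")_pgdata"
docker-compose up -d postgres-myname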