localstack+ssm how to configure parameters from docker compose - docker-compose

I am trying to run LocalStack with SSM locally; the default port I am getting is 4566,
but I can't figure out how to initialize parameters from the docker-compose file.
This is what I have:
localstack:
  image: 'localstack/localstack'
  ports:
    - '4566:4566'
  environment:
    - SERVICES=lambda,ssm
    - DEBUG=1
    - DATA_DIR=${DATA_DIR- }
    - PORT_WEB_UI=${PORT_WEB_UI- }
    - LAMBDA_EXECUTOR=local
    - KINESIS_ERROR_PROBABILITY=${KINESIS_ERROR_PROBABILITY- }
    - DOCKER_HOST=unix:///var/run/docker.sock
    - HOST_TMP_FOLDER=${TMPDIR}
  volumes:
    - "${TMPDIR:-/tmp/localstack}:/tmp/localstack"
    - "/var/run/docker.sock:/var/run/docker.sock"
I am trying to figure out how to pass multiple values from the docker-compose file
I am aware it can be done afterwards with the AWS CLI:
aws --endpoint-url=http://localhost:4566 ssm put-parameter --name "/dev/some_key" --type String --value "value" --overwrite --region "us-east-1"
any thoughts?

I know this is an old question, but for anyone who is still looking for answers:
LocalStack has a few lifecycle stages which we can hook into. In your case, since you want to create SSM parameters after LocalStack is ready, you would want to use the init/ready.d hook. In other words, create a script with your awslocal command and mount it into /etc/localstack/init/ready.d/. If you watch the logs after LocalStack is up and ready, you will see the script being applied and the SSM parameters being created.
volumes:
  - "/path/to/init-aws.sh:/etc/localstack/init/ready.d/init-aws.sh"

Related

Helm Chart - Camunda

Good Morning.
I'm currently using a Helm chart to deploy Camunda inside an OpenShift namespace/cluster.
For your information, Camunda has a default process called "Invoice", and that process is responsible for creating a default user called "demo".
I would like to avoid that user creation, and I was able to do it through Docker with the following command:
docker run -d --name camunda -p 8080:8080 \
  -v /tmp/empty:/camunda/webapps/camunda-invoice \
  camunda/camunda-bpm-platform:latest
But now my Helm chart uses a custom values.yaml that references the Camunda image and then issues a command to start it:
image:
  name: camunda/camunda-bpm-platform
  tag: run-latest
  command: ['./camunda.sh']
So is it possible to reproduce the behavior of the docker command shown above, i.e. to empty the "webapps" directory after calling camunda.sh?
I know that I can pass the "--webapps" argument through args: [ ], but the issue is that it will remove the "tasklist" and "cockpit" webapps that allow users to access the Camunda UI.
Thank you everyone.
Have a nice day!
EDIT:
While speaking with the Camunda team, I learned that I can send the "--webapps --swaggerui --rest" arguments in order to start the application without the default BPMN process (Invoice).
So I'm currently trying to use multiple arguments in my Helm chart values.yaml like this:
image:
  name: camunda/camunda-bpm-platform
  tag: run-latest
  command: ['./camunda.sh']
  args: ["--webapps", "--rest", "--swaggerui"]
Unfortunately, it's not working this way. What am I doing wrong?
If I send just one argument like "--webapps", it reads the argument and creates the container.
But if I send multiple arguments, as in the example shown above, the container is simply not created.
The different start arguments for the Camunda 7 RUN distribution are documented here: https://docs.camunda.org/manual/7.18/user-guide/camunda-bpm-run/#start-script-arguments
Here is a Helm values file example using these parameters:
image:
  name: camunda/camunda-bpm-platform
  tag: run-latest
  command: ['./camunda.sh']
  args: ['--production','--webapps','--rest','--swaggerui']
extraEnvs:
  - name: DB_VALIDATE_ON_BORROW
    value: "false"

Automatically create SQS queue using localstack and docker-compose

Is there any way to automatically create SQS queues using LocalStack with docker-compose.yml?
My docker-compose.yml:
version: '3.8'
services:
  localstack:
    image: localstack/localstack
    ports:
      - "4566:4566"
      - "4571:4571"
      - "${PORT_WEB_UI-8080}:${PORT_WEB_UI-8080}"
    environment:
      - SERVICES=${SERVICES- }
      - DEBUG=${DEBUG- }
      - DATA_DIR=${DATA_DIR- }
      - PORT_WEB_UI=${PORT_WEB_UI- }
      - LAMBDA_EXECUTOR=${LAMBDA_EXECUTOR- }
      - KINESIS_ERROR_PROBABILITY=${KINESIS_ERROR_PROBABILITY- }
      - DOCKER_HOST=unix:///var/run/docker.sock
    volumes:
      - "${TMPDIR:-/tmp/localstack}:/tmp/localstack"
I would like to have some queues created when docker-compose starts, instead of creating them manually.
If you want to automatically bootstrap all needed queues on docker up,
you can add a shell script that will be run by localstack on docker container start.
Here is an example.
Add to your volumes the following:
- ./localstack_bootstrap:/docker-entrypoint-initaws.d/
Then add to the directory specified above (localstack_bootstrap in my case) a shell script with any name you like (I decided to call it sqs_bootstrap.sh) with the following contents:
#!/usr/bin/env bash
set -euo pipefail

# enable debug
# set -x

echo "configuring sqs"
echo "==================="

LOCALSTACK_HOST=localhost
AWS_REGION=eu-central-1

create_queue() {
  local QUEUE_NAME_TO_CREATE=$1
  awslocal --endpoint-url=http://${LOCALSTACK_HOST}:4566 sqs create-queue --queue-name ${QUEUE_NAME_TO_CREATE} --region ${AWS_REGION} --attributes VisibilityTimeout=30
}

create_queue "queue1"
create_queue "queue2"
Don't forget to run chmod +x ./localstack_bootstrap/sqs_bootstrap.sh.
I found more details here: https://joerg-pfruender.github.io/software/docker/microservices/testing/2020/01/25/Localstack_in_Docker.html
LocalStack currently does not have anything to do this automatically at start-up.
For now I suggest either:
create a script that starts docker-compose and then calls the AWS CLI to create the queues you need; this needs a sleep in the script :( (see the sketch below), or
build an image based on localstack that has an extra startup step with your additional setup.
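A minimal sketch of the first suggestion (the queue name, region and sleep duration are made up for illustration):
#!/usr/bin/env bash
# bring the stack up in the background
docker-compose up -d
# crude wait until LocalStack's edge port accepts requests
sleep 15
# create the queue against the edge port
aws --endpoint-url=http://localhost:4566 sqs create-queue --queue-name queue1 --region eu-central-1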

How to make sure docker-compose will not remove my volume with postgres data

I am running a simple django webapp with docker-compose. I define both a web service and a db service in a docker-compose.yml file:
version: "3.8"
services:
db:
image: postgres
volumes:
- postgres_data:/var/lib/postgresql/data
environment:
- POSTGRES_DB=postgres
- POSTGRES_USER=postgres
- POSTGRES_PASSWORD=postgres
web:
build: .
command: python manage.py runserver 0.0.0.0:8000
ports:
- "8000:8000"
env_file:
- ./.env.dev
depends_on:
- db
volumes:
postgres_data:
I start the service by running:
docker-compose up -d
I can load some data in there with a custom django command that I wrote for my app. Everything is running fine (with data) on localhost:8000.
However, when I run
docker-compose down
(so without -v) and then again
docker-compose up -d
the database is empty again. The volume was not persisted. From what I read in the docker-compose docs and also in several posts here at SO, persisting the volume and reusing it when you start a new container should be the default behavior (which, if I understand it correctly, you can disable by using the --renew-anon-volumes flag).
However in my case, the volume is not persisted. Or maybe it is, but my data is gone.
By doing docker volume ls I can see that my volume (I'll use the name my_volume here) still exists after the docker-compose down command. However, the CreatedAt value has been changed. This makes me think it's a different volume with the same name, and my data is already gone, but I don't know how to confirm that.
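For reference (not from the original post), one way to look at the volume's metadata and see whether it was recreated:
docker volume inspect my_volume
# prints CreatedAt, Mountpoint and Labels; note that docker-compose usually
# prefixes volume names with the project name, e.g. <project>_postgres_data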
This SO answer suggests mounting the volume on /var/lib/postgresql instead of /var/lib/postgresql/data. However, I've seen other resources (like this one) where the opposite is suggested. I've tried both, but neither option works.
Thanks for any advice.
It turns out that the Dockerfile of my app was using an entrypoint in which the command python manage.py flush was executed, which clears all data in the database. Since the entrypoint runs every time the app container starts, the data was wiped on every restart. It had nothing to do with docker-compose.
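For illustration, the offending entrypoint looked roughly like this (the file name and exact flags are assumptions, not taken from the original post):
#!/bin/sh
# entrypoint.sh -- hypothetical reconstruction
python manage.py flush --no-input    # wipes every row on each container start
python manage.py migrate --no-input
exec "$@"
Dropping the flush line (or guarding it behind an explicit environment flag) keeps the data in the named volume across docker-compose down / up cycles.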

Create database and user in mongoDB official docker image

I want to have a MongoDB service running in Docker in order to serve a Flask app. What I've tried is to create a container using docker-compose.yml:
my_mongo_service:
  image: mongo
  environment:
    - MONGO_INITDB_ROOT_USERNAME=${MONGO_ROOT_USER}
    - MONGO_INITDB_ROOT_PASSWORD=${MONGO_ROOT_PASSWORD}
    - MONGO_INITDB_DATABASE=${MY_DATABASE_NAME}
  ports:
    - "27017:27017"
  volumes:
    - "/data/db:/data/db"
  command: mongod
Imagine we have an .env file like this:
MONGO_ROOT_USER=my_fancy_username
MONGO_ROOT_PASSWORD=my_fancy_password
MY_DATABASE_NAME=my_fancy_database
What I would expect (reading the docs) is that a database matching the MY_DATABASE_NAME value is created, a user matching MONGO_ROOT_USER is created too, and I can authenticate with the pair (MONGO_ROOT_USER, MONGO_ROOT_PASSWORD).
OK, I launch my container with docker-compose up and enter it with docker exec -it <container-id> bash. I run mongo in the console, and when I try to authenticate it fails:
> use my_fancy_database
switched to db my_fancy_database
> db.auth('my_fancy_username','my_fancy_password')
Error: Authentication failed.
0
In the log, the error I find is the following:
[...] authentication failed for my_fancy_username on my_fancy_database from client [...] ; UserNotFound: Could not find user my_fancy_username#my_fancy_database
The docker-compose.yml configuration (as posted in the official documentation) is not working. What am I doing wrong?
Thanks in advance.
I don't get it. Are you using environment variables that are not actually set in your environment? It certainly looks that way.
If you run echo $MY_DATABASE_NAME in your terminal and see empty output, that is the answer to your question. You either have to define the variables first with export (or source a file), or change your docker-compose.yml.
For the latter, it's best to use the env_file directive:
my_mongo_service:
  image: mongo
  env_file:
    - .env
  ports:
    - "27017:27017"
  volumes:
    - "/data/db:/data/db"
And set your .env as this:
MONGO_INITDB_ROOT_USERNAME=my_fancy_username
MONGO_INITDB_ROOT_PASSWORD=my_fancy_password
MONGO_INITDB_DATABASE=my_fancy_database
Side note: using command: mongod is not necessary; the base image already runs it.
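A quick way to check that the variable interpolation works before starting anything (a generic docker-compose trick, not from the original answer):
docker-compose config
# prints the compose file with all ${...} references substituted,
# so empty MONGO_INITDB_* values are easy to spot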

Metadata fetch failed stack driver logging Google Compute Engine

I am integrating my Go application with Stackdriver logging via cloud.google.com/go/logging. My application works perfectly fine when deployed to GCP on the Flex engine. However, when I run my app locally, as soon as I hit localhost:8080 I get the following error on my console and the application gets killed automatically:
Metadata fetch failed: Get http://metadata/computeMetadata/v1/instance/attributes/gae_project: dial tcp: lookup metadata on 127.0.0.11:53: server misbehaving
My understanding is that when running locally, the code should not try to access Google's internal metadata, which is what is happening above. I dug deeper, and it looks like this part is handled in cloud.google.com/go/compute/metadata/metadata.go. I might be wrong here, but it looks like I have to set an environment variable for the code to work properly. Pasting from the documentation in metadata.go:
// metadataHostEnv is the environment variable specifying the
// GCE metadata hostname. If empty, the default value of
// metadataIP ("169.254.169.254") is used instead.
// This is variable name is not defined by any spec, as far as
// I know; it was made up for the Go package.
metadataHostEnv = "GCE_METADATA_HOST"
If my understanding is correct, what should I set GCE_METADATA_HOST to? If I am wrong, why am I seeing this error? Is it possible that this error has something to do with my Docker setup and not with Stackdriver logging?
I am running my app in a container with docker-compose. I run go install, which generates the binary, and then I simply execute the binary.
EDIT: This is my compose file
version: '3'
services:
  dev:
    image: <gcr_image>
    entrypoint:
      - /bin/sh
      - -c
      - "cat ./config-scripts/config.sh >> /root/.bashrc; bash"
    command: bash
    stdin_open: true
    tty: true
    working_dir: /code
    environment:
      - ENV1=value1
      - ENV2=value2
    ports:
      - "8080:8080"
    volumes:
      - .:/code
      - ~/.npmrc:/root/.npmrc
      - ~/.config/gcloud:/root/.config/gcloud
      - /var/run/docker.sock:/var/run/docker.sock