Localstack error in Step function with DefinitionS3Location - aws-cloudformation

I am trying to deploy a Lambda and a Step Function in LocalStack, and I keep my step function definition in S3.
The CloudFormation documentation says we can use DefinitionS3Location instead of Definition or DefinitionString.
This is what I am doing.
template.yml:
# Lambda function
EndToEndTestLambdaFunction:
  Type: "AWS::Serverless::Function"
  Properties:
    Handler: index.handler
    Timeout: 30
    Runtime: nodejs14.x
    MemorySize: 1024
    PackageType: Zip
    CodeUri: s3://step-lambda-zip-artifacts/index.zip
    Role: !GetAtt IAMRole.Arn

# Step function
EndToEndTestStepFunction:
  Type: "AWS::StepFunctions::StateMachine"
  Properties:
    DefinitionS3Location:
      Bucket: step-lambda-zip-artifacts
      Key: endToEndTestRunner.json
    DefinitionSubstitutions:
      EndToEndTestLambdaFunctionArn: !GetAtt EndToEndTestLambdaFunction.Arn
    RoleArn: !GetAtt IAMRole.Arn
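For context, endToEndTestRunner.json is a plain Amazon States Language document. The actual file isn't shown here, but a minimal definition matching the DefinitionSubstitutions above might look like this (the single Task state is a hypothetical stand-in):

{
  "Comment": "Hypothetical minimal definition kept in S3",
  "StartAt": "RunEndToEndTest",
  "States": {
    "RunEndToEndTest": {
      "Type": "Task",
      "Resource": "${EndToEndTestLambdaFunctionArn}",
      "End": true
    }
  }
}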
docker-compose.yml
version: "3"
services:
localstack:
image: localstack/localstack:0.12.15
container_name: localstack_fundle
# restart: always
ports:
- "4566:4566"
environment:
- SERVICES=iam, lambda, dynamodb, apigateway, s3, sns, cloudwatch, ssm, stepfunctions, sqs, cloudformation
- DEBUG=1
- DATA_DIR=/tmp/localstack/data
- LAMBDA_EXECUTOR=docker-reuse
- DOCKER_HOST=unix:///var/run/docker.sock
- DEFAULT_REGION=eu-west-2
- STEPFUNCTIONS_LAMBDA_ENDPOINT=http://localhost:4566
volumes:
- my-datavolume:/tmp/localstack
- /var/run/docker.sock:/var/run/docker.sock
volumes:
my-datavolume:
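The deployment itself is assumed to run along these lines (a sketch only; the exact commands are not shown in the question, and the bucket and stack names are taken from the template and the error below):

awslocal s3 mb s3://step-lambda-zip-artifacts
awslocal s3 cp index.zip s3://step-lambda-zip-artifacts/index.zip
awslocal s3 cp endToEndTestRunner.json s3://step-lambda-zip-artifacts/endToEndTestRunner.json
awslocal cloudformation deploy --template-file template.yml --stack-name development-localstack-Test --capabilities CAPABILITY_IAM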
As per the cloudformation doc, the definition is optional and I am using the s3 location for the definition. But each time I try to deploy I get the error:
DEBUG:localstack.utils.cloudformation.template_deployer: Error applying changes for CloudFormation stack "development-localstack-Test": Parameter validation failed:
| Missing required parameter in input: "definition"
It's odd, because this contradicts the documentation!
At first I had the impression that the problem was on the CloudFormation side, but it is not. I found a LocalStack issue about this, but there is no solution there.
Is this actually a LocalStack issue, and is there a way to make this work?
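For comparison, the inline route that the documentation also allows would look roughly like this (a sketch only; the state machine body below is a hypothetical stand-in for endToEndTestRunner.json, passed through DefinitionString instead of pointing at S3):

EndToEndTestStepFunction:
  Type: "AWS::StepFunctions::StateMachine"
  Properties:
    DefinitionString: !Sub |
      {
        "StartAt": "RunEndToEndTest",
        "States": {
          "RunEndToEndTest": {
            "Type": "Task",
            "Resource": "${EndToEndTestLambdaFunction.Arn}",
            "End": true
          }
        }
      }
    RoleArn: !GetAtt IAMRole.Arn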

Related

docker compose map multiple files or directories as volume

I have a few config files that have to be mapped to files inside the container. I want to be able to change these config files on the host and have that reflected in the container. These are basically connection-string files that I want to swap without having to rebuild the containers. What I have in my docker-compose.yml is:
services:
  portal:
    container_name: portal
    image: portal
    build:
      context: .
    extra_hosts:
      - "host.docker.internal:host-gateway"
    volumes:
      - ./:/var/www/portal
      - type: volume
        source: ./local/parameters.local.yml
        target: /var/www/portal/s/config/parameters.yml
      - type: volume
        source: ./portal.conf
        target: /etc/apache2/sites-available/portal.conf
      - awscreds:/root/.aws:ro
I fail to get this to work... I saw some examples where they did not supply the type (or used "bind" instead of volume), but nothing seems to work for me.
If I build the images with docker compose up and then run docker inspect portal, I can see that it has: "Mounts": []
My final plan is to have a docker-compose.yml with a service called portal that mounts two or more files inside the container (NOT copies, so that I can change them on my host at will) as well as a few directories. What is kicking me in the face is the files that have to be mapped into the container.
I think you need to change type: volume to type: bind (bind mounts are what map host files into a container; a volume source has to be a named volume, not a path):
services:
  portal:
    container_name: portal
    image: portal
    build:
      context: .
    extra_hosts:
      - "host.docker.internal:host-gateway"
    volumes:
      - ./:/var/www/portal
      - type: bind
        source: ./local/parameters.local.yml
        target: /var/www/portal/s/config/parameters.yml
      - type: bind
        source: ./portal.conf
        target: /etc/apache2/sites-available/portal.conf
      - awscreds:/root/.aws:ro
Also, you can add read_only: true to both of those mounts if you don't want the services to be able to modify parameters.yml or portal.conf.
Just mapping should do the job, provided the files and folders on the left-hand side exist on your local machine:
services:
  portal:
    container_name: portal
    image: portal
    build:
      context: .
    extra_hosts:
      - "host.docker.internal:host-gateway"
    volumes:
      - ./:/var/www/portal
      - ./local/parameters.local.yml:/var/www/portal/s/config/parameters.yml
      - ./portal.conf:/etc/apache2/sites-available/portal.conf
      - awscreds:/root/.aws:ro

volumes:
  awscreds:
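Whichever syntax you use, once the container is recreated you can check that the mounts actually landed (instead of the empty "Mounts": [] you saw) with:

docker compose up -d --force-recreate portal
docker inspect portal --format '{{json .Mounts}}'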

Run DB migrations on cloud build connecting to cloud sql using private IP

I am trying to set up DB migrations for a Node.js app on Cloud Build, connecting to Cloud SQL with a private IP via the Cloud SQL Proxy.
The Cloud SQL connection always fails from Cloud Build.
Currently I am running migrations manually from a Compute Engine instance.
I followed this SO answer to set up the build steps:
Run node.js database migrations on Google Cloud SQL during Google Cloud Build
cloudbuild.yaml
steps:
  - name: node:12-slim
    args: ["npm", "install"]
    env:
      - "NODE_ENV=${_NODE_ENV}"
  - name: alpine:3.10
    entrypoint: sh
    args:
      - -c
      - "wget -O /workspace/cloud_sql_proxy https://storage.googleapis.com/cloudsql-proxy/v1.16/cloud_sql_proxy.linux.386 && chmod +x /workspace/cloud_sql_proxy"
  - name: node:12
    timeout: 100s
    entrypoint: sh
    args:
      - -c
      - "(/workspace/cloud_sql_proxy -dir=/workspace -instances=my-project-id:asia-south1:postgres-master=tcp:5432 & sleep 3) && npm run migrate"
    env:
      - "NODE_ENV=${_NODE_ENV}"
      - "DB_NAME=${_DB_NAME}"
      - "DB_PASS=${_DB_PASS}"
      - "DB_USER=${_DB_USER}"
      - "DB_HOST=${_DB_HOST}"
      - "DB_PORT=${_DB_PORT}"
  - name: "gcr.io/cloud-builders/gcloud"
    entrypoint: "bash"
    args:
      [
        "-c",
        "gcloud secrets versions access latest --secret=backend-api-env > credentials.yaml",
      ]
  - name: "gcr.io/cloud-builders/gcloud"
    args: ["app", "deploy", "--stop-previous-version", "-v", "$SHORT_SHA"]
    timeout: "600s"
Error:
KnexTimeoutError: Knex: Timeout acquiring a connection. The pool is probably full. Are you missing a .transacting(trx) call?
Step #2: at Client_PG.acquireConnection (/workspace/node_modules/knex/lib/client.js:349:26)
Cloud build roles:
Cloud Build Service Account
Cloud SQL Admin
Compute Network User
Service Account User
Secret Manager Secret Accessor
Serverless VPC Access Admin
The Cloud SQL Admin API is enabled too.
Versions:
NPM libs:
"pg": "8.0.3"
"knex": "0.21.1"
The Cloud SQL Private IP feature uses internal IP addresses hosted in a VPC network, which are only accessible from other resources within the same VPC network.
Since Cloud Build does not support VPC Networks, it is not possible to connect from Cloud Build to the private IP of a Cloud SQL instance.
You might want to take a look at the official Cloud SQL documentation regarding this topic to choose another alternative that suits your use case.
Connecting to public cloud sql
I use docker-compose and the Cloud SQL Proxy:
Set up docker-compose for Cloud Build, here.
Create a service account (JSON key file).
docker-compose file:
version: '3.7'

services:
  app:
    build:
      context: .
      dockerfile: Dockerfile
    restart: "no"
    links:
      - database
    tty: true
    volumes:
      - app:/var/www/html
    env_file:
      - ./.env
    depends_on:
      - database

  database:
    image: gcr.io/cloudsql-docker/gce-proxy
    restart: on-failure
    command:
      - "/cloud_sql_proxy"
      - "-instances=<INSTANCE_CONNECTION_NAME>=tcp:0.0.0.0:3306"
      - "-credential_file=/config/sql_proxy.json"
    volumes:
      - ./sql_proxy.json:/config/sql_proxy.json:ro

volumes:
  app:
cloudbuild.yml
- name: 'gcr.io/$PROJECT_ID/docker-compose'
  id: Compose-build-cloudProxy
  args: ['build']
- name: 'gcr.io/$PROJECT_ID/docker-compose'
  id: Compose-up-cloudProxy
  args: ['up', '--timeout', '1', '--no-build', '-d']
- name: 'bash'
  id: Warm-up-cloudProxy
  args: ['sleep', '5s']
- name: 'gcr.io/cloud-builders/docker'
  id: Artisan-Migrate
  args: ['exec', '-i', 'workspace_app_1', 'php', 'artisan', 'migrate']
- name: 'gcr.io/$PROJECT_ID/docker-compose'
  id: Compose-down-cloudProxy
  args: ['down', '-v']
(Screenshot: build-success.png, showing the successful build.)
I had the same issue (I am using AlloyDB) and was able to resolve it by setting up a private worker pool under Cloud Build. I gave the worker pool access to a VPC, and that VPC has access to a serverless VPC that can reach AlloyDB, so my migrations ran successfully there.
https://cloud.google.com/build/docs/private-pools/private-pools-overview
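As a rough sketch (the project ID, region, pool and network names below are placeholders; see the private pools docs linked above for details), after creating the pool with gcloud builds worker-pools create and peering it with your VPC, you point the build at it from cloudbuild.yaml:

# created beforehand with something like:
# gcloud builds worker-pools create my-private-pool --region=asia-south1 --peered-network=projects/my-project-id/global/networks/my-vpc
options:
  pool:
    name: 'projects/my-project-id/locations/asia-south1/workerPools/my-private-pool'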

Localstack SNS: Unable to send message to ElasticMq

I have 2 applications:
The first application listens for messages on an ElasticMQ queue.
The second application publishes messages to an SNS topic.
I am able to subscribe the ElasticMQ queue to the SNS topic, but when I publish to the topic, LocalStack is unable to deliver the message to ElasticMQ even though the subscription was successful.
awslocal sns list-subscriptions-by-topic --topic-arn arn:aws:sns:us-east-1:123456789012:classification-details-topic
{
    "Subscriptions": [
        {
            "SubscriptionArn": "arn:aws:sns:us-east-1:123456789012:classification-details-topic:ea470c5a-c352-472e-9ae0-a1386044b750",
            "Owner": "",
            "Protocol": "sqs",
            "Endpoint": "http://elasticmq-service:9324/queue/test",
            "TopicArn": "arn:aws:sns:us-east-1:123456789012:classification-details-topic"
        }
    ]
}
Below is the error message I receive:
awslocal sns publish --topic-arn arn:aws:sns:us-east-1:123456789012:classification-details-topic --message "My message"

An error occurred (InvalidParameter) when calling the Publish operation: An error occurred (AWS.SimpleQueueService.NonExistentQueue) when calling the SendMessage operation: AWS.SimpleQueueService.NonExistentQueue; see the SQS docs.
Am I wrong in subscribing ElasticMQ through LocalStack?
I am running LocalStack using this docker-compose file:
version: '2.1'

services:
  localstack:
    image: localstack/localstack
    ports:
      - "4567-4584:4567-4584"
      - "${PORT_WEB_UI-8001}:${PORT_WEB_UI-8080}"
    environment:
      - SERVICES=${SERVICES- }
      - DEBUG=${DEBUG- }
      - DATA_DIR=${DATA_DIR- }
      - PORT_WEB_UI=${PORT_WEB_UI- }
      - LAMBDA_EXECUTOR=${LAMBDA_EXECUTOR- }
      - KINESIS_ERROR_PROBABILITY=${KINESIS_ERROR_PROBABILITY- }
      - DOCKER_HOST=unix:///var/run/docker.sock
    volumes:
      - "${TMPDIR:-/tmp/localstack}:/tmp/localstack"
      - "/var/run/docker.sock:/var/run/docker.sock"

networks:
  default:
    external:
      name: my_network
I have ElasticMQ and the other services in a separate docker-compose file that uses the same Docker network, "my_network".
Below is the complete docker-compose; I tried reproducing the issue by combining the entries into one docker-compose file.
Steps to reproduce:
version: '3'

services:
  elasticmq:
    build: ./elasticmq
    ports:
      - '9324:9324'
    networks:
      - my_network
    dns:
      - 172.16.198.101

  localstack:
    image: localstack/localstack
    ports:
      - "4567-4584:4567-4584"
      - "${PORT_WEB_UI-8001}:${PORT_WEB_UI-8080}"
    environment:
      - SERVICES=${SERVICES- }
      - DEBUG=${DEBUG- }
      - DATA_DIR=${DATA_DIR- }
      - PORT_WEB_UI=${PORT_WEB_UI- }
      - LAMBDA_EXECUTOR=${LAMBDA_EXECUTOR- }
      - KINESIS_ERROR_PROBABILITY=${KINESIS_ERROR_PROBABILITY- }
      - DOCKER_HOST=unix:///var/run/docker.sock
    volumes:
      - "${TMPDIR:-/tmp/localstack}:/tmp/localstack"
      - "/var/run/docker.sock:/var/run/docker.sock"
    links:
      - elasticmq:elasticmq-service
    networks:
      - my_network
    dns:
      - 172.16.198.101

networks:
  my_network:
    driver: bridge
    ipam:
      config:
        - subnet: 172.16.198.0/24
After this, one can run the following set of commands:
awslocal sqs create-queue --queue-name test --endpoint http://elasticmq:9324/
awslocal sns create-topic --name test-topic
awslocal sns subscribe --topic-arn arn:aws:sns:us-east-1:123456789012:test-topic --protocol sqs --notification-endpoint http://elasticmq-service:9324/queue/test
Based on your comments, I would hazard a guess that the networks of your two docker-compose files are not set up correctly.
For simplicity's sake, I would merge the elasticmq service into the above docker-compose and try again (if you post your second docker-compose and the exact aws command used to create the subscription, someone can try it locally).
If you really want to keep two separate docker-compose files and the merged version works, then at least you have pinpointed your problem. I'm afraid I am not too familiar with setting this up, but this answer might help.
EDIT:
Thanks for the additional details. I have a simplified version of a docker-compose that works for me. First of all, according to this you will need to create a config file to set the hostname of your ElasticMQ instance, since it will not pick up the container_name from docker-compose (similar to the HOSTNAME environment variable in LocalStack, which I set below). The contents of this file, named elasticmq.conf (in a folder named config), are:
include classpath("application.conf")
node-address {
  host = elasticmq
}

queues {
  test-queue {}
}
With that in place, the following docker-compose publishes the message without any errors:
version: '3'

services:
  elasticmq:
    image: s12v/elasticmq
    container_name: elasticmq
    ports:
      - '9324:9324'
    volumes:
      - ./config/elasticmq.conf:/etc/elasticmq/elasticmq.conf

  localstack:
    image: localstack/localstack
    container_name: localstack
    environment:
      - SERVICES=sns
      - DEBUG=1
      - PORT_WEB_UI=${PORT_WEB_UI- }
      - HOSTNAME=localstack
    ports:
      - "4575:4575"
      - "8080:8080"

  awscli:
    image: garland/aws-cli-docker
    container_name: awscli
    depends_on:
      - localstack
      - elasticmq
    environment:
      - AWS_DEFAULT_REGION=eu-west-2
      - AWS_ACCESS_KEY_ID=xxx
      - AWS_SECRET_ACCESS_KEY=xxx
    command:
      - /bin/sh
      - -c
      - |
        sleep 20
        aws --endpoint-url=http://localstack:4575 sns create-topic --name test_topic
        aws --endpoint-url=http://localstack:4575 sns subscribe --topic-arn arn:aws:sns:eu-west-2:123456789012:test_topic --protocol http --notification-endpoint http://elasticmq:9324/queue/test-queue
        aws --endpoint-url=http://localstack:4575 sns publish --topic-arn arn:aws:sns:eu-west-2:123456789012:test_topic --message "My message"
The publish then goes through without errors.
Admittedly, at this point I did not check ElasticMQ to see whether the message actually got consumed, but I leave that to you.
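If you do want to verify it, the ElasticMQ port is published, so something along these lines (with the same dummy credentials and region as above) should show the delivered message:

AWS_ACCESS_KEY_ID=xxx AWS_SECRET_ACCESS_KEY=xxx aws --endpoint-url=http://localhost:9324 --region eu-west-2 sqs receive-message --queue-url http://localhost:9324/queue/test-queue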

Cannot access restApiId & restApiRootResourceId for cross stack reference in serverless yml

Since I hit the 200-resource limit, I found a way to use cross-stack references by splitting the setup into different services, and I managed to get that working. The issue is that I cannot supply the restApiId & restApiRootResourceId dynamically; right now I am setting the IDs statically in service-2.
Basically the service-1 looks like,
provider:
  name: aws
  runtime: nodejs8.10
  apiGateway:
    restApiId:
      Ref: ApiGatewayRestApi
    restApiResources:
      Fn::GetAtt:
        - ApiGatewayRestApi
        - RootResourceId

custom:
  stage: "${opt:stage, self:provider.stage}"

resources:
  Resources:
    ApiGatewayRestApi:
      Type: AWS::ApiGateway::RestApi
      Properties:
        Name: ${self:service}-${self:custom.stage}-1
  Outputs:
    ApiGatewayRestApiId:
      Value:
        Ref: ApiGatewayRestApi
      Export:
        Name: ApiGatewayRestApi-restApiId
    ApiGatewayRestApiRootResourceId:
      Value:
        Fn::GetAtt:
          - ApiGatewayRestApi
          - RootResourceId
      Export:
        Name: ApiGatewayRestApi-rootResourceId
And service-2 looks like this:
provider:
  name: aws
  runtime: nodejs8.10
  apiGateway-shared:
    restApiId:
      'Fn::ImportValue': ApiGatewayRestApi-restApiId
    restApiRootResourceId:
      'Fn::ImportValue': ApiGatewayRestApi-rootResourceId
With the service-2 config above, I cannot reference the IDs.
FYI: both services are in different files.
So, what's wrong with this approach?
Serverless has special syntax for accessing stack output variables: ${cf:stackName.outputKey}.
Note that using Fn::ImportValue would work inside the resources section.
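So, assuming service-1 was deployed as a stack named service-1-dev (the real stack name depends on your service and stage), service-2 could reference the outputs like this:

provider:
  name: aws
  runtime: nodejs8.10
  apiGateway:
    restApiId: ${cf:service-1-dev.ApiGatewayRestApiId}
    restApiRootResourceId: ${cf:service-1-dev.ApiGatewayRestApiRootResourceId}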

Cloud Foundry bosh Error 100: Can't find network

I'm attempting to set up a service broker to add Postgres to our Cloud Foundry installation. We're running our system on VMware. I'm using this release in order to do that:
cf-services-contrib-release
I need to set up the networks: section in the manifest, and what I'm setting there isn't working.
This is what my networks look like in the vmWare vCenter UI:
And this is what my clusters and resource pools look like in the vCenter UI:
I tried both with and without quotes around the 'name' of the network, but I'm now getting an error saying that BOSH can't find the network:
Failed compiling packages > rootfs_lucid64/9b3f611b46e076b94b37645c98f9100e7bcef5dd: Can't find network: VLAN1130_LB_100.114.130.0 (00:00:01)
Failed compiling packages > postgresql93/06163819b694f8d9836586d024f64c11efe30180: Can't find network: VLAN1130_LB_100.114.130.0 (00:00:01)
Failed compiling packages > postgresql92/2867893e714aae6e6b76bd06e7aa30d47023c46e: Can't find network: VLAN1130_LB_100.114.130.0 (00:00:01)
Error 100: Can't find network: VLAN1130_LB_100.114.130.0
Task 2430 error
This was my latest configuration attempt:
networks:
  - name: default
    type: manual
    subnets:
      - range: 100.114.130.0/24
        gateway: 100.114.130.1
        cloud_properties:
          name: VLAN1130_LB_100.114.130.0
I also tried using single quotes, as below, but I got the same error as above:
networks:
  - name: default
    type: manual
    subnets:
      - range: 100.114.130.0/24
        gateway: 100.114.130.1
        cloud_properties:
          name: 'VLAN1130_LB_100.114.130.0'
The network we're on is 100.114.130.0/24, so it makes sense to select VLAN1130_LB_100.114.130.0 in the config.
I've tried setting all of these options in the YAML file, with no quotes, and none of them seem to work:
- USH_UCS_CLOUD_FOUNDRY: postgres_2432_debug.txt (https://gist.github.com/bluethundr/18ac490e96a5e02fad65)
- USH_UCS_CLOUD_FOUNDRY_DVS: postgres_2433_debug.txt
- USH_UCS_CLOUD_FO-DVUplinks-435272: postgres_2434_debug.txt
- VLAN1129_LB_100.114.129.0: postgres_2435_debug.txt
- VLAN1130_LB_100.114.130.0: postgres_2436_debug.txt
- VLAN14-ESXI_MGMT-3.156.14.0: postgres_2437_debug.txt (https://gist.github.com/bluethundr/dbde624e63842721a133)
I wouldn't expect VLAN1129_LB_100.114.129.0 to work, but I tried it anyway, just to be complete.
I've supplied debug dumps of each failed attempt next to each setting you see above. Surely one of them must work! But as you can see none of them did.
Here's my complete yaml file that I deployed with the 'bosh deploy' command:
name: cf-22b9f4d62bb6f0563b71
director_uuid: fd713790-b1bc-401a-8ea1-b8209f1cc90c

releases:
  - name: cf-services-contrib
    version: 6

compilation:
  workers: 3
  network: default
  reuse_compilation_vms: true
  cloud_properties:
    ram: 5120
    disk: 10240
    cpu: 2

update:
  canaries: 1
  canary_watch_time: 30000-60000
  update_watch_time: 30000-60000
  max_in_flight: 4

networks:
  - name: default
    type: manual
    subnets:
      - range: 100.114.130.0/24
        gateway: 100.114.130.1
        cloud_properties:
          name: VLAN1130_LB_100.114.130.0

resource_pools:
  - name: 'USH_UCS_CLOUD_FOUNDRY_NONPROD_01_RP'
    network: default
    stemcell:
      name: bosh-vsphere-esxi-ubuntu-trusty-go_agent
      version: '2865.1'
    cloud_properties:
      cpu: 2
      ram: 4096
      disk: 10240
      datacenters:
        - name: 'Universal City'
          clusters:
            - USH_UCS_CLOUD_FOUNDRY_NONPROD_01: {resource_pool: 'USH_UCS_CLOUD_FOUNDRY_NONPROD_01_RP'}

jobs:
  - name: gateways
    release: cf-services-contrib
    templates:
      - name: postgresql_gateway_ng
    instances: 1
    resource_pool: 'USH_UCS_CLOUD_FOUNDRY_NONPROD_01_RP'
    networks:
      - name: default
        default: [dns, gateway]
    properties:
      # Service credentials
      uaa_client_id: "cf"
      uaa_endpoint: http://uaa.devcloudwest.example.com
      uaa_client_auth_credentials:
        username: admin
        password: secret
  - name: postgresql_service_node
    release: cf-services-contrib
    template: postgresql_node_ng
    instances: 1
    resource_pool: 'USH_UCS_CLOUD_FOUNDRY_NONPROD_01_RP'
    persistent_disk: 10000
    properties:
      postgresql_node:
        plan: default
    networks:
      - name: default
        default: [dns, gateway]

properties:
  networks:
    apps: default
    management: default
  cc:
    srv_api_uri: http://api.devcloudwest.example.com
  nats:
    address: 100.114.130.11
    port: 25555
    user: nats #CHANGE
    password: secret
    authorization_timeout: 5
  service_plans:
    postgresql:
      default:
        description: "Developer, 250MB storage, 10 connections"
        free: true
        job_management:
          high_water: 230
          low_water: 20
        configuration:
          capacity: 125
          max_clients: 10
          quota_files: 4
          quota_data_size: 240
          enable_journaling: true
          backup:
            enable: false
          lifecycle:
            enable: false
            serialization: enable
            snapshot:
              quota: 1
  postgresql_gateway:
    token: f75df200-4daf-45b5-b92a-cb7fa1a25660
    default_plan: default
    supported_versions: ["9.3"]
    version_aliases:
      current: "9.3"
    cc_api_version: v2
  postgresql_node:
    supported_versions: ["9.3"]
    default_version: "9.3"
    max_tmp: 900
    password: secret
How can we get past this issue?
From Amit's comment:
The name used in cloud_properties must include any nested sub-folders. In the provided configuration the network is nested under USH_UCS_CLOUD_FOUNDRY, so the value for name should reflect that, i.e. USH_UCS_CLOUD_FOUNDRY/VLAN1130_LB_100.114.130.0; no quotes are required.
networks:
  - name: default
    type: manual
    subnets:
      - range: 100.114.130.0/24
        gateway: 100.114.130.1
        cloud_properties:
          name: USH_UCS_CLOUD_FOUNDRY/VLAN1130_LB_100.114.130.0