AWS ECS Fargate cannot connect to MongoDB on EC2

I've created a Fargate cluster on ECS, but when I run my instance, I encounter the following error message:
Error: The hook `orm` is taking too long to load. Make sure it is
triggering its `initialize()` callback, or else set
`sails.config.orm._hookTimeout` to a higher value (currently 20000)
    at Timeout.tooLong [as _onTimeout]
But on the MongoDB EC2 instance, I've already configured bindIp like this:
# network interfaces
net:
  port: 27017
  bindIp: 0.0.0.0
But when I run this Docker image locally, I don't see that error, and when I deploy the same source code on EC2 there is no error either. Please let me know how to solve this issue. Thanks.
Here is my sample diagram

You're not specifying whether the MongoDB that you run and connect to from your local Docker instance is also local, or whether it's the same MongoDB instance in AWS (which presumably you would connect to via a VPN or SSH tunneling).
So exactly why the Docker instance works locally and not in AWS is going to be a bit hard to explain, but I'd suggest it's network-connectivity related.
We run ECS Fargate services against an EC2 instance that runs MongoDB. The key to this is to make sure the security group relationship is established as well.
This could, for instance, look like the CloudFormation example below. You have the rAppFargateSecurityGroup security group (exposing the app via port 8080) attached to your Fargate service, and you have the rMongoDbEc2SecurityGroup security group (exposing MongoDB via port 27017) attached to the MongoDB EC2 instance.
You will notice that the glue here is "SourceSecurityGroupId: !Ref rAppFargateSecurityGroup", which allows Fargate to connect to MongoDB.
rAppFargateSecurityGroup:
  Type: AWS::EC2::SecurityGroup
  Properties:
    GroupDescription: !Sub '${pAppName}-${pEnvironment} ECS Security Group'
    VpcId: !Ref pVpcId
    SecurityGroupIngress:
      - IpProtocol: tcp
        FromPort: 8080
        ToPort: 8080
        SourceSecurityGroupId: !Ref rAppAlbSecurityGroup

rMongoDbEc2SecurityGroup:
  Type: AWS::EC2::SecurityGroup
  Properties:
    GroupDescription: !Sub '${pAppName}-${pEnvironment} MongoDb Security Group'
    VpcId: !Ref pVpcId
    SecurityGroupIngress:
      - IpProtocol: tcp
        FromPort: 27017
        ToPort: 27017
        SourceSecurityGroupId: !Ref rAppFargateSecurityGroup
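If you want to verify what actually got deployed, you can pull the MongoDB group's ingress rules (for example with boto3's `describe_security_groups`, or `aws ec2 describe-security-groups` on the CLI) and check that the Fargate group is listed as a source for port 27017. The helper below only inspects the rule structure that call returns; the group IDs are hypothetical.

```python
def allows_sg_ingress(ip_permissions, port, source_sg_id):
    """True if any ingress rule permits TCP `port` from security group `source_sg_id`.

    `ip_permissions` has the shape of the IpPermissions list returned by
    EC2's DescribeSecurityGroups API.
    """
    for rule in ip_permissions:
        if rule.get("IpProtocol") not in ("tcp", "-1"):  # "-1" means all traffic
            continue
        if not rule.get("FromPort", 0) <= port <= rule.get("ToPort", 65535):
            continue
        for pair in rule.get("UserIdGroupPairs", []):
            if pair.get("GroupId") == source_sg_id:
                return True
    return False

# Shaped like the rMongoDbEc2SecurityGroup ingress above (hypothetical IDs):
mongo_ingress = [{
    "IpProtocol": "tcp",
    "FromPort": 27017,
    "ToPort": 27017,
    "UserIdGroupPairs": [{"GroupId": "sg-fargate1234"}],
}]
print(allows_sg_ingress(mongo_ingress, 27017, "sg-fargate1234"))  # True
print(allows_sg_ingress(mongo_ingress, 27017, "sg-other9999"))    # False
```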
You would configure the Fargate service along these lines:
rFargateService:
  Type: AWS::ECS::Service
  Properties:
    ...
    NetworkConfiguration:
      AwsvpcConfiguration:
        SecurityGroups:
          - !Ref rAppFargateSecurityGroup
        Subnets:
          - !Ref pPrivateSubnetA
          - !Ref pPrivateSubnetB
          - !Ref pPrivateSubnetC
The Fargate service subnets would need to be configured in the same VPC as your MongoDB host if you're not using e.g. VPC peering or PrivateLink.
I should also add that other things that could trip you up are NACLs, and of course host-local firewalls (like iptables).

Related

Docker compose Amazon ECS

As I am quite familiar with Docker Compose and not so familiar with Amazon's CloudFormation, I found it extremely nice to be able to basically run your Docker Compose files via the ECS integration and, voilà, behind the scenes everything you need is created for you. You get your load balancer (if not already created) and your ECS cluster with your services running, and everything is connected and just works. When I started wanting to do more advanced things, I ran into a problem that I can't seem to find an answer to online.
I have two services in my Docker Compose file: my Spring Boot web app and my Postgres db. I wanted to implement SSL and redirect all traffic to HTTPS. After a lot of research and a lot of trial and error, I finally got it to work by extending my compose file with x-aws-cloudformation and adding native CloudFormation yaml. When doing all of this I was forced to choose an Application Load Balancer over a Network Load Balancer, as it operates on layer 7 (HTTP/HTTPS). However, my problem is that now I have no way of reaching my Postgres database and running queries against it via, for example, IntelliJ. My Spring Boot app works fine and can read/write to my database. Before the whole SSL implementation I didn't specify a load balancer in my compose file, so it gave me a Network Load Balancer every time I ran my compose file, and then I could connect to my database via IntelliJ and run queries. I have tried adding an inbound rule on my security group that allows all inbound traffic to my database via 5432, but that didn't help. I may not be setting the correct host when applying my connection details in IntelliJ, but I have tried using the following:
dns name of load balancer
ip-adress of load balancer
public ip of my postgres db task (launch type: fargate)
I would just like to reach my database and run queries against it, even though it is running inside an AWS ECS cluster behind an Application Load Balancer. Is there a way of achieving what I am trying to do? Or do I have to have two separate load balancers (one application LB and one network LB)?
Here is my docker-compose file (I have omitted a few irrelevant env variables):
version: "3.9"
x-aws-loadbalancer: arn:my-application-load-balancer

services:
  my-web-app:
    build:
      context: .
    image: hub/my-web-app
    x-aws-pull_credentials: xxxxxxxx
    container_name: my-app-name
    ports:
      - "80:80"
    networks:
      - my-app-network
    depends_on:
      - postgres
    deploy:
      replicas: 1
      resources:
        limits:
          cpus: '0.5'
          memory: 2048M
    environment:
      - SPRING_DATASOURCE_URL=jdbc:postgresql://postgres:5432/my-db?currentSchema=my-db_schema
      - SPRING_DATASOURCE_USERNAME=dbpass
      - SPRING_DATASOURCE_PASSWORD=dbpass
      - SPRING_DATASOURCE_DRIVER-CLASS-NAME=org.postgresql.Driver
      - SPRING_JPA_DATABASE_PLATFORM=org.hibernate.dialect.PostgreSQLDialect

  postgres:
    build:
      context: docker/database
    image: hub/my-db
    container_name: my-db
    networks:
      - my-app-network
    deploy:
      replicas: 1
      resources:
        limits:
          cpus: '0.5'
          memory: 2048M
    environment:
      - POSTGRES_USER=dbpass
      - POSTGRES_PASSWORD=dbpass
      - POSTGRES_DB=my-db

networks:
  my-app-network:
    name: my-app-network

x-aws-cloudformation:
  Resources:
    MyWebAppTCP80TargetGroup:
      Properties:
        HealthCheckPath: /actuator/health
        Matcher:
          HttpCode: 200-499
    MyWebAppTCP80Listener:
      Type: AWS::ElasticLoadBalancingV2::Listener
      Properties:
        Protocol: HTTP
        Port: 80
        LoadBalancerArn: xxxxx
        DefaultActions:
          - Type: redirect
            RedirectConfig:
              Port: 443
              Host: "#{host}"
              Path: "/#{path}"
              Query: "#{query}"
              Protocol: HTTPS
              StatusCode: HTTP_301
    MyWebAppTCP443Listener:
      Type: AWS::ElasticLoadBalancingV2::Listener
      Properties:
        Protocol: HTTPS
        Port: 443
        LoadBalancerArn: xxxxxxxxx
        Certificates:
          - CertificateArn: "xxxxxxxxxx"
        DefaultActions:
          - Type: forward
            ForwardConfig:
              TargetGroups:
                - TargetGroupArn:
                    Ref: MyWebAppTCP80TargetGroup
    MyWebAppTCP80RedirectRule:
      Type: AWS::ElasticLoadBalancingV2::ListenerRule
      Properties:
        ListenerArn:
          Ref: MyWebAppTCP80Listener
        Priority: 1
        Conditions:
          - Field: host-header
            HostHeaderConfig:
              Values:
                - "*.my-app.com"
                - "www.my-app.com"
                - "my-app.com"
        Actions:
          - Type: redirect
            RedirectConfig:
              Host: "#{host}"
              Path: "/#{path}"
              Query: "#{query}"
              Port: 443
              Protocol: HTTPS
              StatusCode: HTTP_301

Cloudformation ECS / fargate - Run two containers in one task

I'm trying to run two containers in one task. The two containers must be resolvable via their DNS names.
What I did: I defined the two containers in the same task definition:
MyTwoContainerTaskDefinition:
  Type: 'AWS::ECS::TaskDefinition'
  Properties:
    NetworkMode: awsvpc
    RuntimePlatform:
      OperatingSystemFamily: LINUX
    RequiresCompatibilities:
      - FARGATE
    ContainerDefinitions:
      - Name: container1
        ...
      - Name: container2
        ...
    ...
And then I use two ServiceDiscovery resources (one for each container) and two Service resources to permit the DNS resolution:
Container1CloudmapDiscoveryservice:
  Type: AWS::ServiceDiscovery::Service
  ...

Container1Service:
  Type: 'AWS::ECS::Service'
  Properties:
    ServiceName: container1
    DesiredCount: 1
    LaunchType: FARGATE
    TaskDefinition: !Ref MyTwoContainerTaskDefinition
    ServiceRegistries:
      - RegistryArn: !GetAtt Container1CloudmapDiscoveryservice.Arn
        Port: 7070
    ...
And the same resources for container 2.
The deployment works, but when I go to the AWS console I see two tasks, each containing the two containers.
I would like to have only one task containing my two containers.
Do you know if it's possible, and what am I missing?
Yes, it's possible to have multiple containers in one task definition. See here: AWS ECS start multiple containers in one task definition. The reason you see two tasks is that you created two ECS Services, and each Service launches its own copy of the task definition; with a single Service you get a single task containing both containers.
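For instance, a single Service referencing the two-container task definition launches one task (per DesiredCount) with both containers inside it. This is a sketch based on the resources in the question; the Cloud Map / service discovery wiring is omitted, and the service name is hypothetical:

```yaml
MyTwoContainerService:
  Type: 'AWS::ECS::Service'
  Properties:
    ServiceName: my-two-container-service  # hypothetical name
    DesiredCount: 1
    LaunchType: FARGATE
    TaskDefinition: !Ref MyTwoContainerTaskDefinition
    # NetworkConfiguration (awsvpc subnets/security groups) omitted for brevity
```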

AWS elasticsearch service with open access

I have this template that was working till February.
https://datameetgeobk.s3.amazonaws.com/cftemplates/EyeOfCustomer_updated.yaml.txt
Something related to fine-grained access control changed, and I get the error:
Enable fine-grained access control or apply a restrictive access
policy to your domain (Service: AWSElasticsearch; Status Code: 400;
Error Code: ValidationException)
This is just a test server and I do not want to protect it using Advanced security options.
The error you receive is because Amazon enabled fine-grained access control as part of its February 2020 release.
You can enable VPCOptions for the cluster, create a subnet and a security group, and allow access through that security group. Add the VPC ID as a parameter, say pVpc (the default VPC in this case).
Add the VPC parameter:
pVpc:
  Description: VPC ID
  Type: String
  Default: default-xxadssad # your default VPC ID
Add the subnet and security group:
ESSubnetA:
  Type: AWS::EC2::Subnet
  Properties:
    VpcId: !Ref pVpc
    AvailabilityZone: !Sub '${AWS::Region}a'
    CidrBlock: !Ref pVpcCIDR
    Tags:
      - Key: Name
        Value: es-subneta

ESSecurityGroup:
  Type: AWS::EC2::SecurityGroup
  Properties:
    GroupDescription: SecurityGroup for Elasticsearch
    VpcId: !Ref pVpc
    SecurityGroupIngress:
      - FromPort: 443
        IpProtocol: tcp
        ToPort: 443
        CidrIp: 0.0.0.0/0
    Tags:
      - Key: Name
        Value: es-sg
Enable VPCOptions on the domain:
VPCOptions:
  SubnetIds:
    - !Ref ESSubnetA
  SecurityGroupIds:
    - !Ref ESSecurityGroup

504 Gateway Timeout using Application Load Balancer in ECS

I am deploying a Laravel web application on ECS and, in order to enable autoscaling, I am using an Application Load Balancer. The application worked (and scaled) perfectly until I introduced a heavyweight page, at which point I started to get 504 Gateway Timeout errors after a minute or so.
I am pretty sure the single web server has a higher timeout (this never happens when the application is tested locally), so the problem must be related to the AWS environment (ECS / ALB).
Below you can find a snippet of the ALB settings:
AdminLoadBalancer:
  Type: AWS::ElasticLoadBalancingV2::LoadBalancer
  Properties:
    SecurityGroups:
      - !Ref 'AlbSecurityGroup'
    Subnets:
      - !Ref 'PublicSubnetAz1'
      - !Ref 'PublicSubnetAz2'
    Scheme: internet-facing
    Name: !Join ['-', [!Ref 'AWS::StackName', 'lb']]
After some attempts, I solved the issue by raising the idle timeout attribute of the load balancer, as explained here; nothing was actually wrong with the individual ECS tasks. In CloudFormation, it was enough to add the attribute and double the default value:
AdminLoadBalancer:
  Type: AWS::ElasticLoadBalancingV2::LoadBalancer
  Properties:
    LoadBalancerAttributes:
      - Key: 'idle_timeout.timeout_seconds'
        Value: '120'
    SecurityGroups:
      - !Ref 'AlbSecurityGroup'
    Subnets:
      - !Ref 'PublicSubnetAz1'
      - !Ref 'PublicSubnetAz2'
    Scheme: internet-facing
    Name: !Join ['-', [!Ref 'AWS::StackName', 'lb']]

How can I structure a CloudFormation template to install MuleSoft and JRE packages?

I have a CloudFormation template that launches an EC2 instance, but I also want it to install packages, and it is not doing that.
The commands in my template successfully install the packages when run manually on the instance, but I do not understand the correct syntax to have CloudFormation run the installs for me.
AWSTemplateFormatVersion: '2010-09-09'
Description: AWS CloudFormation Sample Template - spin up EC2 instance, install mule and jre
Parameters:
  KeyName:
    Description: Name of an existing EC2 KeyPair to enable SSH access to the instance
    Default: app-key
    Type: AWS::EC2::KeyPair::KeyName
    ConstraintDescription: must be the name of an existing EC2 KeyPair.
  InstanceType:
    Description: MuleSoft Enterprise Standalone EC2 instance
    Type: String
    Default: t2.small
    AllowedValues:
      - t2.small
      - z1d.large
      - r5d.large
      - r5.large
      - r5ad.large
      - r5a.large
    ConstraintDescription: must be a valid EC2 instance type.
  SSHLocation:
    Description: The IP address range that can be used to SSH to the EC2 instances
    Type: String
    MinLength: '9'
    MaxLength: '18'
    Default: 0.0.0.0/0
    AllowedPattern: "(\\d{1,3})\\.(\\d{1,3})\\.(\\d{1,3})\\.(\\d{1,3})/(\\d{1,2})"
    ConstraintDescription: must be a valid IP CIDR range of the form x.x.x.x/x.
Resources:
  EC2Instance:
    Type: AWS::EC2::Instance
    Metadata:
      AWS::CloudFormation::Init:
        config:
          commands:
            01_update_yum:
              command: "sudo yum update -y"
            02_rm_jre1_7:
              command: "sudo yum -y erase java-1.7.0"
            03_install_jre1_8:
              command: "sudo yum install -y java-1.8.0-openjdk"
            04_change_into_opt:
              command: "cd /opt"
            05_download_mulesoft:
              command: "sudo wget https://s3-us-west-1.amazonaws.com/mulesoft/mule-enterprise-standalone-4.1.3.2.zip"
            06_install_mulesoft:
              command: "sudo unzip mule-enterprise-standalone-4.1.3.2.zip"
            07_add_mule_user:
              command: "sudo useradd mule"
            08_mule_ownership:
              command: "sudo chown -R mule /opt/mule-enterprise-standalone-4.1.3.2"
            09_run_mule:
              command: "sudo -u mule bash -x /opt/mule-enterprise-standalone-4.1.3.2/bin/mule console"
    Properties:
      InstanceType:
        Ref: InstanceType
      SecurityGroups:
        - Ref: WebSecurityGroup
      KeyName:
        Ref: KeyName
      ImageId: ami-0080e4c5bc078760e
  WebSecurityGroup:
    Type: AWS::EC2::SecurityGroup
    Properties:
      GroupDescription: Enable SSH, HTTP, HTTPS, Custom port
      SecurityGroupIngress:
        - CidrIp: 0.0.0.0/0
          FromPort: '80'
          IpProtocol: tcp
          ToPort: '80'
        - CidrIp: 0.0.0.0/0
          FromPort: '443'
          IpProtocol: tcp
          ToPort: '443'
        - CidrIp: 0.0.0.0/0
          FromPort: '8443'
          IpProtocol: tcp
          ToPort: '8443'
        - CidrIp:
            Ref: SSHLocation
          FromPort: '22'
          IpProtocol: tcp
          ToPort: '22'
Outputs:
  InstanceId:
    Description: InstanceId of the newly created EC2 instance
    Value:
      Ref: EC2Instance
  AZ:
    Description: Availability Zone of the newly created EC2 instance
    Value:
      Fn::GetAtt:
        - EC2Instance
        - AvailabilityZone
  PublicDNS:
    Description: Public DNSName of the newly created EC2 instance
    Value:
      Fn::GetAtt:
        - EC2Instance
        - PublicDnsName
  PublicIP:
    Description: Public IP address of the newly created EC2 instance
    Value:
      Fn::GetAtt:
        - EC2Instance
        - PublicIp
I would like this template to launch the instance and perform the commands to install the packages.
You should use UserData to invoke cfn-init; that is what actually runs an AWS::CloudFormation::Init configuration. You have the commands defined in Metadata, but nothing ever calls them, so they never execute. (Also note that each `command:` entry runs in its own shell, so the `cd /opt` in 04_change_into_opt does not affect the following commands; use the `cwd` key of the commands section instead.)
Try this: https://www.bogotobogo.com/DevOps/AWS/aws-CloudFormation-Bootstrap-UserData.php. This page has a proper example of installing packages with UserData and cfn-init.
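A minimal sketch of the missing piece, assuming an Amazon Linux AMI (where the cfn-bootstrap helper scripts are preinstalled under /opt/aws/bin): add a UserData script to the instance that calls cfn-init against the Metadata you already wrote.

```yaml
EC2Instance:
  Type: AWS::EC2::Instance
  Metadata:
    AWS::CloudFormation::Init:
      config:
        commands:
          # ... your existing commands ...
  Properties:
    # ... your existing properties ...
    UserData:
      Fn::Base64: !Sub |
        #!/bin/bash -xe
        # Run the AWS::CloudFormation::Init config declared in Metadata
        /opt/aws/bin/cfn-init -v --stack ${AWS::StackName} --resource EC2Instance --region ${AWS::Region}
        # Optionally report the result back to CloudFormation (pairs with a CreationPolicy)
        /opt/aws/bin/cfn-signal -e $? --stack ${AWS::StackName} --resource EC2Instance --region ${AWS::Region}
```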