Deploy a FargateService to an ECS cluster that lives within a different Stack (project) - amazon-ecs

1- I have a project core-infra that encompasses all the core infra-related components (VPCs, Subnets, ECS Cluster... etc.)
2- I have microservice projects, each with an independent stack used for deployment
I want to deploy a FargateService from a microservice project stack A to the already existing ECS cluster living within the core-infra stack
Affected area/feature
Pulumi Service
ECS
Deploy microservice
FargateService
Pulumi github issue link

Pulumi Stack References are the answer here:
https://www.pulumi.com/docs/intro/concepts/stack/#stackreferences
Your core-infra stack would output the ECS cluster ID and then stack B consumes that output so it can, for example, deploy an ECS service to the given cluster
(https://www.pulumi.com/registry/packages/aws/api-docs/ecs/service/).
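A minimal sketch of the pattern (the project name core-infra and the output name ecsClusterId are assumptions for illustration; the `stackPath` helper is hypothetical, StackReference just takes the "org/project/stack" string directly):

```typescript
// Build the fully qualified "org/project/stack" path that a StackReference expects.
export function stackPath(org: string, project: string, stack: string): string {
    return `${org}/${project}/${stack}`;
}

// In the core-infra project's index.ts, export the cluster ID:
//   export const ecsClusterId = cluster.id;
//
// In the consuming microservice stack (inside a Pulumi program):
//   const infra = new pulumi.StackReference(stackPath("org", "core-infra", pulumi.getStack()));
//   const ecsClusterId = infra.getOutput("ecsClusterId");
//   // ecsClusterId can then be passed as `cluster` to aws.ecs.Service
```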

I was able to deploy using aws classic.
PS: The setup is way more complex than with awsx; the docs and resources aren't exhaustive.
Now I have a few issues:
The load balancer isn't reachable and keeps loading forever
I don't have any logs in the CloudWatch LogGroup
Not sure how to use the LB Listener with the ECS service / not sure about the port mapping
Here is the complete code for reference (for people who are hustling), and I'd appreciate any suggested improvements/answers.
// Capture the EnvVars
const appName = process.env.APP_NAME;
const namespace = process.env.NAMESPACE;
const environment = process.env.ENVIRONMENT;
// Load the Deployment Environment config.
const configMapLoader = new ConfigMapLoader(namespace, environment);
const env = pulumi.getStack();
const infra = new pulumi.StackReference(`org/core-datainfra/${env}`);
// Fetch ECS Fargate cluster ID.
const ecsClusterId = infra.getOutput('ecsClusterId');
// Fetch DeVpc ID.
const deVpcId = infra.getOutput('deVpcId');
// DeVpc subnet IDs (hardcoded here rather than fetched from the stack reference).
const subnets = ['subnet-aaaaaaaaaa', 'subnet-bbbbbbbbb'];
// Fetch DeVpc Security Group ID.
const securityGroupId = infra.getOutput('deSecurityGroupId');
// Define the Networking for our service.
const serviceLb = new aws.lb.LoadBalancer(`${appName}-lb`, {
    internal: false,
    loadBalancerType: 'application',
    securityGroups: [securityGroupId],
    subnets,
    enableDeletionProtection: false,
    tags: {
        Environment: environment
    }
});
const serviceTargetGroup = new aws.lb.TargetGroup(`${appName}-t-g`, {
    port: configMapLoader.configMap.service.http.externalPort,
    protocol: configMapLoader.configMap.service.http.protocol,
    vpcId: deVpcId,
    targetType: 'ip'
});
const http = new aws.lb.Listener(`${appName}-listener`, {
    loadBalancerArn: serviceLb.arn,
    port: configMapLoader.configMap.service.http.externalPort,
    protocol: configMapLoader.configMap.service.http.protocol,
    defaultActions: [
        {
            type: 'forward',
            targetGroupArn: serviceTargetGroup.arn
        }
    ]
});
// Create AmazonECSTaskExecutionRolePolicy
const taskExecutionPolicy = new aws.iam.Policy(
    `${appName}-task-execution-policy`,
    {
        policy: JSON.stringify({
            Version: '2012-10-17',
            Statement: [
                {
                    Effect: 'Allow',
                    Action: [
                        'ecr:GetAuthorizationToken',
                        'ecr:BatchCheckLayerAvailability',
                        'ecr:GetDownloadUrlForLayer',
                        'ecr:BatchGetImage',
                        'logs:CreateLogStream',
                        'logs:PutLogEvents',
                        'ec2:AuthorizeSecurityGroupIngress',
                        'ec2:Describe*',
                        'elasticloadbalancing:DeregisterInstancesFromLoadBalancer',
                        'elasticloadbalancing:DeregisterTargets',
                        'elasticloadbalancing:Describe*',
                        'elasticloadbalancing:RegisterInstancesWithLoadBalancer',
                        'elasticloadbalancing:RegisterTargets'
                    ],
                    Resource: '*'
                }
            ]
        })
    }
);
// IAM role that allows Amazon ECS to make calls to the load balancer
const taskExecutionRole = new aws.iam.Role(`${appName}-task-execution-role`, {
    assumeRolePolicy: JSON.stringify({
        Version: '2012-10-17',
        Statement: [
            {
                Effect: 'Allow',
                Principal: {
                    Service: ['ecs-tasks.amazonaws.com']
                },
                Action: 'sts:AssumeRole'
            },
            {
                Action: 'sts:AssumeRole',
                Principal: {
                    Service: 'ecs.amazonaws.com'
                },
                Effect: 'Allow',
                Sid: ''
            },
            {
                Action: 'sts:AssumeRole',
                Principal: {
                    Service: 'ec2.amazonaws.com'
                },
                Effect: 'Allow',
                Sid: ''
            }
        ]
    }),
    tags: {
        name: `${appName}-iam-role`
    }
});
new aws.iam.RolePolicyAttachment(`${appName}-role-policy`, {
    role: taskExecutionRole.name,
    policyArn: taskExecutionPolicy.arn
});
// New image to be pulled
const image = `${configMapLoader.configMap.service.image.repository}:${process.env.IMAGE_TAG}`;
// Set up Log Group
const awsLogGroup = new aws.cloudwatch.LogGroup(`${appName}-awslogs-group`, {
    name: `${appName}-awslogs-group`,
    tags: {
        Application: `${appName}`,
        Environment: 'production'
    }
});
const serviceTaskDefinition = new aws.ecs.TaskDefinition(
    `${appName}-task-definition`,
    {
        family: `${appName}-task-definition`,
        networkMode: 'awsvpc',
        executionRoleArn: taskExecutionRole.arn,
        requiresCompatibilities: ['FARGATE'],
        cpu: configMapLoader.configMap.service.resources.limits.cpu,
        memory: configMapLoader.configMap.service.resources.limits.memory,
        containerDefinitions: JSON.stringify([
            {
                name: `${appName}-fargate`,
                image,
                cpu: parseInt(
                    configMapLoader.configMap.service.resources.limits.cpu
                ),
                memory: parseInt(
                    configMapLoader.configMap.service.resources.limits.memory
                ),
                essential: true,
                portMappings: [
                    {
                        containerPort: 80,
                        hostPort: 80
                    }
                ],
                environment: configMapLoader.getConfigAsEnvironment(),
                logConfiguration: {
                    logDriver: 'awslogs',
                    options: {
                        'awslogs-group': `${appName}-awslogs-group`,
                        'awslogs-region': 'us-east-2',
                        'awslogs-stream-prefix': `${appName}`
                    }
                }
            }
        ])
    }
);
// Create a Fargate service task that can scale out.
const fargateService = new aws.ecs.Service(`${appName}-fargate`, {
    name: `${appName}-fargate`,
    cluster: ecsClusterId,
    taskDefinition: serviceTaskDefinition.arn,
    desiredCount: 5,
    loadBalancers: [
        {
            targetGroupArn: serviceTargetGroup.arn,
            containerName: `${appName}-fargate`,
            containerPort: configMapLoader.configMap.service.http.internalPort
        }
    ],
    networkConfiguration: {
        subnets
    }
});
// Export the Fargate Service Info.
export const fargateServiceName = fargateService.name;
export const fargateServiceUrl = serviceLb.dnsName;
export const fargateServiceId = fargateService.id;
export const fargateServiceImage = image;
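A few likely culprits for the symptoms listed above, sketched below with the same variable names as in the question's code (this is a hedged suggestion, not verified against this exact stack): without launchType: 'FARGATE' the service defaults to the EC2 launch type; without assignPublicIp: true in public subnets (no NAT) the tasks can't pull the image or reach CloudWatch Logs, which would also explain the empty log group; and the task's security groups must allow the load balancer to reach the container port. Separately, the target group's port should match the container port (80 here), while the listener's port is what clients hit.

```typescript
// Sketch of a revised service definition (reuses the variables declared above).
const fargateService = new aws.ecs.Service(`${appName}-fargate`, {
    name: `${appName}-fargate`,
    cluster: ecsClusterId,
    taskDefinition: serviceTaskDefinition.arn,
    launchType: 'FARGATE', // without this, the EC2 launch type is assumed
    desiredCount: 5,
    loadBalancers: [
        {
            targetGroupArn: serviceTargetGroup.arn,
            containerName: `${appName}-fargate`,
            containerPort: 80 // must match a portMapping in the container definition
        }
    ],
    networkConfiguration: {
        subnets,
        securityGroups: [securityGroupId], // must allow the LB to reach port 80
        assignPublicIp: true // needed in public subnets to pull images / ship logs
    }
});
```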

Related

What should be the host name in a NestJS hybrid microservice when deployed on Kubernetes?

Tech stack -
nestjs - 2 microservices
kubernetes - AWS EKS
Ingress - nginx
Hybrid
const app = await NestFactory.create(AppModule);
const microservice = app.connectMicroservice<MicroserviceOptions>(
    {
        transport: Transport.TCP,
        options: {
            host: process.env.TCP_HOST,
            port: parseInt(process.env.TCP_EVALUATION_PORT),
        },
    },
    { inheritAppConfig: true },
);
await app.startAllMicroservices();
await app.listen(parseInt(config.get(ConfigEnum.PORT)), '0.0.0.0');
env
TCP_HOST: '0.0.0.0'
TCP_CORE_PORT: 8080
TCP_EVALUATION_PORT: 8080
Error
"connect ECONNREFUSED 0.0.0.0:8080"
Do I need to expose this port in Docker or add it somewhere in the security group?
Or maybe I need to pass a different host?
Note: The app is deployed properly without any error and the HTTP REST API seems to be working fine, but not the TCP @MessagePattern!
Thanks
Create a service to match the instances you want to connect to and use the service name.
Basically, in a hybrid application, the main.ts configuration will look like below -
service1
const app = await NestFactory.create(AppModule);
const microservice = app.connectMicroservice<MicroserviceOptions>(
    {
        transport: Transport.TCP,
        options: {
            host: '0.0.0.0',
            port: 6200,
        },
    },
    { inheritAppConfig: true },
);
await app.startAllMicroservices();
await app.listen(6300, '0.0.0.0');
In client
ClientsModule.register([
    {
        name: 'service1',
        transport: Transport.TCP,
        options: {
            host: 'service1',
            port: 6200,
        },
    },
]),
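For completeness, the piece that makes host: 'service1' resolvable is a Kubernetes Service named service1 whose selector matches the service1 pods. A sketch of that Service, written here with the Pulumi Kubernetes provider since TypeScript is used elsewhere on this page (plain YAML with the same fields works equally well; the label name: service1 is an assumption about how the pods are labeled):

```typescript
import * as k8s from "@pulumi/kubernetes";

// ClusterIP Service exposing the TCP microservice port; other pods in the
// cluster can then reach it at host "service1", port 6200.
const service1 = new k8s.core.v1.Service("service1", {
    metadata: { name: "service1" },
    spec: {
        selector: { name: "service1" }, // assumption: pods carry the label name=service1
        ports: [{ name: "tcp", port: 6200, targetPort: 6200 }],
    },
});
```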

unable to provision postgres12 database cluster using the AWS CDK

given the following code:
// please note I created a wrapper around the cdk components, hence cdk.ec2 etc.
const vpc = new cdk.ec2.Vpc(this, `${appName}VPC`);
const dbCredentials = cdk.rds.Credentials.fromGeneratedSecret('postgres');
const engine = cdk.rds.DatabaseClusterEngine.auroraPostgres({ version: cdk.rds.PostgresEngineVersion.VER_12_6 });
const parameterGroup = cdk.rds.ParameterGroup.fromParameterGroupName(this, 'ParameterGroup', 'default.aurora-postgresql12');
// Create the Serverless Aurora DB cluster; set the engine to Postgres
const dbcluster = new cdk.rds.DatabaseCluster(this, `${appName}Cluster`, {
    engine,
    parameterGroup,
    defaultDatabaseName: `${appName}DB`,
    credentials: dbCredentials,
    instanceProps: {
        instanceType: cdk.ec2.InstanceType.of(cdk.ec2.InstanceClass.R5, cdk.ec2.InstanceSize.LARGE),
        vpc,
        vpcSubnets: {
            subnetType: cdk.ec2.SubnetType.PUBLIC
        },
        publiclyAccessible: true,
        scaling: { autoPause: cdk.core.Duration.seconds(600) } // Optional. If not set, then instance will pause after 5 minutes
    }
});
Provided we use postgres11 - this code works without issue, when I try and install 12, I get the following error reported by the CDK:
The Parameter Group default.aurora-postgresql12 with DBParameterGroupFamily aurora-postgresql12 cannot be used for this instance. Please use a Parameter Group with DBParameterGroupFamily aurora-postgresql11 (Service: AmazonRDS; Status Code: 400; Error Code: InvalidParameterCombination; Request ID: ee90210d-070d-4593-9564-813b6fd4e331; Proxy: null)
I have tried loads of combinations for instanceType (most of which work in the RDS UI on the console) - but I cannot seem to install postgres12 - any ideas what I am doing wrong?
tried this as well:
const vpc = new cdk.ec2.Vpc(this, `${appName}VPC`);
const dbCredentials = cdk.rds.Credentials.fromGeneratedSecret('postgres');
// DEFINING VERSION 12.6 FOR ENGINE
const engine = cdk.rds.DatabaseClusterEngine.auroraPostgres({ version: cdk.rds.PostgresEngineVersion.VER_12_6 });
// DEFINING 11 FOR PARAMETER GROUP
const parameterGroup = cdk.rds.ParameterGroup.fromParameterGroupName(this, 'ParameterGroup', 'default.aurora-postgresql11');
const dbcluster = new cdk.rds.DatabaseCluster(this, `${appName}Cluster`, {
    engine,
    parameterGroup,
    defaultDatabaseName: `${appName}DB`,
    credentials: dbCredentials,
    instanceProps: {
        instanceType: cdk.ec2.InstanceType.of(cdk.ec2.InstanceClass.R6G, cdk.ec2.InstanceSize.LARGE),
        vpc,
        vpcSubnets: {
            subnetType: cdk.ec2.SubnetType.PUBLIC
        },
        publiclyAccessible: true,
        scaling: { autoPause: cdk.core.Duration.seconds(600) } // Optional. If not set, then instance will pause after 5 minutes
    }
});
works like a dream - but installs engine v11.9 :( - I need >=12 because I need to install pg_partman
Somewhere along the line the engine is not being properly set - or is hardcoded to 11
This works for me:
const AURORA_POSTGRES_ENGINE_VERSION = AuroraPostgresEngineVersion.VER_10_7
const RDS_MAJOR_VERSION = AURORA_POSTGRES_ENGINE_VERSION.auroraPostgresMajorVersion.split('.')[0]
const parameterGroup = ParameterGroup.fromParameterGroupName(
    scope,
    `DBPrameterGroup`,
    `default.aurora-postgresql${RDS_MAJOR_VERSION}`,
)
new ServerlessCluster(scope, `Aurora${id}`, {
    engine: DatabaseClusterEngine.auroraPostgres({
        version: AURORA_POSTGRES_ENGINE_VERSION,
    }),
    parameterGroup,
    defaultDatabaseName: DATABASE_NAME,
    credentials: {
        username: 'x',
    },
    vpc: this.vpc,
    vpcSubnets: this.subnetSelection,
    securityGroups: [securityGroup],
})
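Applying the same idea to Postgres 12: derive the default parameter group family from the engine version string so the two can't drift apart. The helper below is a hypothetical sketch (pure string manipulation); the commented CDK usage mirrors the answer above:

```typescript
// Derive the default Aurora Postgres parameter group name from an engine
// version string, e.g. "12.6" -> "default.aurora-postgresql12".
export function defaultParameterGroupName(engineVersion: string): string {
    const major = engineVersion.split(".")[0];
    return `default.aurora-postgresql${major}`;
}

// In the CDK stack (sketch, following the pattern in the answer above):
//   const v = AuroraPostgresEngineVersion.VER_12_6;
//   const parameterGroup = ParameterGroup.fromParameterGroupName(
//       scope, 'ParameterGroup',
//       defaultParameterGroupName(v.auroraPostgresMajorVersion));
```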

assumed-role is not authorized to perform: route53:ListHostedZonesByName; Adding a Route53 Policy to a CodePipeline CodeBuildAction's Assumed Role

My goal is to create a website at subdomain.mydomain.com pointing to a CloudFront CDN distributing a Lambda running Express that's rendering an S3 website. I'm using AWS CDK to do this.
I have an error that says
[Error at /plants-domain] User: arn:aws:sts::413025517373:assumed-role/plants-pipeline-BuildCDKRole0DCEDB8F-1BHVX6Z6H5X0H/AWSCodeBuild-39a582bf-8b89-447e-a6b4-b7f7f13c9db1 is not authorized to perform: route53:ListHostedZonesByName
It means:
[Error at /plants-domain] - error in the stack called plants-domain
User: arn:aws:sts::1234567890:assumed-role/plants-pipeline-BuildCDKRole0DCEDB8F-1BHVX6Z6H5X0H/AWSCodeBuild-39a582bf-8b89-447e-a6b4-b7f7f13c9db is the ARN of the Assumed Role associated with my object in the plants-pipeline executing route53.HostedZone.fromLookup() (but which object is it??)
is not authorized to perform: route53:ListHostedZonesByName the Assumed Role needs additional Route53 permissions
I believe this policy will permit the object in question to lookup the Hosted Zone:
const listHostZonesByNamePolicy = new IAM.PolicyStatement({
    actions: ['route53:ListHostedZonesByName'],
    resources: ['*'],
    effect: IAM.Effect.ALLOW,
});
The code using Route53.HostedZone.fromLookup() is in the first stack domain.ts. My other stack consumes the domain.ts template using CodePipelineAction.CloudFormationCreateUpdateStackAction (see below)
domain.ts
// The addition of this zone lookup broke CDK
const zone = route53.HostedZone.fromLookup(this, 'baseZone', {
domainName: 'domain.com',
});
// Distribution I'd like to point my subdomain.domain.com to
const distribution = new CloudFront.CloudFrontWebDistribution(this, 'website-cdn', {
// more stuff goes here
});
// Create the subdomain aRecord pointing to my distribution
const aRecord = new route53.ARecord(this, 'aliasRecord', {
zone: zone,
recordName: 'subdomain',
target: route53.RecordTarget.fromAlias(new targets.CloudFrontTarget(distribution)),
});
pipeline.ts
const pipeline = new CodePipeline.Pipeline(this, 'Pipeline', {
    pipelineName: props.name,
    restartExecutionOnUpdate: false,
});
// My solution to the missing AssumedRole synth error: Create a Role, add the missing Policy to it (and the Pipeline, just in case)
const buildRole = new IAM.Role(this, 'BuildRole', {
    assumedBy: new IAM.ServicePrincipal('codebuild.amazonaws.com'),
    path: '/',
});
const listHostZonesByNamePolicy = new IAM.PolicyStatement({
    actions: ['route53:ListHostedZonesByName'],
    resources: ['*'],
    effect: IAM.Effect.ALLOW,
});
buildRole.addToPrincipalPolicy(listHostZonesByNamePolicy);
pipeline.addStage({
    // This is the action that fails, when it calls `cdk synth`
    stageName: 'Build',
    actions: [
        new CodePipelineAction.CodeBuildAction({
            actionName: 'CDK',
            project: new CodeBuild.PipelineProject(this, 'BuildCDK', {
                projectName: 'CDK',
                buildSpec: CodeBuild.BuildSpec.fromSourceFilename('./aws/buildspecs/cdk.yml'),
                role: buildRole, // this didn't work
            }),
            input: outputSources,
            outputs: [outputCDK],
            runOrder: 10,
            role: buildRole, // this didn't work
        }),
        new CodePipelineAction.CodeBuildAction({
            actionName: 'Assets',
            // other stuff
        }),
        new CodePipelineAction.CodeBuildAction({
            actionName: 'Render',
            // other stuff
        }),
    ]
})
pipeline.addStage({
    stageName: 'Deploy',
    actions: [
        // This is the action calling the compiled domain stack template
        new CodePipelineAction.CloudFormationCreateUpdateStackAction({
            actionName: 'Domain',
            templatePath: outputCDK.atPath(`${props.name}-domain.template.json`),
            stackName: `${props.name}-domain`,
            adminPermissions: true,
            runOrder: 50,
            role: buildRole, // this didn't work
        }),
        // other actions
    ]
});
With the above configuration, unfortunately, I still receive the same error:
[Error at /plants-domain] User: arn:aws:sts::413025517373:assumed-role/plants-pipeline-BuildCDKRole0DCEDB8F-1BHVX6Z6H5X0H/AWSCodeBuild-957b18fb-909d-4e22-94f0-9aa6281ddb2d is not authorized to perform: route53:ListHostedZonesByName
With the Assumed Role ARN, is it possible to track down the object missing permissions? Is there another way to solve my IAM/AssumedUser role problem?
Here is the answer from the official doco: https://docs.aws.amazon.com/cdk/api/latest/docs/pipelines-readme.html#context-lookups
TLDR:
pipeline by default cannot do lookups -> 2 options:
synth on dev machine (make sure a dev has permissions)
add policy for lookups
new CodePipeline(this, 'Pipeline', {
    synth: new CodeBuildStep('Synth', {
        input: // ...input...
        commands: [
            // Commands to load cdk.context.json from somewhere here
            '...',
            'npm ci',
            'npm run build',
            'npx cdk synth',
            // Commands to store cdk.context.json back here
            '...',
        ],
        rolePolicyStatements: [
            new iam.PolicyStatement({
                actions: ['sts:AssumeRole'],
                resources: ['*'],
                conditions: {
                    StringEquals: {
                        'iam:ResourceTag/aws-cdk:bootstrap-role': 'lookup',
                    },
                },
            }),
        ],
    }),
});
Based on the error, the missing permission sits on the pipeline role (though it could also be granted at the stage or action level...).
By default a new role is created for the pipeline:
role?
Type: IRole (optional, default: a new IAM role will be created.)
The IAM role to be assumed by this Pipeline.
Instead, when you are constructing your pipeline add the buildRole there:
const pipeline = new CodePipeline.Pipeline(this, 'Pipeline', {
    pipelineName: props.name,
    restartExecutionOnUpdate: false,
    role: buildRole
});
Based on your pipeline, you never assigned the role to the relevant stage action; according to the docs:
pipeline.addStage({
    stageName: 'Deploy',
    actions: [
        // This is the action calling the compiled domain stack template
        new CodePipelineAction.CloudFormationCreateUpdateStackAction({
            ...
            role: buildRole, // this didn't work
        }),
        // other actions
    ]
});
Should be:
pipeline.addStage({
    stageName: 'Deploy',
    actions: [
        // This is the action calling the compiled domain stack template
        new CodePipelineAction.CloudFormationCreateUpdateStackAction({
            ....
            deploymentRole: buildRole
        }),
    ]
});
Why is it deploymentRole instead of just role, no one knows.

How to set container port and load balancer for aws fargate using pulumi?

I am trying to deploy a simple Flask Python app on AWS Fargate using Pulumi. The Dockerfile of the Python app exposes port 8000 from the container. How could I set it up with a load balancer using Pulumi?
I have tried the following so far, with index.ts (pulumi):
import * as awsx from "@pulumi/awsx";
// Step 1: Create an ECS Fargate cluster.
const cluster = new awsx.ecs.Cluster("first_cluster");
// Step 2: Define the Networking for our service.
const alb = new awsx.elasticloadbalancingv2.ApplicationLoadBalancer(
    "net-lb", { external: true, securityGroups: cluster.securityGroups });
const web = alb.createListener("web", { port: 80, external: true });
// Step 3: Build and publish a Docker image to a private ECR registry.
const img = awsx.ecs.Image.fromPath("app-img", "./app");
// Step 4: Create a Fargate service task that can scale out.
const appService = new awsx.ecs.FargateService("app-svc", {
    cluster,
    taskDefinitionArgs: {
        container: {
            image: img,
            cpu: 102 /*10% of 1024*/,
            memory: 50 /*MB*/,
            portMappings: [{ containerPort: 8000 }],
        },
    },
    desiredCount: 5,
});
// Step 5: Export the Internet address for the service.
export const url = web.endpoint.hostname;
And when I curl the url curl http://$(pulumi stack output url), I get:
<html>
<head><title>503 Service Temporarily Unavailable</title></head>
<body bgcolor="white">
<center><h1>503 Service Temporarily Unavailable</h1></center>
</body>
</html>
How could I map the load balancer port to the container port, which is 8000?
You can specify the target port on the application load balancer:
const atg = alb.createTargetGroup(
    "app-tg", { port: 8000, deregistrationDelay: 0 });
Then you can simply pass the listener to the service port mappings:
const appService = new awsx.ecs.FargateService("app-svc", {
    // ...
    taskDefinitionArgs: {
        container: {
            // ...
            portMappings: [web],
        },
    },
});
Here is a full repro with a public docker container, so that anybody could start with a working sample:
import * as awsx from "@pulumi/awsx";
const cluster = new awsx.ecs.Cluster("first_cluster");
const alb = new awsx.elasticloadbalancingv2.ApplicationLoadBalancer(
    "app-lb", { external: true, securityGroups: cluster.securityGroups });
const atg = alb.createTargetGroup(
    "app-tg", { port: 8080, deregistrationDelay: 0 });
const web = atg.createListener("web", { port: 80 });
const appService = new awsx.ecs.FargateService("app-svc", {
    cluster,
    taskDefinitionArgs: {
        container: {
            image: "gcr.io/google-samples/kubernetes-bootcamp:v1",
            portMappings: [web],
        },
    },
    desiredCount: 1,
});
export const url = web.endpoint.hostname;

How to edit a pulumi resource after it's been declared

I've declared a kubernetes deployment like:
const ledgerDeployment = new k8s.extensions.v1beta1.Deployment("ledger", {
    spec: {
        template: {
            metadata: {
                labels: { name: "ledger" },
                name: "ledger",
                // namespace: namespace,
            },
            spec: {
                containers: [
                    ...
                ],
                volumes: [
                    {
                        emptyDir: {},
                        name: "gunicorn-socket-dir"
                    }
                ]
            }
        }
    }
});
Later on in my index.ts I want to conditionally modify the volumes of the deployment. I think this is a quirk of pulumi I haven't wrapped my head around but here's my current attempt:
if (myCondition) {
    ledgerDeployment.spec.template.spec.volumes.apply(volumes =>
        volumes.push({
            name: "certificates",
            secret: {
                items: [
                    { key: "tls.key", path: "proxykey" },
                    { key: "tls.crt", path: "proxycert" }
                ],
                secretName: "star.builds.qwil.co"
            }
        })
    );
}
When I do this I get the following error: Property 'mode' is missing in type '{ key: string; path: string; }' but required in type 'KeyToPath'
I suspect I'm using apply incorrectly. When I try to directly modify ledgerDeployment.spec.template.spec.volumes.push() I get an error Property 'push' does not exist on type 'Output<Volume[]>'.
What is the pattern for modifying resources in Pulumi? How can I add a new volume to my deployment?
It's not possible to modify the resource inputs after you created the resource. Instead, you should place all the logic that defines the shape of inputs before you call the constructor.
In your example, this could be:
let volumes = [
    {
        emptyDir: {},
        name: "gunicorn-socket-dir"
    }
];
if (myCondition) {
    volumes.push({...});
}
const ledgerDeployment = new k8s.extensions.v1beta1.Deployment("ledger", {
    spec: {
        template: {
            metadata: {
                labels: { name: "ledger" },
                name: "ledger",
            },
            spec: {
                containers: [
                    // ...
                ],
                volumes, // <-- use the `volumes` array built above
            }
        }
    }
});