How to set the container port and load balancer for AWS Fargate using Pulumi?

I am trying to deploy a simple Flask Python app on AWS Fargate using Pulumi. The Dockerfile of the Python app exposes port 8000 from the container. How can I set it up with a load balancer using Pulumi?
This is what I have tried so far in index.ts (Pulumi):
import * as awsx from "@pulumi/awsx";

// Step 1: Create an ECS Fargate cluster.
const cluster = new awsx.ecs.Cluster("first_cluster");

// Step 2: Define the networking for our service.
const alb = new awsx.elasticloadbalancingv2.ApplicationLoadBalancer(
    "net-lb", { external: true, securityGroups: cluster.securityGroups });
const web = alb.createListener("web", { port: 80, external: true });

// Step 3: Build and publish a Docker image to a private ECR registry.
const img = awsx.ecs.Image.fromPath("app-img", "./app");

// Step 4: Create a Fargate service task that can scale out.
const appService = new awsx.ecs.FargateService("app-svc", {
    cluster,
    taskDefinitionArgs: {
        container: {
            image: img,
            cpu: 102 /*10% of 1024*/,
            memory: 50 /*MB*/,
            portMappings: [{ containerPort: 8000 }],
        },
    },
    desiredCount: 5,
});

// Step 5: Export the Internet address for the service.
export const url = web.endpoint.hostname;
And when I curl the URL with curl http://$(pulumi stack output url), I get:
<html>
<head><title>503 Service Temporarily Unavailable</title></head>
<body bgcolor="white">
<center><h1>503 Service Temporarily Unavailable</h1></center>
</body>
</html>
How can I map the load balancer port to the container port, which is 8000?

You can specify the target port on the application load balancer:
const atg = alb.createTargetGroup(
    "app-tg", { port: 8000, deregistrationDelay: 0 });
Then you can simply pass the listener to the service port mappings:
const appService = new awsx.ecs.FargateService("app-svc", {
    // ...
    taskDefinitionArgs: {
        container: {
            // ...
            portMappings: [web],
        },
    },
});
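When a listener (backed by the 8000 target group) is passed as a port mapping, awsx derives the container port from the target group, so traffic arriving at the load balancer on port 80 gets forwarded to port 8000 in the container.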
Here is a full repro with a public Docker container, so that anybody can start from a working sample:
import * as awsx from "@pulumi/awsx";

const cluster = new awsx.ecs.Cluster("first_cluster");

const alb = new awsx.elasticloadbalancingv2.ApplicationLoadBalancer(
    "app-lb", { external: true, securityGroups: cluster.securityGroups });
const atg = alb.createTargetGroup(
    "app-tg", { port: 8080, deregistrationDelay: 0 });
const web = atg.createListener("web", { port: 80 });

const appService = new awsx.ecs.FargateService("app-svc", {
    cluster,
    taskDefinitionArgs: {
        container: {
            image: "gcr.io/google-samples/kubernetes-bootcamp:v1",
            portMappings: [web],
        },
    },
    desiredCount: 1,
});

export const url = web.endpoint.hostname;
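After pulumi up completes, curl http://$(pulumi stack output url) should now return a response from the sample container: the listener accepts traffic on port 80 and forwards it to port 8080 in the container.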

Related

What should be the host name in nestjs hybrid microservice when deployed on Kubernetes

Tech stack:
- NestJS - 2 microservices
- Kubernetes - AWS EKS
- Ingress - nginx

Hybrid setup:
const app = await NestFactory.create(AppModule);
const microservice = app.connectMicroservice<MicroserviceOptions>(
    {
        transport: Transport.TCP,
        options: {
            host: process.env.TCP_HOST,
            port: parseInt(process.env.TCP_EVALUATION_PORT),
        },
    },
    { inheritAppConfig: true },
);
await app.startAllMicroservices();
await app.listen(parseInt(config.get(ConfigEnum.PORT)), '0.0.0.0');
Environment variables:
TCP_HOST: '0.0.0.0'
TCP_CORE_PORT: 8080
TCP_EVALUATION_PORT: 8080
Error
"connect ECONNREFUSED 0.0.0.0:8080"
Do I need to expose this port in Docker, or add it somewhere in the security group? Or do I maybe need to pass a different host?
Note: the app is deployed properly without any errors, and the HTTP REST API seems to be working fine, but not the TCP @MessagePattern!
Thanks
Create a Service that matches the instances you want to connect to, and use the service name as the host.
Basically, in a hybrid application the main.ts configuration will look like below.
service1
const app = await NestFactory.create(AppModule);
const microservice = app.connectMicroservice<MicroserviceOptions>(
    {
        transport: Transport.TCP,
        options: {
            host: '0.0.0.0',
            port: 6200,
        },
    },
    { inheritAppConfig: true },
);
await app.startAllMicroservices();
await app.listen(6300, '0.0.0.0');
In the client:
ClientsModule.register([
    {
        name: 'service1',
        transport: Transport.TCP,
        options: {
            host: 'service1',
            port: 6200,
        },
    },
]),
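For completeness, the piece that makes host: 'service1' resolvable is a Kubernetes Service selecting those pods. A minimal sketch, written with Pulumi's Kubernetes provider to keep the examples in TypeScript (the name, selector label, and port are assumptions; an equivalent plain YAML manifest works just as well):

import * as k8s from "@pulumi/kubernetes";

// Hypothetical Service exposing service1's TCP microservice port.
// The selector label "app: service1" is an assumption; it must match
// the labels on the Deployment's pod template.
const service1 = new k8s.core.v1.Service("service1", {
    metadata: { name: "service1" },
    spec: {
        selector: { app: "service1" },
        ports: [{ name: "tcp-microservice", port: 6200, targetPort: 6200 }],
    },
});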

Deploy a FargateService to an ECS cluster that lives within a different stack (project)

1. I have a project core-infra that encompasses all the core infra-related components (VPCs, subnets, ECS cluster, etc.)
2. I have microservice projects with independent stacks, each used for deployment.
I want to deploy a FargateService from a microservice project stack A to the already existing ECS cluster living within the core-infra stack.
Affected area/feature:
- Pulumi Service
- ECS
- Deploy microservice
- FargateService
- Pulumi GitHub issue link
Pulumi Stack References are the answer here:
https://www.pulumi.com/docs/intro/concepts/stack/#stackreferences
Your core-infra stack would output the ECS cluster ID and then stack B consumes that output so it can, for example, deploy an ECS service to the given cluster
(https://www.pulumi.com/registry/packages/aws/api-docs/ecs/service/).
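A minimal sketch of the two halves (organization, project, and output names here are assumptions):

// core-infra project, index.ts: export the cluster ID as a stack output.
export const ecsClusterId = cluster.id;

// microservice project, index.ts: consume it via a StackReference.
import * as pulumi from "@pulumi/pulumi";

const infra = new pulumi.StackReference(`org/core-infra/${pulumi.getStack()}`);
const ecsClusterId = infra.getOutput("ecsClusterId"); // usable as `cluster` in aws.ecs.Service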
I was able to deploy using AWS Classic.
PS: the setup is way more complex than with awsx, and the docs and resources aren't exhaustive.
Now I have a few issues:
- The load balancer isn't reachable and keeps loading forever
- I don't have any logs in the CloudWatch LogGroup
- Not sure how to use the LB Listener with the ECS service / not sure about the port mapping
Here is the complete code for reference (for people who are hustling), and I'd appreciate it if you could suggest any improvements/answers.
// Capture the env vars.
const appName = process.env.APP_NAME;
const namespace = process.env.NAMESPACE;
const environment = process.env.ENVIRONMENT;

// Load the deployment environment config.
const configMapLoader = new ConfigMapLoader(namespace, environment);

const env = pulumi.getStack();
const infra = new pulumi.StackReference(`org/core-datainfra/${env}`);

// Fetch the ECS Fargate cluster ID.
const ecsClusterId = infra.getOutput('ecsClusterId');
// Fetch the DeVpc ID.
const deVpcId = infra.getOutput('deVpcId');
// The DeVpc subnet IDs.
const subnets = ['subnet-aaaaaaaaaa', 'subnet-bbbbbbbbb'];
// Fetch the DeVpc security group ID.
const securityGroupId = infra.getOutput('deSecurityGroupId');

// Define the networking for our service.
const serviceLb = new aws.lb.LoadBalancer(`${appName}-lb`, {
    internal: false,
    loadBalancerType: 'application',
    securityGroups: [securityGroupId],
    subnets,
    enableDeletionProtection: false,
    tags: {
        Environment: environment
    }
});

const serviceTargetGroup = new aws.lb.TargetGroup(`${appName}-t-g`, {
    port: configMapLoader.configMap.service.http.externalPort,
    protocol: configMapLoader.configMap.service.http.protocol,
    vpcId: deVpcId,
    targetType: 'ip'
});

const http = new aws.lb.Listener(`${appName}-listener`, {
    loadBalancerArn: serviceLb.arn,
    port: configMapLoader.configMap.service.http.externalPort,
    protocol: configMapLoader.configMap.service.http.protocol,
    defaultActions: [
        {
            type: 'forward',
            targetGroupArn: serviceTargetGroup.arn
        }
    ]
});

// Create the AmazonECSTaskExecutionRolePolicy.
const taskExecutionPolicy = new aws.iam.Policy(
    `${appName}-task-execution-policy`,
    {
        policy: JSON.stringify({
            Version: '2012-10-17',
            Statement: [
                {
                    Effect: 'Allow',
                    Action: [
                        'ecr:GetAuthorizationToken',
                        'ecr:BatchCheckLayerAvailability',
                        'ecr:GetDownloadUrlForLayer',
                        'ecr:BatchGetImage',
                        'logs:CreateLogStream',
                        'logs:PutLogEvents',
                        'ec2:AuthorizeSecurityGroupIngress',
                        'ec2:Describe*',
                        'elasticloadbalancing:DeregisterInstancesFromLoadBalancer',
                        'elasticloadbalancing:DeregisterTargets',
                        'elasticloadbalancing:Describe*',
                        'elasticloadbalancing:RegisterInstancesWithLoadBalancer',
                        'elasticloadbalancing:RegisterTargets'
                    ],
                    Resource: '*'
                }
            ]
        })
    }
);

// IAM role that allows Amazon ECS to make calls to the load balancer.
const taskExecutionRole = new aws.iam.Role(`${appName}-task-execution-role`, {
    assumeRolePolicy: JSON.stringify({
        Version: '2012-10-17',
        Statement: [
            {
                Effect: 'Allow',
                Principal: {
                    Service: ['ecs-tasks.amazonaws.com']
                },
                Action: 'sts:AssumeRole'
            },
            {
                Action: 'sts:AssumeRole',
                Principal: {
                    Service: 'ecs.amazonaws.com'
                },
                Effect: 'Allow',
                Sid: ''
            },
            {
                Action: 'sts:AssumeRole',
                Principal: {
                    Service: 'ec2.amazonaws.com'
                },
                Effect: 'Allow',
                Sid: ''
            }
        ]
    }),
    tags: {
        name: `${appName}-iam-role`
    }
});

new aws.iam.RolePolicyAttachment(`${appName}-role-policy`, {
    role: taskExecutionRole.name,
    policyArn: taskExecutionPolicy.arn
});

// New image to be pulled.
const image = `${configMapLoader.configMap.service.image.repository}:${process.env.IMAGE_TAG}`;

// Set up the log group.
const awsLogGroup = new aws.cloudwatch.LogGroup(`${appName}-awslogs-group`, {
    name: `${appName}-awslogs-group`,
    tags: {
        Application: `${appName}`,
        Environment: 'production'
    }
});

const serviceTaskDefinition = new aws.ecs.TaskDefinition(
    `${appName}-task-definition`,
    {
        family: `${appName}-task-definition`,
        networkMode: 'awsvpc',
        executionRoleArn: taskExecutionRole.arn,
        requiresCompatibilities: ['FARGATE'],
        cpu: configMapLoader.configMap.service.resources.limits.cpu,
        memory: configMapLoader.configMap.service.resources.limits.memory,
        containerDefinitions: JSON.stringify([
            {
                name: `${appName}-fargate`,
                image,
                cpu: parseInt(
                    configMapLoader.configMap.service.resources.limits.cpu
                ),
                memory: parseInt(
                    configMapLoader.configMap.service.resources.limits.memory
                ),
                essential: true,
                portMappings: [
                    {
                        containerPort: 80,
                        hostPort: 80
                    }
                ],
                environment: configMapLoader.getConfigAsEnvironment(),
                logConfiguration: {
                    logDriver: 'awslogs',
                    options: {
                        'awslogs-group': `${appName}-awslogs-group`,
                        'awslogs-region': 'us-east-2',
                        'awslogs-stream-prefix': `${appName}`
                    }
                }
            }
        ])
    }
);

// Create a Fargate service task that can scale out.
const fargateService = new aws.ecs.Service(`${appName}-fargate`, {
    name: `${appName}-fargate`,
    cluster: ecsClusterId,
    taskDefinition: serviceTaskDefinition.arn,
    desiredCount: 5,
    loadBalancers: [
        {
            targetGroupArn: serviceTargetGroup.arn,
            containerName: `${appName}-fargate`,
            containerPort: configMapLoader.configMap.service.http.internalPort
        }
    ],
    networkConfiguration: {
        subnets
    }
});

// Export the Fargate service info.
export const fargateServiceName = fargateService.name;
export const fargateServiceUrl = serviceLb.dnsName;
export const fargateServiceId = fargateService.id;
export const fargateServiceImage = image;
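Two things worth double-checking against the issues above, as likely causes rather than certain fixes: ECS requires the containerPort in the service's loadBalancers block to match a containerPort declared in the task definition's portMappings (the container here maps 80, so internalPort must also be 80, and the target group should point at the same port); and Fargate tasks in awsvpc mode can't pull images or ship logs to CloudWatch without a route to the internet, so networkConfiguration likely also needs assignPublicIp: true (or NAT-routed private subnets) plus securityGroups, which would explain both the empty log group and the unreachable load balancer.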

Unable to provision a Postgres 12 database cluster using the AWS CDK

Given the following code:
// please note I created a wrapper around the cdk components, hence cdk.ec2 etc.
const vpc = new cdk.ec2.Vpc(this, `${appName}VPC`);
const dbCredentials = cdk.rds.Credentials.fromGeneratedSecret('postgres');
const engine = cdk.rds.DatabaseClusterEngine.auroraPostgres({ version: cdk.rds.PostgresEngineVersion.VER_12_6 });
const parameterGroup = cdk.rds.ParameterGroup.fromParameterGroupName(this, 'ParameterGroup', 'default.aurora-postgresql12');

// Create the Serverless Aurora DB cluster; set the engine to Postgres.
const dbcluster = new cdk.rds.DatabaseCluster(this, `${appName}Cluster`, {
    engine,
    parameterGroup,
    defaultDatabaseName: `${appName}DB`,
    credentials: dbCredentials,
    instanceProps: {
        instanceType: cdk.ec2.InstanceType.of(cdk.ec2.InstanceClass.R5, cdk.ec2.InstanceSize.LARGE),
        vpc,
        vpcSubnets: {
            subnetType: cdk.ec2.SubnetType.PUBLIC
        },
        publiclyAccessible: true,
        scaling: { autoPause: cdk.core.Duration.seconds(600) } // Optional. If not set, the instance will pause after 5 minutes.
    }
});
Provided we use Postgres 11, this code works without issue; when I try to install 12, I get the following error reported by the CDK:
The Parameter Group default.aurora-postgresql12 with DBParameterGroupFamily aurora-postgresql12 cannot be used for this instance. Please use a Parameter Group with DBParameterGroupFamily aurora-postgresql11 (Service: AmazonRDS; Status Code: 400; Error Code: InvalidParameterCombination; Request ID: ee90210d-070d-4593-9564-813b6fd4e331; Proxy: null)
I have tried loads of combinations for instanceType (most of which work in the RDS console UI), but I cannot seem to install Postgres 12. Any ideas what I am doing wrong?
I tried this as well:
const vpc = new cdk.ec2.Vpc(this, `${appName}VPC`);
const dbCredentials = cdk.rds.Credentials.fromGeneratedSecret('postgres');

// DEFINING VERSION 12.6 FOR ENGINE
const engine = cdk.rds.DatabaseClusterEngine.auroraPostgres({ version: cdk.rds.PostgresEngineVersion.VER_12_6 });
// DEFINING 11 FOR PARAMETER GROUP
const parameterGroup = cdk.rds.ParameterGroup.fromParameterGroupName(this, 'ParameterGroup', 'default.aurora-postgresql11');

const dbcluster = new cdk.rds.DatabaseCluster(this, `${appName}Cluster`, {
    engine,
    parameterGroup,
    defaultDatabaseName: `${appName}DB`,
    credentials: dbCredentials,
    instanceProps: {
        instanceType: cdk.ec2.InstanceType.of(cdk.ec2.InstanceClass.R6G, cdk.ec2.InstanceSize.LARGE),
        vpc,
        vpcSubnets: {
            subnetType: cdk.ec2.SubnetType.PUBLIC
        },
        publiclyAccessible: true,
        scaling: { autoPause: cdk.core.Duration.seconds(600) } // Optional. If not set, the instance will pause after 5 minutes.
    }
});
This works like a dream, but installs engine v11.9 :( I need 12+ because I need to install pg_partman.
Somewhere along the line the engine version is not being properly set, or is hardcoded to 11.
This works for me:
const AURORA_POSTGRES_ENGINE_VERSION = AuroraPostgresEngineVersion.VER_10_7;
const RDS_MAJOR_VERSION = AURORA_POSTGRES_ENGINE_VERSION.auroraPostgresMajorVersion.split('.')[0];

const parameterGroup = ParameterGroup.fromParameterGroupName(
    scope,
    `DBParameterGroup`,
    `default.aurora-postgresql${RDS_MAJOR_VERSION}`,
);

new ServerlessCluster(scope, `Aurora${id}`, {
    engine: DatabaseClusterEngine.auroraPostgres({
        version: AURORA_POSTGRES_ENGINE_VERSION,
    }),
    parameterGroup,
    defaultDatabaseName: DATABASE_NAME,
    credentials: {
        username: 'x',
    },
    vpc: this.vpc,
    vpcSubnets: this.subnetSelection,
    securityGroups: [securityGroup],
});
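Two details seem to matter here. The question passes cdk.rds.PostgresEngineVersion (the RDS instance engine versions) to auroraPostgres, while this answer uses AuroraPostgresEngineVersion, which is what DatabaseClusterEngine.auroraPostgres expects. And deriving the parameter group family from auroraPostgresMajorVersion keeps it consistent with the engine (e.g. default.aurora-postgresql12 for AuroraPostgresEngineVersion.VER_12_6), which should avoid the InvalidParameterCombination error above when you swap in version 12.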

How can I refer to the generated domain name of `elasticsearch.CfnDomain` in AWS CDK?

I created a CfnDomain in AWS CDK and I was trying to get the generated domain name to create an alarm.
const es = new elasticsearch.CfnDomain(this, id, esProps);

new cloudwatch.CfnAlarm(this, "test", {
    ...
    dimensions: [
        {
            name: "DomainName",
            value: es.domainName,
        },
    ],
});
But it seems that the domainName attribute is actually just the argument that I passed in (I passed none, so it will be autogenerated), so it's actually undefined and can't be used.
Is there any way I can specify it such that it waits for the Elasticsearch cluster to be created, so that I can obtain the generated domain name? Or is there any other way to create an alarm for the metrics of the cluster?
You can use CfnDomain.ref as the domain value for your dimension. Sample alarm creation for red cluster status:
const domain: CfnDomain = ...;

const elasticDimension = {
    "DomainName": domain.ref,
};

const metricRed = new Metric({
    namespace: "AWS/ES",
    metricName: "ClusterStatus.red",
    statistic: "maximum",
    period: Duration.minutes(1),
    dimensions: elasticDimension
});

const redAlarm = metricRed.createAlarm(construct, "esRedAlarm", {
    alarmName: "esRedAlarm",
    evaluationPeriods: 1,
    threshold: 1
});
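This works because, for the underlying AWS::Elasticsearch::Domain resource, the CloudFormation Ref intrinsic resolves to the generated domain name at deploy time, whereas the construct's domainName property merely echoes back the (here undefined) value you passed in.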

Unable to create a new Kubernetes deployment using the Node 'kubernetes-client'

const k8s = require('kubernetes-client');

const endpoint = 'https://' + IP;
const ext = new k8s.Extensions({
    url: endpoint,
    version: 'v1beta1',
    insecureSkipTlsVerify: true,
    namespace,
    auth: {
        bearer: token,
    },
});

// Deployment body; the pod template's containers live under
// spec.template.spec.containers.
const body = {
    spec: {
        template: {
            spec: {
                containers: [{
                    name,
                    image,
                }]
            }
        }
    }
};

ext.namespaces.deployments(name).put({ body }, (err, response) => { console.log(response); });
The above calls seem to authenticate fine with GET and PUT; however, I get the following error message when using POST:
the server does not allow this method on the requested resource
I think the problem might be that, due to the Kubernetes 1.6 switch to RBAC, your pod does not have the right privileges to schedule pods, get logs, etc. through the API server.
Make sure you are using the admin.conf kubeconfig.
But be aware that giving the node cluster-admin permissions makes anyone who can access the node a cluster admin. ;)
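For example, a quick (and deliberately heavy-handed, per the caveat above) way to test this is to grant the default service account cluster-admin with kubectl create clusterrolebinding default-admin --clusterrole=cluster-admin --serviceaccount=default:default; if the POST then succeeds, scope a proper Role/RoleBinding down to just the verbs and resources the client actually needs.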