Given the following code:
// please note I created a wrapper around the cdk components, hence cdk.ec2 etc.
const vpc = new cdk.ec2.Vpc(this, `${appName}VPC`);
const dbCredentials = cdk.rds.Credentials.fromGeneratedSecret('postgres');
const engine = cdk.rds.DatabaseClusterEngine.auroraPostgres({ version: cdk.rds.PostgresEngineVersion.VER_12_6 });
const parameterGroup = cdk.rds.ParameterGroup.fromParameterGroupName(this, 'ParameterGroup', 'default.aurora-postgresql12');
// Create the Serverless Aurora DB cluster; set the engine to Postgres
const dbcluster = new cdk.rds.DatabaseCluster(this, `${appName}Cluster`, {
  engine,
  parameterGroup,
  defaultDatabaseName: `${appName}DB`,
  credentials: dbCredentials,
  instanceProps: {
    instanceType: cdk.ec2.InstanceType.of(cdk.ec2.InstanceClass.R5, cdk.ec2.InstanceSize.LARGE),
    vpc,
    vpcSubnets: {
      subnetType: cdk.ec2.SubnetType.PUBLIC
    },
    publiclyAccessible: true,
    scaling: { autoPause: cdk.core.Duration.seconds(600) } // Optional. If not set, then instance will pause after 5 minutes
  }
});
Provided we use Postgres 11, this code works without issue. When I try to install 12, I get the following error reported by the CDK:
The Parameter Group default.aurora-postgresql12 with DBParameterGroupFamily aurora-postgresql12 cannot be used for this instance. Please use a Parameter Group with DBParameterGroupFamily aurora-postgresql11 (Service: AmazonRDS; Status Code: 400; Error Code: InvalidParameterCombination; Request ID: ee90210d-070d-4593-9564-813b6fd4e331; Proxy: null)
I have tried loads of combinations for instanceType (most of which work in the RDS console UI), but I cannot seem to install Postgres 12. Any ideas what I am doing wrong?
I tried this as well:
const vpc = new cdk.ec2.Vpc(this, `${appName}VPC`);
const dbCredentials = cdk.rds.Credentials.fromGeneratedSecret('postgres');
//DEFINING VERSION 12.6 FOR ENGINE
const engine = cdk.rds.DatabaseClusterEngine.auroraPostgres({ version: cdk.rds.PostgresEngineVersion.VER_12_6 });
//DEFINING 11 FOR PARAMETER GROUP
const parameterGroup = cdk.rds.ParameterGroup.fromParameterGroupName(this, 'ParameterGroup', 'default.aurora-postgresql11');
const dbcluster = new cdk.rds.DatabaseCluster(this, `${appName}Cluster`, {
  engine,
  parameterGroup,
  defaultDatabaseName: `${appName}DB`,
  credentials: dbCredentials,
  instanceProps: {
    instanceType: cdk.ec2.InstanceType.of(cdk.ec2.InstanceClass.R6G, cdk.ec2.InstanceSize.LARGE),
    vpc,
    vpcSubnets: {
      subnetType: cdk.ec2.SubnetType.PUBLIC
    },
    publiclyAccessible: true,
    scaling: { autoPause: cdk.core.Duration.seconds(600) } // Optional. If not set, then instance will pause after 5 minutes
  }
});
This works like a dream, but it installs engine v11.9 :( I need version 12 or later because I need to install pg_partman.
Somewhere along the line the engine version is not being properly set, or it is hardcoded to 11.
This works for me:
const AURORA_POSTGRES_ENGINE_VERSION = AuroraPostgresEngineVersion.VER_10_7
const RDS_MAJOR_VERSION = AURORA_POSTGRES_ENGINE_VERSION.auroraPostgresMajorVersion.split('.')[0]
const parameterGroup = ParameterGroup.fromParameterGroupName(
  scope,
  `DBPrameterGroup`,
  `default.aurora-postgresql${RDS_MAJOR_VERSION}`,
)
new ServerlessCluster(scope, `Aurora${id}`, {
  engine: DatabaseClusterEngine.auroraPostgres({
    version: AURORA_POSTGRES_ENGINE_VERSION,
  }),
  parameterGroup,
  defaultDatabaseName: DATABASE_NAME,
  credentials: {
    username: 'x',
  },
  vpc: this.vpc,
  vpcSubnets: this.subnetSelection,
  securityGroups: [securityGroup],
})
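One thing worth checking in the original snippets: DatabaseClusterEngine.auroraPostgres() expects an AuroraPostgresEngineVersion, while the question's code passes cdk.rds.PostgresEngineVersion.VER_12_6 (the enum for non-Aurora RDS Postgres), which could explain why the cluster quietly falls back to a default 11.x engine. Below is a rough sketch applying the answer's derive-the-family-from-the-version pattern back to a provisioned DatabaseCluster. It reuses the cdk.* wrapper names from the question and assumes your CDK version has AuroraPostgresEngineVersion.VER_12_6 and that the instance class is available for Aurora PostgreSQL 12 in your region:
// Sketch only: use the Aurora-specific engine version class and derive the
// parameter group family from it, so the two can't drift apart.
const engineVersion = cdk.rds.AuroraPostgresEngineVersion.VER_12_6;
const majorVersion = engineVersion.auroraPostgresMajorVersion.split('.')[0]; // "12"
const engine = cdk.rds.DatabaseClusterEngine.auroraPostgres({ version: engineVersion });
const parameterGroup = cdk.rds.ParameterGroup.fromParameterGroupName(
  this,
  'ParameterGroup',
  `default.aurora-postgresql${majorVersion}`,
);
const dbcluster = new cdk.rds.DatabaseCluster(this, `${appName}Cluster`, {
  engine,
  parameterGroup,
  defaultDatabaseName: `${appName}DB`,
  credentials: cdk.rds.Credentials.fromGeneratedSecret('postgres'),
  instanceProps: {
    // Assumption: r6g.large is supported for Aurora PostgreSQL 12 in your region.
    instanceType: cdk.ec2.InstanceType.of(cdk.ec2.InstanceClass.R6G, cdk.ec2.InstanceSize.LARGE),
    vpc,
  },
});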
Related
1- I have a project core-infra that encompasses all the core infra related components (VPCs, Subnets, ECS Cluster, etc.)
2- I have microservice projects with independent stacks, each used for deployment
I want to deploy a FargateService from a microservice project stack A to the already existing ECS cluster living within the core-infra stack
Affected area/feature
Pulumi Service
ECS
Deploy microservice
FargateService
Pulumi Stack References are the answer here:
https://www.pulumi.com/docs/intro/concepts/stack/#stackreferences
Your core-infra stack would output the ECS cluster ID, and then stack B consumes that output so it can, for example, deploy an ECS service to the given cluster
(https://www.pulumi.com/registry/packages/aws/api-docs/ecs/service/), as sketched below.
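A minimal sketch of the consuming side (the stack path and output name are placeholders; adjust them to your org/project/stack layout):
// Microservice stack: consume the cluster ID exported by core-infra.
import * as pulumi from '@pulumi/pulumi';

// Reference the already-deployed core-infra stack for this environment.
const infra = new pulumi.StackReference('org/core-infra/dev'); // placeholder stack path

// Pull the output that core-infra exported with `export const ecsClusterId = cluster.id;`.
const ecsClusterId = infra.getOutput('ecsClusterId');

// `ecsClusterId` is a pulumi.Output and can be passed directly as the
// `cluster` input of aws.ecs.Service.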
I was able to deploy using the AWS Classic provider.
PS: The setup is way more complex than with awsx, and the docs and resources aren't exhaustive.
Now I have a few issues:
The load balancer isn't reachable and keeps loading forever
I don't have any logs in the CloudWatch LogGroup
Not sure how to use the LB Listener with the ECS service / not sure about the port mapping
Here is the complete code for reference (for people who are hustling), and I'd appreciate it if you could suggest any improvements/answers.
// Capture the EnvVars
const appName = process.env.APP_NAME;
const namespace = process.env.NAMESPACE;
const environment = process.env.ENVIRONMENT;
// Load the Deployment Environment config.
const configMapLoader = new ConfigMapLoader(namespace, environment);
const env = pulumi.getStack();
const infra = new pulumi.StackReference(`org/core-datainfra/${env}`);
// Fetch ECS Fargate cluster ID.
const ecsClusterId = infra.getOutput('ecsClusterId');
// Fetch DeVpc ID.
const deVpcId = infra.getOutput('deVpcId');
// DeVpc subnet IDs (hardcoded here rather than fetched from the stack reference).
const subnets = ['subnet-aaaaaaaaaa', 'subnet-bbbbbbbbb'];
// Fetch DeVpc Security Group ID.
const securityGroupId = infra.getOutput('deSecurityGroupId');
// Define the Networking for our service.
const serviceLb = new aws.lb.LoadBalancer(`${appName}-lb`, {
  internal: false,
  loadBalancerType: 'application',
  securityGroups: [securityGroupId],
  subnets,
  enableDeletionProtection: false,
  tags: {
    Environment: environment
  }
});
const serviceTargetGroup = new aws.lb.TargetGroup(`${appName}-t-g`, {
  port: configMapLoader.configMap.service.http.externalPort,
  protocol: configMapLoader.configMap.service.http.protocol,
  vpcId: deVpcId,
  targetType: 'ip'
});
const http = new aws.lb.Listener(`${appName}-listener`, {
  loadBalancerArn: serviceLb.arn,
  port: configMapLoader.configMap.service.http.externalPort,
  protocol: configMapLoader.configMap.service.http.protocol,
  defaultActions: [
    {
      type: 'forward',
      targetGroupArn: serviceTargetGroup.arn
    }
  ]
});
// Create AmazonECSTaskExecutionRolePolicy
const taskExecutionPolicy = new aws.iam.Policy(
  `${appName}-task-execution-policy`,
  {
    policy: JSON.stringify({
      Version: '2012-10-17',
      Statement: [
        {
          Effect: 'Allow',
          Action: [
            'ecr:GetAuthorizationToken',
            'ecr:BatchCheckLayerAvailability',
            'ecr:GetDownloadUrlForLayer',
            'ecr:BatchGetImage',
            'logs:CreateLogStream',
            'logs:PutLogEvents',
            'ec2:AuthorizeSecurityGroupIngress',
            'ec2:Describe*',
            'elasticloadbalancing:DeregisterInstancesFromLoadBalancer',
            'elasticloadbalancing:DeregisterTargets',
            'elasticloadbalancing:Describe*',
            'elasticloadbalancing:RegisterInstancesWithLoadBalancer',
            'elasticloadbalancing:RegisterTargets'
          ],
          Resource: '*'
        }
      ]
    })
  }
);
// IAM role that allows Amazon ECS to make calls to the load balancer
const taskExecutionRole = new aws.iam.Role(`${appName}-task-execution-role`, {
  assumeRolePolicy: JSON.stringify({
    Version: '2012-10-17',
    Statement: [
      {
        Effect: 'Allow',
        Principal: {
          Service: ['ecs-tasks.amazonaws.com']
        },
        Action: 'sts:AssumeRole'
      },
      {
        Action: 'sts:AssumeRole',
        Principal: {
          Service: 'ecs.amazonaws.com'
        },
        Effect: 'Allow',
        Sid: ''
      },
      {
        Action: 'sts:AssumeRole',
        Principal: {
          Service: 'ec2.amazonaws.com'
        },
        Effect: 'Allow',
        Sid: ''
      }
    ]
  }),
  tags: {
    name: `${appName}-iam-role`
  }
});
new aws.iam.RolePolicyAttachment(`${appName}-role-policy`, {
  role: taskExecutionRole.name,
  policyArn: taskExecutionPolicy.arn
});
// New image to be pulled
const image = `${configMapLoader.configMap.service.image.repository}:${process.env.IMAGE_TAG}`;
// Set up Log Group
const awsLogGroup = new aws.cloudwatch.LogGroup(`${appName}-awslogs-group`, {
  name: `${appName}-awslogs-group`,
  tags: {
    Application: `${appName}`,
    Environment: 'production'
  }
});
const serviceTaskDefinition = new aws.ecs.TaskDefinition(
  `${appName}-task-definition`,
  {
    family: `${appName}-task-definition`,
    networkMode: 'awsvpc',
    executionRoleArn: taskExecutionRole.arn,
    requiresCompatibilities: ['FARGATE'],
    cpu: configMapLoader.configMap.service.resources.limits.cpu,
    memory: configMapLoader.configMap.service.resources.limits.memory,
    containerDefinitions: JSON.stringify([
      {
        name: `${appName}-fargate`,
        image,
        cpu: parseInt(
          configMapLoader.configMap.service.resources.limits.cpu
        ),
        memory: parseInt(
          configMapLoader.configMap.service.resources.limits.memory
        ),
        essential: true,
        portMappings: [
          {
            containerPort: 80,
            hostPort: 80
          }
        ],
        environment: configMapLoader.getConfigAsEnvironment(),
        logConfiguration: {
          logDriver: 'awslogs',
          options: {
            'awslogs-group': `${appName}-awslogs-group`,
            'awslogs-region': 'us-east-2',
            'awslogs-stream-prefix': `${appName}`
          }
        }
      }
    ])
  }
);
// Create a Fargate service task that can scale out.
const fargateService = new aws.ecs.Service(`${appName}-fargate`, {
  name: `${appName}-fargate`,
  cluster: ecsClusterId,
  taskDefinition: serviceTaskDefinition.arn,
  desiredCount: 5,
  loadBalancers: [
    {
      targetGroupArn: serviceTargetGroup.arn,
      containerName: `${appName}-fargate`,
      containerPort: configMapLoader.configMap.service.http.internalPort
    }
  ],
  networkConfiguration: {
    subnets
  }
});
// Export the Fargate Service Info.
export const fargateServiceName = fargateService.name;
export const fargateServiceUrl = serviceLb.dnsName;
export const fargateServiceId = fargateService.id;
export const fargateServiceImage = image;
I am trying to create an RDS instance on already existing subnets.
There are three subnets:
subnet-0b5985476dee1f20c public on 1d
subnet-085c85398f27adbfd isolated on 1c
subnet-0fdd37150bfff91f0 isolated on 1d
So I want to use the second and third subnets as the subnet group.
My code is below.
const VPCID = 'vpc-0867d6797e6XXXXXb';
const vpc = ec2.Vpc.fromLookup(this, "VPC", {
  vpcId: VPCID
});
const mySecurityGroup = new ec2.SecurityGroup(this, 'sg-allfordevelop', {
  vpc,
  description: 'Allow sql access to database',
  allowAllOutbound: true,
  securityGroupName: `cdk-st-${targetEnv}-sg`
});
mySecurityGroup.addIngressRule(ec2.Peer.anyIpv4(), ec2.Port.tcp(3306), 'allow mysql port');
const dbInstance = new rds.DatabaseInstance(this, 'Instance', {
  engine: rds.DatabaseInstanceEngine.mysql({
    version: rds.MysqlEngineVersion.VER_8_0_19,
  }),
  vpc,
  securityGroups: [mySecurityGroup],
  instanceIdentifier: `cdk-${targetEnv}-rds`,
  vpcSubnets: {
    subnetType: ec2.SubnetType.PRIVATE_ISOLATED,
  },
  instanceType: ec2.InstanceType.of(ec2.InstanceClass.BURSTABLE2, ec2.InstanceSize.MICRO),
  removalPolicy: cdk.RemovalPolicy.DESTROY,
  databaseName: `st${targetEnv}`,
  credentials: rds.Credentials.fromPassword('django', new cdk.SecretValue("mypass"))
});
However, it generates the template below.
The subnet IDs in it are not my existing ones.
Does this mean it is trying to make new subnets?
How can I tell it to use the already existing subnets?
"InstanceSubnetGroupF2CBA54F": {
"Type": "AWS::RDS::DBSubnetGroup",
"Properties": {
"DBSubnetGroupDescription": "Subnet group for Instance database",
"SubnetIds": [
"subnet-0b5985476dee1f20c",
"subnet-0d7c1590c61b62782"
]
},
"Metadata": {
"aws:cdk:path": "st-dev-base-stack/Instance/SubnetGroup/Default"
}
},
Solved.
There was old information cached in cdk.context.json.
I deleted this file and it works.
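For anyone hitting the same thing: Vpc.fromLookup caches its lookup results in cdk.context.json, and stale entries can be removed with cdk context --clear (or cdk context --reset for a single key) instead of deleting the file by hand. If you want to be independent of the lookup altogether, the existing subnets can also be pinned explicitly; a sketch using the isolated subnet IDs from the question:
// Sketch: select the existing isolated subnets by ID instead of by type.
const dbSubnets = [
  ec2.Subnet.fromSubnetId(this, 'DbSubnet1c', 'subnet-085c85398f27adbfd'),
  ec2.Subnet.fromSubnetId(this, 'DbSubnet1d', 'subnet-0fdd37150bfff91f0'),
];

const dbInstance = new rds.DatabaseInstance(this, 'Instance', {
  engine: rds.DatabaseInstanceEngine.mysql({ version: rds.MysqlEngineVersion.VER_8_0_19 }),
  vpc,
  vpcSubnets: { subnets: dbSubnets }, // use exactly these subnets for the subnet group
  instanceType: ec2.InstanceType.of(ec2.InstanceClass.BURSTABLE2, ec2.InstanceSize.MICRO),
});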
I'm trying to migrate my Mongo database from Compose to IBM Cloud Databases for MongoDB, and in their documentation (https://www.compose.com/articles/exporting-databases-from-compose-for-mongodb-to-ibm-cloud/) it says: "With a new Databases for MongoDB deployment, you'll be provided with a replica set of two endpoints to connect to your database. Databases for MongoDB also uses a TLS certificate, so you'll need to configure your MongoDB application driver to accept two hosts and a TLS certificate"
How can I set the TLS certificate provided by IBM Cloud in the Mongoose connection?
Nothing I've tried worked :(
I can see my database if I'm using the IBM CLI, but from my Node.js application I cannot connect to it.
var mongoose = require('mongoose');
mongoose.Promise = Promise;
var uri = "mongodb://admin:passSftgdsdfvrrdfs@host1-1231243242.databases.appdomain.cloud:32605,host2-1231243242,host1-1231243242/testDatabaseName?authSource=admin&replicaSet=replset";
myDb.db = mongoose.createConnection(uri, {
  tls: true,
  tlsCAFile: `076baeec-1337-11e9-8c9b-ae5t6r3d1b17` // this is the name of the certificate and it is placed in the root
  // tlsCAFile: require('fs').readFileSync('041baeec-1272-11e9-8c9b-ae2e3a9c1b17') // I have also tried something like this
});
Absolutely nothing is working even though the database is there.
Please help me.
I'm also facing the same problem.
This works for me:
mongoose.connect('mongodb+srv://username:password@host/db_name?authSource=admin&replicaSet=replicasetname&tls=true&tlsCAFile=/root/ca-certificate.crt', { /* some config */ })
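If you'd rather keep the certificate path out of the connection string, the same options can be passed to Mongoose directly. A sketch, where the hosts, credentials, and CA path are placeholders and the option names assume a reasonably recent MongoDB Node driver:
import mongoose from 'mongoose';

// Placeholder URI: replace hosts, port, credentials, and replica set name with
// the values from your IBM Cloud service credentials.
const uri =
  'mongodb://admin:password@host1:32605,host2:32605/db_name?authSource=admin&replicaSet=replset';

mongoose
  .connect(uri, {
    tls: true,
    tlsCAFile: '/path/to/ca-certificate.crt', // CA certificate downloaded from IBM Cloud
  })
  .then(() => console.log('Connected to Databases for MongoDB'))
  .catch((err) => console.error('Connection failed', err));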
Try the following:
var fs = require('fs');
var m = require('mongoose');
var key = fs.readFileSync('/home/node/mongodb/mongodb.pem');
var ca = [fs.readFileSync('/home/node/mongodb/ca.pem')];
var o = {
  server: {
    ssl: true,
    sslValidate: true,
    sslCA: ca,
    sslKey: key,
    sslCert: key
  },
  user: '****',
  pass: '****'
};
m.connect('mongodb://dbAddr/dbName', o);
I did it locally; you need to set up the tunnel first:
$ ssh -i "IF YOU HAVE PEM.pem" -L <27017:YOUR_AMAZON_HOST:27017> <server_user_name@server_ip_OR_server_url> -N
I managed to implement it as follows:
const CERTIFICATE_PATH = 'rds-combined-ca-bundle.pem';
const certificateCA = CERTIFICATE_PATH && [fs.readFileSync(CERTIFICATE_PATH)];
const sslOptions = certificateCA
  ? ({
      ssl: true,
      tlsAllowInvalidHostnames: true,
      sslCA: certificateCA,
      user: MONGODB_USER,
      pass: MONGODB_PASSWORD,
    } as ConnectionOptions)
  : {};
const options: ConnectionOptions = {
  ...sslOptions,
};
export const connectMongoDb = async (): Promise<void> => {
  await mongoose.connect('mongodb://localhost:27017/test', options);
  console.log('📊 Successfully connected to the database');
};
You need to set MONGODB_USER and MONGODB_PASSWORD.
I created a CfnDomain in AWS CDK and I was trying to get the generated domain name to create an alarm.
const es = new elasticsearch.CfnDomain(this, id, esProps);
new cloudwatch.CfnAlarm(this, "test", {
  ...
  dimensions: [
    {
      name: "DomainName",
      value: es.domainName,
    },
  ],
});
But it seems that the domainName attribute is actually the argument that I pass in (I passed none so it will be autogenerated), so it's actually undefined and can't be used.
Is there any way that I can specify it such that it will wait for the Elasticsearch cluster to be created so that I can obtain the generated domain name, or is there any other way to create an alarm for the metrics of the cluster?
You use CfnDomain.ref as the domain value for your dimension. Sample alarm creation for red cluster status:
const domain: CfnDomain = ...;
const elasticDimension = {
  "DomainName": domain.ref,
};
const metricRed = new Metric({
  namespace: "AWS/ES",
  metricName: "ClusterStatus.red",
  statistic: "maximum",
  period: Duration.minutes(1),
  dimensions: elasticDimension
});
const redAlarm = metricRed.createAlarm(construct, "esRedAlarm", {
  alarmName: "esRedAlarm",
  evaluationPeriods: 1,
  threshold: 1
});
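One caveat, depending on your CDK version: newer releases deprecate the dimensions prop on Metric in favour of dimensionsMap, so the same metric there would look roughly like this (a sketch, assuming a recent CDK version):
// Same metric, using the newer `dimensionsMap` prop.
const metricRed = new Metric({
  namespace: "AWS/ES",
  metricName: "ClusterStatus.red",
  statistic: "maximum",
  period: Duration.minutes(1),
  dimensionsMap: { DomainName: domain.ref },
});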
I'm setting up an AWS S3 bucket to upload audio files to using MongoDB Stitch (following the Mongo S3 docs). After following the instructions and authenticating my user, I keep getting this error when trying to upload the selected file (see the error image from the console).
On line 119, where the error is coming from, I'm just catching the error after running the AWS build:
const aws = stitchClient.getServiceClient(AwsServiceClient.factory, "AWS");
convertAudioToBSONBinaryObject(file).then((result) => {
  const audiofile = mongodb.db("data").collection("audiofile");
  // now we need an instance of AWS service client
  const key = `${stitchClient.auth.user.id}-${file.name}`;
  // const key = `${stitchClient.auth.user.id}-${file.name}`;
  const bucket = "myBucketName";
  const url =
    "http://" + bucket + ".s3.amazonaws.com/" + encodeURIComponent(key);
  const args = {
    ACL: "public-read",
    Bucket: bucket,
    ContentType: file.type,
    Key: key,
    Body: result,
    // aws_service: "s3",
  };
  // building the request
  const request = new AwsRequest.Builder()
    .withService("s3")
    .withAction("PutObject")
    .withRegion("us-east-1")
    .withArgs(args);
  aws
    .execute(request.build)
    .then((result) => {
      console.log(result);
      console.log(url);
      return audiofile.insertOne({
        owner_id: stitchClient.auth.user.id,
        url,
        file: {
          name: file.name,
          type: file.type,
        },
        Etag: result.Etag,
        ts: new Date(),
      });
    })
    .then((result) => {
      console.log("last result", result);
    })
    .catch((err) => {
      console.log(err);
    });
});
My Stitch rule for S3 looks like this (see the Stitch rule for AWS S3 screenshot).
So it seems to me that everything is set up the way it's intended to be, but the error tells me I'm not passing all the needed args. I'd really appreciate any thoughts on how to fix this error.
P.S. If I change "AWS" to "AWS_S3" in this line:
const aws = stitchClient.getServiceClient(AwsServiceClient.factory, "AWS");
The error message changes to this:
StitchServiceError {message: "service not found: 'AWS_S3'", name: "StitchServiceError", errorCode: 18, errorCodeName: "ServiceNotFound",
And the Stitch log shows this information for both errors (see the Stitch Logs screenshot).
The answer to this is a simple typo in this line:
aws
  .execute(request.build)
  .then((result) => ...
build is a function, so I just needed to call it: aws.execute(request.build()).then((result) => ...).
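For completeness, the corrected call from the code above ends up looking roughly like this:
// The only change is request.build -> request.build(), so execute() receives
// the built request object instead of a reference to the builder's method.
aws
  .execute(request.build())
  .then((result) => {
    console.log(result);
  })
  .catch((err) => {
    console.log(err);
  });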
Issue solved, thanks all!