Use existing subnet group for RDS - aws-cloudformation

I am trying to create an RDS instance on already existing subnets.
There are three subnets:
subnet-0b5985476dee1f20c public in 1d
subnet-085c85398f27adbfd isolated in 1c
subnet-0fdd37150bfff91f0 isolated in 1d
So I want to use the second and third subnets as the subnet group.
My code is below.
const VPCID = 'vpc-0867d6797e6XXXXXb';
const vpc = ec2.Vpc.fromLookup(this, "VPC", {
  vpcId: VPCID
});

const mySecurityGroup = new ec2.SecurityGroup(this, 'sg-allfordevelop', {
  vpc,
  description: 'Allow sql access to database',
  allowAllOutbound: true,
  securityGroupName: `cdk-st-${targetEnv}-sg`
});
mySecurityGroup.addIngressRule(ec2.Peer.anyIpv4(), ec2.Port.tcp(3306), 'allow mysql port');

const dbInstance = new rds.DatabaseInstance(this, 'Instance', {
  engine: rds.DatabaseInstanceEngine.mysql({
    version: rds.MysqlEngineVersion.VER_8_0_19,
  }),
  vpc,
  securityGroups: [mySecurityGroup],
  instanceIdentifier: `cdk-${targetEnv}-rds`,
  vpcSubnets: {
    subnetType: ec2.SubnetType.PRIVATE_ISOLATED,
  },
  instanceType: ec2.InstanceType.of(ec2.InstanceClass.BURSTABLE2, ec2.InstanceSize.MICRO),
  removalPolicy: cdk.RemovalPolicy.DESTROY,
  databaseName: `st${targetEnv}`,
  credentials: rds.Credentials.fromPassword('django', new cdk.SecretValue("mypass"))
});
However, it produces the template below. The subnet IDs in it do not exist.
Does that mean it is trying to create new subnets?
How can I tell it to use the already existing subnets?
"InstanceSubnetGroupF2CBA54F": {
"Type": "AWS::RDS::DBSubnetGroup",
"Properties": {
"DBSubnetGroupDescription": "Subnet group for Instance database",
"SubnetIds": [
"subnet-0b5985476dee1f20c",
"subnet-0d7c1590c61b62782"
]
},
"Metadata": {
"aws:cdk:path": "st-dev-base-stack/Instance/SubnetGroup/Default"
}
},

Solved.
There was stale information cached in cdk.context.json.
After I deleted this file, it worked.
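For reference, the cached lookup can also be cleared with cdk context --clear. Alternatively, the lookup can be bypassed entirely by pinning the exact subnet IDs with ec2.Vpc.fromVpcAttributes. A minimal sketch, using the subnet IDs from the question; the availability zone names are assumptions, substitute your own:
const vpc = ec2.Vpc.fromVpcAttributes(this, 'VPC', {
  vpcId: 'vpc-0867d6797e6XXXXXb',
  // AZs guessed from the "1c"/"1d" suffixes in the question; adjust to your region
  availabilityZones: ['ap-northeast-1c', 'ap-northeast-1d'],
  // The two isolated subnets listed in the question
  isolatedSubnetIds: ['subnet-085c85398f27adbfd', 'subnet-0fdd37150bfff91f0'],
});
// vpcSubnets: { subnetType: ec2.SubnetType.PRIVATE_ISOLATED } now selects exactly these two subnets.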

Related

unable to provision postgres12 database cluster using the AWS CDK

given the following code:
// please note I created a wrapper around the cdk components, hence cdk.ec2 etc.
const vpc = new cdk.ec2.Vpc(this, `${appName}VPC`);
const dbCredentials = cdk.rds.Credentials.fromGeneratedSecret('postgres');
const engine = cdk.rds.DatabaseClusterEngine.auroraPostgres({ version: cdk.rds.PostgresEngineVersion.VER_12_6 });
const parameterGroup = cdk.rds.ParameterGroup.fromParameterGroupName(this, 'ParameterGroup', 'default.aurora-postgresql12');

// Create the Serverless Aurora DB cluster; set the engine to Postgres
const dbcluster = new cdk.rds.DatabaseCluster(this, `${appName}Cluster`, {
  engine,
  parameterGroup,
  defaultDatabaseName: `${appName}DB`,
  credentials: dbCredentials,
  instanceProps: {
    instanceType: cdk.ec2.InstanceType.of(cdk.ec2.InstanceClass.R5, cdk.ec2.InstanceSize.LARGE),
    vpc,
    vpcSubnets: {
      subnetType: cdk.ec2.SubnetType.PUBLIC
    },
    publiclyAccessible: true,
    scaling: { autoPause: cdk.core.Duration.seconds(600) } // Optional. If not set, the instance will pause after 5 minutes
  }
});
Provided we use postgres11, this code works without issue. When I try to install 12, I get the following error reported by the CDK:
The Parameter Group default.aurora-postgresql12 with DBParameterGroupFamily aurora-postgresql12 cannot be used for this instance. Please use a Parameter Group with DBParameterGroupFamily aurora-postgresql11 (Service: AmazonRDS; Status Code: 400; Error Code: InvalidParameterCombination; Request ID: ee90210d-070d-4593-9564-813b6fd4e331; Proxy: null)
I have tried lots of combinations for instanceType (most of which work in the RDS UI on the console), but I cannot seem to install postgres12. Any ideas what I am doing wrong?
I tried this as well:
const vpc = new cdk.ec2.Vpc(this, `${appName}VPC`);
const dbCredentials = cdk.rds.Credentials.fromGeneratedSecret('postgres');

// DEFINING VERSION 12.6 FOR ENGINE
const engine = cdk.rds.DatabaseClusterEngine.auroraPostgres({ version: cdk.rds.PostgresEngineVersion.VER_12_6 });

// DEFINING 11 FOR PARAMETER GROUP
const parameterGroup = cdk.rds.ParameterGroup.fromParameterGroupName(this, 'ParameterGroup', 'default.aurora-postgresql11');

const dbcluster = new cdk.rds.DatabaseCluster(this, `${appName}Cluster`, {
  engine,
  parameterGroup,
  defaultDatabaseName: `${appName}DB`,
  credentials: dbCredentials,
  instanceProps: {
    instanceType: cdk.ec2.InstanceType.of(cdk.ec2.InstanceClass.R6G, cdk.ec2.InstanceSize.LARGE),
    vpc,
    vpcSubnets: {
      subnetType: cdk.ec2.SubnetType.PUBLIC
    },
    publiclyAccessible: true,
    scaling: { autoPause: cdk.core.Duration.seconds(600) } // Optional. If not set, the instance will pause after 5 minutes
  }
});
This works like a dream, but it installs engine v11.9 :( I need >= 12 because I need to install pg_partman.
Somewhere along the line the engine version is not being properly set, or is hardcoded to 11.
This works for me:
const AURORA_POSTGRES_ENGINE_VERSION = AuroraPostgresEngineVersion.VER_10_7
const RDS_MAJOR_VERSION = AURORA_POSTGRES_ENGINE_VERSION.auroraPostgresMajorVersion.split('.')[0]

const parameterGroup = ParameterGroup.fromParameterGroupName(
  scope,
  `DBParameterGroup`,
  `default.aurora-postgresql${RDS_MAJOR_VERSION}`,
)

new ServerlessCluster(scope, `Aurora${id}`, {
  engine: DatabaseClusterEngine.auroraPostgres({
    version: AURORA_POSTGRES_ENGINE_VERSION,
  }),
  parameterGroup,
  defaultDatabaseName: DATABASE_NAME,
  credentials: {
    username: 'x',
  },
  vpc: this.vpc,
  vpcSubnets: this.subnetSelection,
  securityGroups: [securityGroup],
})
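This presumably works because both the engine and the parameter-group family are derived from the same constant (and it uses AuroraPostgresEngineVersion rather than the PostgresEngineVersion that appears in the question). For the asker's case, the same pattern pinned to major version 12 would look roughly like this; a sketch, assuming VER_12_6 is available in the installed CDK version:
const ENGINE_VERSION = AuroraPostgresEngineVersion.VER_12_6
// split('.')[0] yields "12", so the group name resolves to default.aurora-postgresql12
const MAJOR = ENGINE_VERSION.auroraPostgresMajorVersion.split('.')[0]
const parameterGroup = ParameterGroup.fromParameterGroupName(
  scope,
  'DBParameterGroup',
  `default.aurora-postgresql${MAJOR}`,
)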

Ocelot api gateway - kubernetes - error: "namespace:serviceservice:managementservice Unable to use ,it is invalid. Address must contain host only...."

The problem I am facing is that the Ocelot KubernetesServiceDiscoveryProvider does not seem to find the other services in the namespace in Kubernetes. My goal is to use the API gateway to call APIs of the other services in the same namespace. I currently get an HTTP 404 Not Found error, and the API gateway pod logs the following:
Ocelot.Provider.Kubernetes.KubernetesServiceDiscoveryProvider[0]
requestId: 0HM93C93DL2T0:00000003, previousRequestId: no previous request id, message: namespace:serviceservice:managementservice Unable to use ,it is invalid. Address must contain host only e.g. localhost and port must be greater than 0
warn: Ocelot.Responder.Middleware.ResponderMiddleware[0]
requestId: 0HM93C93DL2T0:00000003, previousRequestId: no previous request id, message: Error Code: ServicesAreEmptyError Message: There were no services in NoLoadBalancer errors found in ResponderMiddleware. Setting error response for request path:/api/management/User/3910, request method: GET
I suspect that I have misconfigured something. I first tried the Ocelot documentation regarding Kubernetes, but the documentation is outdated (for example, the suggested value for Type does not work; for more info see the GitHub issue "Docs/Kubernetes provider are wrong").
I then went searching online through GitHub issues, Stack Overflow posts, and even the source code, but I cannot see what my config is lacking.
I currently have Kubernetes running locally with minikube. The only thing I have seen online is that others have misconfigured their ocelot.json, but I do not see what I have done incorrectly in my config.
(Before trying Ocelot on Kubernetes, I first tried it with local hosts to check that it works and see what it lacks. It apparently lacked a middleware that could check JWTs with the different roles that have the right to access an endpoint. I have now written that middleware myself, and it works with the local-host config for Ocelot.)
My ocelot.json config file looks like this for Kubernetes:
{
  "Routes": [
    {
      "UpstreamPathTemplate": "/api/management/User/{everything}",
      "UpstreamHttpMethod": [ "POST", "PUT", "GET" ],
      "DownstreamPathTemplate": "/api/management/User/{everything}",
      "DownstreamScheme": "http",
      "ServiceName": "managementservice",
      "AuthenticationOptions": {
        "AuthenticationProviderKey": "Bearer",
        "AllowedScopes": [ "CompanyId", "http://schemas.xmlsoap.org/ws/2005/05/identity/claims/nameidentifier" ]
      },
      "RouteClaimsRequirement": { "role": "1,2,3" },
      "AddHeadersToRequest": {
        "CompanyId": "Claims[CompanyId] > value",
        "UserId": "Claims[http://schemas.xmlsoap.org/ws/2005/05/identity/claims/nameidentifier] > value"
      }
    }
  ],
  "GlobalConfiguration": {
    "ServiceDiscoveryProvider": {
      "Host": "127.0.0.1",
      "Port": 8083,
      "Namespace": "service",
      "Type": "KubernetesServiceDiscoveryProvider"
    }
  }
}
My Startup.cs ConfigureServices method looks like this:
public void ConfigureServices(IServiceCollection services)
{
    services.AddCors(c =>
    {
        c.AddPolicy("AllowOrigin", options => options.WithOrigins(Configuration["Cors:AllowOrigins"])
            .AllowAnyHeader()
            .AllowAnyMethod().AllowCredentials());
    });

    #region Authentication settings
    TokenValidationParameters tokenValidationParameters = new TokenValidationParameters
    {
        ValidateIssuerSigningKey = true,
        IssuerSigningKey = new SymmetricSecurityKey(Encoding.ASCII.GetBytes(Configuration["Jwt:Key"])),
        ValidateIssuer = false,
        ValidateAudience = false,
        ValidateLifetime = true,
        ClockSkew = TimeSpan.Zero
    };
    services.AddSingleton(tokenValidationParameters);

    services.AddAuthentication(
        x =>
        {
            x.DefaultAuthenticateScheme = JwtBearerDefaults.AuthenticationScheme;
            x.DefaultChallengeScheme = JwtBearerDefaults.AuthenticationScheme;
        }
    )
    .AddJwtBearer(JwtBearerDefaults.AuthenticationScheme, x =>
    {
        x.RequireHttpsMetadata = false;
        x.SaveToken = true;
        x.TokenValidationParameters = tokenValidationParameters;
    });
    #endregion

    //Some more code
    services.AddOcelot().AddKubernetes();
}
My Startup.cs Configure method looks like this:
public void Configure(IApplicationBuilder app, IWebHostEnvironment env)
{
    // some more code
    app.UseOcelot(configuration);
}
My Program.cs CreateHostBuilder method looks like this:
public static IHostBuilder CreateHostBuilder(string[] args) =>
    Host.CreateDefaultBuilder(args)
        .ConfigureAppConfiguration((hostingContext, config) =>
        {
            config.AddJsonFile("secrets/appsettings.kubernetes.json", optional: true)
                .AddJsonFile("ocelot.json");
        })
        .ConfigureWebHostDefaults(webBuilder =>
        {
            webBuilder.UseStartup<Startup>();
        });
It turns out that the real problem I was having was with permissions in Kubernetes. The Ocelot documentation also mentions this, but the command in the documentation is incorrect (most likely outdated).
This is the command I used. Be warned: the Kubernetes documentation strongly discourages such permissive RBAC permissions, but it is at least a way for you to test your API gateway with Ocelot locally.
kubectl create clusterrolebinding permissive-binding --clusterrole=cluster-admin --user=admin --user=kubelet --group=system:serviceaccounts
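A more narrowly scoped alternative (an untested sketch; the namespace and service account below are assumptions based on the question's "service" namespace) is to grant only read access via the built-in view cluster role to the gateway's service account, instead of cluster-admin for everyone:
kubectl create rolebinding ocelot-view-binding --clusterrole=view --serviceaccount=service:default --namespace=service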

How can I refer to the generated domain name of `elasticsearch.CfnDomain` in AWS CDK?

I created a CfnDomain in AWS CDK and I was trying to get the generated domain name to create an alarm.
const es = new elasticsearch.CfnDomain(this, id, esProps);

new cloudwatch.CfnAlarm(this, "test", {
  ...
  dimensions: [
    {
      name: "DomainName",
      value: es.domainName,
    },
  ],
});
But it seems that the domainName attribute is actually the argument I passed in (I passed none, so the name will be autogenerated), so it's actually undefined and can't be used.
Is there any way I can make it wait for the Elasticsearch cluster to be created so that I can obtain the generated domain name, or is there any other way to create an alarm for the metrics of the cluster?
Use CfnDomain.ref as the domain value for your dimension. Sample alarm creation for red cluster status:
const domain: CfnDomain = ...;

const elasticDimension = {
  "DomainName": domain.ref,
};

const metricRed = new Metric({
  namespace: "AWS/ES",
  metricName: "ClusterStatus.red",
  statistic: "maximum",
  period: Duration.minutes(1),
  dimensions: elasticDimension
});

const redAlarm = metricRed.createAlarm(construct, "esRedAlarm", {
  alarmName: "esRedAlarm",
  evaluationPeriods: 1,
  threshold: 1
});
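Applied to the original snippet, the fix is to take the dimension value from the resource's Ref (which resolves to the generated domain name at deploy time) instead of the undefined constructor argument. A minimal sketch, with assumed values filled in for the alarm fields the question elided:
const es = new elasticsearch.CfnDomain(this, id, esProps);

new cloudwatch.CfnAlarm(this, "test", {
  // assumed alarm settings for illustration
  comparisonOperator: "GreaterThanOrEqualToThreshold",
  evaluationPeriods: 1,
  namespace: "AWS/ES",
  metricName: "ClusterStatus.red",
  statistic: "Maximum",
  period: 60,
  threshold: 1,
  dimensions: [
    {
      name: "DomainName",
      value: es.ref, // Ref of AWS::Elasticsearch::Domain is the domain name
    },
  ],
});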

Cloudformation - how to set filter policy of SNS subscription in code?

UPDATE: CloudFormation now supports SNS topic filter policies, so this question is no longer relevant; no custom plugins or code are needed.
I am building a system with a number of SNS topics and a number of Lambdas, each reading messages from its assigned SQS queue. The SQS queues are subscribed to the SNS topics, but also have a filter policy so that messages end up in the relevant queues.
It works well when I set up the subscriptions in the AWS console.
Now I'm trying to do the same in my code, but the AWS CloudFormation documentation does not describe how to add a filter policy to a subscription. Based on the Python examples here, I tried the following:
StopOperationSubscription:
  Type: "AWS::SNS::Subscription"
  Properties:
    Protocol: sqs
    TopicArn:
      Ref: StatusTopic
    Endpoint:
      Fn::GetAtt: [StopActionQueue, Arn]
    FilterPolicy: '{"value": ["stop"]}'
But then I get this error:
An error occurred: StopOperationSubscription - Encountered unsupported property FilterPolicy.
How can I set the filter policy I need using CloudFormation? And if that's not supported, what do you suggest as an alternative?
I want it to be set up automatically when I deploy my serverless app, with no manual steps required.
CloudFormation just started to support FilterPolicy yesterday. I had been struggling with it for a while too :)
Syntax
JSON
{
  "Type" : "AWS::SNS::Subscription",
  "Properties" : {
    "DeliveryPolicy" : JSON object,
    "Endpoint" : String,
    "FilterPolicy" : JSON object,
    "Protocol" : String,
    "RawMessageDelivery" : Boolean,
    "Region" : String,
    "TopicArn" : String
  }
}
YAML
Type: "AWS::SNS::Subscription"
Properties:
DeliveryPolicy: JSON object
Endpoint: String
FilterPolicy: JSON object
Protocol: String
RawMessageDelivery: Boolean,
Region: String
TopicArn: String
References:
https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-sns-subscription.html#cfn-sns-subscription-filterpolicy
https://aws.amazon.com/blogs/compute/managing-amazon-sns-subscription-attributes-with-aws-cloudformation/
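With that support in place, the asker's original resource should deploy once FilterPolicy is supplied. A sketch of the same subscription with the policy written as a YAML mapping, the documented JSON-object form (resource names taken from the question):
StopOperationSubscription:
  Type: "AWS::SNS::Subscription"
  Properties:
    Protocol: sqs
    TopicArn:
      Ref: StatusTopic
    Endpoint:
      Fn::GetAtt: [StopActionQueue, Arn]
    FilterPolicy:
      value:
        - stop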
I fixed it like this:
serverless.yml
plugins:
  - serverless-plugin-scripts

custom:
  scripts:
    commands:
      update-topic-filters: sls invoke local -f configureSubscriptions --path resources/lambdaTopicFilters.json
    hooks:
      before:deploy:finalize: sls update-topic-filters

functions:
  configureSubscriptions:
    handler: src/configurationLambdas/configureSubscriptions.main
    # Only invoked when deploying - therefore, no permissions or triggers are needed.
configureSubscriptions.js
import AWS from 'aws-sdk'

const nameFromArn = arn => arn.split(':').pop()
const lambdaNameFromArn = arn =>
  nameFromArn(arn)
    .split('-')
    .pop()

exports.main = async event => {
  const sns = new AWS.SNS({ apiVersion: '2010-03-31' })
  const params = {}
  const { Topics } = await sns.listTopics(params).promise()
  for (const { TopicArn } of Topics) {
    const topicName = nameFromArn(TopicArn)
    const filtersForTopic = event[topicName]
    if (!filtersForTopic) {
      continue
    }
    const { Subscriptions } = await sns.listSubscriptionsByTopic({ TopicArn }).promise()
    for (const { Protocol, Endpoint, SubscriptionArn } of Subscriptions) {
      if (Protocol === 'lambda') {
        const lambdaName = lambdaNameFromArn(Endpoint)
        const filterForLambda = filtersForTopic[lambdaName]
        if (!filterForLambda) {
          continue
        }
        const setPolicyParams = {
          AttributeName: 'FilterPolicy',
          SubscriptionArn,
          AttributeValue: JSON.stringify(filterForLambda),
        }
        await sns.setSubscriptionAttributes(setPolicyParams).promise()
        // eslint-disable-next-line no-console
        console.log('Subscription filters have been set')
      }
    }
  }
}
The top level is the topic names, the next level is the lambda names, and the third level is the filter policies for the related subscriptions:
lambdaTopicFilters.json
{
  "user-event": {
    "activateUser": {
      "valueType": ["status"],
      "value": ["awaiting_activation"]
    },
    "findActivities": {
      "messageType": ["event"],
      "value": ["awaiting_activity_data"],
      "valueType": ["status"]
    }
  },
  "system-event": {
    "startStopProcess": {
      "valueType": ["status"],
      "value": ["activated", "aborted", "limit_reached"]
    }
  }
}
If you are using the Serverless Framework, it now supports SNS filter policies natively:
functions:
  pets:
    handler: pets.handler
    events:
      - sns:
          topicName: pets
          filterPolicy:
            pet:
              - dog
              - cat
https://serverless.com/framework/docs/providers/aws/events/sns#setting-a-filter-policy

How do I connect to a MongoDB Database using SSL with Loopback

I am trying to connect to a MongoDB database on Rackspace with SSL using LoopBack, but it's not working. It seems to connect fine: if I deliberately enter wrong credentials, I get a "can't connect" error, and with the correct credentials no error shows, so I think the connection succeeds. But when I try to query the database it always times out. Any idea what's happening?
My datasources.json looks something like:
"db": {
"name": "mongodb",
"url": "mongodb://username:password#iad-mongos2.objectrocket.com:port/dbName?ssl=true",
"debug": true,
"connector": "mongodb"
}
I keep reading things about needing a certificate file, but I'm not sure whether that applies in this case.
Any help would be greatly appreciated!
Use datasources.env.js as below (this example reads the connection details from a Cloud Foundry style environment via cfenv):
var cfenv = require('cfenv');
var appenv = cfenv.getAppEnv();

// Within the application environment (appenv) there's a services object
var services = appenv.services;

// The services object is a map named by service, so we extract the one for MongoDB
var mongodb_services = services["compose-for-mongodb"];
var credentials = mongodb_services[0].credentials;

// Within the credentials, an entry ca_certificate_base64 contains the SSL pinning key.
// We convert that from a string into a Buffer entry in an array which we use when
// connecting. (Buffer.from replaces the deprecated new Buffer constructor.)
var ca = [Buffer.from(credentials.ca_certificate_base64, 'base64')];

var datasource = {
  name: "db",
  connector: "mongodb",
  url: credentials.uri,
  ssl: true,
  sslValidate: false,
  sslCA: ca
};

module.exports = {
  'db': datasource
};
http://madkoding.gitlab.io/2016/08/26/loopback-mongo-ssl/
https://loopback.io/doc/en/lb3/Environment-specific-configuration.html#data-source-configuration
Create a datasource using the lb4 datasource command, then edit the generated datasource by adding the SSL details to the config object: ssl, sslValidate, checkServerIdentity, sslCA, sslKey, etc.
import fs from 'fs';
import path from 'path';

const ca = fs.readFileSync(
  path.join(__dirname, '../../utils/certs/mongodbca.cert'),
  'utf8',
);

const config = {
  name: 'test_db',
  debug: true,
  connector: 'mongodb',
  url: false,
  host: 'hostname',
  port: port,
  user: 'user',
  password: 'password',
  database: 'databasename',
  authSource: 'admin',
  useNewUrlParser: true,
  ssl: true,
  sslValidate: true,
  checkServerIdentity: false,
  sslCA: [ca],
};
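For context, a rough sketch of how such a config object is typically wired into an lb4 datasource class (the class name is an assumption, not part of the answer; generated datasources also add lifecycle decorators and an injected config override):
import {juggler} from '@loopback/repository';

// Hypothetical datasource class wrapping the config object above
export class TestDbDataSource extends juggler.DataSource {
  static dataSourceName = 'test_db';

  constructor() {
    super(config);
  }
}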
This worked for me: you can monkey-patch the MongoClient connect() function so that you can add an options parameter.
Make a boot script that passes the MongoDB SSL option parameters needed for a secured connection to MongoDB; the code snippet below is written in a boot script JS file.
// Below code is written in a boot script
var fs = require('fs');     // added: required for readFileSync
var path = require('path'); // added: required for path.join

var monog_cert_file = fs.readFileSync(path.join(__dirname, '../certificate_dir/mongodb.pem'));
var monog_ca_file = fs.readFileSync(path.join(__dirname, '../certificate_dir/rootCA.pem'));
var monog_key_file = fs.readFileSync(path.join(__dirname, '../certificate_dir/mongodb.pem'));

const mongoOptions = {
  ssl: true,
  sslValidate: false,
  sslCA: monog_ca_file,
  sslKey: monog_key_file,
  sslCert: monog_cert_file,
  authSource: "auth_db_name"
};

// Patching Mongo connect for the options variable
const mongodb = require('mongodb').MongoClient;
const ogConnect = mongodb.connect;
const connectWrapper = function(url, cb) {
  return ogConnect(url, mongoOptions, cb);
};
mongodb.connect = connectWrapper;
Use a datasources config as below. (Note: because it calls fs.readFileSync, this cannot live in plain datasources.json; put it in an environment-specific JavaScript config such as datasources.local.js.)
app_db: {
  "host": "127.0.0.1",
  "port": 27017,
  "database": "test",
  "name": "app_db",
  "username": "youruser",
  "password": "yourpassword",
  "connector": "mongodb",
  "ssl": true,
  "server": {
    "auto_reconnect": true,
    "reconnectTries": 100,
    "reconnectInterval": 1000,
    "sslValidate": false,
    "checkServerIdentity": false,
    "sslKey": fs.readFileSync('path to key'),
    "sslCert": fs.readFileSync('path to certificate'),
    "sslCA": fs.readFileSync('path to CA'),
    "sslPass": "yourpassphrase if any"
  }
}
username, password, auto_reconnect, tries, and interval are all optional.
Use the link below to generate the certificates using OpenSSL:
https://docs.mongodb.com/manual/tutorial/configure-ssl/
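For quick local testing, a self-signed CA and server certificate can be generated roughly like this (an illustrative sketch; the file names match the boot-script answer above, but are otherwise arbitrary, and production setups should follow the MongoDB documentation linked above):
# Create a self-signed CA
openssl req -new -x509 -days 365 -nodes -newkey rsa:4096 -keyout rootCA.key -out rootCA.pem -subj "/CN=TestMongoCA"
# Create a server key and certificate signing request
openssl req -new -nodes -newkey rsa:4096 -keyout mongodb.key -out mongodb.csr -subj "/CN=localhost"
# Sign the server certificate with the CA
openssl x509 -req -in mongodb.csr -CA rootCA.pem -CAkey rootCA.key -CAcreateserial -days 365 -out mongodb.crt
# MongoDB expects the key and certificate concatenated into one PEM file
cat mongodb.key mongodb.crt > mongodb.pem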