Is there a way to not allocate an elastic IP (EIP) when creating a VPC using aws-cdk

I'm using the following code to create subnets that will be imported by another stack and used for a dedicated EFS VPC. If I don't create a PUBLIC subnet, I get errors on creation. However, the side effect is that this code allocates an Elastic IP address, and I don't want one allocated; they are a precious resource.
How do I get rid of the Elastic IP address? None of the methods give you anything that has an EIP attribute or method:
const fileSystemVpc = new ec2.Vpc(this, 'vpcForEfs', {
  subnetConfiguration: [
    {
      cidrMask: 20,
      name: 'nfsisolated',
      subnetType: ec2.SubnetType.ISOLATED,
    },
    {
      cidrMask: 20,
      name: 'nfsprivate',
      subnetType: ec2.SubnetType.PRIVATE,
    },
    {
      cidrMask: 20,
      name: 'nfspublic',
      subnetType: ec2.SubnetType.PUBLIC,
    },
  ],
});
If I comment out the PUBLIC section I get the following error on creation:
If you configure PRIVATE subnets in 'subnetConfiguration', you must also
configure PUBLIC subnets to put the NAT gateways into (got
[{"cidrMask":20,"name":"nfsisolated","subnetType":"Isolated"},
{"cidrMask":20,"name":"nfsprivate","subnetType":"Private"}].
Relevant issues that don't solve the problem but are similar:
https://github.com/aws/aws-cdk/issues/1305
https://github.com/aws/aws-cdk/issues/3704

This is the commit that added that check: https://github.com/aws/aws-cdk/commit/9a96c37b225b135c9afbf523ed3fbc67cba5ca50
Essentially, if CDK didn't stop you with that message, the deployment would fail anyway when CloudFormation tried to create the stack.
Here is more info from AWS on it as well: https://docs.aws.amazon.com/vpc/latest/userguide/vpc-nat-gateway.html
As the referenced ticket and the AWS docs describe, if whatever you want to put in the PRIVATE subnets doesn't require internet access, you should use ISOLATED instead. A PRIVATE subnet requires a NAT gateway, and a NAT gateway must live in a PUBLIC subnet with an Elastic IP. Again, if you don't need outbound access to the internet from your PRIVATE subnets, just use ISOLATED.
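If that applies here (EFS mount targets only need to be reachable from clients inside the VPC, not from the internet), a minimal sketch of the question's VPC with only ISOLATED subnets, which allocates no NAT gateway and therefore no EIP, would be:

const fileSystemVpc = new ec2.Vpc(this, 'vpcForEfs', {
  subnetConfiguration: [
    {
      cidrMask: 20,
      name: 'nfsisolated',
      // ISOLATED subnets get no NAT gateway, so no Elastic IP is allocated
      subnetType: ec2.SubnetType.ISOLATED,
    },
  ],
});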

Azure DevOps IP addresses

I have an application running on a Web App that needs to communicate with Azure DevOps Microsoft-hosted agents. I've set some IP restrictions to deny everything and am now in the process of whitelisting the agents' IPs. When I read this page, it refers to a weekly JSON file that contains objects with everything I need (CIDRs per region). I've parsed the JSON and added the ranges to my allow list; however, the agent's public IP address is not from any range mentioned in the JSON. The way I checked it was by running a bash task on the agent to curl icanhazip.com. Does anyone know if the list is complete, or should I look somewhere else?
For example, in my case I use this data (since my ADO org is in West Europe):
{
  "name": "AzureDevOps.WestEurope",
  "id": "AzureDevOps.WestEurope",
  "properties": {
    "changeNumber": 1,
    "region": "westeurope",
    "regionId": 18,
    "platform": "Azure",
    "systemService": "AzureDevOps",
    "addressPrefixes": [
      "40.74.28.0/23"
    ],
    "networkFeatures": null
  }
}
but the agent initiates the connection from the IP 20.238.71.171, which is not in any of the CIDRs provided by that JSON file (I checked all other regions with ADO).
Any thoughts / help?
You would need to whitelist ALL ranges from, for instance, Azure West Europe. That is a lot of different IP ranges, as Azure DevOps hosted agents do not have a service tag of their own.
Since this opens up your firewall to literally every VM running in West Europe, it is usually not desirable; it is just a bit short of opening up your app to the entire world.
Hence, what people usually do is the following (a sketch of the pattern follows the list):
1. As the first task in the build job, fetch the public IP address of the executing build agent, using something like ipify.org
2. Use the AZ CLI to add this IP as a single-IP allow rule on your app
3. Do your deployment etc.
4. Remove the IP rule again
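A minimal sketch of that pattern as a Node/TypeScript helper (the resource group, app name, rule name, and priority are placeholders I've assumed; it presumes Node 18+ and an Azure CLI session already logged in on the agent):

// allow-agent-ip.ts - temporarily allow the build agent's IP on the Web App.
import { execSync } from "child_process";

const RG = "my-rg";        // assumption: your resource group
const APP = "my-webapp";   // assumption: your Web App name

async function main() {
  // Step 1: fetch the agent's public IP
  const agentIp = (await (await fetch("https://api.ipify.org")).text()).trim();

  // Step 2: add a single-IP allow rule
  execSync(
    `az webapp config access-restriction add -g ${RG} -n ${APP} ` +
      `--rule-name agent-temp --action Allow --ip-address ${agentIp}/32 --priority 100`,
    { stdio: "inherit" }
  );

  try {
    // Step 3: your deployment steps go here
  } finally {
    // Step 4: always remove the rule again, even if the deployment fails
    execSync(
      `az webapp config access-restriction remove -g ${RG} -n ${APP} --rule-name agent-temp`,
      { stdio: "inherit" }
    );
  }
}

main().catch((err) => {
  console.error(err);
  process.exit(1);
});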
If you mean MS-hosted agent:
You should use the AzureCloud service tag.
The IP address ranges for the hosted agents are listed in the weekly file under AzureCloud.<region>, such as AzureCloud.westus for the West US region.
Docs:
https://learn.microsoft.com/en-us/azure/devops/pipelines/agents/hosted?view=azure-devops&tabs=yaml#networking
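If you go the weekly-file route, here is a quick sketch for pulling the prefixes for one service tag out of the downloaded file (the local file name and the region are assumptions; the top-level values array is the file's actual structure):

// parse-service-tags.ts - extract the CIDRs for one service tag.
import { readFileSync } from "fs";

interface ServiceTag {
  name: string;
  properties: { addressPrefixes: string[] };
}

// Assumption: the weekly file was saved locally as ServiceTags_Public.json.
const file: { values: ServiceTag[] } = JSON.parse(
  readFileSync("ServiceTags_Public.json", "utf8")
);

const tag = file.values.find((t) => t.name === "AzureCloud.westeurope");
console.log(tag?.properties.addressPrefixes);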

With CDK is it possible to modify the default subnets after they have been created?

I've created a VPC like this
vpc = new Vpc(theStack, vpcName,
    VpcProps.builder()
        .cidr("10.0.0.0/16")
        .build());
In eu-west-1, by default I get 3 public and 3 private subnets. The private subnets will have a NAT Gateway.
Now, we're trying to remove the NAT Gateways (because of cost), so I tried this
vpc = new Vpc(theStack, vpcName,
    VpcProps.builder()
        .maxAzs(3)
        .cidr("10.0.0.0/16")
        .subnetConfiguration(List.of(
            SubnetConfiguration.builder()
                .subnetType(SubnetType.PUBLIC)
                .name("Public")
                .cidrMask(24)
                .build(),
            SubnetConfiguration.builder()
                .subnetType(SubnetType.ISOLATED)
                .name("Private")
                .cidrMask(24)
                .build()))
        .build());
Creating this in a fresh stack works fine: I get a VPC with the same subnets as before and no NAT GWs. But running this to modify the VPC created above results in name clashes.
Is there some way I can get cdk/cloudformation to understand that I want to modify the existing private subnets and not create new ones?
I double-checked the subnets that were created without specifying the subnet configuration. The CIDR mask was /19, not /24 as I entered in the second version.
So, changing the cidrMask to 19 works fine. Now CloudFormation doesn't create new subnets, and it deletes the NAT Gateways.

How can I handle database schema migration when using lambda and aurora postgresql?

I am deploying an application on Lambda and using Aurora PostgreSQL as the database. During development the database schema changes quite frequently, and I am looking for a way to migrate the schema. I know that Flyway can do the job, but it suits an application deployed on an EC2 instance rather than Lambda. What is the best way to do the job in Lambda?
I can think of a workaround solution. My Lambda is in TypeScript, so it runs in a Node.js environment.
I am using LoopBack 4 to create my models and database schema. I have an AWS custom resource that calls a handler to migrate the schema to RDS.
Here is what you need to do:
1. Create a security group for RDS
   1.1. Add inbound rule: allow from the Lambda SG on the database port (TCP 3306 in this example; 5432 if you're on Aurora PostgreSQL)
   1.2. Add outbound rule: allow all protocols, all ports, to all destinations
2. Create a security group for the Lambda
   2.1. Add outbound rule: allow all protocols, all ports, to all destinations
Here is my code using CDK:
/** Lambda Security Group */
const lambdaSecurityGroup = new SecurityGroup(this, "LambdaSecurityGroup", {
  securityGroupName: "lambda-security-group",
  description: "Lambda security group",
  vpc: vpc,
  allowAllOutbound: true,
});

/** Security Group */
const securityGroup = new SecurityGroup(this, "SecurityGroup", {
  securityGroupName: "rds-security-group",
  description: "instance security group",
  vpc: vpc,
  allowAllOutbound: true,
});

/** Security Group Inbound rules - Lambda security group */
securityGroup.addIngressRule(
  SecurityGroup.fromSecurityGroupId(
    this,
    "LambdaSecurityGroupId",
    lambdaSecurityGroup.securityGroupId
  ),
  Port.tcp(config.DatabasePort),
  "Allow from Lambda security group on TCP 3306"
);

const customResourceMigrateProvider = new CustomResources.Provider(
  this,
  "CustomResourceMigrateProvider",
  {
    onEventHandler: new Function(this, "CustomResourceMigrateLambda", {
      runtime: Runtime.NODEJS_12_X,
      code: /*this.lambdaCode ||*/ Code.fromAsset("dist"),
      handler: "loopback/handlers/custom-resource-migrate.handler",
      timeout: Duration.seconds(30),
      vpc: vpc,
      vpcSubnets: { subnets: [appSubnet1aId, appSubnet1bId] },
      securityGroups: [lambdaSecurityGroup],
      environment: environmentVariables,
      role: customRole,
      layers: [layer],
    }),
    //isCompleteHandler: isComplete,
    logRetention: logs.RetentionDays.ONE_DAY,
  }
);
I am using LoopBack 4 with Lambda, and I have created a custom resource that connects to RDS and runs the migrate script to update the schema.
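The migration handler itself isn't shown above. A minimal sketch of what custom-resource-migrate.handler could look like (the migrate import is an assumption, standing in for whatever runs your LoopBack 4 schema migration, e.g. app.migrateSchema()):

// custom-resource-migrate.ts - CloudFormation custom resource handler sketch.
import { migrate } from "./migrate"; // assumption: wraps your LoopBack 4 schema migration

export const handler = async (event: { RequestType: string }) => {
  // Run migrations on stack Create and Update; there is nothing to do on Delete.
  if (event.RequestType === "Create" || event.RequestType === "Update") {
    await migrate();
  }
  // The Provider framework uses this ID to track the resource across updates.
  return { PhysicalResourceId: "schema-migration" };
};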

Retrieve auto scaling group instance IPs and provide them to Ansible

I'm currently developing a Terraform script and Ansible roles in order to install MongoDB with replication. I'm using an auto scaling group, and I need to pass the EC2 instances' private IPs to Ansible as extra vars. Is there any way to do that?
When it comes to rs.initiate(), is there any way to add the EC2 private IPs to the Mongo cluster while Terraform creates the instances?
I'm not really sure how it's done with ASGs; probably a combination of user-data and EC2 metadata would be helpful.
But here is how I do it when we have a fixed number of nodes. Posting this answer as it can be helpful to someone in some way.
Using EC2 dynamic inventory scripts.
Ref - https://docs.ansible.com/ansible/2.5/user_guide/intro_dynamic_inventory.html
This is basically a Python script, ec2.py, which gets the instance private IPs using tags etc. It comes with a config file named ec2.ini.
Tag your instance in the TF script (add a role tag) -
resource "aws_instance" "ec2" {
....
tags = "${merge(var.tags, map(
"description","mongodb-node",
"role", "mongodb-node",
"Environment", "${local.env}",))}"
}
output "ip" {
value = ["${aws_instance.ec2.private_ip}"]
}
Get the instance private IP in playbook -
- hosts: localhost
  connection: local
  tasks:
    - debug: msg="MongoDB Node IP is - {{ hostvars[groups['tag_role_mongodb-node'][0]].inventory_hostname }}"
Now run the playbook using TF null_resource -
resource "null_resource" "ansible_run" {
  triggers {
    ansible_file = "${sha1(file("${path.module}/${var.ansible_play}"))}"
  }

  provisioner "local-exec" {
    command = "ANSIBLE_HOST_KEY_CHECKING=False ansible-playbook -i ./ec2.py --private-key ${var.private_key} ${var.ansible_play}"
  }
}
You've got to make sure the AWS-related environment variables are present/exported for Ansible to fetch the EC2 metadata. Also make sure ec2.py is executable.
If you want to get the private IP, change the following config in ec2.ini -
destination_variable = private_ip_address
vpc_destination_variable = private_ip_address
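As a sanity check, you can run the inventory script directly before wiring it into the playbook run:
./ec2.py --list
This prints the inventory as JSON; after the change above, your instances should appear under groups like tag_role_mongodb-node with their private IPs as hostnames.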

BOSH working with Dynamic IP Addresses

What's the best way to work with dynamic IP addresses in BOSH? Currently we're setting static IP addresses for each machine we want to use, but we only really care that one of those VMs has a static IP address.
Is there a way to get information about other VMs running in the BOSH network from within a BOSH VM? Or just get dynamic information about the deployment from within the VM? Such as which machines are currently running on which IP addresses?
It sounds like the recent introduction of "links" is worth a look for your use case.
Previously, if network communication was required between jobs, release authors had to add job properties to accept other jobs' network addresses (e.g. a db_ips property). Operators then had to explicitly assign static IPs or DNS names for each instance group and fill out network address properties.
This lets each job either expose or consume connections.
e.g. a DB exposes its connection:
# Database job spec file.
name: database_job
# ...
provides:
- name: database_conn
  type: conn
  # Links always carry certain information, like its address and AZ.
  # Optionally, the provider can specify other properties in the link.
  properties:
  - port
  - adapter
  - username
  - password
  - name
And an application can consume it:
# Application job spec file.
name: application_job
# ...
consumes:
- name: database_conn
  type: conn
The consuming job is provided with extra properties to use these addresses/info as needed, e.g.:
#!/bin/bash
# Application's templated control script.
# ...
export DATABASE_HOST="<%= link('database_conn').instances[0].address %>"
export DATABASE_PORT="<%= link('database_conn').p('port') %>"